0 votes

When I create a flow, for example taking records from table A -> doing some blending -> a group-by -> applying a model,

I see that each step in the process creates a recipe.

Does this mean that each step saves the data back to storage (to a CSV file if we read the data from CSV, to a SQL table if we read from a SQL database)?

Or is the whole process in-memory?



1 Answer

0 votes
Best answer


A recipe reads from its input (upstream) dataset and writes into its output dataset every time you run it, including when you run several recipes in a chain to build a final dataset. Some recipes can be executed directly in the SQL database, for instance, depending on where your data is stored and the engine you set for that recipe (see Execution Engines). Depending on the engine and the recipe, the data may be streamed (requiring little memory) or loaded fully into memory. By default, DSS tries to help by selecting the best available engine for each of your recipes.
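As a minimal sketch of that streamed-vs-in-memory distinction, here is what a Python recipe inside DSS might look like. The dataset names input_data and output_data are hypothetical, and the dataiku package is only available inside a DSS recipe or notebook:

```python
import dataiku

input_ds = dataiku.Dataset("input_data")    # hypothetical input dataset
output_ds = dataiku.Dataset("output_data")  # hypothetical output dataset

# In-memory: get_dataframe() loads the entire input dataset as a pandas
# DataFrame, so memory usage grows with the size of the data.
df = input_ds.get_dataframe()
output_ds.write_with_schema(df)

# Streamed alternative: copy the input schema, then process the input
# in chunks so only one chunk is held in memory at a time.
output_ds.write_schema(input_ds.read_schema())
with output_ds.get_writer() as writer:
    for chunk in input_ds.iter_dataframes(chunksize=100000):
        writer.write_dataframe(chunk)
```

For built-in (visual) recipes you don't write this code yourself; which of the two patterns applies is decided by the engine DSS selects for the recipe.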

Under certain conditions, you can skip writing the intermediate datasets you don't need by using Spark pipelines.

So if I load data from an MS SQL Server table, it will create a table for each step of the process (after grouping, cleansing, etc.)?
A table after each recipe, yes. Each recipe has a dataset as its output, and that dataset is written to a SQL table (or to whatever other storage backend you configure).
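To make that concrete, here is a minimal sketch of one such step expressed as an in-database Python recipe; the dataset names A_prepared and A_grouped and the column names are hypothetical. SQLExecutor2.exec_recipe_fragment runs the query inside the database and writes the result into the output dataset's table, so after the recipe runs, A_grouped exists as a real table on the SQL Server connection:

```python
import dataiku
from dataiku import SQLExecutor2

# Hypothetical output dataset, stored on the same MS SQL Server
# connection as the input dataset.
output_ds = dataiku.Dataset("A_grouped")

# The query executes entirely in the database; the result is
# materialized into the table backing the "A_grouped" dataset.
SQLExecutor2.exec_recipe_fragment(output_ds, """
    SELECT category, COUNT(*) AS row_count
    FROM A_prepared
    GROUP BY category
""")
```

A visual group-by recipe running on the SQL engine behaves the same way: the grouping happens in the database, and the output lands in a table rather than passing through DSS's memory.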