+7 votes
Hi

I created a flow sequencing recipes for a first dataset. The goal is to create a prediction model at the end of the flow.

Next I need to apply my model to an input dataset that has the same schema as my first dataset.

I could not figure out how to apply the whole flow to the second dataset, so I had to copy the flow steps (analysis, joins, Python) and apply them to the second dataset.

Is there a more straightforward way? The issue is that if I add a new step or modify a recipe to enrich the dataset, it must be done twice.

Best regards

Geoff

1 Answer

+1 vote
Hi,

This is a good question. There is no out-of-the-box feature to do that today. But we are thinking about it.

Actually, it already works in a simple case since v2.0.0: when your data preparation is in the script of an Analysis (called "Analyse"), deploying the model also reproduces the script (i.e., all the processors).

If you have a more complex flow (with Python recipes, etc.), there is a solution: you can stack your two sources into a single dataset, apply your flow of transformations to it, and then split it back into two datasets before modelling.
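The stack-then-split idea can be sketched in plain pandas (the kind of code you would put in a Python recipe). This is a minimal illustration, not Dataiku-specific code: the column names and the `shared_prep` function are hypothetical stand-ins for your own preparation steps, and the `source` tag column is what lets you split the data back apart afterwards.

```python
import pandas as pd

def shared_prep(df):
    # Hypothetical shared transformation: written once,
    # applied to both sources at the same time
    df = df.copy()
    df["total"] = df["price"] * df["quantity"]
    return df

# Two sources with the same schema, tagged so they can be split again
train = pd.DataFrame({"price": [10.0, 20.0], "quantity": [1, 3]})
score = pd.DataFrame({"price": [15.0], "quantity": [2]})

stacked = pd.concat(
    [train.assign(source="train"), score.assign(source="score")],
    ignore_index=True,
)

# Any new enrichment step added to shared_prep is applied to
# both datasets in one place, instead of being maintained twice
prepared = shared_prep(stacked)

# Split back into the two datasets before modelling
train_prepared = prepared[prepared["source"] == "train"].drop(columns="source")
score_prepared = prepared[prepared["source"] == "score"].drop(columns="source")
```

In DSS itself, the stack and split steps would typically be a Stack recipe and a Split recipe around your shared preparation flow.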

Jeremy
Hi, I am also interested in this feature. Is there any support for it in Dataiku v2.3?

Thanks.
Isn't it possible to export your project (project home page > actions > export this project), reimport it (DSS home page > mouse on the left > import) with a different name, and then change the input dataset?
However, this leads to very strange behaviour in the new project.
Is there any update? I've downloaded DSS 4.3.1 to evaluate whether it will support a project with serial data tables that will be inserted as records in a database over time. I'd like to run a code recipe on each new 'record'. Note that a database record will reflect a file/table containing multiple columns and rows. Is this possible currently?

©Dataiku 2012-2018 - Privacy Policy