
I'm currently trying to keep track of metrics on a partitioned S3 dataset. I've enabled the "Autocompute after build" option, but the metrics are not recomputed when the dataset changes. I've also tried to compute the metrics using the API (here I'm interested in the record count):

import dataiku

# Get a handle on the dataset through the public API client
client = dataiku.api_client()
current_project = client.get_project('project')
sales_s3 = current_project.get_dataset('dataset')
sales_s3.compute_metrics('records:COUNT_RECORDS')

But I get the following error:

DataikuException: java.lang.IllegalArgumentException: For `records:COUNT_RECORDS': Invalid partition identifier, has 1 dimensions, expected 2

Is there a workaround?

Here is what the partition screen looks like:

Hello,
The error means that there is something wrong with your partitioning settings. Could you please add a screenshot of the content of the S3 dataset and of the partition settings screen?
Cheers,
Alex
I can't upload a screenshot of the content in here, but I uploaded a screenshot of the partition screen.

1 Answer

Best answer

Hi,

Ah! This is an interesting topic. There was one small thing missing in your code :)

When working on partitioned datasets, the compute_metrics method expects to know which partitions to work on. Hence the correct syntax is:

sales_s3.compute_metrics(partition='2018-03-10', 
                        metric_ids=['records:COUNT_RECORDS'])

Note the use of [] around metric_ids. It has to be a list, which means you can compute several metrics in one go for a given partition. To get the current list of partitions, you should use:

sales_s3.list_partitions()
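
Building on those two calls, a small loop should let you compute the metric for every existing partition in turn. This is only a minimal sketch reusing the variable names from this thread, not an official snippet:

# Sketch: compute the record count for every existing partition,
# one compute_metrics() call per partition identifier.
for partition_id in sales_s3.list_partitions():
    sales_s3.compute_metrics(partition=partition_id,
                             metric_ids=['records:COUNT_RECORDS'])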

If you want to compute the metric for the whole dataset, simply pass the partition keyword "ALL":

sales_s3.compute_metrics(partition='ALL', 
                        metric_ids=['records:COUNT_RECORDS'])
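
As a side note that is not covered in this thread: once the metrics have been computed, the dataset handle should also let you read the stored values back. The get_last_metric_values() and get_metric_by_id() calls below are my recollection of the public API, so treat them as assumptions and check the API documentation for your DSS version:

# Sketch only -- method names are assumptions to verify against the docs.
computed = sales_s3.get_last_metric_values(partition='ALL')
record_count = computed.get_metric_by_id('records:COUNT_RECORDS')
print(record_count)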

Pro-tip: when prototyping code inside a Jupyter notebook, the shortcut Shift+Tab opens a tooltip with the documentation of the classes and methods you are using. There are many useful Jupyter tricks; have a look at https://www.cheatography.com/weidadeyue/cheat-sheets/jupyter-notebook/pdf_bw/.

Cheers,

Alex

Great! Thanks Alex! Does that mean that passing "ALL" as the partition argument computes the metrics globally and for each partition independently?
"ALL" means computing the metrics for the whole dataset; it will not compute metrics for each partition. For that, you need to list the partitions and compute the metric explicitly for each one.
Alright, thanks a lot!