Partitioning might indeed be the answer in your case. Be aware that it requires learning a bit first, and it takes some practice (don't be discouraged if things don't work on the first attempt!).
> synchronize an Oracle table of 1 billion rows to another Oracle table
If both datasets are partitioned and you set the partition dependency to "Equal" (which is the default), then DSS will indeed run the recipe partition by partition, as "multiple little queries".
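Conceptually, an "Equal" dependency means each output partition is built from the matching input partition only, as if one filtered query were issued per partition value. Here is a rough illustrative sketch (the table and partition column names are hypothetical, not actual DSS internals):

```python
# Illustrative sketch only: with an "Equal" partition dependency, DSS
# effectively syncs one partition at a time, similar to running one
# filtered query per partition value. Names here are hypothetical.

def per_partition_queries(table, partition_column, partition_values):
    """Build one SELECT per partition value, mimicking partition-by-partition sync."""
    return [
        f"SELECT * FROM {table} WHERE {partition_column} = '{value}'"
        for value in partition_values
    ]

for query in per_partition_queries("SOURCE_TABLE", "DAY",
                                   ["2024-01-01", "2024-01-02"]):
    print(query)
```

Each of those smaller queries touches only one slice of the billion-row table, which is what keeps the sync tractable.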
> How can I run several repartition key in one go
Specifying which partitions you want DSS to sync is done on the recipe page, just above the "Run" button, where you can enter a partition (or a list of partitions). More generally, see http://doc.dataiku.com/dss/latest/partitions/index.html to start learning about partitioning.
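If the partitions follow a regular pattern (e.g. one per day), you can generate the list programmatically rather than typing it by hand. A minimal sketch, assuming daily partitions identified by ISO dates and a comma as the list separator (check the docs above for the exact syntax your dimension type expects):

```python
from datetime import date, timedelta

def daily_partitions(start, end):
    """Yield ISO date strings from start to end, inclusive."""
    d = start
    while d <= end:
        yield d.isoformat()
        d += timedelta(days=1)

# Comma separator is an assumption; verify against the recipe's "Run" box.
spec = ",".join(daily_partitions(date(2024, 1, 1), date(2024, 1, 3)))
print(spec)  # 2024-01-01,2024-01-02,2024-01-03
```

The resulting string can then be pasted into the partition field on the recipe page.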
> How can run all partitions in one go without listing the 200 possible values
This isn't directly supported, but there is a workaround for now: add a recipe from the dataset whose partitions you want to build to a dummy unpartitioned dataset, and define the partition dependency as "All available". Building the dummy dataset then forces DSS to build every available partition of the source, without you listing the 200 values.