Hello, I am working on an HDInsight cluster where I have Dataiku DSS installed, and whenever I try to run a Hive query I get this error: "com.microsoft.azure.storage.blob.CloudBlob.startCopyFromBlob(Lcom/microsoft/azure/storage/blob/CloudBlob;Lcom/microsoft/azure/storage/AccessCondition;Lcom/microsoft/azure/storage/AccessCondition;Lcom/microsoft/azure/storage/blob/BlobRequestOptions;Lcom/microsoft/azure/storage/OperationContext;)Ljava/lang/String;"
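For what it's worth, the quoted string is a JVM method descriptor: it names the exact `startCopyFromBlob` overload that the Hadoop Azure (WASB) driver expected to find on `CloudBlob`. Errors of this shape typically indicate a version mismatch between the `azure-storage` SDK jar on the classpath and the one `hadoop-azure` was compiled against. As a small illustration, here is a hypothetical Python helper (`decode_descriptor` is not part of any library; it assumes the input is a standard JVM method descriptor containing only object types) that decodes the signature:

```python
import re

def decode_descriptor(desc):
    """Split a JVM method descriptor into parameter class names and the
    return class name. Handles only object types (L<class>;), which is
    all this particular signature contains."""
    params_part, ret_part = desc[1:].split(")")          # drop '(' and split at ')'
    params = [t.split("/")[-1]                            # keep simple class name
              for t in re.findall(r"L([^;]+);", params_part)]
    ret = re.findall(r"L([^;]+);", ret_part)[0].split("/")[-1]
    return params, ret

# The descriptor from the error message:
sig = ("(Lcom/microsoft/azure/storage/blob/CloudBlob;"
       "Lcom/microsoft/azure/storage/AccessCondition;"
       "Lcom/microsoft/azure/storage/AccessCondition;"
       "Lcom/microsoft/azure/storage/blob/BlobRequestOptions;"
       "Lcom/microsoft/azure/storage/OperationContext;)"
       "Ljava/lang/String;")

params, ret = decode_descriptor(sig)
print(params)  # ['CloudBlob', 'AccessCondition', 'AccessCondition', 'BlobRequestOptions', 'OperationContext']
print(ret)     # String
```

So the missing method is `String startCopyFromBlob(CloudBlob, AccessCondition, AccessCondition, BlobRequestOptions, OperationContext)`, which is why the usual fix is aligning the storage-SDK jar versions rather than changing the query itself.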

Spark runs very well. I am not sure, but it seems to me that the problem arises from within the DSS infrastructure, since I get the same error when I run a DSS visual, Python, or R recipe and tell it to create the new dataset as HDFS. With visual recipes the problem goes away when I run them on the Spark engine instead of the DSS engine. I have worked around the problem in R and Python by telling DSS to write the new table to the filesystem, but with Hive I cannot do the same, and I cannot use Pig or Impala either. Do you have any idea why this is happening? Should I change a parameter or do something else?

Any help will be greatly appreciated.

P.S. If you need me to send you logs or anything else, I will.
closed with the note: Found the answer by reading the docs: I went through the documentation and changed my HDI settings, and now it works perfectly. Also, I noticed Impala does not work with HDI, haha. Have a nice day :D (The easiest way is to do it as in the DSS-with-HDInsight video: choose Spark 1.6 (HDI 3.5) and it will do all the configuration, so you do not have to configure it by hand.)