
Connection to locally hosted LLM on Apple Silicon

ron_dk
Level 2

Is there any way to set up a connection that connects to a locally running LLM? On Apple Silicon it is possible to run Hugging Face models locally with several different open-source solutions, which use Metal for GPU compute. The solution I have running functions as a server accessible on localhost. It would be great to be able to set up a connection so that AI recipes can run against it.


Operating system used: macOS

12 Replies
ron_dk
Level 2
Author

Thanks, Turri. But I think I was unclear. I do have Metal running on macOS, and I do have a local LLM running (via LM Studio). What I am looking for is a no-code way to set up an LLM connection (using the tools available under the "Connections" tab of the admin section) to that local LLM. I have tried different workarounds (using the Azure connection, installing the deprecated GPT plugin), but none of them will let me direct the AI recipe to an LLM running locally (but outside of Dataiku).

Of course, this avenue of inquiry begins with the fact that the Hugging Face connection (also under the admin section) won't run on my Mac, since the setup requires NVIDIA GPUs. If there is a way around that limitation, that would also be great. Is that maybe what you are suggesting could be achieved with the Apple PluggableDevice?

Turribeach

OK, I get what you want to do now. I am guessing Dataiku hasn't opened the local avenue yet. I haven't played much with the LLM Mesh, but my simple understanding is that it lets you connect to different LLM providers. I wonder if it would be possible to duplicate one of the LLM provider APIs locally and fake the DNS using your local hosts file, so that the connection points to a local webserver that executes your local LLM.
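For what it's worth, the DNS-faking part of this idea is just a hosts-file entry. A sketch, assuming the connection targets `api.openai.com` (substitute whichever provider hostname the connection uses):

```
# /etc/hosts — resolve the provider's API hostname to this machine
127.0.0.1   api.openai.com
```

The catch is TLS: the provider connection will still speak HTTPS on port 443 and validate the certificate for that hostname, so the local webserver would need to terminate TLS with a certificate your machine trusts for `api.openai.com`. That makes this workaround fiddly in practice compared with a connection that lets you override the base URL directly.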

Marlan

Hi @ron_dk,

I understand that a LLM Mesh connection plugin is in the works. This would enable creating custom connections for specific use cases (like yours). You may want to contact Dataiku directly to get more information.

We will be using this when it is available to connect to an API that we are putting in front of an LLM.
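In case it helps illustrate what "an API in front of an LLM" can look like: below is a minimal sketch of an OpenAI-compatible chat endpoint built with only the Python standard library. The model call is a stub (`fake_local_model` is an invented placeholder); a real deployment would forward the request to LM Studio, Ollama, or an in-process model instead. This is not Dataiku's mechanism, just a generic shape such a fronting API often takes.

```python
# Minimal OpenAI-compatible chat-completions endpoint (stdlib only).
# fake_local_model is a stand-in for a real local inference call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def fake_local_model(messages):
    # Placeholder: echo the last user message back.
    last = messages[-1]["content"] if messages else ""
    return f"echo: {last}"


class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length))
        reply = fake_local_model(req.get("messages", []))
        body = json.dumps({
            "object": "chat.completion",
            "model": req.get("model", "local-model"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress per-request logging.
        pass


def serve(port=0):
    # port=0 picks a free ephemeral port; caller runs serve_forever().
    return HTTPServer(("127.0.0.1", port), ChatHandler)
```

Any client that speaks the OpenAI chat-completions format can then be pointed at this server's base URL.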

 

Marlan

ron_dk
Level 2
Author

Thanks, Marlan. That would be a bit of a revolution for us non-coders. Fingers crossed!

ron_dk
Level 2
Author

Bump for this thread.

Has anyone managed to wrangle any of the connections in the LLM Mesh (e.g., the OpenAI connection or the HuggingFace connection) to connect to a compatible local LLM host running on macOS/Apple Silicon?

I am thinking of LM Studio specifically, which seems to be the perfect partner for Dataiku. When running in server mode, LM Studio gives you a localhost address to send prompts to. All you need to do is ensure that your prompts are formatted as OpenAI prompts. The program uses Metal to offload inference to the GPU. I have a Python script that runs classification that way, but it would be far easier if I could just use Prompt Studio.
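The script I'm using looks roughly like this, a sketch using only the standard library. It assumes LM Studio's server mode is listening on its default address (`http://localhost:1234`); the actual port is shown in LM Studio's server panel, and the model name is whatever you loaded there.

```python
# Send an OpenAI-format chat request to LM Studio's local server.
# Assumes server mode on http://localhost:1234 (check LM Studio's server panel).
import json
import urllib.request


def build_chat_payload(prompt, model="local-model", temperature=0.2):
    """Build an OpenAI-format chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def classify(prompt, base_url="http://localhost:1234/v1"):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Swapping the `base_url` is all it takes to point the same script at another OpenAI-compatible local host.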

Ollama is another alternative with a similar setup, except that it uses the llama.cpp prompt format.

Any wrangling suggestions would be much appreciated.


@ron_dk 

Did you ever get Ollama working on Apple Silicon with the Dataiku LLM Mesh?

--Tom

Would this in any way help this discussion?

https://docs.titanml.co/docs/Docs/integrations/dataiku/

 

--Tom

Hey! @ron_dk Meryem from TitanML here. We might be able to help: we can run on Apple Silicon and have a fully maintained connection within Dataiku. Happy to chat if you want to explore 😊

@m_arik 

Welcome to the Dataiku Community.  We are so glad to have you join us.

You and I have spoken about this subject, and I look forward to getting back to you. Unfortunately, my personal and work life have gotten very busy over the past few weeks since we spoke.

I'll get back to you later this week. Here in the States we are on holiday today.

--Tom
ron_dk
Level 2
Author

Hi @tgb417 and @m_arik,

Wow, the TitanML plugin that tgb417 linked to above seems like just the ticket for running LLMs locally with Dataiku. Thanks for the heads-up.

For my case especially (and anyone else running this on Apple Silicon) it would be very useful to know how TitanML can work on that platform. @m_arik: The supported-hardware page doesn't yet list Apple Silicon/Metal as a platform, but it sounds like you're working hard to catch up with the ecosystem of local platforms. If you could share any news, and maybe even experimental configs, that would be much appreciated.

/ron_dk

Hey! We actually do unofficially support Apple Silicon. It's not on the website because most of our clients tend to optimize for data centre hardware (you can check out this blog on it: https://cobusgreyling.medium.com/develop-generative-apps-locally-08bbe338d020).

Would be happy to give you guys a licence to these versions and help you set it up within Dataiku - just reach out to me by email (meryem@titanml.co).