this post was submitted on 19 Jul 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Things are still moving fast. It's mid/late July now and I've spent some time outside, enjoying the summer. It's been a few weeks since things exploded back in May this year. Have you people settled down in the meantime?

I've since moved away from Reddit, and I miss the LocalLLaMA community over there, which was/is buzzing with activity and AI news (and discussions) every day.

What are you people up to? Have you gotten tired of your AI waifus? Or finished indexing all of your data into some vector database? Have you discovered new applications for AI? Or still toying around and evaluating all the latest fine-tuned variations in constant pursuit of the best llama?

[–] noneabove1182@sh.itjust.works 2 points 1 year ago (1 children)

Yeah, I'm using it with Home Assistant :)

Basically I'm using oobabooga for inference and exposing an API endpoint as if it were OpenAI, then plugging that into Microsoft's Guidance, which I give a tool. The tool takes the device and the desired state as input, and then calls my Home Assistant REST endpoint to execute the command!
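For illustration, a minimal sketch of that kind of setup (not the exact code; the Guidance prompt wiring is left out, and the host names, port, token, and entity IDs are all placeholders):

```python
import requests

HA_URL = "http://homeassistant.local:8123"              # placeholder Home Assistant address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"                # created in your Home Assistant profile
LLM_URL = "http://localhost:5000/v1/chat/completions"    # local OpenAI-style endpoint (port depends on setup)

def set_device_state(device: str, state: str) -> None:
    """The 'tool': switch a device on or off through Home Assistant's REST API."""
    domain = device.split(".")[0]                        # e.g. "light" from "light.living_room"
    service = "turn_on" if state == "on" else "turn_off"
    requests.post(
        f"{HA_URL}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": device},
        timeout=10,
    )

def ask_llm(prompt: str) -> str:
    """Send a prompt to the locally hosted model through the OpenAI-compatible API."""
    resp = requests.post(
        LLM_URL,
        json={
            "model": "local-model",                      # most local backends ignore the model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]
```

Guidance sits in the middle: it constrains the model's output into a structured choice of device and state, and that result gets handed to the tool function.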

[–] rufus@discuss.tchncs.de 2 points 1 year ago (1 children)

Thank you for pointing that out. I was completely unaware of Microsoft's Guidance. Once they merge/implement llama.cpp support, I'm definitely going to try it, too.

[–] noneabove1182@sh.itjust.works 1 points 1 year ago

That will certainly be amazing, but for now it's actually not bad to use either the oobabooga web UI or koboldcpp to run inference and provide a REST endpoint, because you can trick basically any program into treating it as if it were OpenAI and use it the same way.
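For example, with the (pre-1.0) openai Python client you can just point api_base at the local server; the port and model name below are placeholders and depend on how the backend is configured:

```python
import openai

# Point the client at the local OpenAI-compatible server instead of api.openai.com.
openai.api_base = "http://localhost:5001/v1"   # oobabooga/koboldcpp endpoint; port depends on your setup
openai.api_key = "sk-dummy"                    # local backends usually ignore the key, but the client wants one

response = openai.ChatCompletion.create(
    model="local-model",                       # model name is typically ignored by local backends
    messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
)
print(response["choices"][0]["message"]["content"])
```

Any program that lets you override the OpenAI base URL can be redirected the same way, which is what makes the trick so convenient.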