this post was submitted on 19 Jul 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Things are still moving fast. It's mid/late July now and I've spent some time outside, enjoying the summer. It's been a few weeks since things exploded back in May this year. Have you people settled down in the meantime?

I've since moved away from Reddit, and I miss the LocalLlama community over there, which was (and still is) buzzing with activity, AI news and discussions every day.

What are you people up to? Have you gotten tired of your AI waifus? Or finished indexing all of your data into some vector database? Have you discovered new applications for AI? Or are you still toying around, evaluating all the latest fine-tuned variants in constant pursuit of the best llama?

[–] zephyrvs@lemmy.ml 3 points 1 year ago* (last edited 1 year ago) (1 children)

I'm building an assistant for Jungian shadow work with persistent storage, but I'm a terrible programmer, so it's taking longer than expected.

Since shadow work is very intimate and personal, I wouldn't trust a ChatGPT integration and I'd never be fully open in conversations.

[–] rufus@discuss.tchncs.de 3 points 1 year ago* (last edited 1 year ago) (1 children)

Wow. I'm always amazed by the (previously unknown to me) things people do. I had to look that one up. Is this some kind of leisure activity? Self-improvement or self-therapy? Or are you just pushing the boundaries of psychology?

[–] zephyrvs@lemmy.ml 0 points 1 year ago (1 children)

I was fascinated by Jung's works after tripping on shrooms and becoming obsessed with understanding consciousness. I had already stumbled upon llama.cpp and started playing around with LLMs, so I decided to build a prototype for myself, because I've been doing shadow work for self-therapy reasons anyway.

It's not really that useful yet, and I'm unlikely to turn it into a product, because most people who wouldn't trust ChatGPT won't trust an open source model running on my machine(s) either. Also, shipping a product glued together from multiple open source components with rather strict GPU requirements seems like a terrible experience for potential customers, and I don't think I could handle the effort of supporting others in setting it up properly. Dunno, we'll see. :D

[–] rufus@discuss.tchncs.de 3 points 1 year ago* (last edited 1 year ago)

Hehe. People keep hijacking the term 'open source'. If you mean free software... I have faith and trust in that concept. Once your software gets to a point where it is useful and you start attracting other contributors, people will start to realize it's legit. At least that's what I would do.

I use KoboldCPP and llama.cpp because I don't own a GPU. I believe you could implement a fallback to something like this and eliminate your strict GPU requirements. (People would need at least 16-32 GB of RAM, though, and a bit of patience, because it's slower.)
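
For illustration, here's a rough sketch of what such a CPU-only fallback could look like with the llama-cpp-python bindings. The model path, context size and thread count are just placeholders, not something from this thread:

```python
# Minimal CPU-only fallback sketch using llama-cpp-python (assumed installed).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b.Q4_K_M.gguf",  # placeholder: any local quantized model file
    n_ctx=2048,        # context window
    n_gpu_layers=0,    # keep every layer on the CPU, so no GPU is needed
    n_threads=8,       # tune to your physical core count
)

out = llm(
    "Q: What is Jungian shadow work? A:",  # example prompt
    max_tokens=256,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

Setting n_gpu_layers to 0 is what keeps inference entirely on the CPU; it's the slower-but-GPU-free mode I mean, and on a machine with enough RAM a quantized 7B/13B model runs fine this way.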