this post was submitted on 25 Sep 2023
557 points (96.0% liked)

Running AI is so expensive that Amazon will probably charge you to use Alexa in future, says outgoing exec

In an interview with Bloomberg, Dave Limp said that he "absolutely" believes that Amazon will soon start charging a subscription fee for Alexa.

[–] wer2@lemm.ee 18 points 1 year ago (6 children)

My Home Assistant Voice is getting really close to displacing Alexa.

[–] stevedidwhat_infosec@infosec.pub 7 points 1 year ago (5 children)

Same. I've already got an entire setup: GPT with customizable system-level prompting, plus custom voice models I've trained over at ElevenLabs.

Now I just gotta slap my lil monster's phat ass onto a Raspberry Pi, destroy the fuck out of my Alexa devices, and ship 'em to Jeff bozo.

[–] StubbornCassette8@feddit.nl 10 points 1 year ago (2 children)

Can you share details? Been thinking of doing this with a new PC build. Curious what your performance and specs are.

[–] stevedidwhat_infosec@infosec.pub 1 points 1 year ago (1 children)

You shouldn't need much hardware at all; every component runs via cloud services, so all you really need is a network connection.

That's why it'll run just fine on a cheap Pi model.

Essentially, the Python script just sends API requests directly to OpenAI and returns the AI's response. I then pass that response to the ElevenLabs API and play the returned audio stream with any library that supports audio playback.

(That last bit is what I'll have to toy around with on the Pi, but I'm not worried about finding a suitable option; there are lots of libraries out there.)
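For the curious, the glue really is only a couple of dozen lines. Here's a minimal sketch of that pipeline (not my actual script; the API keys come from environment variables, the model choice is just an example, and the voice ID is a placeholder you'd swap for your own trained ElevenLabs voice):

```python
import os
import requests

OPENAI_KEY = os.environ["OPENAI_API_KEY"]
ELEVEN_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-voice-id"  # placeholder: ID of a custom-trained ElevenLabs voice


def ask_gpt(user_text: str, system_prompt: str) -> str:
    """Send the user's text to OpenAI's chat completions API with a custom system prompt."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_text},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def speak(text: str) -> None:
    """Convert the reply to speech via ElevenLabs and save the MP3 for playback."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVEN_KEY},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    with open("reply.mp3", "wb") as f:
        f.write(resp.content)  # hand this file to whatever playback library you pick


if __name__ == "__main__":
    answer = ask_gpt("What's on my calendar today?", "You are a helpful home assistant.")
    print(answer)
    speak(answer)
```

Playback on the Pi is then just a matter of handing reply.mp3 to whatever's available there, e.g. mpg123 from the command line.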

[–] StubbornCassette8@feddit.nl 1 points 1 year ago* (last edited 1 year ago) (1 children)

Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I've seen that discussed before with varying results.

The last time I tried running my own model was in the early days of the Llama release, on an RTX 3060. Responses came back much slower than OpenAI's API, and the output was way off.

It doesn't have to be perfect, but I'd like to make my own API calls from a remote device phoning home instead of going to OpenAI's servers. Using my own documents as a reference would be a plus too, just to keep my info private and still accessible to the LLM.
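For the phone-home part, I'm picturing something like the sketch below: the same request shape as OpenAI's API, but pointed at a self-hosted server that exposes an OpenAI-compatible endpoint (llama-cpp-python's server does this). The hostname, port, and file name here are made up, and the "own documents" bit is just naive prompt-stuffing:

```python
import requests

# Hypothetical self-hosted endpoint; llama-cpp-python's server speaks the
# OpenAI chat-completions dialect, so the client code barely changes.
LOCAL_URL = "http://homeserver.local:8000/v1/chat/completions"


def ask_local(user_text: str, context: str = "") -> str:
    """Query the self-hosted model; `context` can carry text from your own documents."""
    messages = []
    if context:
        # Naive private-documents approach: paste the relevant text into the prompt.
        messages.append({"role": "system", "content": f"Reference material:\n{context}"})
    messages.append({"role": "user", "content": user_text})
    resp = requests.post(LOCAL_URL, json={"messages": messages}, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    with open("notes.txt") as f:
        notes = f.read()
    print(ask_local("What did my meeting notes say about the budget?", context=notes))
```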

Didn't know about Elevenlabs. Checking them out soon.

Edit because writing is hard.

That could be fun! I've built and trained my own models before, but I find that getting the right amount of data (in terms of both size and diversity, to ensure features are orthogonal out of the gate) can be pretty tough.

If you don't get that balance of size and diversity right, the model's efficacy ceiling is gonna be way lower than you'd like. But you might have some good data sets lying around; I've got no clue ^_^

Lemmy know how it goes!
