this post was submitted on 12 Jun 2023

Technology
cross-posted from: https://sh.itjust.works/post/55351

This is a cool project for the ESP32-S3-Box that can add really good voice support to Home Assistant or openHAB. Once it's installed on supported hardware, you can host the Inference Server yourself, use their cloud-based version, or perform local actions entirely on the device.
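For anyone curious what "hosting the Inference Server yourself" might look like from a client's point of view, here's a rough Python sketch. The endpoint path, port, and parameter names below are my own assumptions for illustration, not Willow's actual API, so check the project's docs before relying on them:

```python
import json

# Sketch of a request against a self-hosted inference server.
# The "/api/stt" path, port 19000, and "language" parameter are
# illustrative assumptions, not Willow's documented interface.
def build_stt_request(server, audio_path, language="en"):
    """Assemble the pieces of a hypothetical speech-to-text request."""
    return {
        "url": f"{server.rstrip('/')}/api/stt",  # assumed path
        "params": {"language": language},
        "audio_file": audio_path,
    }

req = build_stt_request("http://192.168.1.50:19000/", "wake_word_clip.wav")
print(json.dumps(req, indent=2))
```

The point is just that inference happens on a box you control on your LAN, rather than audio leaving your network.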

[–] communist@beehaw.org 1 points 1 year ago (1 children)

How does this compare to Mycroft?

[–] cyberscribe@sh.itjust.works 2 points 1 year ago (1 children)

Unfortunately, Mycroft has been discontinued by its management team and is heading toward deprecation. Willow is just starting out and has a strong initial release that works well. Mycroft was designed around a cloud backend hosted by the Mycroft team (they did eventually open-source that backend, but it was never intended for single-instance use).

Willow is designed to work with very low-cost, low-power hardware (the ESP32-S3-Box) and with either Home Assistant or openHAB right out of the box.
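To give a sense of the Home Assistant side of an integration like this: once speech is transcribed, the text can be handed to Home Assistant's conversation API (the `POST /api/conversation/process` endpoint in its REST API). A minimal Python sketch, assuming a placeholder host and a long-lived access token:

```python
import json

def build_conversation_request(ha_url, token, text):
    """Build a request for Home Assistant's /api/conversation/process
    REST endpoint. The host and token are placeholders; send the result
    with any HTTP client (urllib, requests, aiohttp, ...)."""
    return {
        "url": f"{ha_url.rstrip('/')}/api/conversation/process",
        "headers": {
            "Authorization": f"Bearer {token}",  # long-lived access token
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }

req = build_conversation_request(
    "http://homeassistant.local:8123", "YOUR_TOKEN", "turn on the kitchen lights"
)
print(req["url"])
```

Home Assistant's conversation agent then parses the text into an intent ("turn on" + "kitchen lights") and acts on it, which is what makes a thin voice satellite like this viable.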

[–] communist@beehaw.org 1 points 1 year ago (1 children)

Oh wow, I hadn't even heard it was discontinued. Interesting.

Does it use a large language model? It says: "Willow users can now self-host the Willow Inference Server for lightning-fast language inference tasks with Willow and other applications (even WebRTC) including STT, TTS, LLM, and more!"

but I'm not sure whether that refers to using a large language model or whether LLM stands for something else here.

[–] cyberscribe@sh.itjust.works 1 points 1 year ago

Yes. I haven't tested it yet, but the author of this project suggests Llama derivatives like Vicuna. I'm excited to see how this project evolves alongside Home Assistant's voice goals. The author of Rhasspy is now working for Nabu Casa, so I'm sure that effort will grow too!