Think of all the thoughts you can think while not thinking!
And now you have to double check everything twice!
This and other things are also possible with ellama. It also works with local models.
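For example, here is a minimal sketch of wiring ellama to a local model served by ollama (assuming the `llm` package's `llm-ollama` provider, which ellama builds on; the model name is just an example):

```elisp
;; Sketch: ellama with a local model served by ollama.
;; Assumes the `ellama' and `llm' packages are installed and the model
;; has been fetched with `ollama pull zephyr:7b-beta-q4_K_M'.
(require 'ellama)
(require 'llm-ollama)
(setq ellama-provider
      (make-llm-ollama :chat-model "zephyr:7b-beta-q4_K_M"
                       :embedding-model "zephyr:7b-beta-q4_K_M"))
```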
Yeah, I'd be eager to try and see if it makes the response faster without sacrificing quality. Are there models right now that have decent output running on something like a Chromebook?
Most Chromebooks have very weak hardware. I don't think it will run fast enough to be useful.
Orca Mini 3B, maybe?
But you can use it with the OpenAI or Google API.
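For instance, a sketch of pointing ellama at OpenAI instead of local hardware (assuming the `llm` package's `llm-openai` provider; the key and model name are placeholders):

```elisp
;; Sketch: ellama backed by the OpenAI API rather than a local model.
;; "YOUR-OPENAI-API-KEY" is a placeholder; the `llm' package also ships
;; providers for other hosted APIs.
(require 'llm-openai)
(setq ellama-provider
      (make-llm-openai :key "YOUR-OPENAI-API-KEY"
                       :chat-model "gpt-3.5-turbo"))
```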
See also [this reply](https://github.com/s-kostyaev/ellama/issues/13#issuecomment-1807954046) about using ellama on weak hardware:
You can try:
- lower quantization of zephyr (like `7b-beta-q2_K`, `7b-beta-q3_K_S`, `7b-beta-q3_K_M`, `7b-beta-q3_K_L`)
- lighter models, like `orca-mini:3b-q4_0`, or `starcoder:1b` / `starcoder:3b` for coding
- deploy ollama on a more powerful machine (on a cloud provider, for example) and connect ellama to it, as sketched below
- use the OpenAI or Google API and connect ellama to it
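For the remote-ollama option, the provider can point at another host (a sketch, assuming `llm-ollama`'s `:host` and `:port` options; the hostname is a placeholder):

```elisp
;; Sketch: ellama talking to ollama running on a more powerful remote
;; machine.  "gpu-box.example.com" is a placeholder for your server;
;; 11434 is ollama's default port.
(require 'llm-ollama)
(setq ellama-provider
      (make-llm-ollama :host "gpu-box.example.com"
                       :port 11434
                       :chat-model "zephyr:7b-beta-q3_K_M"))
```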