submitted on 22 Oct 2023 by publicvoit@alien.top to c/emacs@communick.news

Hi,

I was using my search engine to look for Emacs integrations for the open-source (and local) https://gpt4all.io/ when I realized that I could not find a single one.

Is anybody already using GPT4All with Emacs who just hasn't published their integration?

dj_goku@alien.top:

I can't say whether it would be helpful, but maybe it could be added to https://elpa.gnu.org/packages/llm.html

ahyatt@alien.top:

Yes, the llm package doesn't have this yet, but it does have Ollama, which seems pretty similar; I'm curious what the differences are. If anyone thinks this is worth adding, it can be done, and it would then be available to any package that integrates with the llm package.

ahyatt@alien.top:

I've now added this to the llm package, although I have to say GPT4All's API is not nearly as complete as Ollama's: in particular, it lacks embedding functionality and streaming.
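
For anyone who wants to try it, here is a minimal sketch of what using the new provider might look like. The constructor and slot names (make-llm-gpt4all, :host, :port, :chat-model) are assumptions based on how the llm package names its other providers, so check llm-gpt4all.el for the actual interface:

(require 'llm)
(require 'llm-gpt4all)

;; Provider pointing at the GPT4All desktop app's local API server.
;; Slot names are assumed; see llm-gpt4all.el for the real definition.
(defvar my-gpt4all-provider
  (make-llm-gpt4all :host "localhost"
                    :port 4891
                    :chat-model "gpt4all-j-v1.3-groovy"))

;; Synchronous chat call; as noted above, this provider supports
;; neither streaming nor embeddings.
(llm-chat my-gpt4all-provider
          (llm-make-simple-chat-prompt "Hello from Emacs!"))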

karthink@alien.top:

I can add this to gptel quite easily, but I can't find the instructions for using it. Does it run a local HTTP server? Where can I find these details?

publicvoit@alien.top:

Hi,

I personally would have expected that the desktop app has to run in the background anyway. ;-)

Any "gtp4all.el"-like mode would help me in writing my queries in Emacs as well as receiving its output directly into Emacs (babel/org-mode preferred, I suppose). Currently, I do a lot of copy&paste for that purpose.

karthink@alien.top:

In that case you can use it right now with gptel, which supports an Org interface for chat.

Enable the server mode in the desktop app, and in Emacs, run

(setq-default gptel-model "gpt4all-j-v1.3-groovy"   ; model loaded in the GPT4All app
              gptel-host "http://localhost:4891/v1" ; GPT4All's local API endpoint
              gptel-api-key "--")                   ; placeholder; the local server needs no key

Then you can spawn a dedicated chat buffer with M-x gptel, or chat from any buffer by selecting a region of text and running M-x gptel-send.

publicvoit@alien.top:

Great news, I'll try it in the next few days. Thank you.

nickanderson5308@alien.top:

I have played with this a bit in the last few days.

It's nice and minimal, but I am hitting memory issues: gptel wants to load whatever model is specified, and I don't have enough memory to run both the model the GPT4All desktop app loads by default and the one gptel wants to load.
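
A possible workaround, assuming the memory pressure really comes from two models being resident at once: point gptel at the same model the desktop app already has loaded, so nothing extra gets pulled in. The model name below is illustrative; use whatever the desktop app shows as loaded:

;; Match gptel's model to the one the GPT4All desktop app has loaded,
;; so only a single model occupies memory. Model name is illustrative.
(setq-default gptel-model "gpt4all-j-v1.3-groovy")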

karthink@alien.top:

In the meantime I've added explicit support for GPT4All, so the above instructions may be out of date by the time you get to them. The README should have updated instructions (if it mentions support for local LLMs at all).
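
For reference, in later gptel versions the explicit GPT4All backend registration looks roughly like this, based on gptel's README; the model list here is illustrative:

;; Register GPT4All as a gptel backend and make it the default.
(setq gptel-backend (gptel-make-gpt4all "GPT4All"  ; any name you like
                      :protocol "http"
                      :host "localhost:4891"       ; where the local server listens
                      :models '("gpt4all-j-v1.3-groovy")) ; models you have downloaded
      gptel-model "gpt4all-j-v1.3-groovy")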

LionyxML@alien.top:

Three days of painful waiting :)

ahyatt@alien.top:

It isn't actually the same, though: they don't support streaming. How are you getting around that?
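
One way around it, assuming gptel's gptel-stream toggle does what its docstring suggests, is simply to disable streaming so each response is inserted in full once it arrives:

;; Disable streaming; gptel then makes plain (non-streaming) requests,
;; which is all the GPT4All server supports anyway.
(setq gptel-stream nil)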
