submitted 1 year ago* (last edited 1 year ago) by noneabove1182@sh.itjust.works to c/localllama@sh.itjust.works

These are the full weights; quants from TheBloke are already incoming. I'll update this post when they're fully uploaded.
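For anyone who wants to try the full-precision weights directly, here is a minimal sketch (not from the post) of loading them with Hugging Face transformers. It assumes `accelerate` is installed for `device_map="auto"`, and that you have enough GPU memory or CPU offload for a 70B model; the prompt format is simplified, so check the model card for the intended conversation template.

```python
# Hedged sketch: loading the full-precision WizardLM-70B weights with transformers.
# Assumes transformers + accelerate are installed and there is enough memory for 70B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-70B-V1.0"  # repo linked in the post

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; still ~140 GB of weights
    device_map="auto",          # spread across available GPUs / offload to CPU
)

# Simplified prompt; the model card documents the intended chat template.
prompt = "USER: Write a Python function that reverses a string. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```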

From the author(s):

WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.

This model is license-friendly and follows the same license as Meta Llama-2.

The next version is in training and will be released together with our new paper soon.

For more details, please refer to:

Model weight: https://huggingface.co/WizardLM/WizardLM-70B-V1.0

Demo and Github: https://github.com/nlpxucan/WizardLM

Twitter: https://twitter.com/WizardLM_AI

GGML quant posted: https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML

GPTQ quant repo is up but still empty (GPTQ quants take much longer to produce): https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
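Since the GGML quants are the ones already available, here is a rough sketch of running one locally with llama-cpp-python. The exact filename below is a guess at what TheBloke's repo will contain, and the n_gqa/n_gpu_layers settings are assumptions that depend on your llama.cpp build; adjust to whatever file and hardware you actually have.

```python
# Hedged sketch: running a GGML quant of WizardLM-70B via llama-cpp-python.
# The model_path filename is hypothetical; download whichever quant level you want
# from TheBloke's GGML repo and point at that file.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizardlm-70b-v1.0.ggmlv3.q4_K_M.bin",  # hypothetical local filename
    n_ctx=4096,       # Llama-2 context length
    n_gqa=8,          # grouped-query attention; GGML-era builds needed this for 70B
    n_gpu_layers=40,  # offload some layers to GPU if built with CUDA/Metal support
)

output = llm(
    "USER: Write a haiku about quantization. ASSISTANT:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```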


Tried the Q2 GGML and it seems to be very good! First tests make it seem as good as Airoboros, which is my current favorite.

noneabove1182@sh.itjust.works 1 point 1 year ago

Agreed, it seems quite capable. I haven't tested all the way down to Q2 to verify, but I'm not surprised.
