this post was submitted on 20 Nov 2023
148 points (100.0% liked)

Technology


Article from The Atlantic, archive link: https://archive.ph/Vqjpr

Some important quotes:

The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.

Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

Summary: Tech bros want money, tech bros want speed, tech bros want products.

Scientists want safety, researchers want to research...

[–] cwagner@beehaw.org 3 points 1 year ago (1 children)

That’s an interface for models. Which model did you use?

[–] RandoCalrandian@kbin.social 3 points 1 year ago (1 children)

Mistral-7B-Instruct-v0.1 is the default; I'm downloading the Llama 2 model to test it with now, but many models on Hugging Face should still work
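As an aside on what "using" Mistral-7B-Instruct-v0.1 locally involves: the model card documents an `[INST] … [/INST]` chat template that prompts are expected to follow. A minimal sketch of building that prompt string by hand (the helper name and example turn are illustrative, not part of any library):

```python
def build_mistral_prompt(turns):
    """Wrap (user, assistant) turns in the [INST] tags that
    Mistral-7B-Instruct-v0.1's chat template uses. The final user
    message is left open for the model to complete."""
    parts = ["<s>"]
    for user, assistant in turns:
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            # Completed turns are closed with the end-of-sequence token.
            parts.append(f" {assistant}</s>")
    return "".join(parts)

prompt = build_mistral_prompt([("Summarize this article in one line.", None)])
# prompt == "<s>[INST] Summarize this article in one line. [/INST]"
```

In practice the `transformers` tokenizer's chat templating does this for you; the sketch just shows the shape the model was tuned on.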

[–] cwagner@beehaw.org 1 points 1 year ago (1 children)

I do not believe any 7B model comes even close to GPT-3.5 in quality. I used LLaMA v1 65B, and it was horrible in comparison. Are you really telling me that this tiny model gives better general answers? Or am I just misunderstanding what you are saying?

[–] RandoCalrandian@kbin.social 1 points 1 year ago (1 children)

I didn’t say better, I said comparable.
And faster, without handing over my data and conversations for monetization.

Given the locally hosted benefits, and the ability to fall back to ChatGPT for any answer Mistral gives that doesn’t satisfy you, it’s strong competition to ChatGPT as the default tool.

Hosting it yourself also means you can swap LLMs out based on context and what they’re trained on. Highly tuned models perform better than ChatGPT at the tasks they’re tuned for.
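That swap-by-context idea is essentially a small router in front of your local runner. A minimal sketch, assuming you'd key off a task label; the task names and Hugging Face repo IDs here are illustrative choices, not a real API:

```python
# Hypothetical router: pick a local model per task, fall back to a default.
# The model identifiers are example Hugging Face repo names.
MODEL_FOR_TASK = {
    "chat": "mistralai/Mistral-7B-Instruct-v0.1",
    "code": "codellama/CodeLlama-7b-Instruct-hf",
    "default": "mistralai/Mistral-7B-Instruct-v0.1",
}

def pick_model(task: str) -> str:
    """Return the model name to load for a task."""
    return MODEL_FOR_TASK.get(task, MODEL_FOR_TASK["default"])
```

A self-hosted setup would then load whichever checkpoint `pick_model` returns, instead of sending everything to one general-purpose model.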

[–] cwagner@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

Prompt:

I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *

Mistral-7B-Instruct-v0.1’s completion (quoted verbatim):

  ntire articles* vs Headlines Descriptions vs Product titles *Bul

GPT-3.5-Turbo doesn’t support completion, as it’s a chat model, so I used an even worse one, text-davinci-003, which is far behind the state of the art.

text-davinci-003’s completion:

  • Bigger models are able to handle more complex and detailed tasks with ease

  • Bigger models are better suited for natural language understanding and text processing
  • Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
  • Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
  • Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
  • Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used
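The chat-vs-completion distinction above is just a difference in request shape: the legacy completions endpoint continues a raw string, while the chat endpoint takes role-tagged messages and so can’t be handed a half-finished sentence to continue. A sketch of the two payloads as plain dicts (mirroring the OpenAI API shapes of that era; no network call is made):

```python
prompt = "Examples of what bigger models do better than smaller ones: *"

# Legacy completions endpoint (/v1/completions): raw text in, text out,
# so the model literally continues the dangling bullet.
completion_request = {
    "model": "text-davinci-003",
    "prompt": prompt,
    "max_tokens": 128,
}

# Chat endpoint (/v1/chat/completions): a list of role-tagged messages.
# The prompt becomes a user turn, and the model answers it rather than
# continuing it mid-sentence.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 128,
}
```

That is why the commenter had to fall back to text-davinci-003 for a like-for-like completion comparison with Mistral.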

Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.

edit: gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but the writing is still terrible and doesn’t make much sense:

Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models