this post was submitted on 05 Jul 2024
105 points (100.0% liked)

TechTakes

1491 readers
43 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] fasterandworse@awful.systems 39 points 5 months ago (5 children)

Is it absurd that the maker of a tech product controls it by writing it a list of plain language guidelines? or am I out of touch?

[–] kgMadee2@mathstodon.xyz 30 points 5 months ago (1 children)

@fasterandworse @dgerard I mean, it is absurd. But it is how it works: an LLM is a black box from a programming perspective, and you cannot directly control what it will output.
So you resort to pre-weighting certain keywords in the hope that it will nudge the system far enough in your desired direction.
There is no separation between code (what the provider wants it to do) and data (user inputs to operate on) in this application 🥴

[–] corbin@awful.systems 7 points 5 months ago

That's the standard response from last decade. However, we now have a theory of soft prompting: start with a textual prompt, embed it, and then optimize the embedding with a round of fine-tuning. It would be obvious if OpenAI were using this technique, because we would only recover similar texts instead of verbatim texts when leaking the prompt (unless at zero temperature, perhaps.) This is a good example of how OpenAI's offerings are behind the state of the art.

[–] ebu@awful.systems 20 points 5 months ago* (last edited 5 months ago) (2 children)

simply ask the word generator machine to generate better words, smh

this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what "prompting" even is. you're not giving instructions to an agent, you are feeding a list of words to prefix to the output of a word predictor

in my personal experiments with offline models, using something like "below is a transcript of a chat log with XYZ" as a prompt instead of "You are XYZ" immediately gives much better results. not good results, but better

[–] fasterandworse@awful.systems 14 points 5 months ago

it's all so anti-precision

[–] o7___o7@awful.systems 10 points 5 months ago* (last edited 5 months ago)

simply ask the word generator machine to generate better words, smh

Butterfly man: "Is this recursive self-improvement"

[–] barsquid@lemmy.world 14 points 5 months ago

It is absurd. It's just throwing words at it and hoping whatever area of the vector database it starts generating words from makes sense in response.

[–] hairyvisionary@fosstodon.org 7 points 5 months ago (2 children)

@fasterandworse @dgerard I am pretty sure I have seen programming the computer in plain English used as a selling point for various products since the 1970s at least

the best part is that most of these products are ex-products

[–] hairyvisionary@fosstodon.org 7 points 5 months ago

@fasterandworse @dgerard I mean, it's like catnip for the people who control how the company's money is spent

For absurd, I think one would want the LLM's configuration language to be more like INTERCAL; but this may also be more explicit about how your instructions are merely suggestions to a black box full of weights and pulleys and with some randomness added to make it less predictable/repetitive

[–] brouhaha@mastodon.social 3 points 5 months ago

@hairyvisionary @fasterandworse @dgerard
That was explicitly a goal of COBOL, and (guessing here) probably Commercial Translator as well.

[–] V0ldek@awful.systems 3 points 5 months ago

"controls" is way too generous