[-] DR_Hero@programming.dev 8 points 1 month ago

There's a much more accurate stat... and it's disgusting

[-] DR_Hero@programming.dev 3 points 1 month ago

I think he was just immediately concerned for their safety. Like the post suggests, many thought desperate times were coming and any rando in a MAGA hat might retaliate.

[-] DR_Hero@programming.dev 4 points 1 month ago

AI generation has legitimately gotten that good. Models handle text very well now, and photorealism too.

[-] DR_Hero@programming.dev 3 points 1 month ago

At least the same company developed both in that case. As soon as a new open-source AI model was released, Elon just slapped it on wholesale and started charging for it.

[-] DR_Hero@programming.dev 5 points 2 months ago

It's a dream I've considered many times. It can be cheaper* than life on land.

[-] DR_Hero@programming.dev 5 points 3 months ago

I'm confused as to what your point is

[-] DR_Hero@programming.dev 11 points 4 months ago

Collective mass arbitration is my favorite counter to this tactic, and is dramatically more costly for the company than a class action lawsuit.

https://www.nytimes.com/2020/04/06/business/arbitration-overload.html

A lot of companies got spooked a few years back and walked back their arbitration agreements. I wonder what changed to make companies decide it's worth it again. Maybe the lack of discovery in arbitration makes it worthwhile even at the higher cost?

[-] DR_Hero@programming.dev 4 points 8 months ago

Excuse me but, the fuck is wrong with you?

[-] DR_Hero@programming.dev 19 points 1 year ago* (last edited 1 year ago)

The worst part is that it took them years after it became a known risk before they actually sent me a replacement machine.

Having to choose between the risk of heart failure and the risk of cancer sure was fun...

[-] DR_Hero@programming.dev 3 points 1 year ago

Now I'm upset this wasn't the original haha

[-] DR_Hero@programming.dev 8 points 1 year ago

I've definitely experienced this.

I've used ChatGPT to write cover letters based on my resume, among other tasks.

I used to give it data and tell ChatGPT to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.

Weeks later, excited about the tech, I repeat the process. I tell it to "do X with this data". It does fine.

In a completely separate chat, I tell it to "do Y with this data"... and instead it gives me X. I tell it to "do Z with this data", and it once again would really rather just do X with it.

For a while now, I've had to feed it far more context and more tailored prompts than I used to.
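
For anyone hitting the same drift, here's a minimal sketch of the "more context, tailored prompt" workaround using the OpenAI Python client instead of the chat UI. The model name, resume file, and prompt wording are placeholders, not my actual setup:

```python
# Rough sketch: pin a specific model and spell the task out in a system prompt
# instead of relying on a bare "do X with this data" instruction.
# Model name, prompt wording, and the resume file are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("resume.txt") as f:
    data = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # pinning a model helps avoid silent behaviour drift between sessions
    messages=[
        {"role": "system",
         "content": "You write concise, tailored cover letters from the resume provided."},
        {"role": "user",
         "content": f"Write a cover letter for a backend developer role using this resume:\n\n{data}"},
    ],
)

print(response.choices[0].message.content)
```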

[-] DR_Hero@programming.dev 5 points 1 year ago

CisHet here, also with a statistically improbable number of close trans friends.

Growing up, I ate eggs so often they said I would turn into one...

I think I'm safe for now...

