this post was submitted on 14 Feb 2024
211 points (99.5% liked)

Technology

top 17 comments
[–] GenderNeutralBro@lemmy.sdf.org 51 points 9 months ago (2 children)

Here's a wild idea: make them publish the exact criteria and formulae used to determine coverage. Their decisions should be verifiable and reproducible.

This isn't rocket science.
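For illustration, a minimal sketch of what "verifiable and reproducible" could look like if the criteria were published as explicit rules. The diagnosis codes, rule set, and function names below are invented for this example, not any insurer's actual system.

# Hypothetical example: a published, reproducible coverage rule.
# The criteria and field names are illustrative assumptions only.

PUBLISHED_CRITERIA = {
    # diagnosis code -> procedures the insurer has declared covered (made up)
    "M54.5": {"physical_therapy", "x_ray"},    # low back pain
    "J45.909": {"inhaler", "spirometry"},      # asthma, unspecified
}

def coverage_decision(diagnosis: str, procedure: str) -> bool:
    """Return True if the published criteria cover this procedure.

    Because the criteria are public and the function is deterministic,
    any patient or regulator can re-run a denial and check it.
    """
    return procedure in PUBLISHED_CRITERIA.get(diagnosis, set())

print(coverage_decision("M54.5", "physical_therapy"))  # True
print(coverage_decision("M54.5", "mri"))               # False, and independently verifiable

The point is reproducibility: with the rules published, every decision can be re-derived from the claim itself, with no black box in between.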

[–] SpaceNoodle@lemmy.world 24 points 9 months ago (2 children)
[–] SinningStromgald@lemmy.world 16 points 9 months ago

They will add someone whose job it is to click okay on every decision the AI makes. Therefore the AI isn't making the decision; the human who always clicks okay is.

[–] Reverendender@sh.itjust.works 9 points 9 months ago

I'm sure it was a stern warning.

[–] Tolstoshev@lemmy.world 14 points 9 months ago

AI will deny the care after being rubber-stamped by a doctor who graduated last in his class, for whom being a traitor for the insurance companies is the only job he can get.

[–] Jubei_K_08@lemmy.world 13 points 9 months ago

Oh, that's some serious finger wagging, sure to make them think twice.

[–] just_change_it@lemmy.world 9 points 9 months ago

Yeah, sure, ok. We pinky promise not to use AI to generate leads that are then printed out on paper and put in front of a doctor's assistant's autopen for signatures denying insurance or coverage.

There is absolutely ZERO way to practically enforce this. An AI team can act like a black box, ingesting data and outputting hard copies that cannot be traced back to them. There is no way this will not happen.

"We'll audit the company!" -> they'll send the data to an offshore shell company that doesn't follow the law, then the recommendations will be sent back.

Prove that legislation can stop this. Just try.

[–] pineapplepizza@lemm.ee 7 points 9 months ago (1 children)

I am not from the US, but it baffles me how someone can be cut off from health care in a supposedly first-world country.

[–] pearsaltchocolatebar 6 points 9 months ago

Because greed.

[–] badcommandorfilename@lemmy.world 3 points 9 months ago

Cruel AND unusual??

[–] tourist@lemmy.world 3 points 9 months ago (1 children)

Well, what were they using before?

[–] einat2346@lemmy.today 7 points 9 months ago (1 children)
[–] PipedLinkBot@feddit.rocks 1 points 9 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=tCJcrIpgrr0

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] CaptainPedantic@lemmy.world 3 points 9 months ago* (last edited 9 months ago) (1 children)

Who needs "AI" when the simple algorithm they already use works perfectly well?

while True:
    deny_coverage = True
[–] bartolomeo@suppo.fi 2 points 9 months ago

I hate that you are absolutely right.

Medical directors do not see any patient records or put their medical judgment to use, said former company employees familiar with the system. Instead, a computer does the work. A Cigna algorithm flags mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. Company doctors then sign off on the denials in batches, according to interviews with former employees who spoke on condition of anonymity.

“We literally click and submit,” one former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
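A rough sketch of the flag-and-batch-sign-off flow the article describes. The data structures, ailment names, and batch mechanics below are assumptions made for illustration, not Cigna's actual code.

# Rough sketch of the flagging and batch sign-off described above.
# Names and data structures are illustrative assumptions, not Cigna's system.

ACCEPTABLE = {
    # ailment -> tests/procedures the insurer deems acceptable (made up)
    "sinus infection": {"amoxicillin", "nasal_swab"},
    "sprained ankle": {"x_ray", "ibuprofen"},
}

def flag_mismatches(claims):
    """Flag every claim whose procedure isn't on the acceptable list for its diagnosis."""
    return [c for c in claims if c["procedure"] not in ACCEPTABLE.get(c["diagnosis"], set())]

def batch_sign_off(flagged, batch_size=50):
    """Yield denials in batches, mirroring the 'click and submit 50 at a time' step."""
    for i in range(0, len(flagged), batch_size):
        yield [{**c, "status": "denied"} for c in flagged[i:i + batch_size]]

claims = [
    {"id": 1, "diagnosis": "sprained ankle", "procedure": "mri"},
    {"id": 2, "diagnosis": "sprained ankle", "procedure": "x_ray"},
]
for batch in batch_sign_off(flag_mismatches(claims)):
    print(batch)  # the doctor's "review" is reduced to approving this whole batch at once

In this sketch no patient record is ever read: the only input is whether the diagnosis/procedure pair appears on a lookup table, which matches the article's description of how the denials are generated.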