This post was submitted on 14 Feb 2024
481 points (98.6% liked)

Technology


Last year, two Waymo robotaxis in Phoenix "made contact" with the same pickup truck that was in the midst of being towed, which prompted the Alphabet subsidiary to issue a recall on its vehicles' software. A "recall" in this case meant rolling out a software update after investigating the issue and determining its root cause.

In a blog post, Waymo has revealed that on December 11, 2023, one of its robotaxis collided with a backwards-facing pickup truck being towed ahead of it. The company says the truck was being towed improperly and was angled across a center turn lane and a traffic lane. Apparently, the tow truck didn't pull over after the incident, and another Waymo vehicle came into contact with the pickup truck a few minutes later. Waymo didn't elaborate on what it meant by saying that its robotaxis "made contact" with the pickup truck, but it did say that the incidents resulted in no injuries and only minor vehicle damage. The self-driving vehicles involved in the collisions weren't carrying any passengers.

After an investigation, Waymo found that its software had incorrectly predicted the future movements of the pickup truck due to "persistent orientation mismatch" between the towed vehicle and the one towing it. The company developed and validated a fix for its software to prevent similar incidents in the future and started deploying the update to its fleet on December 20.
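
To illustrate how a "persistent orientation mismatch" can throw off a trajectory prediction, here is a minimal, hypothetical sketch (not Waymo's actual code): if a tracker extrapolates an object's motion along its measured body heading, a pickup being towed backwards and angled across the lanes gets projected onto the wrong path, even though it is really just following the tow truck.

```python
import math

def predict_position(x, y, heading_rad, speed_mps, dt):
    """Naive extrapolation of a tracked object along its *reported* heading.

    If the reported body orientation doesn't match the true direction of
    travel (as with an improperly towed, backwards-facing pickup), this
    prediction drifts sideways into the wrong lane.
    """
    return (x + speed_mps * dt * math.cos(heading_rad),
            y + speed_mps * dt * math.sin(heading_rad))

# Hypothetical numbers: the pickup's body is angled 45 degrees across the
# lanes, but it is actually being pulled straight ahead by the tow truck.
body_heading = math.radians(45)   # what the perception stack "sees"
true_heading = math.radians(0)    # how the vehicle actually moves

print(predict_position(0, 0, body_heading, speed_mps=5, dt=2))  # veers off to the side
print(predict_position(0, 0, true_heading, speed_mps=5, dt=2))  # straight ahead
```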

top 50 comments
[–] noodlejetski@lemm.ee 114 points 9 months ago* (last edited 9 months ago) (3 children)

I love the corpospeak. Why say "crashed into" when you can use "made contact", which sounds futuristic and implies that your product belongs to an alien civilization?

[–] pastermil@sh.itjust.works 40 points 9 months ago (2 children)

By "made contact", it means that they "smashed".

[–] TWeaK@lemm.ee 22 points 9 months ago (1 children)

they mean that they "smashed".

So are we gonna have some baby robotaxi trucks driving around in a few months' time?

[–] ironhydroxide@sh.itjust.works 8 points 9 months ago

Now that's how you get a true generative AI.

You smash, you make "babies", the babies are slightly different and maybe better (probably worse).

[–] Kecessa@sh.itjust.works 14 points 9 months ago

Make contact with that like button!

[–] lengau@midwest.social 18 points 9 months ago

Next they're going to add passive voice to further confuse the issue. "A pickup truck was made contact with by two vehicles..."

[–] bstix@feddit.dk 70 points 9 months ago (6 children)

The company says the truck was being towed improperly

Shit happens on the road. It's still not a great idea to drive into it.

The company developed and validated a fix for its software to prevent similar incidents

So their plan is to fix one accident at a time…

[–] DoomBot5@lemmy.world 19 points 9 months ago (1 children)

Rules are written in blood. Once you figure out all the standard cases, you can only try to predict as many edge cases as you can think of. You can't make something foolproof, because there will always be a greater fool who comes along.

[–] bstix@feddit.dk 13 points 9 months ago (1 children)

Unexpected or not, it should do its best to stop or avoid the obstacle, not drive into it.

An autonomous vehicle shouldn't ever be able to actively drive forward into anything. It's basic collision detection that ought to brake the car here. If something is in the position the car wants to drive to, it simply shouldn't drive there. There's no reason to blame the obstacle for being towed incorrectly.
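
As a purely illustrative sketch of the kind of last-resort check being described here (hypothetical names, nothing to do with Waymo's actual stack): refuse to enter any space the sensors currently report as occupied, regardless of what the prediction layer expects the object to do.

```python
def should_brake(planned_path_cells, occupied_cells):
    """Emergency fallback: never drive into a cell that is occupied right now.

    planned_path_cells: grid cells the vehicle intends to pass through next
    occupied_cells: grid cells the sensors currently report as blocked
    """
    return any(cell in occupied_cells for cell in planned_path_cells)

# If the next few metres of the planned path overlap an obstacle, brake,
# even when the prediction layer insists the obstacle will move out of the way.
if should_brake(planned_path_cells={(3, 1), (4, 1)}, occupied_cells={(4, 1)}):
    print("apply brakes")
```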

[–] NotMyOldRedditName@lemmy.world 7 points 9 months ago* (last edited 9 months ago) (9 children)

In this case it thought the vehicle had a different trajectory due to how it was improperly set up.

The car probably thought it wasn't going to hit it until it was too late and the trajectory calculation proved incorrect.

Every vehicle on the road is a few moments away from crashing if we calculate that incorrectly. It doesn't matter if it knows it's there.
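
To put a number on "a few moments away": a simple constant-velocity time-to-collision estimate (a generic illustration, not any vendor's implementation) shows how little margin is left once a trajectory prediction turns out to be wrong.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Constant-velocity time-to-collision; None if the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return gap_m / closing_speed_mps

# Hypothetical numbers: a 20 m gap closing at 8 m/s leaves only 2.5 seconds
# to notice the prediction was wrong and brake.
print(time_to_collision(gap_m=20, closing_speed_mps=8))
```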

[–] Tetsuo@jlai.lu 13 points 9 months ago (2 children)

Honestly, I think only trial and error will let us get a proper autonomous car.

And I still think autonomous cars will save many more lives than they endanger once they become reliable.

But for now this is bound to happen...

To be clear, they are still responsible for these cars and the safety of others. They didn't test properly.

They should be trying every edge case they can think of.

A large screen on the side of a truck? What if a car is displayed on it? Would the car's sensors notice the difference?

A farmer dropped a hay bale on the road? It got flattened by rain? Does the car understand that this might not be safe to drive on or to brake on?

There are hundreds of unique situations that they should be trying before an autonomous car gets even close to a public road.

But even if you try everything there will be mistakes and fatalities.

[–] Chozo@kbin.social 12 points 9 months ago (3 children)

So their plan is to fix one accident at a time…

Well how else would you do it?

[–] bstix@feddit.dk 28 points 9 months ago (4 children)

You drive a car and can't quite figure out what is happening in front of you.

Do you:

  • A: Turn up the music and plow right through.
  • B: Slow down (potentially to a full stop) and assess the situation.
  • C: Slow down, close your eyes and continue driving slowly into the obstacle.
  • D: Sound the horn and flash the lights.

From the description offered in the article the car chose C, which is wrong.

[–] lengau@midwest.social 15 points 9 months ago (1 children)

Given the millions of global road deaths annually, I think B is probably the least popular answer.

[–] aniki@lemm.ee 12 points 9 months ago

Just like Tesla! And people wonder why they are a hated company.

[–] cestvrai@lemm.ee 60 points 9 months ago (1 children)

Hmm, so it’s only designed to handle expected scenarios?

That’s not how driving works… at all. 😐

[–] overzeetop@lemmy.world 51 points 9 months ago (5 children)

The description of an unexpected/(impossible) orientation for an on road obstacle works as an excuse, right up to the point where you realize that the software should, explicitly, not run into anything at all. That’s got to be, like, the first law of (robotic) vehicle piloting.

It was just lucky that it happened twice; otherwise, Alphabet likely would have shrugged it off as some unimportant, random event.

[–] dan1101@lemm.ee 20 points 9 months ago (10 children)

Billionaires get to alpha test their software on public roads and everyone is at risk.

[–] LesserAbe@lemmy.world 10 points 9 months ago

I didn't read it as them saying "therefore this isn't a problem"; it was an explanation for why it happened. Think about human explanations for accidents: "they pulled out in front of me", "they stopped abruptly". Those don't make it okay that an accident happened either.

[–] JCreazy@midwest.social 48 points 9 months ago (5 children)

I'm getting tired of implementing technology before it's finished and all the bugs are worked out. Driverless cars are still not ready for prime time. The same thing is currently happening with AI, where companies are utilizing it without having any idea what it can do.

[–] corsicanguppy@lemmy.ca 28 points 9 months ago (1 children)

tired of implementing technology before it's finished

That is every single programme you've ever used.

Software will be built, sold, used, maintained and finally obsoleted, and it will still not be 'complete'. It will have bugs, sometimes lots, sometimes huge, and those will not be fixed. Our biggest accomplishment as a society may be the case where we patched software on Mars, or in the Voyager probe still speeding away from Earth.

Self-driving cars, though, don't need to have perfectly 'complete' software; they just need to work better than humans. That's already been accomplished, long ago.

And with each fix applied to every one of them, it's a situation none of them should ever repeat. Can we say the same about humans? I can't even get my beautiful, stubborn wife to slow down, leave more space, and quit turning the steering wheel in that rope-climbing way like a farmer on a tractor does (because the airbag will take her hand off).

[–] dsemy@lemm.ee 9 points 9 months ago (3 children)

That is every single programme you've ever used.

No software is perfect, but anybody who uses a computer knows that some software is much less complete. This currently seems to be the case when it comes to autonomous driving tech.

And with each fix applied to every one of them, it's a situation they all shouldn't ever repeat.

First, there are many companies developing autonomous driving tech, and if there's one thing tech companies like to do, it's re-invent the wheel (ffs, Tesla did this literally). Second, have you ever used modern software? A bug fix guarantees nothing. Third, you completely ignore the opposite possibility: what if they push a serious bug in an update, which drives you off a cliff and kills you? It doesn't matter if they push a fix 2 hours later (and let's be honest, many of these cars will likely stop getting updates pretty fast anyway once this tech gets really popular; just look at the state of software updates in other industries).

[–] long_chicken_boat@sh.itjust.works 26 points 9 months ago

I'm against driverless cars, but I don't think this type of error can be detected in a lab environment. It's just impossible to test with every single car model or every real-world situation that it will encounter in actual usage.

An optimal solution would be to have a backup driver with every car that keeps an eye on the road in case of software failure. But, of course, this isn't profitable, so they'd rather put lives at risk.

[–] nooeh@lemmy.world 14 points 9 months ago (2 children)

How will they encounter these edge cases without real world testing?

[–] LesserAbe@lemmy.world 13 points 9 months ago (6 children)

You're right that there should be a minimum safety threshold before tech is deployed. Waymo has had pretty extensive testing (unlike, say, Tesla). As I understand it, their safety record is pretty good.

How many accidents have you had in your life? I've been responsible for a couple of rear-end collisions, and I collided with a guard rail (no one ever injured). Ideally we want incidents per mile driven to be lower for these driverless cars than when people drive. Waymos have driven a lot of miles (and millions more in a virtual environment), and supposedly their number is better than human driving, but the question is whether they've driven enough, and in enough varied situations, for that to really be an accurate stat.

[–] Aatube@kbin.social 36 points 9 months ago (11 children)

Why is an update called a recall?

[–] twack@lemmy.world 23 points 9 months ago (1 children)

Because Tesla was fixing significant safety issues without reporting them to the NHTSA in a way that would let it track the problems and their source. The two of them got into a pissing match, and the result is that now all OTAs are recalls. After this, the media realized that "recall" generates more views than "OTA", and here we are.

[–] Dlayknee@lemmy.world 6 points 9 months ago

I think it's slightly more nuanced: not all OTAs are recalls, and not all recalls are OTAs (for Tesla). Depending on the issue, the solution may be pushed via an OTA, in which case they "issue a recall" with a software update. They're actually going through this right now. For some other issues, though, it's a hardware problem that an OTA won't fix, so they issue a recall to repair the problem (e.g. when the wiring harness for their cameras was fraying).

This is 100% from the NHTSA shenanigans, though.

[–] psycho_driver@lemmy.world 29 points 9 months ago

aaaaand fuck this truck in particular.

[–] Chozo@kbin.social 28 points 9 months ago* (last edited 9 months ago) (8 children)

After an investigation, Waymo found that its software had incorrectly predicted the future movements of the pickup truck due to “persistent orientation mismatch” between the towed vehicle and the one towing it.

Having worked at Waymo for a year troubleshooting daily builds of the software, this sounds to me like they may be trying to test riskier, "human" behaviors. Normally, the cars won't accelerate at all if the lidar detects an object in front of it, no matter what it thinks the object is or what direction it's moving in. So the fact that this failsafe was overridden somehow makes me think they're trying to add more "What would a human driver do in this situation?" options to the car's decision-making process. I'm guessing somebody added something along the lines of "assume the object will have started moving by the time you're closer to that position" and forgot to set a backup safety mechanism for the event that the object doesn't start moving.

I'm pretty sure the dev team also has safety checklists that they go through before pushing out any build, to make sure that every failsafe is accounted for, so that's a pretty major fuckup to have slipped through the cracks (if my theory is even close to accurate). But luckily, a very easily-fixed fuckup. They're lucky this situation was just "comically stupid" instead of "harrowing tragedy".
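
Purely to illustrate this theory (hypothetical logic, not Waymo's code): the "backup safety mechanism" described above would amount to requiring the live sensor picture to agree with the prediction before the car proceeds, rather than trusting the prediction alone.

```python
def may_proceed(obstacle_still_detected, predicted_to_have_moved):
    """Proceed only when the prediction AND the live sensor picture agree.

    The theory above is that the second condition was effectively skipped:
    the car trusted "the object will have moved by then" without confirming
    that the lidar return had actually cleared.
    """
    return predicted_to_have_moved and not obstacle_still_detected

print(may_proceed(obstacle_still_detected=True, predicted_to_have_moved=True))
# False -> the failsafe keeps the car from accelerating
```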

[–] tonyn@lemmy.ml 28 points 9 months ago (2 children)

That pickup truck was asking for it I tell ya. He was looking at me sideways, he was.

[–] postmateDumbass@lemmy.world 12 points 9 months ago

It said RAM on the side!

[–] indomara@lemmy.world 15 points 9 months ago (2 children)

I still don't understand how these are allowed. One is not allowed to let a Tesla drive without being 100% in control and ready to take the wheel at all times, but these cars are allowed to drive around autonomously?

If I am driving my car and I hit a pedestrian, they have legal recourse against me. What happens when it's an AI or a company or a car?

[–] kava@lemmy.world 9 points 9 months ago (3 children)

You have legal recourse against the owner of the car, presumably the company that is profiting from the taxi service.

You see these all the time in San Francisco. I'd imagine the vast majority of the time, there are no issues. It's just going to be big headlines whenever some accident does happen.

Nobody seems to care about the nearly 50,000 people dying every year from human-caused car accidents.

[–] deafboy@lemmy.world 15 points 9 months ago (2 children)

"made contact" "towed improperly". What a pathetic excuse. Wasn't the entire point of self driving cars the ability to deal with unpredictable situations? The ones that happen all the time every day?

Considering the driving habits differ from town to town, the current approaches do not seem to be viable for the long term anyway.

[–] EdibleFriend@lemmy.world 15 points 9 months ago (1 children)

Do we have a "fuck you in particular" group yet?

[–] rsuri@lemmy.world 15 points 9 months ago (2 children)

In a blog post, Waymo has revealed that on December 11, 2023, one of its robotaxis collided with a backwards-facing pickup truck being towed ahead of it. The company says the truck was being towed improperly and was angled across a center turn lane and a traffic lane.

See? Waymo robotaxis don't just take you where you need to go, they also dispense swift road justice.

[–] samus12345@lemmy.world 13 points 9 months ago* (last edited 9 months ago)

They thought the truck was being driven by Sarah Connor.

[–] Sculptor9157@sh.itjust.works 9 points 9 months ago

Maybe it was a Cybertruck and the super-stealth design made its signature very small.

[–] Mango@lemmy.world 7 points 9 months ago

It was in an orientation our devs didn't account for and we don't want liability.

"Towed improperly"

[–] SloppyPuppy@lemmy.world 6 points 9 months ago

At least they are consistent.
