this post was submitted on 02 Nov 2024
136 points (93.6% liked)

Technology


The probe homes in on one of Tesla's most eyebrow-raising decisions about its driver assistance package: the insistence on relying exclusively on camera sensors instead of the LiDAR and radar used by its competitors, which CEO Elon Musk has long derided as a "crutch."

In 2022, the company went all-in on cameras, ditching ultrasonic sensors in its vehicles altogether. That decision could prove to be a major mistake as Tesla struggles to catch up with its competition, having promised robust self-driving capabilities to owners whose cars may lack the necessary sensor hardware.

[–] NotMyOldRedditName@lemmy.world -1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Except radar doesn't know where every object is either. Traditional automotive radar can't detect stopped objects while traveling at high speed.

You know, the things people keep having accidents with, with or without L2 semi-autonomous software.
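That radar limitation can be sketched in a few lines. This is an illustrative toy, not any real radar stack: classic automotive radar pipelines use Doppler filtering to drop returns whose speed over the ground is near zero, because those are indistinguishable from roadside clutter like signs and barriers, so a stopped car ahead gets dropped along with them. All names and numbers here are invented.

```python
def filter_returns(returns, ego_speed, tolerance=0.5):
    """Keep only returns that appear to be moving over the ground.

    Each return is (range_m, closing_speed_mps), where closing speed is
    what Doppler measures. A stationary object closes at exactly
    ego_speed, so its ground speed comes out near zero and it is
    filtered as clutter - just like a bridge or a road sign.
    """
    kept = []
    for rng, closing_speed in returns:
        ground_speed = ego_speed - closing_speed  # object's own speed
        if abs(ground_speed) > tolerance:         # moving target: keep
            kept.append((rng, closing_speed))
    return kept

returns = [
    (80.0, 30.0),  # stopped car 80 m ahead, closing at full ego speed
    (60.0, 10.0),  # lead car actually driving at ~20 m/s
]
# The stopped car is discarded as clutter; only the moving car survives.
print(filter_returns(returns, ego_speed=30.0))
```

This is why radar-only systems historically struggled with stationary vehicles at highway speed: the filter that removes the infrastructure also removes the stopped car.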

[–] pupbiru@aussie.zone 4 points 2 weeks ago (1 children)

the large majority of current self driving cars have radar, lidar, ultrasonic, and cameras. their detection sets overlap and complement each other, so together they can see a wide array of things that no single sensor can. focusing on one sensor and saying “it doesn’t see X” is a very poor argument when the others see those things just fine

[–] NotMyOldRedditName@lemmy.world -2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The point is that if vision is the only fail-safe, reliable sensor, then vision MUST work in order to have a truly autonomous vehicle.

You can't rely on radar without vision or lidar, because radar can't see stopped vehicles at high speed. This is a deadly serious problem.

You can't rely on lidar in rain/fog/snow/dust because the light bounces off of the particles and gives bad data, plus it can't tell you anything about what the object is or might intend to do, only that it's there.

Only vision can do all of those, it's just a matter of number of cameras, camera quality, and AI processing capabilities.

If vision can do all those things perfectly, maybe you don't need those other sensors after all?

if vision can't do it, then we won't have a truly autonomous future.

The other sensors are a crutch because the vision problem is so hard.

[–] pupbiru@aussie.zone 2 points 2 weeks ago (1 children)

i don’t think anyone is relying solely on radar - that’s the point. every sensor we have is fallible in some way (and so, btw, are our eyes - they can’t see through things, but radar can in some cases!)

even if you CAN rely solely on vision, why hamstring yourself? with a whole sensor package, the algorithms know when certain sensors are useless - that’s what the training is for… knock 1 out, the others see that it’s in X condition and works around it
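The "knock one sensor out" idea can be sketched as a toy fusion function (every name and number here is invented for illustration): fuse each sensor's distance estimate weighted by its confidence, and simply drop any sensor that flags itself as degraded, e.g. lidar in heavy fog.

```python
def fuse_distance(estimates):
    """Fuse per-sensor distance estimates into one value.

    estimates: list of (distance_m, confidence, healthy) tuples, one
    per sensor. Degraded sensors are excluded entirely; the rest are
    combined as a confidence-weighted average.
    """
    usable = [(d, c) for d, c, healthy in estimates if healthy and c > 0]
    if not usable:
        raise RuntimeError("no usable sensors")
    total = sum(c for _, c in usable)
    return sum(d * c for d, c in usable) / total

readings = [
    (50.0, 0.90, True),   # camera: working
    (51.0, 0.95, False),  # lidar: flagged degraded by fog, dropped
    (49.0, 0.60, True),   # radar: working, lower confidence
]
print(round(fuse_distance(readings), 2))  # fuses camera + radar only
```

A real stack fuses full object tracks rather than a single scalar, but the structure is the same: losing one sensor degrades the estimate instead of zeroing it out.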

if you only have a single sensor (like cameras) then if something happens you have 0 sensors… our eyes are MUCH better at vision than cameras - just the dynamic range alone, let alone the “resolution”… and that’s not even getting into, as others have said, the fact that our brains have had millions of years of evolution to process images.

the technology for vision only just isn’t there yet. that’s just straight up fact. can it be? perhaps, but “perhaps in the future” is not “we should do this now”. that’s called a beta test, and you’re playing with human lives not just UI bugs - and there’s no good reason… just add extra sensors

[–] NotMyOldRedditName@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

even if you CAN rely solely on vision, why hamstring yourself?

Their stance is that by using lidar, OEMs hamstring themselves on solving vision because they become so reliant on lidar. They spend less time and resources perfecting vision, so they never truly solve the problem. From their perspective, you've got it backwards.

and there’s no good reason… just add extra sensors

The more sensors you deal with, the more your attention gets divided. You aren't laser focused on one thing.

The extra sensors also cost a lot of money: you can't put Waymo's sensor package onto millions of consumer cars when the suite costs tens of thousands of dollars (and originally well over $100k).

By focusing on vision where the system can be put onto millions of cars, you can get massive amounts of extra training data and training data is going to be a huge part of solving this problem.

You might not like the reasons, or their stance, but it's not such an unreasonable position to take. Mobileye even cancelled its next-gen lidar project after seeing improvements in vision and radar. What happens when they keep seeing improvements in vision and radar isn't needed anymore?

I don't know if you've ever used AP, but all the crazy headlines you see about it are idiots in cars being idiots. As an L2 vision-only system it works very well. If people wanna blame Elon for convincing people to be idiots, sure, you can do that, but that has nothing to do with the actual technological approach they are taking. They're two different things.

[–] pupbiru@aussie.zone 2 points 2 weeks ago (1 children)

Their stance is that by using lidar OEMs are hamstringing themselves on solving vision because they are so reliant on it.

i get that… but… vision is kinda shit. why not use all the tools at your disposal? like literally “x ray vision” is something that we see as a super power because it’d be so useful - radar gives us that

vision is an approximation of things like lidar. can you get a depth map out of vision? sure, but why not just measure it directly, so you’re not introducing error from your model literally hallucinating

The more sensors you deal with, the more your attention gets divided. You aren't laser focused on one thing.

kinda but also the last 20% takes 80% of the effort… solving a lot of easy problems with more information will lead to a better short term outcome, and then when you’re getting good results then you can solve from 80% to 85% then 85 to 90 etc across your whole sensor suite

The extra sensors also cost a lot of money

are they though? you can buy hobbyist ultrasonic sensors for literally a couple of bucks, and lidar for a few hundred - sure, that’s not at the grade you’d use for cars, but at some point it’s an economies of scale problem. they’re not actually that expensive for a commodity “good enough” sensor package

You might not like the reasons, or their stance

correct - i understand them, but as an engineer it’s just wrong when you’re talking about one of the most dangerous activities that humanity collectively engages in (driving)

What happens when they keep seeing improvements in vision and now radar isn't needed?

i think this could be the sticking point - i don’t think any extra sensors are needed, just like i don’t think seatbelts or air bags etc are needed… but… they’re helpful and improve the safety of people in and around the car

all the crazy headlines you see about it are idiots in cars being idiots

agree, and i totally think driverless is the way to go - humans are far worse drivers than machines already are right now, even without any further improvement

… however, better isn’t perfect, and when it comes to safety simply ignoring tools because of some belief that eventually it’ll be fine is misguided at best, and negligent at worst

If people wanna blame Elon for convincing people to be idiots, sure, you can do that

absolutely that too! their systems aren’t “drives itself, no problemo”, but that’s how they’re marketing them

[–] NotMyOldRedditName@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

I don't really have much more to add, but just wanted to say I appreciated the conversation we had. This topic can often devolve and it's nice that it didn't.