
The Failure Of This Self-Driving Truck Company Tells You All You Need To Know About Self-Driving Vehicles


38 minutes ago, billvon said:

Again, I present you with a challenge.  You are writing the control program for an airport automated rail system.  There is a person on the track, and a mother on the trolley with a newborn who will drop (and severely injure) the newborn if maximum braking is applied.  If you don't apply maximum braking the person on the track will be struck and likely killed.  What do YOU write the program to do?

Are we assuming the system has sensors that can make all these determinations and that someone at some point decided that these determinations are useful? (i.e. "mother", "newborn"?)
Generally, ignoring these determinations, I will have 2 choices:
1. Program the system to NEVER apply more braking power than would be safe for the passengers
2. Always apply as much braking power as necessary to not hit the person on the track and ignore what that may do to the passengers

I WILL have to make that choice. But again, this would not be a neural-network algorithm if I made that choice.
In a neural network algorithm, where the program eventually learns for itself how much braking power to apply in each case, it would have to learn this based on final outcomes (by past experience) and yes: that would probably be based on who and how many die or get hurt.
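For illustration, here is a minimal sketch of those two hand-coded choices (all names and numbers are invented, not from any real system):

```python
# Illustrative sketch only -- the names and thresholds are invented.
MAX_SAFE_DECEL = 2.5   # m/s^2 -- assumed limit that won't injure passengers
MAX_DECEL = 9.0        # m/s^2 -- assumed physical limit of the brakes

def braking_command(required_decel: float, protect_passengers_first: bool) -> float:
    """Deceleration to apply when avoiding an obstacle demands required_decel."""
    if protect_passengers_first:
        # Choice 1: never exceed what is safe for the passengers.
        return min(required_decel, MAX_SAFE_DECEL)
    # Choice 2: whatever it takes to miss the obstacle, up to the brakes' limit.
    return min(required_decel, MAX_DECEL)
```

Either way, a person writes that branch in advance; a policy learned purely from outcomes would arrive at its behavior in a very different way.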

2 minutes ago, mbohu said:

Are we assuming the system has sensors that can make all these determinations and that someone at some point decided that these determinations are useful? (i.e. "mother", "newborn"?)

Not at all.  Neither the rail system nor the car has any such sensors.

Quote

Generally, ignoring these determinations, I will have 2 choices:
1. Program the system to NEVER apply more braking power than would be safe for the passengers
2. Always apply as much braking power as necessary to not hit the person on the track and ignore what that may do to the passengers

OK.  How will the rail system know there's a person there?

49 minutes ago, billvon said:

But they're not programming cars with the instruction "don't hit the person."  They are training neural networks to identify large objects in the road and do their best to avoid them.

That isn't different from "don't hit the person" except that you are reducing the information even more to "don't hit the large object"--this would almost certainly be too little information for a successful program. They are absolutely trying to take into account whether the object is likely a person or a thing--and to distinguish even further between different "things", such as a tumbleweed and a rock--with different priorities for avoiding each.
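To illustrate what I mean (a made-up priority table, purely for the idea -- not anything from a real system):

```python
# Invented priority table -- only to illustrate class-dependent avoidance.
AVOIDANCE_PRIORITY = {
    "person":     1.0,   # avoid at (almost) any cost
    "vehicle":    0.9,
    "rock":       0.7,   # hitting it damages the car
    "pothole":    0.4,   # depends on depth and speed
    "tumbleweed": 0.0,   # not worth braking hard for
}

def avoidance_effort(object_class: str) -> float:
    # Treat unknown objects cautiously rather than ignoring them.
    return AVOIDANCE_PRIORITY.get(object_class, 0.8)
```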

1 minute ago, billvon said:

Not at all.  Neither the rail system nor the car has any such sensors.

OK.  How will the rail system know there's a person there?

I have no idea. You made up this challenge. I assume that I will have the data available; otherwise, how do you want me to write a program for it?

Just now, mbohu said:

That isn't different from "don't hit the person" except that you are reducing the information even more to "don't hit the large object"--this would almost certainly be too little information for a successful program. 

?? A car that avoids hitting large objects (including people) in the roadway would have "too little information" to be successful?  At that point it would be much better than a human driver.

Quote

I have no idea. You made up this challenge. I assume that I will have the data available; otherwise, how do you want me to write a program for it?

Ding ding!  You got it!  The train has no sensors capable of that.  Neither do cars.  Neither one can decide to hit the person or not, because neither of them knows it's a person.  Neither one can decide to trade off the person against the newborn baby in the car or train, because neither knows anything about the newborn baby.  They can't make that decision because they don't even have the most basic inputs to be able to make that decision in the first place.

3 minutes ago, mbohu said:

They are absolutely trying to take into account whether the object is likely a person or a thing--and to distinguish even further between different "things", such as a tumbleweed and a rock--with different priorities for avoiding each.

In fact, wasn't the widely publicized fatal Tesla accident caused by exactly that? The system misinterpreted what exactly it was seeing and had determined that it wasn't the type of thing that needed to be avoided?

Just now, mbohu said:

In fact, wasn't the widely publicized fatal Tesla accident caused by exactly that? The system misinterpreted what exactly it was seeing and had determined that it wasn't the type of thing that needed to be avoided?

Yes.  It makes determinations between "nothing" (i.e. noise in road returns), "something that's not a risk" (i.e. a paper bag), and "something that's a risk" (i.e. a person, motorcycle, traffic barrier, etc.).  It screwed that up.
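A rough sketch of that three-way determination (the threshold and class lists here are invented for illustration):

```python
# Invented threshold and class lists -- a sketch of the three buckets described.
RISK_CLASSES = {"person", "motorcycle", "traffic_barrier", "vehicle"}
NO_RISK_CLASSES = {"paper_bag", "tumbleweed"}

def assess(detection_class, confidence):
    """Map a perception output to one of the three buckets."""
    if detection_class is None or confidence < 0.3:
        return "nothing"        # likely noise in the road returns
    if detection_class in NO_RISK_CLASSES:
        return "not_a_risk"     # safe to ignore
    return "risk"               # brake / avoid

print(assess("paper_bag", 0.9))   # not_a_risk
print(assess("vehicle", 0.8))     # risk
```

As described above, the fatal crash was a failure at exactly this step.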

1 minute ago, billvon said:

Ding ding!  You got it!  The train has no sensors capable of that.  Neither do cars.

That certainly isn't true. As I said above, the car needs to know not to brake full force for a tumbleweed; it needs to determine if a pothole is too deep to just speed over.
And as I said, I am pretty sure that the fatal Tesla accident was caused by the program interpreting an object it should have avoided as one that was irrelevant.

1 minute ago, billvon said:

Yes.  It makes determinations between "nothing" (i.e. noise in road returns), "something that's not a risk" (i.e. a paper bag), and "something that's a risk" (i.e. a person, motorcycle, traffic barrier, etc.).  It screwed that up.

And again, it is then possible to consciously decide to go no further with any more detailed determinations. But that is a conscious choice as well--and I can tell you, the people I talked to who are in related fields are not indicating that they are stopping with such simple determinations. They do want to figure out if the thing is a person, a vehicle, an inanimate object, etc.

Just now, mbohu said:

And again, it is then possible to consciously decide to go no further with any more detailed determinations. But that is a conscious choice as well--and I can tell you, the people I talked to who are in related fields are not indicating that they are stopping with such simple determinations. They do want to figure out if the thing is a person, a vehicle, an inanimate object, etc.

I am in that field, and we are working like crazy to be able to uniquely identify people (and other things).  So far it's not possible to do it reliably on a test course, much less in the field.  So autonomous cars are NOT making that determination, and they will likely never use it for determination of avoidance maneuvers.  (What they WILL do is identify pedestrians who are near a crosswalk to choose a target speed.)
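For illustration, a sketch of that last point (the numbers are invented): pedestrian detection feeds speed selection, not avoidance decisions.

```python
# Sketch with invented numbers: pedestrian detection selects a target speed;
# it is not used to decide avoidance maneuvers.
CRUISE_SPEED = 13.0       # m/s (~30 mph)
CAUTION_SPEED = 7.0       # m/s
CROSSWALK_RADIUS = 10.0   # meters

def target_speed(pedestrian_distances_to_crosswalk):
    """Slow down whenever any detected pedestrian is near the crosswalk."""
    if any(d < CROSSWALK_RADIUS for d in pedestrian_distances_to_crosswalk):
        return CAUTION_SPEED
    return CRUISE_SPEED

print(target_speed([25.0, 8.0]))   # 7.0 -- someone is near the crosswalk
```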

1 minute ago, billvon said:

I am in that field

Oh, I didn't know that. Very cool.
So that sounds like the decision algorithm is then just based on very simple fixed parameters like "don't hit anything that is considered a threat," and the neural network / learning algorithm is only used in the perception and pattern-recognition phase, to determine what is a threat. That is different from what I was told and had discussed with people who were involved in other AI efforts.

I would still think that in the end a more powerful system would be more sophisticated and be trained based on outcomes, not such a simple instruction, so it would base its decisions on learned experience from millions of prior outcomes. This is most definitely done in other fields of AI.
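What I'm describing is essentially reinforcement learning. A toy sketch of the idea (the "simulator" and scoring here are pure placeholders, not a real driving stack):

```python
# Toy outcome-based training (random search) -- every piece is a placeholder.
# The braking behavior is learned from episode outcomes, never written as a rule.
import random

def simulate_episode(brake_gain):
    """Stand-in simulator: score one episode for a given policy parameter.
    A real system would roll the car through simulation and score the outcome
    (collisions, injuries, comfort). Here the best gain is 0.7 by construction."""
    return -(brake_gain - 0.7) ** 2 + random.gauss(0.0, 0.01)

def train(episodes=100_000):
    gain, best = 0.0, simulate_episode(0.0)
    for _ in range(episodes):
        candidate = gain + random.gauss(0.0, 0.1)   # perturb current policy
        score = simulate_episode(candidate)
        if score > best:                            # the outcome decides
            gain, best = candidate, score
    return gain

print(train())   # ends up near 0.7 -- learned from outcomes, never hand-coded
```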

20 minutes ago, billvon said:

?? A car that avoids hitting large objects (including people) in the roadway would have "too little information" to be successful?  At that point it would be much better than a human driver.

But aren't there clearly cases where hitting something is preferable to the alternatives? 

Along the lines of: better to fly into the tree than execute an extreme low turn and hit the ground at full speed?

22 minutes ago, mbohu said:

In fact, wasn't the widely publicized fatal Tesla accident caused by exactly that? The system misinterpreted what exactly it was seeing and had determined that it wasn't the type of thing that needed to be avoided?

Which accident are you talking about?

a) https://arstechnica.com/cars/2020/02/i-was-just-shaking-new-documents-reveal-details-of-fatal-tesla-crash/

b) https://www.bbc.com/news/world-us-canada-43604440

c) https://www.bbc.com/news/technology-48308852

11 minutes ago, mbohu said:

I think it may have been the original 2016 one that is referenced in link a) (but not the one the article is about)

Hell, I didn't realize that two different Teslas have crashed into a tractor-trailer broadside. I only recalled the 2016 incident. This article has a couple of image captures from the car's camera a moment prior to impact in the 2019 crash:

https://www.autoblog.com/2020/03/19/ntsb-investigation-tesla-autopilot-florida-fatal-crash/

11 hours ago, mbohu said:

But aren't there clearly cases where hitting something is preferable to the alternatives? 


Like what?  The car is going to do its best to avoid objects that hit the threshold for "things to avoid."  It may not always succeed, of course.

10 hours ago, billvon said:

Like what?  The car is going to do its best to avoid objects that hit the threshold for "things to avoid."  It may not always succeed, of course.

Well, I gave an example from skydiving already. But I guess the easiest one I would come up with for a car would be a mountain road:
[photo: winding two-lane mountain road with a rock wall on one side and a steep drop on the other]

Say, you drive around the curve and find a truck stalled straight across the 2 lanes (=big object to avoid); on the right side is the mountain wall (bigger object to avoid), and the left side is completely free.
With the rules we have in place right now ("avoid objects") we'll have some fun on the way down.

Now, if you say the response is ALWAYS just to brake as hard as possible, I could not imagine that to be true, because there would be many easy (and more common) scenarios where veering to the left or right would produce much better outcomes.
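To make that concrete, here's a toy sketch (all names invented) of why "avoid large objects" alone picks the worst option in this scenario: the only obstacle-free direction is the cliff side.

```python
# Toy sketch (everything invented) of the mountain-road scenario.
options = {
    "straight": {"obstacle": "stalled truck", "drivable": True},
    "right":    {"obstacle": "mountain wall", "drivable": True},
    "left":     {"obstacle": None,            "drivable": False},  # the drop
}

def naive_choice():
    # Only checks for obstacles, so it happily leaves the road.
    return next(d for d, o in options.items() if o["obstacle"] is None)

def safer_choice():
    # Also needs the input "is that free space actually road?"
    clear = [d for d, o in options.items()
             if o["obstacle"] is None and o["drivable"]]
    return clear[0] if clear else "brake_hard_in_lane"

print(naive_choice())   # left -- over the edge
print(safer_choice())   # brake_hard_in_lane
```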
30 minutes ago, mbohu said:

Well, I gave an example from skydiving already. But I guess the easiest one I would come up with for a car would be a mountain road:
[photo: winding two-lane mountain road with a rock wall on one side and a steep drop on the other]

Say, you drive around the curve and find a truck stalled straight across the 2 lanes (=big object to avoid); on the right side is the mountain wall (bigger object to avoid), and the left side is completely free.


When all vehicles are self-driving, your car will know that the truck is there before it gets to the curve.


Edited by headoverheels

8 minutes ago, headoverheels said:

When all vehicles are self-driving, your car will know that the truck is there before it gets to the curve.


And it will also know about the landslide that is blocking the road, or the elk that are standing there?

2 hours ago, mbohu said:

Well, I gave an example from skydiving already. But I guess the easiest one I would come up with for a car would be a mountain road:
[photo: winding two-lane mountain road with a rock wall on one side and a steep drop on the other]

Say, you drive around the curve and find a truck stalled straight across the 2 lanes (=big object to avoid); on the right side is the mountain wall (bigger object to avoid), and the left side is completely free.
With the rules we have in place right now ("avoid objects") we'll have some fun on the way down.

Now, if you say the response is ALWAYS just to brake as hard as possible, I could not imagine that to be true, because there would be many easy (and more common) scenarios where veering to the left or right would produce much better outcomes.

That looks like the "Million Dollar Highway" in Colorado, US 550. Not a place to test your fancy cruise control. I drove it once southbound to Durango. I had no idea what I was getting into when I started down it in a semi. The southbound side is the cliff side. Best drive ever.

5 minutes ago, gowlerk said:

That looks like the "Million Dollar Highway" in Colorado, US 550. Not a place to test your fancy cruise control. I drove it once southbound to Durango. I had no idea what I was getting into when I started down it in a semi. The southbound side is the cliff side. Best drive ever.

I think we have a winner. Same photo here: 

http://www.tworvgypsies.us/!USA-2012-trip-5/46g-red_mtn.html

And here: 

https://en.wikipedia.org/wiki/U.S._Route_550

Edited by ryoder

2 hours ago, ryoder said:

And it will also know about the landslide that is blocking the road, or the elk that are standing there?

Sometimes, if other vehicles tell it so.  To further comment on mbohu's scenario: the vehicle was going too fast around a blind curve if simple braking wouldn't have been an adequate reaction. That's something more likely with human drivers.

Edited by headoverheels

12 hours ago, mbohu said:

Now, if you say the response is ALWAYS just to brake as hard as possible, I could not imagine that to be true, because there would be many easy (and more common) scenarios where veering to the left or right would produce much better outcomes.

It isn't a very good example. The rule could be to stay in your lane (or lanes) and brake hard.

The main rule that autonomous cars will want to follow is "don't move faster than you can see," which means cars would slow down automatically for blind curves, simply because they can't see around the curve. 

Combine those rules (don't drive too fast, brake hard when surprises come up, and follow the rules of the road) and you cover enough cases that the results will exceed human driver safety performance. 
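The "don't move faster than you can see" rule is just stopping distance vs. sight distance. A rough sketch, with assumed deceleration and latency numbers:

```python
# Sketch of the sight-distance rule -- the deceleration and reaction-time
# numbers are assumptions, not from any real vehicle.
import math

MAX_DECEL = 7.0       # m/s^2, hard braking on dry pavement (assumed)
REACTION_TIME = 0.5   # s, sensing + actuation latency (assumed)

def max_speed_for_sight_distance(d):
    """Largest v such that reaction distance plus braking distance,
    v*t + v^2 / (2a), still fits inside the visible road d."""
    a, t = MAX_DECEL, REACTION_TIME
    # Positive root of v^2 + 2*a*t*v - 2*a*d = 0.
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

print(max_speed_for_sight_distance(50.0))   # ~23 m/s for a 50 m sight line
```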

A better what-if scenario is the Uber fatality in AZ a few years ago. There you had a situation where a person decided to jaywalk right in front of an oncoming car, something against the law and not recommended. What if the autonomous car had braked hard as soon as the person entered the roadway (there was an extra lane to the left)? Would a crash have been avoided? What about braking hard when the person entered the vehicle's travel lane? That probably would not have avoided a crash, but it may have changed it to non-fatal. What about swerving around behind the person as they entered the lane, as mbohu is advocating? It would have avoided a crash, sure.

But I submit that swerving to avoid is not required for autonomous cars to be allowed on roads. It would be nice to have, and a "simple swerve" is a lot different from the complex trolley-problem decision tree that mbohu is saying is required.

(As I type that last sentence I get a bit of deja vu. I suspect I have typed a similar position a few weeks or months ago. <shrug> Rehash. A very SC thing to do.)

11 minutes ago, SethInMI said:

The main rule that autonomous cars will want to follow is "don't move faster than you can see," which means cars would slow down automatically for blind curves, simply because they can't see around the curve. 

Eventually they will have support from sensors and signals built into the roadway. All this talk of "neural networks" to me means that each unit will be part of a network and will supply data into it to be shared. I know that is not what the engineers here are speaking of. Humans drive faster than the distance they can see on a fairly regular basis, with predictable results on occasion.

Edited by gowlerk

27 minutes ago, gowlerk said:

Eventually they will have support from sensors and signals built into the roadway. 

Bwahahaha! I can show you plenty of places where I ride in the mountains that don't even have cell service yet, let alone smart roadways. :tongue:

Edited by ryoder

