
The Failure Of This Self-Driving Truck Company Tells You All You Need To Know About Self-Driving Vehicles


2 minutes ago, ryoder said:

That is a bogus scenario that I have heard too many times.

That's probably true of this specific scenario, but there are less contrived ones, and the program will be able to consider things like the number of people involved, whether they are in the car or not, etc. (or the programmers will purposefully have to decide not to consider these things). I've certainly had to make the veer-off-the-road-or-crash-into-what's-ahead decision at least once in my driving career.

More importantly, decisions about success parameters are already affecting algorithms being used in the field. For example, software designed to advise judges on sentencing is programmed to calculate rates of recidivism, and it considers how close someone lives to other felons or ex-prisoners. This (mostly unintentionally) gives entire neighborhoods longer prison sentences, which then reinforces that people living in these areas are more likely to go to prison, increasing their sentences... creating a feedback loop with unintended consequences--not because the algorithm gives incorrect results (or has some sort of evil intention) but simply because the success parameters have been incorrectly or too narrowly defined.
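To make that feedback loop concrete, here is a minimal toy simulation--entirely made-up numbers and variable names, not the actual sentencing software:

```python
# Toy simulation of the sentencing feedback loop described above.
# All numbers and names are hypothetical.

def risk_score(prior_offenses: int, ex_prisoners_nearby: int) -> float:
    # Narrow "success parameter": predict recidivism from two inputs,
    # one of which is where the defendant lives.
    return 0.1 * prior_offenses + 0.05 * ex_prisoners_nearby

ex_prisoners_nearby = 10  # starting count in one neighborhood
for year in range(5):
    score = risk_score(prior_offenses=1, ex_prisoners_nearby=ex_prisoners_nearby)
    harsher = score > 0.5  # threshold for longer sentences
    # Longer sentences put more residents in prison, so the same
    # neighborhood scores even higher in later years:
    ex_prisoners_nearby += 5 if harsher else 1
    print(year, round(score, 2), ex_prisoners_nearby)
```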

I'm just saying that despite my enthusiasm for self-learning AI and the potential that I think it holds (not just for practical use but also for helping us humans to think in new and different ways), I am also aware of its potential for some big mess-ups.

Still: if you play chess (or are much smarter than me, and are even semi-decent at Go), check out Alpha Zero. It's fascinating.

2 hours ago, mbohu said:

The interesting thing here is that the problem would have been in programming a too-narrow set of goals: "eliminate all threats"--that's why this may be the biggest challenge in terms of self-driving cars: how do you define a successful solution, especially when only less-than-ideal choices are available? (Veer into the old woman on the left, the mother and child on the right, or smash into the wall ahead, killing all passengers?)

Never happens.  People don't make those decisions; neither will machines.  It's like asking whether autopilots will choose to crash into another aircraft rather than perform an airframe-damaging turn to avoid a TCAS conflict.  The issue simply doesn't come up.

35 minutes ago, billvon said:

Never happens.  People don't make those decisions; neither will machines.  It's like asking whether autopilots will choose to crash into another aircraft rather than perform an airframe-damaging turn to avoid a TCAS conflict.  The issue simply doesn't come up.

It has. These are real examples from real programmers in the field (even though my specific example was a bit contrived, as ryoder pointed out). In a neural network AI, the question doesn't have to be answered directly, as it would in an expert system, but it will come up in an abstract way in the definition of "success" or in the parameters that guide the weighting system (e.g., success = as few potential deaths as possible; or: x likely serious injuries = 1 death; or: give more weight to the passengers of the car than to people outside, etc.).
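Purely as an illustration of what such a weighting might look like (my own made-up numbers, not anything from a real vendor):

```python
# Illustrative only: an outcome-weighting function of the kind described
# above, NOT a real autonomous-vehicle cost model.

DEATH_WEIGHT = 1.0
INJURY_WEIGHT = 0.1    # assumed exchange rate: 10 serious injuries = 1 death
PASSENGER_BIAS = 1.5   # weight passengers over people outside (a policy choice)

def maneuver_cost(deaths_out, injuries_out, deaths_in, injuries_in):
    outside = DEATH_WEIGHT * deaths_out + INJURY_WEIGHT * injuries_out
    inside = DEATH_WEIGHT * deaths_in + INJURY_WEIGHT * injuries_in
    return outside + PASSENGER_BIAS * inside

# A planner would pick the maneuver with the lowest expected cost:
options = {
    "veer_left": maneuver_cost(1, 0, 0, 0),
    "veer_right": maneuver_cost(2, 0, 0, 0),
    "brake_straight": maneuver_cost(0, 0, 0, 3),
}
print(min(options, key=options.get))  # brake_straight (0.45 vs 1.0 and 2.0)
```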

2 minutes ago, mbohu said:

People don't make those decisions

I think they do: when you see a YouTube video of a plane landing on a highway, the pilot made a decision to endanger uninvolved drivers in order to save the passengers (or himself, if he is the only one aboard). He may THINK that he can land without seriously injuring anyone on the road, but he does not know it; he weighted his risk assessment. He may not even have done it consciously--but for AI that has to be done more explicitly. (Sully must have gone through a similar decision when he landed in the Hudson--everything worked out, but had he hit a boat, or instead made a sudden move avoiding the boat and endangering the airplane, the effects of his decision would have been more visible.)

14 minutes ago, mbohu said:

or: give more weight to the passengers of the car than to people outside, etc.)

Actually, these kinds of decisions are already made by car designers--it doesn't require AI or software. When they build heavier and heavier SUVs and tout them as safer than other cars, they are weighting safety toward the passengers inside the car versus other users of the road outside. Increased mass always increases the energy in a crash and therefore cannot make the crash as a whole safer; it just shifts the damage away from the passengers and onto the other participants in the crash. Not to mention roo bars (which certainly aren't meant for kangaroos if you're not driving in Australia).
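For the record, the energy claim is just basic physics; a quick check with made-up masses:

```python
# Kinetic energy E = m * v^2 / 2: at the same speed, a heavier vehicle
# brings strictly more energy into a crash. Masses are illustrative.
def kinetic_energy_joules(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms ** 2

v = 13.9  # about 50 km/h, in m/s
print(kinetic_energy_joules(1200, v))  # compact car: ~116,000 J
print(kinetic_energy_joules(2500, v))  # heavy SUV:   ~242,000 J
```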


On 3/22/2020 at 11:18 AM, ryoder said:

There has been a shortage of truck drivers for years:

https://www.bloomberg.com/news/articles/2019-07-24/u-s-truck-driver-shortage-is-on-course-to-double-in-a-decade

Or at least there was; hard to say at the moment.

I would suggest an intermediate step would be to use autonomous mode on the major highways (where it works best). Then, when the truck exits the highway, have it pull over and take on a human driver for the local part of the trip.

A perfect world where trucks NEVER enter the passing lane??? Sounds lovely.  


On 3/24/2020 at 9:07 PM, mbohu said:

It has. These are real examples from real programmers in the field (even though my specific example was a bit contrived, as ryoder pointed out). In a neural network AI, the question doesn't have to be answered directly, as it would in an expert system, but it will come up in an abstract way in the definition of "success" or in the parameters that guide the weighting system (e.g., success = as few potential deaths as possible; or: x likely serious injuries = 1 death; or: give more weight to the passengers of the car than to people outside, etc.).

I guarantee you no designer out there is setting "number of deaths" as a metric.  Again, it's a contrived problem that doesn't happen in real life - whether a human or an AI is driving.

48 minutes ago, billvon said:

I guarantee you no designer out there is setting "number of deaths" as a metric.  Again, it's a contrived problem that doesn't happen in real life - whether a human or an AI is driving.

Why would they NOT use that metric?

It's logic, and algorithms, right?

Serious question.

Just now, turtlespeed said:

It seems prudent to me that it should be a metric - maybe not primary.

 

That's fine that you want it to be a metric; start writing code that takes that into account. 

That doesn't change the fact that it is not a metric.

17 minutes ago, turtlespeed said:

Why so snarky?

 

Sorry, not trying to be snarky.

The conversation to this point:

Me: No designer is using number of deaths as a metric.

You: Why not?

Me: (explanation)

You: Well, they SHOULD be using that metric.

Me: If you think that, then you should work to make that happen.

I mean, I could say "tough titties" or something, but that would seem to be even more snarky.

8 minutes ago, billvon said:

Sorry, not trying to be snarky.

The conversation to this point:

Me: No designer is using number of deaths as a metric.

You: Why not?

Me: (explanation)

You: Well, they SHOULD be using that metric.

Me: If you think that, then you should work to make that happen.

I mean, I could say "tough titties" or something, but that would seem to be even more snarky.

I'm ignorant of how this works - I assume you are not.

I was looking for the "why" for why it wouldn't be used.

Logically to me it seems like an important consideration. 

6 hours ago, turtlespeed said:

I'm ignorant of how this works - I assume you are not.

I was looking for the "why" for why it wouldn't be used.

Logically to me it seems like an important consideration. 

It isn't a useful metric, for one because there are not going to be that many crashes where the autonomous car is at fault, let alone crashes that involve a fatality. EVENTUALLY, with enough autonomous miles, there may be some data indicating that, say, per mile Waymo has fewer fatalities than Cruise, but both will be well below a human driver, and fatalities per mile will naturally track crashes in general, so the company that avoids crashing will avoid fatalities.
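Rough arithmetic with made-up fleet numbers shows why the signal is so sparse:

```python
# Illustrative arithmetic (hypothetical fleet mileage): why "fatalities
# per mile" takes enormous exposure before it becomes a usable metric.
human_fatalities_per_100m_miles = 1.2   # roughly the oft-cited US figure
fleet_miles = 20_000_000                # hypothetical autonomous fleet total

expected_fatal_events = human_fatalities_per_100m_miles * fleet_miles / 100_000_000
print(expected_fatal_events)  # 0.24 -- less than one event even at human rates
```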

 

8 hours ago, billvon said:

Because the primary metric they use is not crashing.  

 

9 hours ago, billvon said:

it's a contrived problem that doesn't happen in real life - whether a human or an AI is driving.

Here is a good article:

https://link.springer.com/article/10.1007/s10677-016-9745-2

It shows that AI designers, as well as philosophers, are very much thinking about this, and that yes: they HAVE to program into the algorithm what decision metrics to use in the case of a crash:

"For these reasons, automated vehicles need to be programmed for how to respond to situations where a collision is unavoidable; they need, as we might put it, to be programmed for how to crash."

On the other hand, these metrics don't deal with certainties (of death, etc.) but with probabilities, and the authors do point out that the problem is therefore not quite analogous to the well-known (and often loosely quoted) "trolley problem"--so maybe that goes to your point, billvon. Nevertheless, they are absolutely thinking about this, and are using a number of metrics to guide the decision-making process in such a case.

Another good quote from the article:
"...we need to engage in moral reasoning about risks and risk-management. We also need to engage in moral reasoning about decisions under uncertainty."

31 minutes ago, SethInMI said:

It isn't a useful metric, for one because there are not going to be that many crashes where the autonomous car is at fault,

I don't think that is relevant. It is always the trickiest part of a programming job to program for the cases that are VERY unlikely but not impossible. I can often finish the biggest part of the software--everything that USUALLY should happen--and then spend the same or even MUCH more time catching the weird little exceptions that ALMOST never happen but really mess things up when they do. You have to program for these cases; there is no way around it. (And it's not about the autonomous car necessarily being at fault; it's about situations where the autonomous car still has some options for decisions, but where these options are limited by what happened before--no matter who (or what) is at fault.)

3 minutes ago, mbohu said:

Here is a good article:

Yep.  I have a friend who is also a philosophy professor who goes on about this stuff all the time.  It's an interesting thing to talk about, like the trolley problem.

But here's the thing.  Philosophers talk about the trolley problem all the time - but it never happens.  It's a thought experiment that is a great way to get grad students talking about philosophy.  Like Schrodinger's Cat, it makes you think about some pretty basic issues.  Real world applications?  Not so much.  No one is putting cats in boxes to be killed.  No one is deciding to push the fat man off the bridge to derail the trolley.  And no autonomous vehicle is deciding to kill the old man in the walker rather than risk the people in the car.

1 hour ago, billvon said:

Yep.  I have a friend who is also a philosophy professor who goes on about this stuff all the time.  It's an interesting thing to talk about, like the trolley problem.

But here's the thing.  Philosophers talk about the trolley problem all the time - but it never happens.  It's a thought experiment that is a great way to get grad students talking about philosophy.  Like Schrodinger's Cat, it makes you think about some pretty basic issues.  Real world applications?  Not so much.  No one is putting cats in boxes to be killed.  No one is deciding to push the fat man off the bridge to derail the trolley.  And no autonomous vehicle is deciding to kill the old man in the walker rather than risk the people in the car.

I completely agree with you on the trolley problem (actually I mentioned that in another thread.) I think it is contrived and it pretends to deal with certainties, when in reality there are none. A million things can happen between the bridge and the time the trolley hits the people, and it is impossible to know that the fat man will stop the train but your body won't. I think its best use is to help people get to the realization that morality really lives in a completely different place than utilitarian decisions based on cost-benefit equations.

BUT: The situation for a programmer is a bit different: If I write ANY type of program I have to account for the failure conditions and program into the system what to do in those conditions. There is only ONE way to avoid that, and that is to consciously not program ANY specific parameters for these conditions and let the program fail in whatever the default way is, but this is like putting my hands in front of my face to cover my eyes, or sticking my head in the sand. It is a conscious decision, like any other programming decision would be...and it's probably the worst. 

If you are on final approach below 100 ft and another canopy is flying straight at you (which for some reason you didn't notice before), you can leave your decision up to your in-the-moment reaction and intuition (although that probably is determined by previous training and experience as well--but most likely you didn't consider the situation consciously beforehand). But if you program a computer game for skydivers and this is a possible scenario in the game, you will definitely have programmed the decision parameters for this situation into the game. You HAVE to. There WILL be parameters. If "potential deaths" is NOT one of the parameters you use, then that is a conscious decision you made that will have certain consequences, just as much as if you decided to use this as one of the parameters. I'm not saying I know which option is better, just that these questions are real ones that the programmers face.
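For the game example, even a minimal sketch makes the point (hypothetical names and weights, invented for illustration):

```python
# Hypothetical sketch for the skydiving-game scenario above: the
# resolver needs SOME parameter set, and leaving "potential deaths"
# out of it (weight 0.0) is itself a conscious design decision.

WEIGHTS = {
    "collision_speed": 0.6,
    "altitude_margin": 0.3,
    "potential_deaths": 0.0,  # the "decide not to consider it" choice
}

def outcome_severity(collision_speed, altitude_margin, potential_deaths):
    return (WEIGHTS["collision_speed"] * collision_speed
            + WEIGHTS["altitude_margin"] * (1.0 - altitude_margin)
            + WEIGHTS["potential_deaths"] * potential_deaths)

# Head-on canopy conflict on final below 100 ft:
print(outcome_severity(collision_speed=0.9, altitude_margin=0.1, potential_deaths=2))
```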

7 minutes ago, mbohu said:

BUT: The situation for a programmer is a bit different: If I write ANY type of program I have to account for the failure conditions and program into the system what to do in those conditions. There is only ONE way to avoid that, and that is to consciously not program ANY specific parameters for these conditions and let the program fail in whatever the default way is, but this is like putting my hands in front of my face to cover my eyes, or sticking my head in the sand. It is a conscious decision, like any other programming decision would be...and it's probably the worst. 

 

OK let's take a simpler case, then.

You are writing the control program for an airport automated rail system.  There is a person on the track, and a mother on the trolley with a newborn who will drop (and severely injure) the newborn if maximum braking is applied.  If you don't apply maximum braking the person on the track will be struck and likely killed.  What do you write the program to do?

(similar to the trolley problem - but set in the real world of automated rail systems.)

Quote

But if you program a computer game for skydivers and this is a possible scenario in the game, you will definitely have programmed the decision parameters for this situation into the game. You HAVE to. There WILL be parameters. If "potential deaths" is NOT one of the parameters you use, then that is a conscious decision you made that will have certain consequences, just as much as if you decided to use this as one of the parameters. 

?? Right.  Potential death is one possibility that the person will choose - not the program.  The program works well if it accurately simulates the results of your actions.

In that program, people will react differently, and I can pretty much guarantee you that no one is comparing their earning potential to the earning potential of the person they are about to hit, or how many dependents they have.  They are just trying to not hit the other person.  Autonomous vehicles will do the same thing.

 

20 hours ago, billvon said:

They are just trying to not hit the other person.  Autonomous vehicles will do the same thing.

Yes, you could program the car with the instruction "don't hit the person". But then, if that is simply not an option (a person to the left, 2 to the right, 10 in front, 3 passengers in the car--all too close to brake in time), then--if you left it at that--your algorithm is now in an error condition. Will it do nothing? Brake as hard as it can even though that doesn't help? Be stuck in an infinite loop as it calculates each option and gets a "no" each time because of the instruction "don't hit a person"?

Like I said, the programmer can choose not to program an option for this error condition, but that is a choice as well. As for "income potential", etc.: sure, that is not a good metric, and neither may be "pregnant woman", "old man", etc., but "car passenger" (= paying customer) versus "other people" may well be one where there will be pressure to consider it (as in the example of current SUVs built like tanks to protect the paying customer over the "other, less relevant people"--sure, we can pretend that this is not the reason we build them like this and advertise their safety potential to the paying customer... but that doesn't make it true). There may be other metrics, once the sensors and software are smart enough to potentially consider them. Just imagine the pressure from "mothers-against-babies-killed-in-accidents.org".
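A minimal sketch of that error condition (hypothetical names, contrived numbers):

```python
# The "don't hit a person" constraint with no feasible option.
options = {"veer_left": 1, "veer_right": 2, "straight": 10}  # people in each path

feasible = {name: n for name, n in options.items() if n == 0}
if feasible:
    choice = next(iter(feasible))
else:
    # Constraint unsatisfiable. Without an explicit fallback, this branch
    # does whatever the programmer left as the default -- also a choice.
    # One explicit fallback policy: minimize the people in the path.
    choice = min(options, key=options.get)
print(choice)  # veer_left
```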

2 minutes ago, mbohu said:

Yes, you could program the car with the instruction "don't hit the person". But then, if that is simply not an option (a person to the left, 2 to the right, 10 in front, 3 passengers in the car--all too close to brake in time), then--if you left it at that--your algorithm is now in an error condition. Will it do nothing? Brake as hard as it can even though that doesn't help? Be stuck in an infinite loop as it calculates each option and gets a "no" each time because of the instruction "don't hit a person"?

Like I said, the programmer can choose not to program an option for this error condition, but that is a choice as well. As for "income potential", etc.: sure, that is not a good metric, and neither may be "pregnant woman", "old man", etc., but "car passenger" (= paying customer) versus "other people" may well be one where there will be pressure to consider it (as in the example of current SUVs built like tanks to protect the paying customer over the "other, less relevant people"--sure, we can pretend that this is not the reason we build them like this and advertise their safety potential to the paying customer... but that doesn't make it true). There may be other metrics, once the sensors and software are smart enough to potentially consider them. Just imagine the pressure from "mothers-against-babies-killed-in-accidents.org".

You keep talking about "programming algorithms". Have you actually studied how neural network AI works? It is all matrix mathematics and calculus, as opposed to humans writing code.
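A toy illustration of that point: a single network layer is literally a matrix product plus a nonlinearity, and the behavior comes from trained numbers, not hand-written rules (weights here are random stand-ins):

```python
import numpy as np

# Toy illustration of "it is all matrix mathematics": one dense layer.
# In a real network the weights come from training, not from a
# programmer writing if/else rules.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # 4 inputs -> 3 outputs
b = np.zeros(3)

def layer(x):
    return np.maximum(0.0, x @ W + b)  # ReLU(xW + b)

print(layer(np.array([0.2, -1.0, 0.5, 0.1])))
```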

2 hours ago, mbohu said:

Yes, you could program the car with the instruction "don't hit the person".

But they're not programming cars with the instruction "don't hit the person."  They are training neural networks to identify large objects in the road and do their best to avoid them.

Again, I present you with a challenge.  You are writing the control program for an airport automated rail system.  There is a person on the track, and a mother on the trolley with a newborn who will drop (and severely injure) the newborn if maximum braking is applied.  If you don't apply maximum braking the person on the track will be struck and likely killed.  What do YOU write the program to do?

 

2 hours ago, ryoder said:

You keep talking about "programming algorithms". Have you actually studied how neural network AI works? It is all matrix mathematics and calculus, as opposed to humans writing code.

Yes, you have to go back a number of posts, to where I said that these comments apply to the more traditional "expert system" types of programs; they are easier to discuss.
However, you have something analogous for self-learning neural network algorithms: you need to set the initial success parameters. They determine everything else; it basically cascades backwards from those initial parameters. For example, AlphaZero started out with the very simple parameter of "winning the game" and could then set its own parameters by working backwards from there. In self-driving algorithms based on neural networks, setting the initial success parameters would likely not be as straightforward, and they will have to include parameters about what is considered "success" in the case of fatal crashes.
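In reinforcement-learning terms, that "initial success parameter" is the reward function. A deliberately oversimplified sketch (my numbers, nobody's product):

```python
# For a board game like the ones AlphaZero plays, the success parameter
# can be a single number per game:
def game_reward(result: str) -> float:
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]

# For driving there is no single "win"; every weight below would be a
# policy choice someone has to make (numbers are invented):
def driving_reward(arrived: bool, crashed: bool, injuries: int, deaths: int) -> float:
    r = 1.0 if arrived else 0.0
    r -= 10.0 if crashed else 0.0
    r -= 50.0 * injuries
    r -= 500.0 * deaths  # someone chose this exchange rate
    return r

print(game_reward("win"), driving_reward(True, False, 0, 0))  # 1.0 1.0
```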

