airdvr

The Failure Of This Self-Driving Truck Company Tells You All You Need To Know About Self-Driving Vehicles


1 hour ago, billvon said:

And Tandy Computers.  And Atari.  The computer revolution is a big dud.  No one is going to invest in personal computers after those fiascos.

And don't even get me started on the whole "online" thing.  Remember America Online?  The failure of AOL is all you need to know about any potential "future" for the Internet.

The Apollo, DEC, IBM, Cray, Motorola, and Data General computers that powered so much research, 3D, CAD/CAM/CAE, wind tunnel testing, CFD, rocket testing -- yeah, such a failure. I can't believe the billions wasted on so much worthless technology. Gotta wonder sometimes what it all gave us.

/s

4 hours ago, normiss said:

Computer technology had less than zero to do with velcro.

Invented and patented before computers existed.

It had even less to do with any sort of investment funding.

 

Alternate Facts mean nothing to you?  For shame.

7 hours ago, billvon said:

And Tandy Computers.  And Atari.  The computer revolution is a big dud.  No one is going to invest in personal computers after those fiascos.

And don't even get me started on the whole "online" thing.  Remember America Online?  The failure of AOL is all you need to know about any potential "future" for the Internet.

When did I say it wasn't going to happen?

On 3/22/2020 at 5:57 PM, billvon said:

?? Most commercial aircraft can indeed fly autonomously.  Cat IIIc certified aircraft can fly from takeoff to touchdown on autopilot.  And many of those are entirely fly by wire.  Which is a lot more complex (and the failure modes a lot more dire) than a car.

I love this place. Where else can the arrogance of a highly skilled and accomplished engineer run into the arrogance of a grade 9 dropout truck driver? (I actually don't feel either of us is particularly arrogant.)

Sure, airplanes can fly themselves. Until they can't. Autopilots and augmented control systems are tools used by pilots to make it easier to do their jobs safely. But they lack the thing we call "situational awareness". Even if you were foolish enough to fly in an autonomous aircraft with no pilot, you will not be allowed to in your lifetime, or likely that of your children. Perhaps your grandchildren. What will happen when the computers have to deal with both engines out from a flock of geese? I'm certain it won't make a decision and execute a survivable landing into the Hudson River.

I know that self-driving cars are here, in a way, sort of. But that is a far cry from the vision of roads full of driverless autonomous vehicles interacting with each other while people sit in them playing Candy Crush or posting in forums. This is the end result that Waymo and others are reaching for, and will not be able to reach. This is the complex problem of not just finding a way to do it, but of doing it reliably with a reasonable amount of support and maintenance, and without spending so much money on supercomputers in each unit. AI is sort of an oxymoron. It is artificial, but not intelligent in any way. It can only deal with what it is taught. I know, self-learning is the new buzz in AI, but I will say again, it will be easier and more practical to fly an aircraft autonomously than to fill the highways with autonomous vehicles. It is like "flying cars". Theoretically possible, and several examples exist of machines that can do both, but impractical on more than one level.


Equating aircraft autopilot/autoland with autonomous road vehicles is a totally false equivalence.  Autoland (Cat III) has been available in commercial aircraft since 1964 (Hawker Siddeley Trident), and in military aircraft (Avro Vulcan) prior to that.  The problems to be solved are totally different both qualitatively and quantitatively.  

36 minutes ago, kallend said:

Equating aircraft autopilot/autoland with autonomous road vehicles is a totally false equivalence.  Autoland (Cat III) has been available in commercial aircraft since 1964 (Hawker Siddeley Trident), and in military aircraft (Avro Vulcan) prior to that.  The problems to be solved are totally different both qualitatively and quantitatively.  

I wouldn't mention what current military UAVs are capable of with satellite guidance and entirely autonomous operational missions.

41 minutes ago, normiss said:

I wouldn't mention what current military UAVs are capable of with satellite guidance and entirely autonomous operational missions.

I’m sure they are capable of making enormous mistakes that someone other than their operators will pay the price for. 

2 hours ago, gowlerk said:

Sure, airplanes can fly themselves. Until they can't.

Absolutely.  And autonomous vehicle systems can drive themselves - until they can't.  And car, truck and train drivers can drive their vehicles safely - until they can't.  

Quote

What will happen when the computers have to deal with both engines out from a flock of geese? I'm certain it won't make a decision and execute a survivable landing into the Hudson River.

Probably not.   But once we reach that point, if we have to exchange 1 failed ditching for the ~100 fatal crashes every year by humans, that will be to the great benefit of the people in those other 99 aircraft.

Quote

I know that self driving cars are here, in a way, sort of. But that is a far cry from the vision of roads full of driverless autonomous vehicles interacting with each other while people sit in them playing Candy Crush or posting in forums.

Well, except we already have that, right now.  There's been one fatal crash with the person involved playing a game on their phone when their autopilot failed.  Which means there are thousands of people doing that (against all recommendations) who did that while their autopilots did NOT fail.  The forums are full of pictures taken by drivers showing people in Teslas, Audis and Cadillacs playing games, using a laptop and sleeping while their car drives blithely along.

Needless to say, right now this isn't a safe thing to do.  It will become incrementally safer year by year until doing it is seen as no more risky than not keeping a "nine-and-three" hand position is now.  And that sort of autonomous vehicle will, overall, reduce road deaths, even with Booth's Law in play.

Quote

and without spending so much money on super computers in each unit.

That will happen; that's a given.  Right now there are processors out there that are so powerful that they approximate the total computational ability of the brains of small animals.  They are hideously expensive right now (we are using one) but in three years they will be on sale for $39.99.  Because there's a new one out that's ten times as powerful.

Quote

AI is sort of an oxymoron. It is artificial, but not intelligent in any way. It can only deal with what it is taught.

And what it is provided with via hardcoded responses.  In other words - like people.

Quote

I know, self learning is the new buzz in AI, but I will say again, it will be easier and more practical to fly an aircraft autonomously than to fill the highways with  autonomous vehicles.

Not . . . really.  If you envision aircraft navigating an airspace like the one we have now, then yes, it's a lot easier.  But if roads were like that (one segment in use by one vehicle at one time, careful separation by lanes and speeds, predictable behavior by all other cars) then autonomous cars would be a piece of cake.  They were demoing entirely autonomous vehicles in a dedicated highway lane here twenty years ago - when someone else handles traffic control it gets very easy.

Likewise, if you are talking about air taxis and whatnot, the problem becomes orders of magnitude harder.  With perhaps a slight benefit that you are not starting out with a free-for-all, which is how our roads and traffic laws evolved.

3 minutes ago, billvon said:

But once we reach that point, if we have to exchange 1 failed ditching for the ~100 fatal crashes every year by humans, that will be to the great benefit the people in those 99 aircraft.

The last fatal crashes I have heard of in commercial aviation were caused by faulty computer control systems. I'm not sure about the 100 number you are quoting, but that would have to be almost all GA, and they are never going to be able to afford autonomous control.

 

6 minutes ago, billvon said:

Well, except we already have that, right now.  There's been one fatal crash with the person involved playing a game on their phone when their autopilot failed.  Which means there are thousands of people doing that (against all recommendations) who did that while their autopilots did NOT fail.  The forums are full of pictures taken by drivers showing people in Teslas, Audis and Cadillacs playing games, using a laptop and sleeping while their car drives blithely along.

That is what I am saying. Those are not autonomous vehicles. A vehicle that requires a human to monitor it is not autonomous.

8 minutes ago, billvon said:

And what it is provided with via hardcoded responses.  In other words - like people.

This is the arrogance of the tech world I speak of. The tech is not up to replacing the human, and it will not be for many years.

9 minutes ago, billvon said:

But if roads were like that (one segment in use by one vehicle at one time, careful separation by lanes and speeds, predictable behavior by all other cars) then autonomous cars would be a piece of cake.  They were demoing entirely autonomous vehicles in a dedicated highway lane here twenty years ago - when someone else handles traffic control it gets very easy.

That's what I said. They need to pick the low-hanging fruit. I am not a Luddite. I can see that some tasks can be handled by the machines. I am simply saying that the goal that is being sold, namely a system of widespread autonomous transportation, is not attainable with our present tech.

 

Which is to say, I agree with the OP.

1 minute ago, gowlerk said:

The last fatal crashes I have heard of in commercial aviation were caused by faulty computer control systems. I'm not sure about the 100 number you are quoting, but that would have to be almost all GA, . . . 

Yes, it's about 20 commercial crashes (part 121 and 135) and 80 GA crashes (part 91) a year.

Quote

they are never going to be able to afford autonomous control.

Well, except we are talking about autonomous control for small vehicles (cars) right?  And autopilots are making their way into smaller and smaller aircraft; saw a stat about six months ago that stated most Cirrus aircraft are now ordered with autopilots.  Sort of like what is happening with cars.

Quote

This is the arrogance of the tech world I speak of. The tech is not up to replacing the human, and it will not be for many years.

No one is suggesting it "replace humans" for anything other than specific tasks (which it does already.)  I pointed out that learning, plus some built-in behaviors, is common to both AI and people.  If that's arrogant, so be it.

12 minutes ago, billvon said:

Yes, it's about 20 commercial crashes (part 121 and 135) and 80 GA crashes (part 91) a year.

Well, except we are talking about autonomous control for small vehicles (cars) right?  And autopilots are making their way into smaller and smaller aircraft; saw a stat about six months ago that stated most Cirrus aircraft are now ordered with autopilots.  Sort of like what is happening with cars.

No one is suggesting it "replace humans" for anything other than specific tasks (which it does already.)  I pointed out that learning, plus some built-in behaviors, is common to both AI and people.  If that's arrogant, so be it.

We seem to be talking about two different things. I am speaking of the impracticability of autonomous  vehicles. You keep answering me with examples of fancy cruise control units. They aren’t the same thing!

53 minutes ago, billvon said:

It will become incrementally safer year by year until doing that is seen as risky as not having a "nine-and-three" hand position is now.  And those sort of autonomous vehicles will, overall, reduce road deaths, even with Booth's Law in play.

Distracted reader question - wasn't 9 and 3 replaced with 8 and 4 after full deployment of airbags?

25 minutes ago, billvon said:

Yes, it's about 20 commercial crashes (part 121 and 135) and 80 GA crashes (part 91) a year.

Well, except we are talking about autonomous control for small vehicles (cars) right?  And autopilots are making their way into smaller and smaller aircraft; saw a stat about six months ago that stated most Cirrus aircraft are now ordered with autopilots.  Sort of like what is happening with cars.

No one is suggesting it "replace humans" for anything other than specific tasks (which it does already.)  I pointed out that learning, plus some built-in behaviors, is common to both AI and people.  If that's arrogant, so be it.

Autoland is available on the G3000, which is a very advanced system. Autopilots have been standard on the Flight Design light sport for a while. It's fairly advanced and reliable, and the idea that a used carbon fiber A/C with an autopilot can be had for under $60k is quite remarkable. This isn't a 40-year-old Arrow either.

You're right about tech. It's been replacing skilled humans for over 70 years and, contrary to the Luddites, it will continue. No, Ken, not directed at you.

30 minutes ago, gowlerk said:

We seem to be talking about two different things. I am speaking of the impracticability of autonomous  vehicles. You keep answering me with examples of fancy cruise control units. They aren’t the same thing!

I am saying they will converge until the distinction between them is technical and incomprehensible to the average driver.

(edited)
45 minutes ago, billvon said:

I am saying they will converge until the distinction between them is technical and incomprehensible to the average driver.

It will become clear when the personal transportation pod no longer has a steering wheel. Until then it will not be an AV.

 

edit to add: If a vehicle requires one of its occupants to be a licensed driver it is not autonomous.

Edited by gowlerk

1 hour ago, gowlerk said:

It will become clear when the personal transportation pod no longer has a steering wheel. Until then it will not be an AV.

Disagree. If the vehicle could navigate to and from a destination without the passenger directly controlling it, it would be an AV.

edit to add: If a vehicle requires one of its occupants to be a licensed driver it is not autonomous.

Agree.

"Further, as people think about the effects of automation technologies in the workplace, more say automation has brought more harm than help to American workers. "

48% mostly hurt, 22% mostly helped, from the above poll. Clearly the rising standard of living arising from automation is not understood. Without automation, nobody would have a smartphone for "free", let alone the information that flows through it.

8 hours ago, gowlerk said:

AI is sort of an oxymoron. It is artificial, but not intelligent in any way. It can only deal with what it is taught.

That is a complete misunderstanding. It used to be mostly true when I studied AI as part of my computer science program at the Technical University in Vienna in the 80s (admittedly, I never finished, so I am not professionally involved in the field): Back then most "AI" were so-called expert systems. They were basically programmed with all the human knowledge we could gather about a topic and then used decision trees that led them down the path of most likely solutions.
This is not the case at all with current self-learning systems. The most extreme of them have no human knowledge programmed into them whatsoever. One of the best examples is the comparison of the two best available chess computers at the time--both far exceeding human capabilities in the game, but for very different reasons:

Stockfish: This is (more or less) the human-programmed "expert system". It has the sum of all human knowledge of chess programmed into it. The reason it beats humans at the game is that it can compute so many more moves ahead than any human possibly could (up to about 70 million moves per second on a powerful computer). This type of program, of course, benefits tremendously from more computing power, so it runs much better on a supercomputer than on my laptop.
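Just to make the contrast concrete (this is a toy game of my own choosing, nothing to do with how Stockfish is actually written): the search-based approach boils down to coding in only the rules and winning by exhaustive lookahead, which a few lines can sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(stones):
    """Exhaustive game-tree search for a toy take-away game:
    players alternately take 1-3 stones; taking the last stone wins.
    Only the rules are coded in -- all strength comes from lookahead."""
    if stones == 0:
        return -1, None  # previous player took the last stone: we lost
    best_score, best_move = -2, None
    for take in (1, 2, 3):
        if take > stones:
            break
        score = -negamax(stones - take)[0]  # opponent's best value, negated
        if score > best_score:
            best_score, best_move = score, take
    return best_score, best_move

print(negamax(5))  # (score, move) -> (1, 1): take one, leave a multiple of 4
```

Scale the game tree up from this toy to chess and you get the "brute force plus human-coded evaluation" style of engine.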

Alpha Zero: This is a true modern, self-learning AI: It has no human knowledge about chess programmed into it WHATSOEVER. It has 2 components: 1. a self-learning neural network algorithm, and 2. an evaluation engine that evaluates positions based on the likelihood of winning (gained by experience). All its ability in the game comes entirely from having learned through experience, by playing millions of games against itself.
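The self-play idea can be sketched on the same kind of toy game (take 1-3 stones, last stone wins; the game and all parameters are my own invention, and a lookup table stands in for AlphaZero's neural network). Nothing about strategy is programmed in -- move values are filled in purely from the outcomes of games the program plays against itself:

```python
import random
from collections import defaultdict

def learn_by_self_play(n_stones=10, episodes=50000, alpha=0.1, eps=0.2, seed=0):
    """Learn the take-away game (take 1-3 stones, last stone wins)
    purely from self-play: only the rules are coded in.
    Q[(stones, take)] is the learned value of each move."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        stones, player, history = n_stones, 0, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < eps:                      # explore
                take = rng.choice(moves)
            else:                                       # exploit learned values
                take = max(moves, key=lambda m: Q[(stones, m)])
            history.append((player, stones, take))
            stones -= take
            player ^= 1
        winner = history[-1][0]                         # took the last stone
        for p, s, a in history:                         # credit every move played
            reward = 1.0 if p == winner else -1.0
            Q[(s, a)] += alpha * (reward - Q[(s, a)])
    return Q

Q = learn_by_self_play()
best = lambda s: max((m for m in (1, 2, 3) if m <= s), key=lambda m: Q[(s, m)])
print(best(5), best(6), best(7))  # learned play leaves multiples of 4
```

After enough games the table "discovers" the winning strategy (leave your opponent a multiple of 4) without anyone having written it down -- the tabular analogue of what the neural network does at chess scale.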
 

9 hours ago, gowlerk said:

and without spending so much money on super computers in each unit.


AlphaZero (while running on special hardware designed for it) calculates only 80,000 moves per second (almost 1,000 times fewer than Stockfish!) and therefore does not generally benefit from more hardware the way Stockfish does. So, more computing power is not essential for these kinds of programs!

Whatever knowledge such a system has is exactly NOT programmed in by humans. In fact, humans usually have no idea why an AI like this makes the decisions it does--and usually the AI cannot tell them.
There is an AI that diagnoses the likelihood of skin cancer based on images of the skin, and it has much better results than the best oncologists and dermatologists--and they cannot tell what exactly the AI is seeing in the images that allows it to come up with that level of accuracy.

5 hours ago, gowlerk said:

This is the arrogance of the tech world I speak of. The tech is not up to replacing the human, and it will not be for many years.

Of course, it is not "replacing humans", except in the way the tractor was "replacing humans" for some aspects of field work. It's weird: we just seem to get a bit touchy when the word "intelligence" is used--like at this point in history we have gotten used to having machines do physical work, maybe even to the idea that they are better at "raw calculation", but "intelligence"... that is our domain!

To come back to self-driving cars: It would be interesting if a similar approach could be used in designing algorithms for self-driving cars. Instead of trying to program decision trees into the software based on human experience, one approach would be to install the same sensors and cameras that an eventual self-driving car would use into millions of human-driven cars, send the data to the learning algorithm, and simply let it learn from the results of the human decisions. After billions of decisions and outcomes, the algorithm should be able to learn, without humans needing to program it.
Of course, one challenge is that it is much harder to define successful outcomes for driving as simply as for chess, or even skin-cancer detection. But self-driving cars still have relatively simple success parameters, so they may be an area where such self-learning neural networks can eventually be employed relatively easily and successfully.
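As a minimal sketch of that learn-from-the-human-drivers idea (the scenario, numbers, and function names here are all invented for illustration): fit a braking policy from logged human (gap, brake) pairs, with no braking rule hand-coded anywhere -- the policy is recovered from the demonstrations alone:

```python
import random

def fit_from_demonstrations(demos, lr=0.01, epochs=3000):
    """Fit a one-feature linear policy brake = w * gap + b by gradient
    descent on logged human (gap, brake) pairs.  Nobody writes the
    braking rule down; it is recovered from the data."""
    w, b = 0.0, 0.0
    n = len(demos)
    for _ in range(epochs):
        gw = gb = 0.0
        for gap, brake in demos:
            err = (w * gap + b) - brake
            gw += err * gap
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic "human log": drivers brake harder as the gap (in metres) shrinks.
rng = random.Random(1)
log = [(g, (20 - g) / 20 + rng.gauss(0, 0.02))
       for g in (rng.uniform(0, 20) for _ in range(500))]
w, b = fit_from_demonstrations(log)
print(round(w * 10 + b, 2))  # predicted braking at a 10 m gap (about 0.5)
```

Real systems would use deep networks over camera and sensor data rather than one linear feature, but the shape of the idea -- imitate logged human decisions instead of hand-coding decision trees -- is the same.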

1 hour ago, airdvr said:

 "Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence". According to Reese, Skynet "saw all humans as a threat; not just the ones on the other side" and "decided our fate in a microsecond: extermination".

The interesting thing here is that the problem would have been in programming too narrow a set of goals: "eliminate all threats"--and that may be the biggest challenge for self-driving cars: How do you define a successful solution, especially when only less-than-ideal choices are available? (Veer into the old woman on the left, the mother and child on the right, or smash into the wall ahead, killing all passengers?)

As with any technology, there are dangers and there are some real doozies with AI, many of them related to how we subconsciously program our prejudices into the system--and the systems realize them with greater efficiency than we ever could.

...but the red-eyed, self-aware terminator image is not one of the most realistic ones.

(edited)
11 minutes ago, mbohu said:

The interesting thing here is that the problem would have been in programming too narrow a set of goals: "eliminate all threats"--and that may be the biggest challenge for self-driving cars: How do you define a successful solution, especially when only less-than-ideal choices are available? (Veer into the old woman on the left, the mother and child on the right, or smash into the wall ahead, killing all passengers?)

 

That is a bogus scenario that I have heard too many times.

I have never had such a situation in decades of driving, nor have I ever heard of anyone else experiencing it. The car should do what any rational driver would do: Stay in your lane and apply the brakes. Swerving would be done only IF there was insufficient distance for braking, AND there was no empty lane to swerve into.

A human driver has no way of knowing the ages/genders of the occupants of nearby vehicles, nor would an autonomous car.

Edited by ryoder

1 hour ago, ryoder said:

That is a bogus scenario that I have heard too many times.

I have never had such a situation in decades of driving, nor have I ever heard of anyone else experiencing it. The car should do what any rational driver would do: Stay in your lane and apply the brakes. Swerving would be done only IF there was insufficient distance for braking, AND there was no empty lane to swerve into.

A human driver has no way of knowing the ages/genders of the occupants of nearby vehicles, nor would an autonomous car.

But in that whole network automation segment, even more so with SD-WAN/SD-LAN, the customer has the interface via web browser now.

We're only needed when the lowest default configuration fails: outages and EOL/upgrade projects.

Times are changing. Again. Still. Continually.

 
