Why driverless cars are a LONG way off.


skyrover
Monday 30th May 2016
I've seen plenty of posts recently describing the imminent demise of the human driver in favour of autonomous cars, and how our leadership will soon look to ban human-piloted cars altogether on the grounds of "safety".

I am going to list the reasons why this simply is not the case, at least for the foreseeable future.

1. Humans are actually very good at driving: around one death for every 100 million miles driven on average, something a machine will find incredibly difficult to match, for the following reasons.

2. Machines are incapable of dealing with tasks that have not been foreseen by the software engineers, with each possible eventuality programmed with an acceptable solution (see the sketch after this list). It will take billions of miles driven, and an enormous variety of situations and hazards encountered, for the software engineers to understand and solve.

3. Machines are incapable of making philosophical decisions. If a crash is unavoidable, who do you hit: the lady crossing the street or the oncoming car? And how does a machine anticipate hazards, e.g. is the child on a bicycle on the pavement more likely to veer into the road than the adult?

4. How does a machine react to faulty sensors, or to sensors giving inaccurate information?

5. Someone is about to drive into you... do you continue on or stop for the red light in front of you?

6. How does the machine behave when damaged or neglected? And so on.
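To make point 2 concrete, here's a toy sketch (illustrative Python; the situation names and responses are invented for the example, not anyone's real control code) of why a hand-written rule table only ever covers what its authors thought of:

```python
# Toy rule table: every situation the engineers foresaw gets a
# hand-picked response; everything else falls through to one
# generic default, however inappropriate that may be.

RESPONSES = {
    "pedestrian_in_road": "brake_hard",
    "car_braking_ahead": "brake_moderate",
    "cyclist_on_left": "slow_and_give_room",
}

def decide(situation: str) -> str:
    # Any situation not enumerated at design time gets the same fallback.
    return RESPONSES.get(situation, "brake_and_stop")

print(decide("pedestrian_in_road"))   # brake_hard
print(decide("sofa_fell_off_lorry"))  # brake_and_stop: never anticipated
```

Real systems are far more sophisticated than a lookup table, but the underlying problem is the same: the long tail of situations nobody enumerated.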

Even Google, arguably the most successful implementer of driverless technology so far, admits it's going to be a long time before these vehicles are a common sight on our roads.

http://thenextweb.com/opinion/2016/03/18/great-tec...

Then there are legal, legislative and consumer-acceptance barriers to get through, not to mention inter-vehicle communication and software-update standards to decide upon.

So sleep easy folks, ignore the hype, your steering wheel and pedals are safe for a long while yet.

skyrover
Monday 30th May 2016
Krikkit said:
2. Wrong. The software engineers can program in decision-making through fuzzy logic; machines make decisions from their inputs all the time, and not necessarily based on pre-set triggers that the engineers write in.
Fuzzy logic still needs to encompass every conceivable scenario. The other problem is that you can get some very weird results if you are not specific enough.
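A rough illustration of the "weird results" problem (toy Python; the membership functions and thresholds are invented for the sketch, not any real controller). If the fuzzy sets don't cover the whole input range, there's a band where no rule fires at all and the output is whatever the default happens to be:

```python
# Toy fuzzy braking rule. "near" and "far" are the only fuzzy sets
# our hypothetical engineers wrote; note the gap between them.

def near(dist_m):
    # fully "near" at 10 m or less, fading to zero by 30 m
    return max(0.0, min(1.0, (30 - dist_m) / 20))

def far(dist_m):
    # fully "far" at 80 m or more, fading to zero by 40 m
    return max(0.0, min(1.0, (dist_m - 40) / 40))

def brake_effort(dist_m):
    # Rules: near -> full braking (1.0), far -> no braking (0.0),
    # combined as a weighted average of whichever rules fire.
    w_near, w_far = near(dist_m), far(dist_m)
    if w_near + w_far == 0:
        return 0.0  # no rule fires: the controller just... does nothing
    return w_near / (w_near + w_far)

print(brake_effort(5))    # 1.0 -- full braking, as intended
print(brake_effort(100))  # 0.0 -- no braking, as intended
print(brake_effort(35))   # 0.0 -- obstacle at 35 m sits in an uncovered gap
```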

Krikkit said:
3. Classic case of the child on a bike - if you wanted to you could program the car to recognise both a bicycle and the general size of a child, then add in a touch of extra paranoia in case the worst happened. Realistically the cars will react quickly enough that you don't need to worry. Decisions can be made much faster than by humans, and full braking or evasive action instantly applied.
The machine can only react... the human can understand his surroundings and anticipate. The child on a bicycle was only an example; there are countless other possibilities where understanding what is actually going on will do better than simply slamming on the brakes.

Krikkit said:
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
Machines still fail. Airliners disengage the autopilot when conflicting readings occur, handing control straight back to the pilots, who can make better decisions based upon the evolving situation.
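To be clear about why doubling up doesn't settle it, here's a quick sketch (illustrative Python; the function name and threshold are my own assumptions) of the two-sensor cross-check being described. Disagreement is easy to detect, but with only two sensors you cannot tell which one is lying, so the only safe action is to disengage, and in a fully driverless car there is no pilot to hand back to:

```python
# Two-sensor cross-check: agree -> fuse the readings; disagree ->
# signal "disengage", because neither reading can be trusted.

def fused_speed(sensor_a_mps, sensor_b_mps, max_disagreement_mps=0.5):
    """Return an agreed speed in m/s, or None meaning 'disengage'."""
    if abs(sensor_a_mps - sensor_b_mps) <= max_disagreement_mps:
        return (sensor_a_mps + sensor_b_mps) / 2
    return None  # conflicting readings: who takes over now?

print(fused_speed(30.1, 30.2))  # ~30.15 -> carry on
print(fused_speed(30.1, 45.0))  # None -> the airliner answer: give it to the pilots
```

Triple redundancy with majority voting helps, but it adds cost and weight, and a voter can still be defeated by a common-mode fault, such as ice over every pitot tube at once.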

Let's also consider that learning to drive is one of the best tools for teaching young adults personal responsibility in a safety-critical environment.

skyrover
Monday 30th May 2016
Here's another dilemma.

Autonomous car malfunctions and pitches itself into a wall/pedestrian/vehicle/etc, resulting in fatalities.

Who is liable? Will anyone be prosecuted? Are the engineers who wrote the software now guilty of manslaughter?

skyrover
Wednesday 1st June 2016
technodup said:
You presumably trust humans; the wife, for example? And even if you don't trust her to drive you, if you drive anywhere you're trusting people you don't even know not to crash into you.

Some of them will be drunk. Some will be borderline blind. Some will be reading a tablet or applying make-up. Some will be half asleep and some will simply be poor drivers.

I'm pretty sure I'd rather trust a billion-pound computer system than take pot luck with the general public, but it might just be me. (I know it's not just me.)
Computers are dumb and incapable of reasoning.

Currently, driverless cars have an accident rate more than double that of humans, despite operating in heavily controlled circumstances and at low speeds.

Of course this can get better as technology improves, but it's an enormously difficult task to create a vehicle that can operate as safely as humans can.

skyrover
Wednesday 1st June 2016
This is very true...

Look at the problems Nvidia has had with its latest graphics-card drivers cooking people's hardware.

This is from a company with YEARS of development experience and a very mature driver codebase and team, and yet they still get it wrong, with wide-scale consequences.

Autonomous cars are orders of magnitude more complex, and the stakes are a lot higher than some toasted silicon.

skyrover
Wednesday 1st June 2016
98elise said:
Apart from the people on this thread that have said they want them?

Autopilot is one of the main reasons I'm buying a Tesla. If another manufacturer had a fully autonomous car available now I would buy one. I do 30k+ miles per year, mainly on the M25, so it will be a huge bonus.
Volvo remains unconvinced

http://jalopnik.com/volvo-engineer-calls-out-tesla...

Vested interest? Perhaps... It does seem to struggle with unusual situations, though.

http://electrek.co/2016/05/26/tesla-model-s-crash-...

skyrover
Wednesday 1st June 2016
Would you be happy climbing into a self driving car with a failed sensor?

skyrover
Wednesday 1st June 2016
Is there a risk your friend's remaining good eye will fail while driving?

skyrover
Wednesday 1st June 2016
What happens when snow/mud covers the sensors while driving?

skyrover
Wednesday 1st June 2016
deckster said:
Snow and mud will have no impact on the vast majority of sensors. You could cover the whole thing with black paint and it wouldn't make the slightest bit of difference.
Apparently not

http://www.bloomberg.com/news/articles/2016-02-10/...

skyrover
Wednesday 1st June 2016
FredClogs said:
But they could; the technology is there and perfectly viable, and given the Malaysian Air disappearance and that German pilot who killed himself and his passengers, I think the arguments are there to be had.

This is the only major stumbling block I see: one of liability. The legislation needs to be clear about who is responsible, the man or the machine. As the law cannot punish the machine, and the manufacturers are unlikely to want to warrant the machines once they're out there in the hands of people who are, largely, idiots (I'd be looking to rewire/reprogram the thing within an hour of delivery), I can't see it getting fully automated. But it could, and there'll be an increasing and creeping demand for it.
Pilots will not be removed as no machine can manage hazards in an environment where situational awareness is everything.

Aircraft are built to incredibly tight standards, yet electronics still catch fire, hydraulics still leak and unforeseen problems still occur, all of which need human oversight.

Tell me... what would autopilot do with no pilot, untrustworthy airspeed data, a hydraulic leak and potential engine failure?

skyrover
Wednesday 1st June 2016
kambites said:
skyrover said:
Tell me... what would autopilot do with no pilot, untrustworthy airspeed data, a hydraulic leak and potential engine failure?
Conversely, there have been commercial airliner crashes caused by pilots not believing perfectly functional instruments and, for example, shutting down the wrong engine when one has failed.

Commercial aeroplane auto-pilots aren't really designed to operate with no human intervention. If they were, I suspect they could deal with such problems at least as well as human pilots.
Can you tell me how you would design a computer to operate after it has turned itself off due to a short circuit or fire?

skyrover
Wednesday 1st June 2016
This is true... however, I would trust the decision-making of a well-trained pilot over a computer.

skyrover
Saturday 4th June 2016
Another bit of "food for thought":

How similar are autonomous cars to the pathfinding in racing and car-based video games?

How reliable is the pathfinding in those games?

Will modern autonomous cars be better than their virtual counterparts?

skyrover
Saturday 4th June 2016
The Vambo said:
skyrover said:
Another bit of "food for thought"
And that food would be Hákarl.
Still tastes better than autonomous driving at the moment.