Why driverless cars are a LONG way off.


skyrover

Original Poster:

12,671 posts

204 months

Monday 30th May 2016
I've seen plenty of posts recently describing the imminent demise of the human driver in favour of autonomous cars, and how our leadership will soon look to ban human-piloted cars altogether on the grounds of "safety".

I am going to list the reasons why this simply is not the case, at least for the near and somewhat distant foreseeable future.

1. Humans are actually very good at driving, with around one death for every 100 million miles driven on average - something a machine will find incredibly difficult to match, for the following reasons.

2. Machines are incapable of dealing with situations that have not been foreseen by the software engineers and had an acceptable solution programmed in for each possible eventuality. It will take billions of miles driven and an absolutely enormous variety of situations and hazards for the software engineers to understand and solve.

3. Machines are incapable of making philosophical decisions. If a crash is unavoidable, who do you hit: the lady crossing the street or the oncoming car? And how does a machine anticipate hazards, e.g. is the child on a bicycle along the pavement more likely to veer into the road than the adult?

4. How does a machine react to faulty sensors, or sensors giving inaccurate information?

5. Someone is about to drive into you... do you continue on or stop for the red light in front of you?

6. How does the machine behave with damage or neglect? And so on.

Even Google, arguably the most successful implementer of driverless technology so far, admits it's going to be a long time before these vehicles are a common sight on our roads.

http://thenextweb.com/opinion/2016/03/18/great-tec...

Then there are legal, legislative and consumer-acceptance barriers to get through, not to mention inter-vehicle software communication/update standards to decide upon.

So sleep easy folks, ignore the hype, your steering wheel and pedals are safe for a long while yet.

Krikkit

26,527 posts

181 months

Monday 30th May 2016
2. Wrong. The software engineers can program in decision-making through fuzzy logic; machines make decisions from their inputs all the time, and not necessarily based on pre-set triggers that the engineers write in.
3. Classic case of the child on a bike - if you wanted to, you could program the car to recognise both a bicycle and the general size of a child, then add in a touch of extra paranoia in case the worst happened. Realistically the cars will react quickly enough that you don't need to worry. Decisions can be made much faster than a human can make them, and full braking or evasive action applied instantly.
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
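
To make point 4 concrete, here is a minimal sketch of the kind of cross-check being described: two independent readings compared, with a fault flagged when they disagree. The function name, units and tolerance are invented for illustration only, not taken from any real vehicle's software:

```python
def cross_check(primary_m, secondary_m, tolerance_m=0.5):
    """Compare two redundant distance readings (hypothetical values, in metres).

    Returns a fused reading when the sensors agree, or None when they
    disagree by more than the tolerance, signalling a likely sensor fault.
    """
    if abs(primary_m - secondary_m) <= tolerance_m:
        return (primary_m + secondary_m) / 2.0  # sensors agree: use the average
    return None                                 # disagreement: treat as a fault


distance = cross_check(primary_m=12.3, secondary_m=12.1)
if distance is None:
    # Purely illustrative fallback; a real system's degraded mode would be far richer.
    print("Sensor disagreement: slow down and request driver attention")
else:
    print(f"Trusted distance to obstacle: {distance:.1f} m")
```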

OwenK

3,472 posts

195 months

Monday 30th May 2016
You're thinking about it all backwards. The programming isn't for every single scenario. They don't have to program in "what should the car do if a cat runs out in front of the car, but also there's a hot air balloon about to crash into the back of you, AND it's going from a 60 into a 30 zone in the rain".

The car is programmed the same way as a person: "Is it safe to proceed? Yes/no", basically. If it decides it's positively safe to proceed, it carries on - like a person. If it's safe but has the potential for danger, say a blind corner, it proceeds with caution, ready to stop at a moment's notice - like a person. If anything is questionable and it can't be sure it's proceeding safely, then it will stop and either wait for the situation to change or seek assistance from something else - the driver, if there are still controls, or maybe something on a network, maybe sensors on other cars or something.
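
A rough sketch of that three-way split (proceed / proceed with caution / stop and ask for help). The confidence score and thresholds are invented purely to illustrate the shape of the logic, not any manufacturer's actual implementation:

```python
from enum import Enum

class Action(Enum):
    PROCEED = "proceed"
    PROCEED_WITH_CAUTION = "proceed with caution"
    STOP_AND_ASK = "stop and seek assistance"

def decide(confidence_clear: float) -> Action:
    """Map an invented 0-1 'path is clear' confidence score to an action,
    mirroring the proceed / caution / stop-and-ask split described above."""
    if confidence_clear > 0.9:      # positively safe: carry on
        return Action.PROCEED
    if confidence_clear > 0.6:      # safe but, say, a blind corner: be ready to stop
        return Action.PROCEED_WITH_CAUTION
    return Action.STOP_AND_ASK      # can't be sure: wait, or ask the driver/network

print(decide(0.95).value)  # proceed
print(decide(0.70).value)  # proceed with caution
print(decide(0.30).value)  # stop and seek assistance
```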

skyrover

Original Poster:

12,671 posts

204 months

Monday 30th May 2016
Krikkit said:
2. Wrong. The software engineers can program in decision-making through fuzzy logic; machines make decisions from their inputs all the time, and not necessarily based on pre-set triggers that the engineers write in.
Fuzzy logic still needs to encompass every conceivable scenario. The other problem is that you can get some very weird results if you are not specific enough.

Krikkit said:
3. Classic case of the child on a bike - if you wanted to, you could program the car to recognise both a bicycle and the general size of a child, then add in a touch of extra paranoia in case the worst happened. Realistically the cars will react quickly enough that you don't need to worry. Decisions can be made much faster than a human can make them, and full braking or evasive action applied instantly.
The machine can only react; the human can understand his surroundings and anticipate. The child on a bicycle was only an example - there are countless other scenarios where understanding what is actually going on will be better than simply slamming on the brakes.

Krikkit said:
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
Machines still fail. Airliners disengage the autopilot when conflicting readings occur, handing control straight back to the pilots, who can make better decisions based upon the evolving situation.

Let's also consider that learning to drive is one of the best tools for teaching young adults personal responsibility in a critical environment.

EnglishTony

2,552 posts

99 months

Monday 30th May 2016
I think the major stumbling block is going to be customer resistance. Why would anybody want one?


jkh112

21,997 posts

158 months

Monday 30th May 2016
Krikkit said:
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
Not quite that easy, due to common-cause failure. You would also need to build in diversity and segregation.
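
For illustration, diversity means the redundant readings come from different technologies, so a fault common to one technology can't decide the outcome on its own. A hypothetical 2-out-of-3 vote (sensor names, units and tolerance all assumed, not from any real system):

```python
from statistics import median

def vote_distance(radar_m, lidar_m, camera_m, tolerance_m=1.0):
    """2-out-of-3 style vote over readings from diverse sensor types.

    Using different technologies guards against a fault that affects one
    technology - the common-cause failure risk of duplicating identical sensors.
    """
    readings = [radar_m, lidar_m, camera_m]
    mid = median(readings)
    agreeing = [r for r in readings if abs(r - mid) <= tolerance_m]
    if len(agreeing) >= 2:
        return sum(agreeing) / len(agreeing)   # majority agree: use their mean
    return None                                # no majority: flag a fault

print(vote_distance(10.2, 10.5, 25.0))  # camera outlier rejected -> ~10.35
print(vote_distance(10.0, 18.0, 25.0))  # no agreement -> None (fault)
```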

V8RX7

26,851 posts

263 months

Monday 30th May 2016
In general, would I trust a machine not to crash into me more than I'd trust your average:

mum, uninsured scumbag, drink/drunk driver, young driver showing off, etc.?

I'll take the machine.

However, for where and what I generally drive, I'd rather drive myself.



boz1

422 posts

178 months

Monday 30th May 2016
I think the OP is spot on.

Krikkit said:
2. Wrong. The software engineers can program in decision-making through fuzzy logic; machines make decisions from their inputs all the time, and not necessarily based on pre-set triggers that the engineers write in.
In theory yes, but that is clearly still going to require vast quantities of driving across a huge range of conditions in order for the machine to "learn" and for humans to verify the quality of its decisions.

At the moment, even a very sophisticated vehicle can't deal with a slightly unusual situation that would be trivial for even a crap human driver to deal with safely:
http://www.cnet.com/roadshow/news/model-s-on-autop...

Krikkit said:
3. Classic case of the child on a bike ... Realistically the cars will react quickly enough that you don't need to worry. Decisions can be made much faster than a human can make them, and full braking or evasive action applied instantly.
I don't understand how you could possibly make this assumption. A human who is paying attention will react in a fraction of a second. Even if a computer can react in a shorter fraction of a second, the difference this will make to the distance in which the car can stop/maneuver/whatever will be minimal, as the computer still has to deal with real-world physics.
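
For anyone who wants to put numbers on this, the physics under discussion is just reaction distance plus braking distance; whether the reaction-time saving counts as "minimal" depends on the speed, reaction times and deceleration you assume. A back-of-envelope sketch with assumed figures:

```python
def stopping_distance_m(speed_ms, reaction_s, decel_ms2=8.0):
    """Stopping distance = distance covered during the reaction time
    plus the braking distance v^2 / (2a).

    The braking term is identical whoever (or whatever) is driving; only the
    reaction term differs. ~8 m/s^2 is an assumed hard-braking figure.
    """
    return speed_ms * reaction_s + speed_ms ** 2 / (2.0 * decel_ms2)

speed = 13.4  # roughly 30 mph in m/s (assumed example speed)
print(f"1.0 s reaction: {stopping_distance_m(speed, 1.0):.1f} m")
print(f"0.2 s reaction: {stopping_distance_m(speed, 0.2):.1f} m")
```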

Krikkit said:
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
In a plane costing £200 million, you have a lot of redundancy. How feasible is it going to be to have that in a driverless car, given weight, size and cost? I've had a quick look and I can see Volvo claims it will have full redundancy in cars in its pilot scheme, which might be available in a small pilot area in 2020:
http://spectrum.ieee.org/cars-that-think/transport...
My guess? The deadline will slip and, when it does launch, drivers will be told they still need to sit ready to take control; then there'll be a couple of "incidents" and the idea will ultimately be dropped.

I think that, instead, there will be further increases in the scope of driver aids, so that they become common across most cars, but manufacturers will actually increase their emphasis on the fact that you, the human driver, remain the ultimate decision-maker, because they won't want to be responsible for decisions made by their systems without human oversight.

technodup

7,580 posts

130 months

Monday 30th May 2016
EnglishTony said:
I think the major stumbling block is going to be customer resistance.
I think taxi firms, fleet operators, hauliers and the like already want them. The public will follow. I wouldn't see them as a direct replacement though; the way we ultimately buy/rent/use them might change quite significantly.

And if for some reason we don't switch, the government will 'encourage' it. As with drink driving, wearing seatbelts etc., don't underestimate the power of government to change our behaviours. A combination of carrot and stick, a tax break here, an increased charge there, and we'll dutifully do what they want. And the government is throwing money at it, so I think we can take their position as read.

I can understand the PH head-in-the-sand attitude, but I'm all for them.


Gary C

12,425 posts

179 months

Monday 30th May 2016
Dead simple to create driverless cars, just remove all the other traffic from the roads smile

Actually, the recent Google project report showed that the accidents that have occurred were all caused by other road users, and that the harm was reduced by the actions taken by the driverless cars.

So basically, they already exist, and they work.

Getting them past legislators and public opinion, that might be harder.

Gary C

12,425 posts

179 months

Monday 30th May 2016
jkh112 said:
Krikkit said:
4. Easy enough to monitor its own sensors - firstly you build in redundancy by having two sets of everything, then you can compare the inputs of both.
Not quite that easy due to common cause failure. You would also need to build in diversity and segregation.
Lol, do you work in the nuclear industry too?

EnglishTony

2,552 posts

99 months

Monday 30th May 2016
I don't think the public is ready for driverless trucks on the motorway. Or will ever be.

Factor in the ability of sat nav to go wrong / get hacked and the nightmare is complete.

Halmyre

11,190 posts

139 months

Monday 30th May 2016
EnglishTony said:
I think the major stumbling block is going to be customer resistance. Why would anybody want one?
Think of all that extra time you can spend on Facebook/Twitter/Instagram/...

technodup said:
And if for some reason we don't switch the government will 'encourage' it. Like drink driving, wearing seatbelts etc don't underestimate the power of government to change our behaviours. A combination of carrot and stick, a tax break here, an increased charge there and we'll dutifully do what they want. And the government is throwing money at it so I think we can take their position as read.
This. And accident statistics will be spun every which way to show that automated driving = good, manual driving = baaaaad.

wemorgan

3,578 posts

178 months

Monday 30th May 2016
AI cars could learn from each other. Humans seem quite slow or resistant to do this.

Mr GrimNasty

8,172 posts

170 months

Monday 30th May 2016
If you wanted to design an autonomous electric transport system from scratch, the current direction of developments is illogical, impractical, and very wasteful.

You would have pedestrians and manual traffic segregated, and you would dump the wasteful batteries and combine direct power delivery with much of the automation/guidance for a far more fail-safe system.

Jader1973

3,989 posts

200 months

Monday 30th May 2016
I read an article a couple of months ago about a test in the US: the self-driving car had to join a freeway from a slip road and cross four lanes of traffic to exit via another slip road on the other side, a few hundred metres further on.

It couldn't do it because all the cameras, sensors etc were picking up enough traffic for it to decide it wasn't safe. A human would just have stuck the indicator on and gone for it.

A perfect example of the difficulty of having normal and self-driving cars that can't communicate with each other sharing the roads, and one reason why it is years away, if it ever happens at all.

Hoofy

76,352 posts

282 months

Monday 30th May 2016
Halmyre said:
EnglishTony said:
I think the major stumbling block is going to be customer resistance. Why would anybody want one?
Think of all that extra time you can spend on Facebook/Twitter/Instagram/...
Or working. Or sleeping.

Kawasicki

13,082 posts

235 months

Monday 30th May 2016
Self-Driving Cars Must Be Programmed to Kill

https://www.technologyreview.com/s/542626/why-self...

EnglishTony

2,552 posts

99 months

Monday 30th May 2016
Hoofy said:
Halmyre said:
EnglishTony said:
I think the major stumbling block is going to be customer resistance. Why would anybody want one?
Think of all that extra time you can spend on Facebook/Twitter/Instagram/...
Or working. Or sleeping.
Use public transport.

poing

8,743 posts

200 months

Monday 30th May 2016
EnglishTony said:
I don't think the public is ready for driverless trucks on the motorway. Or will ever be.

Factor in the ability of sat nav to go wrong / get hacked and the nightmare is complete.
There are many, many problems to overcome, and people are easily the single biggest one. As above, we are not very trusting of machines, so getting the public to accept 20 tons of truck controlled by an operating system is a problem.

Given the nature of the technology, and the way it has to work, we are introducing a lot of new issues to deal with. Vehicles will want to be connected to each other, to the owners' systems and to the company that makes them. This is the same as a smartphone. The difference is that hacking a smartphone doesn't get someone run over, and it can't be used to remotely commit a physical crime. GTA with remotely controlled real cars would become a genuine possibility for hackers.

Having said that, I think in time (a shorter time than we expect) driverless vehicles will become the norm. This will be led mostly by finance, because drivers cost money, but sending a truck from depot A to depot B is fairly straightforward, as is stopping at a bus stop and following the same route.