RE: Autonomous cars: do you feel lucky

Tuesday 14th November 2017

Autonomous cars: do you feel lucky?

'Signing your life away' for a car is about to take on a whole new meaning



Would you get into a car if you knew there was a chance it might take a conscious decision to kill you?

Probably not. And yet this is exactly the dilemma that travellers will be facing sooner rather than later as artificially intelligent (AI) cars come on stream and start mixing it with the dumb stuff we're currently driving.


Here's a scenario. You're in a fully autonomous pod with two other urban travellers. Something beyond the pod's control happens. There's going to be an accident. The AI-equipped pod has to make an instant choice between (a) mowing down eight people in a bus queue or (b) crashing head-on into a concrete bollard, possibly killing three.

The choice that an autonomous pod makes in such a scenario may well come down to a simple mathematical calculation. It could be goodbye Vienna for you, and there'll be no comeback either. Not only will passengers in fully autonomous vehicles have to face up to the possibility of murder by car; they will probably have had to sign a disclaimer absolving the manufacturer or the insurer of all blame before being allowed into the vehicle in the first place.
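That "simple mathematical calculation" could be as crude as comparing expected casualties across the available manoeuvres. A hypothetical sketch, purely for illustration — the function, options and numbers here are invented, not any vendor's actual logic:

```python
# Hypothetical sketch of the crude utilitarian arithmetic an autonomous
# pod *might* apply in an unavoidable-crash scenario. Nothing here is a
# real manufacturer's algorithm; the options and counts are invented.

def choose_action(options):
    """Pick the option with the lowest expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

options = [
    {"action": "swerve into bus queue", "expected_casualties": 8},
    {"action": "hit concrete bollard",  "expected_casualties": 3},
]

print(choose_action(options)["action"])
```

On this arithmetic the pod hits the bollard and sacrifices its own passengers — which is exactly the scenario the disclaimer would be covering.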

This is pure science fiction, surely? Well, no, it isn't. It might have been in the 1950s, when visionary sci-fi author Isaac Asimov first set out the Three Laws of Robotics in his 'I, Robot' collection and artificial intelligence was being put forward as something we should perhaps be studying. But sixty years on, we're racing headlong into an artificially intelligent world, and the implications are properly gobsmacking.


There's a stampede on at the minute to be the first company to bring 100 per cent autonomous vehicles to the public. Big players like Intel, Uber, Apple, Google, General Motors and a few others are vying for the glory that goes with saving motorists from the distracting influences of sleep, drink or, worst of all, the social media messages that are constantly beamed into our phones and cars by the likes of Intel, Uber, Apple, Google, General Motors and a few others.

The thing is, AI development is happening at lightning speed. The faster it goes, the faster it goes. In the same week that Waymo (the self-driving car division of Google) announced that its fully self-driving vehicles were rolling in Chandler, Arizona without human backup drivers, a self-driving shuttle bus in Las Vegas was involved in a crash with a lorry on its very first day of service, because the lorry's (all too human) driver didn't see the bus. When the shuttle perceived the oncoming danger, it correctly stopped in its tracks. The lorry didn't.


A human driver could have moved the shuttle out of harm's way, or at least done something to attract the lorry driver's attention. When full AI comes to transportation, everything should (technically) be fine. But the transition phase between 2017 and full autonomy - in which humans who are actually less rational than their cars will be doing battle with AI-brained vehicles that are also taking 'human' actions - could produce the most dangerous motoring conditions in history.

Some transport experts who have taken the time to look at the implications of AI in cars are saying that it poses all manner of issues, not just legal and regulatory, but moral. It's easy enough to see the upside of genuinely sorted motoring autonomy. Apart from the obvious decoupling of all motoring stress, the arrival of AI on our roads could also see an end to the use of vehicles as weapons. Driverless vehicles cannot try to kill someone. They have to stop, just like that Vegas shuttle bus did. That's based on one of those three Asimov rules that have gradually morphed from science fiction into science fact. But what's to prevent pedestrians from stopping cars either for a laugh or with more sinister motives in mind, like robbery, or worse? And what if hackers compromise them?


Where does this leave the insurance industry? Will there be anything to insure? In a fully AI-ed up world, not only accidents but also car thefts should become things of the past. Only the doziest thieves are going to try and pinch something that has the ability to lock them in, notify the cops of an impending felon delivery and then drive them smoothly off to the police station, complete with a neatly-packaged forensic record of the crime. Maybe insurance companies will simply reposition their automotive products into the provision of death and injury benefits for dependents.

Many believe that politicians have been using artificial intelligence for years. Certainly, they don't appear to have been using the genuine stuff recently. Governments aren't looking at the AI apocalypse very hard, not because they don't want to look at it but because it's a bit too hard to look at.

Thing is, engineers and programmers will be only too happy to keep equipping us with cool stuff like AI cars as long as there's money in it, but at some point in the game someone needs to think about the implications. These vehicles won't be driving robotically by following a series of individual instructions. They will actually be learning how to drive, just as we humans do.


And that is a massive difference. The algorithms AI cars will be using in this so-called 'deep learning' process are heavy duty. They involve computations that humans aren't even thinking about, and will enable those cars to make decisions that we won't be able to track or explain. When algorithms reach the levels of sophistication that they're attaining right now, nobody really knows how they work or what they do. That's fact, not conspiracy theory. From 2018, the EU may be requiring companies to at least have the ability to provide explanations for automated system decisions. Then again, they may not.

Accountability to users should be key, but machine-learning at this sort of level brings a different and potentially quite uncomfortable new set of parameters into play. As it stands we're all kind of relying on the hope that the AI geeks will live up to their calming reassurances that everything is going to be just fine.

But the march of AI is going to impact right across society. Cyclists may well be legislated off the roads on the grounds of them being AI-confusing pests. You can probably come up with a few end-game possibilities of your own.

And when you add in another fact - that AI cars could actually be given rights, just like animals or human beings - it does give you pause. Or it should do at least.

Discussion

Kawasicki

Original Poster:

13,096 posts

236 months

Tuesday 14th November 2017
AI doesn't exist, so it is difficult for it to be used any time soon in a car.

When AI finally does exist it will be a gigantic development for the human race.

There is gigantic exaggeration of current autonomous car capability. Yes it will come, but expect limited performance for the next decade at least.

Kawasicki

Original Poster:

13,096 posts

236 months

Tuesday 14th November 2017
RobDickinson said:
Kawasicki said:
AI doesn't exist, so it is difficult for it to be used any time soon in a car.
Limited AI certainly does exist, and fully conscious general AI isn't that far off (20-40 years or so).

But we don't need full AI to have self-driving cars, which will start to become common in the next ~5 years.

We already have them operating on real roads in the real world (Waymo/US), and so long as they are safer overall than humans (not hard) they will be good.
AI is always 20-40 years off.

An autonomous system that is better than a poor driver, or a good driver driving poorly, can certainly be developed soon. That is hardly a challenge. The question is: who wants to be driven by a somewhat-better-than-poor driver? Not me. I can see that systems can be, and are being, developed to reduce workload in stop-start jams and other reasonably controlled environments. Great, but not easy to get working well at a price customers are willing to pay.

I was in a meeting a couple of months ago with some autonomous driving engineers. There were a couple of mentions of AI in the documents, so I asked about it. The lead engineer told me there is no AI in the system; the only reason AI is mentioned at all is because marketing requested it.

Kawasicki

Original Poster:

13,096 posts

236 months

Tuesday 14th November 2017
big_rob_sydney said:
Every second wker on here thinks they're a driving god.
Yup...half of them probably think they are better than average. wkers.