Autonomous cars: do you feel lucky?


Would you get into a car if you knew there was a chance it might take a conscious decision to kill you?

Probably not. And yet this is exactly the dilemma that travellers will be facing sooner rather than later as artificially intelligent (AI) cars come on stream and start mixing it with the dumb stuff we're currently driving.


Here's a scenario. You're in a fully autonomous pod with two other urban travellers. Something beyond the pod's control happens. There's going to be an accident. The AI-equipped pod has to make an instant choice between (a) mowing down eight people in a bus queue or (b) crashing head-on into a concrete bollard, possibly killing three.

The choice that an autonomous pod makes in such a scenario may well come down to a simple mathematical calculation. It could be goodbye Vienna for you, and there'll be no comeback either: not only will passengers in fully autonomous vehicles have to face up to the possibility of murder by car, they will probably have had to sign a disclaimer absolving the manufacturer and the insurer of all blame before being allowed into the vehicle in the first place.

This is pure science fiction, surely? Well, no, it isn't. It might have been in the 1950s, when visionary sci-fi author Isaac Asimov first set out the Three Laws of Robotics in his 'I, Robot' collection, and artificial intelligence was being put forward as something we should perhaps be studying. But sixty years on, we're racing headlong into an artificially intelligent world, and the implications are properly gobsmacking.


There's a stampede on at the minute to be the first company to bring 100 per cent autonomous vehicles to the public. Big players like Intel, Uber, Apple, Google, General Motors and a few others are vying for the glory that goes with saving motorists from the distracting influences of sleep, drink or, worst of all, the social media messages that are constantly beamed into our phones and cars by the likes of Intel, Uber, Apple, Google, General Motors and a few others.

The thing is, AI development is happening at lightning speed. The faster it goes, the faster it goes. In the same week that Waymo (the self-driving car division of Google) announced that its fully self-driving vehicles were rolling in Chandler, Arizona without human backup drivers, a self-driving shuttle bus in Las Vegas was involved, on its very first day of service, in a crash with a lorry whose (all too human) driver didn't see the bus. When the shuttle perceived the oncoming danger, it correctly stopped in its tracks. The lorry didn't.


A human driver could have moved the shuttle out of harm's way, or at least done something to attract the lorry driver's attention. When full AI comes to transportation, everything should (technically) be fine. But the transition phase between 2017 and full autonomy - in which humans who are actually less rational than their cars will be doing battle with AI-brained vehicles that are also taking 'human' actions - could produce the most dangerous motoring conditions in history.

Some transport experts who have taken the time to look at the implications of AI in cars are saying that it poses all manner of issues, not just legal and regulatory, but moral. It's easy enough to see the upside of genuinely sorted motoring autonomy. Apart from the obvious decoupling of all motoring stress, the arrival of AI on our roads could also see an end to the use of vehicles as weapons. Driverless vehicles cannot try to kill someone. They have to stop, just like that Vegas shuttle bus did. That's based on one of those three Asimov rules that have gradually morphed from science fiction into science fact. But what's to prevent pedestrians from stopping cars either for a laugh or with more sinister motives in mind, like robbery, or worse? And what if hackers compromise them?


Where does this leave the insurance industry? Will there be anything to insure? In a fully AI-ed up world, not only accidents but also car thefts should become things of the past. Only the doziest thieves are going to try and pinch something that has the ability to lock them in, notify the cops of an impending felon delivery and then drive them smoothly off to the police station, complete with a neatly-packaged forensic record of the crime. Maybe insurance companies will simply reposition their automotive products into the provision of death and injury benefits for dependents.

Many believe that politicians have been using artificial intelligence for years. Certainly, they don't appear to have been using the genuine stuff recently. Governments aren't looking at the AI apocalypse very hard, not because they don't want to look at it but because it's a bit too hard to look at.

Thing is, engineers and programmers will be only too happy to keep equipping us with cool stuff like AI cars as long as there's money in it, but at some point in the game someone needs to think about the implications. These vehicles won't be driving robotically by following a series of individual instructions. They will actually be learning how to drive, just as we humans do.


And that is a massive difference. The algorithms AI cars will be using in this so-called 'deep learning' process are heavy duty. They involve computations that humans aren't even thinking about, and will enable those cars to make decisions that we won't be able to track or explain. When algorithms reach the levels of sophistication that they're attaining right now, nobody really knows how they work or what they do. That's fact, not conspiracy theory. From 2018, the EU may require companies at least to be able to explain the decisions their automated systems make. Then again, it may not.

Accountability to users should be key, but machine-learning at this sort of level brings a different and potentially quite uncomfortable new set of parameters into play. As it stands we're all kind of relying on the hope that the AI geeks will live up to their calming reassurances that everything is going to be just fine.

But the march of AI is going to impact right across society. Cyclists may well be legislated off the roads on the grounds of them being AI-confusing pests. You can probably come up with a few end-game possibilities of your own.

And when you add in another fact - that AI cars could actually be given rights, just like animals or human beings - it does give you pause. Or it should do at least.


Comments (146)

  • Kawasicki 3 days ago

    AI doesn't exist, so it is difficult for it to be used any time soon in a car.

    When AI finally does exist it will be a gigantic development for the human race.

    There is gigantic exaggeration of current autonomous car capability. Yes it will come, but expect limited performance for the next decade at least.

  • Venturist 3 days ago

    For god’s sake can we stop banging on about the “AI making moral choices” thing. It is not a thing. The car decision tree is very simple and better than humans are at it:
    Is it clear to continue, or not?
    No?
    Is there time to stop?
    No?
    Is there space to SAFELY avoid the obstacle?
    No?
    Then the impact is inevitable so slam on the anchors in a straight line, the most efficient way to scrub as much speed as possible, and take the hit front-on, the direction that cars are safest taking hits.
    The end. At no point does the car need to decide the relative worth of a kleptomaniac elderly nun vs a reformed tax fraud with a potato carving hobby.

    Humans are crap at this as our self preservation instincts often lead us to swerves and other last ditch snap reaction manoeuvres which end up doing more harm than good.
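
    That priority ladder is simple enough to write down. Here is a minimal Python sketch of it; the function name, inputs and the made-up distances in the usage comments are illustrative assumptions, not anyone's real control code:

    ```python
    def plan(clear_ahead: bool, stopping_distance: float,
             gap_to_obstacle: float, safe_lane_free: bool) -> str:
        """Priority ladder for a vehicle approaching an obstacle.

        Inputs are hypothetical; a real stack would derive them from
        sensor fusion, not receive them as booleans and floats.
        """
        if clear_ahead:
            return "continue"              # nothing in the way
        if stopping_distance <= gap_to_obstacle:
            return "brake_to_stop"         # there is time to stop
        if safe_lane_free:
            return "avoid"                 # room to swerve safely
        # Impact unavoidable: braking hard in a straight line sheds
        # the most speed and takes the hit on the car's safest face.
        return "full_brake_straight"
    ```

    At no point does the ladder weigh one life against another; it only asks, in order, what the physics allows.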

  • Yipper 3 days ago

    Everyone already entrusts their safety to an imperfect machine, like a human taxi driver. Autonomous vehicles are / will be much safer than human-driven ones. Net-net, you're safer in Knight Rider.

  • Debaser 3 days ago

    I recall Mercedes stating their autonomous cars will prioritise the lives of the occupants.

  • Debaser 3 days ago

    Can we not get the programmers working on AI and deep learning to come up with automatic wipers that actually work?
