Probably not. And yet this is exactly the kind of grim dilemma that travellers will be facing sooner rather than later as artificially intelligent (AI) cars come on stream and start mixing it with the dumb stuff we're currently driving.
The choice that an autonomous pod will make in such a scenario may well come down to a simple mathematical calculation. It could be goodbye Vienna for you - and there'll be no comeback either. Not only will passengers in fully autonomous vehicles have to face up to the possibility of murder by car, they will probably have had to sign a disclaimer absolving the manufacturer or the insurer of all blame before being allowed into the vehicle in the first place.
This is pure science fiction, surely? Well, no, it isn't. It might have been in the 1950s, when visionary sci-fi author Isaac Asimov first set out the Three Laws of Robotics in his 'I, Robot' collection, and artificial intelligence was being put forward as something we should perhaps be studying. But sixty years on, we're racing headlong into an artificially intelligent world, and the implications are properly gobsmacking.
The thing is, AI development is happening at lightning speed. The faster it goes, the faster it goes. In the same week that Waymo (the self-driving car division of Google) announced that its fully self-driving vehicles were rolling in Chandler, Arizona without human backup drivers, a self-driving shuttle bus in Las Vegas was involved in a crash with a lorry on its very first day of service - the lorry's (all too human) driver didn't see the bus. When the shuttle perceived the oncoming danger, it correctly stopped in its tracks. The lorry didn't.
Some transport experts who have taken the time to look at the implications of AI in cars are saying that it poses all manner of issues, not just legal and regulatory, but moral. It's easy enough to see the upside of genuinely sorted motoring autonomy. Apart from the obvious decoupling of all motoring stress, the arrival of AI on our roads could also see an end to the use of vehicles as weapons. Driverless vehicles cannot try to kill someone. They have to stop, just like that Vegas shuttle bus did. That's based on one of those three Asimov rules that have gradually morphed from science fiction into science fact. But what's to prevent pedestrians from stopping cars either for a laugh or with more sinister motives in mind, like robbery, or worse? And what if hackers compromise them?
Many believe that politicians have been using artificial intelligence for years. Certainly, they don't appear to have been using the genuine stuff recently. Governments aren't looking at the AI apocalypse very hard, not because they don't want to look at it but because it's a bit too hard to look at.
The trouble is, engineers and programmers will be only too happy to keep equipping us with cool stuff like AI cars as long as there's money in it, but at some point in the game someone needs to think through the implications. These vehicles won't be driving robotically by following a series of individual instructions. They will actually be learning how to drive, just as we humans do.
Accountability to users should be key, but machine learning at this sort of level brings a different and potentially quite uncomfortable new set of parameters into play. As it stands, we're all rather relying on the hope that the AI geeks will live up to their calming reassurances that everything is going to be just fine.
But the march of AI is going to impact right across society. Cyclists may well be legislated off the roads on the grounds of them being AI-confusing pests. You can probably come up with a few end-game possibilities of your own.
And when you add in another possibility - that AI cars could one day be given rights, just as animals or human beings have them - it does give you pause. Or it should, at least.