Driverless Cars - What do we think?
Discussion
otolith said:
People come up with convoluted scenarios to force the car to face some kind of moral dilemma - but we're talking about highly deterministic systems, not AI, and we're usually putting them into hypotheticals that they would be designed not to get themselves into in the first place.
In the case of approaching hazard, brakes have failed, no clear space to move into, it will hit the hazard while executing whatever protocol it is programmed to do in case of brake failure - probably slowing through the gears. If it can avoid it without hitting anything else, it will. It is unlikely to be programmed to leave the road under any circumstances. It is unlikely to be programmed to calculate probabilities. Probably best to maintain your brakes properly.
I did say it was a bit silly, I wasn't expecting it to be a serious obstacle to driver-less cars coming into use.
I'm all for them to be honest. Would certainly be nice to be able to get to/from a pub without having to rely on a taxi. And much as I enjoy driving, when commuting, I'd rather just sit back and listen to the radio or something.
Edited by Conscript on Thursday 16th April 16:56
As we don't yet understand the brain, and we are many years away from mapping it / modelling it / replicating it - the idea of being able to replace it with a computer is a little absurd... it is only recently that chess computers have beaten the top humans - and that is simply about power to crunch pre-determinable sequential steps - i.e. with enough power every option can be mapped, as available responses are known - in the real world you have no idea of the alternative responses you might face, so you can't do this - you have to use AI (which is nowhere near powerful enough yet) or have fail-safe rules which can be exploited (as per the example above of kids jumping out / bullying the driverless car)...
I look forward to being in a powerful petrol car with everyone else in these bubbles... just drive straight at them - they will move out of the way or stop - want to overtake, there will always be a space as the car you are overtaking will stop - overtake 6 of 10, and cut back in, number 6 will stop and probably cause a pileup due to reaction times with the bubbles behind - you will be fine - they will be liable - sounds a lot of fun!
There are so many ridiculously detailed scenarios not yet considered, and certainly not yet answered which are already a potential issue in the limited tech we are now seeing - cars that read speed limits - who holds liability if it gets it wrong, what if a speed limit on the back of a lorry is mistaken for a repeater sign? How will it deal with anticipation - I see a child bouncing a ball, walking away from me on the pavement and consider that the ball may go into the road - the car sees the child walking on the pavement and considers that it is fine...
we are a long way off doing this - I can see it working on preset rails, such as trams / trolleybuses / trains, so, drive your car, and then join this for the commute... but otherwise, nope, not happening yet
Conscript said:
I did say it was a bit silly, I wasn't expecting it to be a serious obstacle to driver-less cars coming into use.
I'm all for them to be honest. Would certainly be nice to be able to get to/from a pub without having to rely on a taxi. And much as I enjoy driving, when commuting, I'd rather just sit back and listen to the radio or something.
Understood - but I've seen similar exercises, there's a cliff on one side and a pensioner on the pavement and a van full of rescued kittens broken down...
MitchT said:
Also, by taking away the responsibility for driving from the human you're taking out of their life an activity which keeps them thinking on their feet, so to speak. Driverless cars may make the roads safer, statistically, but as the need to use one's brain is slowly removed from certain activities, other activities become more hazardous because the people undertaking them are less practiced at using their brains.
This sounds like something Karl Pilkington would come out with. Even if your point about needing to keep the brain active is valid, you're apparently assuming that ex-drivers will stare blankly out of the window while they're being ferried about; who's to say that they won't use the time to do a sudoku, or read a book, or play a computer game, all of which keep the brain ticking over nicely. I know a fair few people who don't drive and they're not bumbling Mr Magoo types who are a danger to be around, so I don't think we need to worry.
akirk said:
As we don't yet understand the brain, and we are many years away from mapping it / modelling it / replicating it - the idea of being able to replace it with a computer is a little absurd...
It's absurd to think that "automated driving" is in any way the same as "replacing the brain".
akirk said:
it is only recently that chess computers have beaten the top humans
Yes, and we're talking, what? The top 10 out of 7 billion people? How do you think your average driver rates on the intelligence scale?
akirk said:
- in the real world you have no idea of the alternative responses you might face, so you can't do this - you have to use AI (which is nowhere near powerful enough yet)
There are so many ridiculously detailed scenarios not yet considered, and certainly not yet answered which are already a potential issue in the limited tech we are now seeing
You don't program for every eventuality, you program for a class - every detected event only needs to be classified: brake; steer left; steer right; accelerate; continue (obviously with varying degrees of each) - you don't need every single possible occurrence in history, or forever.
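The "classify the event, pick a response class" idea can be sketched as a toy dispatcher - the event fields, conditions and action names here are all invented for illustration, not taken from any real system:

```python
# Toy sketch of event classification into a handful of response classes.
# A real system would fuse continuous sensor data and plan trajectories;
# this only shows the "classify, don't enumerate" principle.

ACTIONS = ("brake", "steer_left", "steer_right", "accelerate", "continue")

def classify(event):
    """Map a detected event to one of a few response classes."""
    if event["obstacle_ahead"]:
        if event["clear_left"]:
            return "steer_left"
        if event["clear_right"]:
            return "steer_right"
        return "brake"
    if event["gap_closing"]:  # vehicle behind closing fast
        return "accelerate"
    return "continue"

# A horse, a tumbleweed or an exploding cow all collapse to the same class:
print(classify({"obstacle_ahead": True, "clear_left": False,
                "clear_right": True, "gap_closing": False}))  # steer_right
```

The point of the sketch is that the unbounded space of "scenarios" funnels into a small, fixed set of responses.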
akirk said:
How will it deal with anticipation - I see a child bouncing a ball, walking away from me on the pavement and consider that the ball may go into the road - the car sees the child walking on the pavement and considers that it is fine...
It will deal with it by using programmers smarter than you; why do you think a programmer would think, "oh, that's safe" where you have just identified it is a risk?
All this "kids will jump out in front of them all the time" is a load of crap.
Regardless of whether it's a robot or a human in control, the car is still a car. It can only stop in a certain finite distance. If kids are playing chicken with them because they know they'll automatically stop, well... they'd better be pretty good at judging distances, too.
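The finite-stopping-distance point is just physics - a rough sketch, where the friction coefficient is a typical dry-tarmac assumption rather than a measured value, and reaction time is ignored (a computer largely removes it anyway):

```python
# Rough braking-distance estimate: d = v^2 / (2 * mu * g).
# mu = 0.7 is a ballpark dry-tarmac friction coefficient (an assumption);
# ignores reaction time, gradient, load and tyre condition.

G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph, mu=0.7):
    v = speed_mph * 0.44704  # mph -> m/s
    return v * v / (2 * mu * G)

print(round(braking_distance_m(30), 1))  # about 13 m at 30 mph
print(round(braking_distance_m(60), 1))  # about 52 m at 60 mph
```

Note the quadratic scaling: doubling the speed quadruples the braking distance, robot driver or not.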
OwenK said:
All this "kids will jump out in front of them all the time" is a load of crap.
Regardless of whether it's a robot or a human in control, the car is still a car. It can only stop in a certain finite distance. If kids are playing chicken with them because they know they'll automatically stop, well... they'd better be pretty good at judging distances, too.
There's an element of assisting evolution, there.
xRIEx said:
akirk said:
How will it deal with anticipation - I see a child bouncing a ball, walking away from me on the pavement and consider that the ball may go into the road - the car sees the child walking on the pavement and considers that it is fine...
It will deal with it by using programmers smarter than you; why do you think a programmer would think, "oh, that's safe" where you have just identified it is a risk?
If there is one group of people who seem to make more assumptions than anyone else it is programmers!
The points I am making are simple - the computer which can replace a human doesn't yet exist and is a very long way away...
Therefore any automated logic will have to make assumptions
and simply put - that will undoubtedly go wrong
Even the dimmest driver in the UK has passed a certain level of test - yes, there are drivers who will make silly mistakes which an automated car won't make, but those same drivers are constantly making decisions at a much higher level than any automated car could possibly manage. Have the makers seriously analysed every possible scenario involving a vehicle, or in its vicinity, and programmed an answer? How will that deal with the new scenario which will occur somewhere tomorrow, and the new scenario the day after - or the 10,000 new scenarios the day after that, across the country? Humans are imaginative creatures capable of doing some stunningly absurd or unexpected things; how exactly is your clever programmer going to cope with every event?
Sorry, the concept that programmers have that ability is quite honestly ridiculous - it cannot be done with an algorithm, it could only be done by pattern matching, and while that is now a good technology, it can't possibly cover every possible pattern of events - find one that doesn't fit and you have a car which doesn't know what to do. It must therefore have a default position - stop? That could be exactly the wrong thing to do in that event...
Conscript said:
Had a similar discussion about driverless cars before on another forum (pretty sure it wasn't here).
Something that was brought up, is how do driverless cars calculate risk? They can't think, they can only make decisions based on logical inputs.
Therefore, imagine a situation where you are on a multi lane road and an accident is imminent due to an obstruction in your lane. The car's brakes have failed. The choice comes down to:
A) Swerve into the other lane. Decrease risk to the driver, but increased risk of injury/death to other road users.
B) Swerve into a busy pavement. As above, but increased risk to pedestrians.
C) Maintain course.
I think most people would weigh up the risk and choose option A and hope for the best.
But how would a computer decide between these 3 situations? One assumes it would be programmed to choose the option that is probable to cause the fewest human casualties.
In which case, would it calculate that option C is the logical choice and plough into the obstruction regardless? As the only life at certain risk is yours, would your car happily sacrifice you to ensure the lowest probability of human casualties? Rather than taking a risk that might not guarantee other people are hurt, but will certainly increase your own chance of survival.
I know it's a bit of a silly hypothetical. But quite an interesting question... how do you sell a car controlled by an entity which might well kill you for the greater good?
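The "fewest casualties" assumption in the hypothetical above amounts to minimising expected harm - a toy sketch, with entirely invented probabilities and casualty counts (no real system publishes numbers like these):

```python
# Toy expected-casualty comparison for the three options in the hypothetical.
# Each option maps to (P(outcome), casualties) pairs - all numbers invented.

options = {
    "A_swerve_other_lane": [(0.4, 2), (0.6, 0)],
    "B_swerve_pavement":   [(0.6, 3), (0.4, 0)],
    "C_maintain_course":   [(0.7, 1), (0.3, 0)],
}

def expected_casualties(outcomes):
    return sum(p * c for p, c in outcomes)

best = min(options, key=lambda k: expected_casualties(options[k]))
print(best)  # C_maintain_course - with these made-up numbers, the car sacrifices you
```

With these made-up figures, C wins on expected casualties even though it is the worst option for the occupant - which is exactly the selling problem the hypothetical poses.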
My answer would be - firstly, total brake failure is now, in reality, effectively an impossibility and it is simply not worth allowing for. The car would know that it has a brake problem and thus would park (indeed may not even commence the journey), so it simply won't get into the situation in the first place. Driving around in dangerous wrecks would be something undertaken by humans only.
Next thought - driving on pavements is illegal, so the car would never do that.
The car would know it has a brake problem and may be able to drive itself to the garage to be fixed. Imagine that - driverless cars taking themselves for service and repair - that's something I'd be happy with.
One interesting question though - if autonomous cars are so good / such a fantastic solution - why do we not have driverless aeroplanes?
we have driverless trains (e.g. DLR) because they are on a track and it is no more challenging than setting up your hornby railway...
we don't have driverless aeroplanes (despite auto-pilot / the technology / etc.) because the risk is too high...
other than the fact they are on the ground rather than in the air - the complexity and risk is vastly higher with cars - why does anyone think that it is going to work...?
MoAmin89 said:
Good morning on my first post here.
I am involved in this very topic, trying to decipher mainly potential user perception, taking into account areas such as liability, driver role, usage, acceptance etc.
It would be interesting to see what everyone thinks of this area.
Here's to a hopefully interesting forum
Thank you
Mo
Just thinking out loud...
Don't suppose you have any connection to a Car Hire company...
akirk said:
why does anyone think that it is going to work...?
Because a lot of very clever people and a huge amount of money are getting bloody close to it already, and they are going to continue working and spending until they get there.
As to why there is a drive to automate cars and not aircraft: last year commercial air travel killed 761 people. The worst year on record was 1972, when 2,429 people died. Currently, road transport kills about one and a quarter million people every year. I'd say that the squeaky wheel is getting the grease.
akirk said:
xRIEx said:
akirk said:
How will it deal with anticipation - I see a child bouncing a ball, walking away from me on the pavement and consider that the ball may go into the road - the car sees the child walking on the pavement and considers that it is fine...
It will deal with it by using programmers smarter than you; why do you think a programmer would think, "oh, that's safe" where you have just identified it is a risk?
If there is one group of people who seem to make more assumptions than anyone else it is programmers!
akirk said:
The points I am making are simple - the computer which can replace a human doesn't yet exist and is a very long way away...
What's all this "replacing a human" st about? We're not talking about a computer replacing a human, that's just bks. We're talking about a computer regulating a function. Computers can already regulate braking and accelerating; basically all that's happening is a computer will regulate steering as well.
akirk said:
Have the makers seriously analysed every possible scenario which occurs including a vehicle, or in the vicinity and programmed an answer? How will that deal with the new scenario which will occur somewhere tomorrow, and the new scenario the day after - or the 10,000 new scenarios the day after across the country? Humans are imaginative creatures capable of doing some stunningly absurd or unexpected things, how exactly is your clever programmer going to cope with every event?
I've already said it doesn't need to cover every scenario. The template argument against is "every second on the road is completely different to every other second, a computer won't be able to cope!!1!!11!" Last time it was "what if a horse was about to jump over a fence on to the road, how would the car know it was going to do it?!" Quick, call the fking Daily Mail, they must immediately run a story about the hundreds of motorists killed by horses jumping out of their fields; it's true, I've seen it happen loads of times. Next the argument is going to be: "what if a meteorite falls out of the sky and hits a cow making it explode into loads of bits, and the head comes sailing over the fence into the road and lands on my car, how is a computer going to deal with that?!"
And 10,000 new scenarios? Do me a favour. Most 'scenarios' you're talking about essentially end up being classified as "avoid obstacle", for which it has four basic responses: steer left, steer right, brake, accelerate.
TerraMax has been around for about 10 years and is now used by the US and UK armed forces. 30 years ago GPS was military only, the first civilian implementation was crippleware; now look at its capabilities.
I'm a massive believer in driverless cars, even to the point of a six-figure investment in one of the companies involved, with the full knowledge it could go one of two ways. The benefits FAR outweigh any negatives on all fronts, the car companies are behind it in a very big way, and the cost of not transferring over to the future is too great on all fronts, financial and lives saved.
I think they're coming, they'll save lives, and they'll make the average boring trip more pleasant by causing fewer traffic jams and being more predictable.
They might make the odd mistake and crash in some bizarre situation that a human (at least one paying attention at the time) might have avoided, and the press will naturally crucify them for that, but overall they will save more lives than they take, and unlike humans, driverless cars will improve with every generation.
xRIEx said:
akirk said:
The points I am making are simple - the computer which can replace a human doesn't yet exist and is a very long way away...
What's all this "replacing a human" **** about? We're not talking about a computer replacing a human, that's just ********. We're talking about a computer regulating a function. Computers can already regulate braking and accelerating; basically all that's happening is a computer will regulate steering as well.
So a car which regulates braking / accelerating / steering in the absence of anything else? Of course not - it needs to do it in the context of getting from A to B, sharing the road with anything from random children to car thieves, doddery grannies driving at 10mph to scaffolding collapsing from a local building... to survive on our roads requires a brain - so these systems will absolutely need to replace a human brain...
xRIEx said:
akirk said:
Have the makers seriously analysed every possible scenario which occurs including a vehicle, or in the vicinity and programmed an answer? How will that deal with the new scenario which will occur somewhere tomorrow, and the new scenario the day after - or the 10,000 new scenarios the day after across the country? Humans are imaginative creatures capable of doing some stunningly absurd or unexpected things, how exactly is your clever programmer going to cope with every event?
I've already said it doesn't need to cover every scenario. The template argument against is "every second on the road is completely different to every other second, a computer won't be able to cope!!1!!11!" Last time it was "what if a horse was about to jump over a fence on to the road, how would the car know it was going to do it?!" Quick, call the fking Daily Mail, they must immediately run a story about the hundreds of motorists killed by horses jumping out of their fields; it's true, I've seen it happen loads of times. Next the argument is going to be: "what if a meteorite falls out of the sky and hits a cow making it explode into loads of bits, and the head comes sailing over the fence into the road and lands on my car, how is a computer going to deal with that?!"
And 10,000 new scenarios? Do me a favour. Most 'scenarios' you're talking about essentially end up being classified as "avoid obstacle", for which it has four basic responses: steer left, steer right, brake, accelerate.
TerraMax has been around for about 10 years and is now used by the US and UK armed forces. 30 years ago GPS was military only, the first civilian implementation was crippleware; now look at its capabilities.
wikipedia said:
2005 DARPA Grand Challenge
To navigate the terrain, the 2005 TerraMax vehicle utilized three LIDAR laser-ranging units (one of which featured four planes), three forward-looking cameras, and two GPS navigation systems. During the 2005 DARPA Grand Challenge the vehicle encountered a minor windstorm which blew tumbleweeds across the road. This confused the vehicle as it tried to maneuver its bulk around each individual tumbleweed. Around 5:00 in the afternoon, TerraMax arrived at the second tunnel, causing miscalibration of the sensor suite. After this, the vehicle consistently pulled to the left, but arrived safely at the finish line. The vehicle's average speed was 10.27 mph (16.53 km/h). It was the only finisher not eligible for the US $2 million in prize money, because it exceeded the ten-hour time limit.
So it struggled with tumbleweed, the sensor suite miscalibrated due to a tunnel, and it averaged just over 10 mph - other than the speed, how suitable will that be in London? Admittedly that was 10 years ago, so I am sure that it is far better now, but would you be happy being anywhere near it driving through London rush-hour?
I realise that there is a lot of money going into this, and I suspect that sadly much of it will become a reality because we seem to have a political system which bows before big business - where Google and Apple lead, politicians will fall in... However, doing it will undoubtedly lead to huge swathes of compromise to allow it to work, as it is realised that the systems are really not anywhere near the equivalent of human drivers...
also, is the question really being asked as to whether it is actually needed - other than businesses seeing it as a way of making money?
I understand that there are road deaths - but the money invested in this could easily be better spent - there are lots of ways in which deaths could be reduced - starting with better driver training... (and I assume from the figures above that we are not just talking of the UK)