Fatal Tesla crash, software based issue?

saaby93

32,038 posts

179 months

Tuesday 5th July 2016
98elise said:
Of course real-time decision making can be programmed. I've worked on fully autonomous weapons platforms that will threat-assess and engage many, many targets at once. That's 1960s technology, which went from requirements to delivery in a handful of years.

I'm currently a business analyst, so I write software requirements plus functional and technical specs. Drawing the process model for driving a car and reacting to risk would be reasonably simple. The hard bit is getting it to recognise what other vehicles/people are doing.

It needs to see what we see, and understand it as we do. The decision making from that point is far better handled by a machine.
A typical stupid collision is when someone swerves to avoid a dog and runs over someone on the pavement.
How would the computer decide to take out the dog instead?


FurtiveFreddy

8,577 posts

238 months

Tuesday 5th July 2016
The Spruce goose said:
https://www.youtube.com/watch?v=gQNMvHbL3jU

He says he was a witness and spoke to the woman, who said it passed her at more than the 85 she was doing.
Yes, but clearly that happened BEFORE the Tesla went under the trailer. Not after, as you stated.

No mention of a "roofless car" passing her and if you think about it, it would be pretty difficult for another car to be travelling along the same stretch of road when a trailer was across both lanes at the time.


Edited by FurtiveFreddy on Tuesday 5th July 11:47

Pete317

1,430 posts

223 months

Tuesday 5th July 2016
saaby93 said:
98elise said:
Of course real-time decision making can be programmed. I've worked on fully autonomous weapons platforms that will threat-assess and engage many, many targets at once. That's 1960s technology, which went from requirements to delivery in a handful of years.

I'm currently a business analyst, so I write software requirements plus functional and technical specs. Drawing the process model for driving a car and reacting to risk would be reasonably simple. The hard bit is getting it to recognise what other vehicles/people are doing.

It needs to see what we see, and understand it as we do. The decision making from that point is far better handled by a machine.
A typical stupid collision is when someone swerves to avoid a dog and runs over someone on the pavement.
How would the computer decide to take out the dog instead?
Seriously, how many times has something like that actually happened?


Edited by Pete317 on Tuesday 5th July 11:48

saaby93

32,038 posts

179 months

Tuesday 5th July 2016
Pete317 said:
Seriously, how many times has something like that actually happened?
Well, if I know about one, that equals the number of occurrences of the subject of this thread, so likely more? wink

All it shows is that either way you're not going to get these collisions down to zero.

I haven't read all the pages; why did the car continue topless for so long?
There was a similar accident some years ago: a tractor and long trailer pulling off a field early in the morning, when a Jag decided it didn't need a roof, and the driver a head frown




Edited by saaby93 on Tuesday 5th July 11:56

AW111

9,674 posts

134 months

Tuesday 5th July 2016
Pete317 said:
saaby93 said:
A typical stupid collision is when someone swerves to avoid a dog and runs over someone on the pavement.
How would the computer decide to take out the dog instead?
Seriously, how many times has something like that actually happened?


Edited by Pete317 on Tuesday 5th July 11:48
I've saved the lives of three dogs in the last five years.

FurtiveFreddy

8,577 posts

238 months

Tuesday 5th July 2016
saaby93 said:
I haven't read all the pages; why did the car continue topless for so long?
Having the roof sheared off isn't going to slow it down significantly, even if it was doing the 65mph speed limit.

Here's a diagram from the Florida Highway Patrol which might help explain what happened to those who have only looked at news reports.



If the roof was sheared off, the camera used for AP would have gone with it anyway, so it couldn't have been working after the collision.

Devil2575

13,400 posts

189 months

Tuesday 5th July 2016
skyrover said:
Devil2575 said:
I think you're missing the point somewhat. The car doesn't need genuine AI, in the same way that a computer can beat a chess master without possessing genuine AI.
Chemical plants these days are run by computers. They don't have AI, but what they do have is the ability to control thousands of different parameters simultaneously while running the plant as hard as possible at the limit of its envelope. Human beings couldn't hope to operate it half as well, and we know this because they used to.
Factories are run by computers which are overseen by humans, keeping a close eye on sorting out the problems which inevitably turn up.
True, but what kind of problems do the people have to deal with, generally? On the plant I work on, the problems are typically to do with a failure or a blockage, and human intervention is required primarily to maintain continued operation and stop the safety systems kicking in. The plant safety systems in most cases are fully automated and kept separate from the control system. The computer system running the plant is 1990s vintage, and the brand new one currently being installed is even more capable. However, the biggest limitation we have is that the plant was built in 1978, so the hardware isn't always up to it.

The key point, though, is that human operators overseeing a plant run by computers is far more efficient and far safer than one just run by humans.
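That split between the everyday control system and an independent safety layer can be sketched in a few lines. Everything below (names, numbers, the crude plant response) is invented for illustration, not taken from any real plant:

```python
# Toy model of a control system overseen by a separate safety trip.
# The controller chases a setpoint; the trip watches one hard limit and
# overrides the controller no matter what it is doing.

def control_step(temp_c, setpoint_c, heater_pct):
    """Nudge the heater toward the setpoint (clamped to 0-100%)."""
    error = setpoint_c - temp_c
    return max(0.0, min(100.0, heater_pct + 0.5 * error))

def safety_trip(temp_c, trip_limit_c=250.0):
    """Independent check, deliberately kept outside the control logic."""
    return temp_c >= trip_limit_c

temp, heater = 200.0, 40.0
for _ in range(10):
    if safety_trip(temp):
        heater = 0.0             # trip fires: cut the heater and stop
        break
    heater = control_step(temp, setpoint_c=230.0, heater_pct=heater)
    temp += 0.1 * heater - 2.0   # crude plant response; it overshoots

print(temp, heater)
```

The structure is the point of the sketch: the trip never consults the controller, so a misbehaving control loop (here, one that overshoots its setpoint) still gets caught.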


Edited by Devil2575 on Tuesday 5th July 14:42

405dogvan

5,328 posts

266 months

Tuesday 5th July 2016
Pete317 said:
saaby93 said:
98elise said:
Of course real-time decision making can be programmed. I've worked on fully autonomous weapons platforms that will threat-assess and engage many, many targets at once. That's 1960s technology, which went from requirements to delivery in a handful of years.

I'm currently a business analyst, so I write software requirements plus functional and technical specs. Drawing the process model for driving a car and reacting to risk would be reasonably simple. The hard bit is getting it to recognise what other vehicles/people are doing.

It needs to see what we see, and understand it as we do. The decision making from that point is far better handled by a machine.
A typical stupid collision is when someone swerves to avoid a dog and runs over someone on the pavement.
How would the computer decide to take out the dog instead?
Seriously, how many times has something like that actually happened?


Edited by Pete317 on Tuesday 5th July 11:48
How many times has a human reacted to something and caused a much greater accident as a result??

Humans are MASSIVELY flawed drivers - computers will never be able to solve every situation, but it's conceivable they could do a better job than humans.

The problem with self-driving cars isn't making them work - that's easy - the hard parts are

1 - coming up with standards to control them and how they interact/talk to each other (vested interests will tie this st up for decades)
2 - convincing people they're better/safer

That person you know who drives distracted, constantly fiddling with stuff or gripping the wheel like a vice or staring at their kids instead of the road or who never bothers indicating - they dislike computers driving 'because they might make a mistake' whilst being unaware of themselves and 10,000 other people like them...

Devil2575

13,400 posts

189 months

Tuesday 5th July 2016
Jader1973 said:
A chemical plant monitors set parameters and is programmed to manage those, e.g. pressure not to exceed x, temp not to exceed y.
It doesn't just do that. In addition, what it does is work out the most economical way to produce the quantity of products that the business wants from the available feeds. FWIW the plant where I work monitors and controls well over 7,000 different parameters, which are ranked in order of priority.
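A hypothetical sketch of what "ranked in order of priority" might mean in practice: when several readings drift out of range at once, the system attends to the highest-ranked one first. The tags, limits and values are all made up:

```python
# (priority, tag, value, low, high) -- lower priority number = more important
readings = [
    (1, "reactor_pressure", 9.8, 0.0, 10.0),    # in range
    (2, "feed_temp",       182.0, 0.0, 180.0),  # out of range
    (3, "level_tank_b",     95.0, 10.0, 90.0),  # out of range
]

# Collect out-of-range parameters, then act on the highest-priority one.
violations = [(p, tag) for p, tag, v, lo, hi in readings if not lo <= v <= hi]
first = min(violations)[1] if violations else None
print(first)  # the feed temp is handled before the lower-priority tank level
```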


Jader1973 said:
Operating a car will require an element of real time decision making that can't be easily programmed.

For example how to assess the risk associated with and react to:
A dog on its own on the pavement.
A pair of legs moving across the front of a parked car.
Wheelie bins at the side of the road on a windy day.
Car had just parked and the driver is likely to get out.

If it simply reacts with "I'm possibly going to hit that" then it will keep stopping.

A good example is current ACC: it only starts accelerating once a car is fully clear of the sensor. Drivers see the slower car indicate and start to move across, so they start accelerating before the car is fully clear of the lane. The driver makes a judgement based on experience that the computer can't (at the moment).
But it will only be a matter of time before this is possible. Or, alternatively, cars will be built with the capability to operate autonomously, but it will only be activated once all cars are autonomous.
I also don't think we should overestimate the level of thought that goes into human reaction to a sudden event. How many people simply freeze and hit the brakes?
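The ACC behaviour described in the quote can be reduced to a toy rule, with a more "human" anticipating variant alongside it. The lane fractions and the 0.5 threshold are invented for illustration:

```python
def acc_conservative(lead_in_lane_fraction):
    """ACC as described: accelerate only once the lane is fully clear."""
    return "accelerate" if lead_in_lane_fraction == 0.0 else "hold"

def acc_anticipating(lead_in_lane_fraction, indicating):
    """Human-style judgement: treat an indicating, half-departed car as gone."""
    if indicating and lead_in_lane_fraction < 0.5:
        return "accelerate"
    return acc_conservative(lead_in_lane_fraction)

# Slower car is indicating and still 40% in our lane:
print(acc_conservative(0.4))        # hold
print(acc_anticipating(0.4, True))  # accelerate
```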

Devil2575

13,400 posts

189 months

Tuesday 5th July 2016
405dogvan said:
How many times has a human reacted to something and caused a much greater accident as a result??

Humans are MASSIVELY flawed drivers - computers will never be able to solve every situation, but it's conceivable they could do a better job than humans.

The problem with self-driving cars isn't making them work - that's easy - the hard parts are

1 - coming up with standards to control them and how they interact/talk to each other (vested interests will tie this st up for decades)
2 - convincing people they're better/safer

That person you know who drives distracted, constantly fiddling with stuff or gripping the wheel like a vice or staring at their kids instead of the road or who never bothers indicating - they dislike computers driving 'because they might make a mistake' whilst being unaware of themselves and 10,000 other people like them...
This.



NJH

3,021 posts

210 months

Tuesday 5th July 2016
Agreed. Expert systems, as they are called, are pretty common; I think a lot of companies developed them without even knowing that they were developing an expert system. For sure, a computer which has a body of decision-making knowledge built into it is going to do much better than a panicked human, as long as the computer only has to deal with situations which fit neatly within its decision matrix.

This then leaves the big problem of how the system fails safe when it has to deal with a situation that doesn't compute. You guys have already given lots of situations where the computer would want to either brake or crawl along the road really slowly to ensure safety margins are maintained. We human drivers just suck it up and keep the pedal to the metal, whilst fully realising (hopefully) that sometimes we may be unlucky and end up in a serious accident. Would we be happy for an autonomous system to apply statistics and carry on regardless, since most of the time we will get away with this scenario? Would society ever be happy with machines acting this way?
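The "decision matrix plus fail-safe" idea can be shown in miniature. The situations and responses below are invented, but the structure is the point: known cases get a looked-up response, and anything else falls through to the safe default:

```python
# Tiny rule table; "any" acts as a wildcard context.
RULES = {
    ("dog_on_pavement", "clear_road"):  "continue",
    ("legs_behind_parked_car", "any"):  "slow",
    ("object_in_lane", "clear_road"):   "brake",
}

def decide(situation, context):
    # Try the exact context first, then the wildcard.
    for key in ((situation, context), (situation, "any")):
        if key in RULES:
            return RULES[key]
    return "stop_safely"   # fail-safe for anything that "doesn't compute"

print(decide("legs_behind_parked_car", "clear_road"))  # slow
print(decide("kangaroo_in_lane", "clear_road"))        # stop_safely
```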

Jader1973

4,024 posts

201 months

Thursday 7th July 2016
I read today that the NHTSA are investigating a non-fatal accident involving a Model X rollover to establish if it was on "Autopilot" or not.

Possibly a non-event, but an example of the rod Tesla have made for their own back. For a while yet, accidents involving one of their cars are going to get attention from the authorities and consequently be written up in the press. Over time that could do real damage to the brand.

FurtiveFreddy

8,577 posts

238 months

Thursday 7th July 2016
Jader1973 said:
I read today that the NHTSA are investigating a non-fatal accident involving a Model X rollover to establish if it was on "Autopilot" or not.

Possibly a non-event, but an example of the rod Tesla have made for their own back. For a while yet, accidents involving one of their cars are going to get attention from the authorities and consequently be written up in the press. Over time that could do real damage to the brand.
It could, but no more than other manufacturers who have had issues over the years with litigious Yanks.

Remember sticking gas pedals on Toyotas? All those SUV roll-overs? Exploding Pintos?

In this Model X case, the owner reportedly told police he had AP enabled, but he hasn't returned calls from Tesla, who haven't been able to retrieve data from the car to analyse the cause of the accident.

At best, that's just inconclusive.


NDNDNDND

2,024 posts

184 months

Thursday 7th July 2016
405dogvan said:
How many times has a human reacted to something and caused a much greater accident as a result??

Humans are MASSIVELY flawed drivers - computers will never be able to solve every situation, but it's conceivable they could do a better job than humans.

The problem with self-driving cars isn't making them work - that's easy - the hard parts are

1 - coming up with standards to control them and how they interact/talk to each other (vested interests will tie this st up for decades)
2 - convincing people they're better/safer
I think you're way off course there - and giving people much less credit than they're due. Humans are still much better drivers than computers - as has been repeatedly said, a computer can only operate within the constraints of its programming, and the roads are hugely chaotic and often unpredictable.

Getting the cars to interact and communicate with each other will be relatively easy.
People are evidently quite easy to convince that they are safe - as evidenced by this Tesla crash and by many Tesla purchasers, who are quite happy to buy a car that they think 'drives itself' even when it actually doesn't.

The difficult part will absolutely be dealing with the unpredictable and chaotic nature of roads, and dealing with the moral and ethical decisions that result. I don't mean just the extreme 'kill a dog or a human' question: how does an automated car interact with a human-driven car when merging in traffic? When does an automated car decide to overtake a slower car? Would it ever overtake a slower car?

405dogvan said:
That person you know who drives distracted, constantly fiddling with stuff or gripping the wheel like a vice or staring at their kids instead of the road or who never bothers indicating - they dislike computers driving 'because they might make a mistake' whilst being unaware of themselves and 10,000 other people like them...
This doesn't make sense to me at all. A person who is incompetent and constantly distracted doesn't want an automated car? Why wouldn't they? In my experience, those most cynical about the automated car are those who like and care about driving the most.

Personally, I'd quite like to see all the numpties consigned to automated transport, so I don't have to deal with them or be threatened by their incompetence when driving myself! However, I'll rue the day when 'manually' driving a car is forbidden.

However, could it be that fully automated cars will only ever happen when the roads are converted into a much more predictable environment, perhaps even with pedestrians and cars absolutely segregated to minimise the possibility of unpredictable things happening...?


NJH

3,021 posts

210 months

Thursday 7th July 2016
T-junctions are a nightmare for us humans. All of us must have had many instances when we were not sure if a vehicle approaching a T-junction with us was actually going to stop or was about to suddenly pull out on us (they are also the site of many fatal accidents). Those sorts of scenarios look like an utter nightmare for those trying to develop autonomous cars.

Jader1973

4,024 posts

201 months

Friday 8th July 2016
I'm not sure getting cars to talk to each other and to the road infrastructure (e.g. traffic lights, roadworks) will be that easy, because it requires a globally common system and a network with massive capability and no possibility of outage.

If it happens I think it will be decades away.

There are already examples of automated cars not being able to merge because the computer views the move as unsafe, whereas a human just sticks the indicator on and goes for it, because they assume (hope) the traffic they are merging into will create a big enough gap.

WestyCarl

3,270 posts

126 months

Friday 8th July 2016
NDNDNDND said:
This doesn't make sense to me at all. A person who is incompetent and constantly distracted doesn't want an automated car? Why wouldn't they? In my experience, those most cynical about the automated car are those who like and care about driving the most.
I like driving and work in the Auto Industry, however I would pay for an Auto Pilot switch for those times when the journey is boring.

skyrover

12,680 posts

205 months

Friday 8th July 2016
Jader1973 said:
I'm not sure getting cars to talk to each other and the road infrastructure (e.g. traffic lights, roadworks) will be that easy because it requires a globally common system and a network with massive capability and no possibility of outage.

If it happens I think it will be decades away.

There are already examples of automated cars not being able to merge because the computer views the move as unsafe, whereas a human just sticks the indicator on and goes for it, because they assume (hope) the traffic they are merging into will create a big enough gap.
How many rural areas in the UK have broadband?

And the UK is a comparatively small island; some countries have vast rural areas to cover.

We are a very very long way off IMO.

hairyben

8,516 posts

184 months

Friday 8th July 2016
Jader1973 said:
I'm not sure getting cars to talk to each other and the road infrastructure (e.g. traffic lights, roadworks) will be that easy because it requires a globally common system and a network with massive capability and no possibility of outage.

If it happens I think it will be decades away.

There are already examples of automated cars not being able to merge because the computer views the move as unsafe, whereas a human just sticks the indicator on and goes for it, because they assume (hope) the traffic they are merging into will create a big enough gap.
In a world of autonomous cars, won't the car needing to merge just communicate this and be allowed in by the closest autonomous car?
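Something like this, perhaps, though the message format and numbers here are completely invented: the merging car broadcasts its position, and the closest car behind the merge point agrees to open a gap:

```python
def request_merge(merging_car, lane_cars):
    """Pick the closest car behind the merge point and ask it to yield."""
    behind = [c for c in lane_cars if c["pos"] < merging_car["pos"]]
    if not behind:
        return None            # nobody to negotiate with; merge freely
    yielder = max(behind, key=lambda c: c["pos"])   # closest car behind
    yielder["action"] = "open_gap"
    return yielder["id"]

lane = [{"id": "A", "pos": 95.0}, {"id": "B", "pos": 80.0}]
print(request_merge({"id": "M", "pos": 100.0}, lane))  # A, the closest, yields
```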

otolith

56,279 posts

205 months