Fatal Tesla crash, software based issue?



V8LM

5,176 posts

210 months

Monday 4th July 2016
NJH said:
Computers can only do as well as the people who programmed them, ...
Not necessarily, especially given a fixed limit of time.

NJH

3,021 posts

210 months

Monday 4th July 2016
jkh112 said:
Then there is the big field of Systems Safety and Systems Engineering both of which seem to be non-existent in the automotive industry despite the rapid adoption of highly integrated and complex electronics (maybe better outside the UK but here there is no such thing).
I agree with most of your post, but not the section above. There are automotive companies in the UK investing in Systems Safety and Systems Engineering and they are applying techniques from industries such as nuclear power and aviation. This may not be across the whole industry but there are some who take it seriously.
To be fair, I know of one large Midlands-based automotive company which has recently made a very sizeable investment in IT tools for Systems Engineering. Trying to do the right thing is at least 50% of the battle and should be applauded. The attitude, or rather approach, of Tesla (if what is reported in this thread is accurate) is really quite shocking in contrast. I suspect they will be taken to the cleaners in court in the US.

Starfighter

4,940 posts

179 months

Monday 4th July 2016
In this case we have a dead driver. Whilst software may have been in error in that case, possibly compounded by the DVD player being spoken of, it was the driver that was killed. Sometime, possibly soon, someone else will be killed by the system. That will make the legal liability issues even more interesting, especially when questions are asked about how the car is programmed to react in terms of 3 Rule compliance.

This is a tragedy for those directly involved, but I do question the amount of faith the driver put in the automated systems. He appears to have been happy using Autopilot outside its intended highway environment and to have set the speed offset to 5 over irrespective of conditions. There may be some Darwin involved in this.

Do we have an actual example of an iPhone software issue causing death? I don't mean people walking out into traffic while suffering screen fixation, or jumping from the factory roof.

anonymous-user

55 months

Monday 4th July 2016
(Not replying to anyone here).

It doesn't matter that the car didn't recognise the obstacle; to an extent the whole system could have failed and it would still be 100% the driver's fault. This level of automation requires the driver to monitor the road as if they were driving, and warnings are given to that effect.

To me this level of automation is largely pointless and offers no benefit. It causes people to become bored since it asks them to sit and watch. I can see the benefits of queue assist and radar cruise control, but steering that I need to monitor? No thank you.

Legally, Tesla are probably not going to find themselves losing a court case over this. What needs to be looked at is the language Elon Musk uses when talking about this feature at public events. Also, calling it Autopilot is grossly overdoing it. I get the feeling that the general public see Tesla's Autopilot as more capable than it is, i.e. that it does not need to be monitored. Volvo's system does the same thing, and it's clearly presented as an assistant, not an autopilot.

We might see tighter controls in place on systems like this now. I think the Tesla and Volvo systems only require the driver to have their hands on the wheel for them to continue. Obviously you could then look away. Maybe the next step is that the NHTSA will say that eye tracking must be used to confirm the driver is looking ahead.

Edited by RenOHH on Monday 4th July 21:39

The Vambo

6,670 posts

142 months

Monday 4th July 2016
Starfighter said:
Do we have an actual example of an iPhone software issue causing death? I don't mean people walking out into traffic while suffering screen fixation, or jumping from the factory roof.
For clarity, you don't want death due to usage beyond the manufacturer's suggested usage - iPhone/bus or Autopilot/DVD player.

You would like an example of someone stabbed by their iPhone, or a Model S doing a drive-by?

NJH

3,021 posts

210 months

Monday 4th July 2016
V8LM said:
NJH said:
Computers can only do as well as the people who programmed them, ...
Not necessarily, especially given a fixed limit of time.
If you're talking about AI, I wasted two years of my life doing R&D in that area. The ability of an AI program (i.e. not real AI) to do anything is again limited by its human programming, as that sets the bounds of what the AI program can work with. If safety-critical systems which haven't been human-engineered well enough are terrifying, then the idea of getting an AI program to run a safety-critical function is somewhere well beyond terrifying. For starters it couldn't even be analysed a priori, so everything learned over the past 60 years about Hazard Analysis is thrown away and replaced by trust that the computer will learn how to do it. If you meant something else then you will have to enlighten me.

Note that offline learning, or training of parameters to put into defined algorithms, is just another thing which then has to be safety-analysed as part of a design, expressed in some form of requirements specification and handed to a software development team to code into a real product. Nothing new about that, but what is fundamental is that the end system must be able to be analysed by functional safety experts who can state the system's hazards, causes and consequences, otherwise all bets are off IMHO. This is why basic concepts such as determinism often pop up: if we can't say that a system will do z when x and y conditions happen, then we are pretty much into guesswork.
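To put the determinism point in concrete terms, here is a toy sketch (the function, names and thresholds are all invented, not taken from any real vehicle system): a hand-written "when x and y, do z" rule can have every branch walked and its behaviour stated for all inputs, which is exactly what a trained black box does not offer.

```python
# Toy sketch of the determinism argument. All names and thresholds here
# are hypothetical, not from any real system.

def brake_command(obstacle_detected: bool, closing_speed_mps: float) -> str:
    """Deterministic rule: when x (obstacle) and y (closing fast), do z (brake)."""
    if obstacle_detected and closing_speed_mps > 5.0:
        return "FULL_BRAKE"
    return "NO_ACTION"

# A hazard analysis can enumerate every branch above and state the output
# for all inputs. For a learned model, "what does it do when x and y hold?"
# can only be answered by sampling, which is the "all bets are off" case.
assert brake_command(True, 10.0) == "FULL_BRAKE"
assert brake_command(True, 2.0) == "NO_ACTION"
assert brake_command(False, 10.0) == "NO_ACTION"
```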

Devil2575

13,400 posts

189 months

Monday 4th July 2016
NJH said:
Devil2575 said:
Not a Tesla or an Apple fanboy, just a person who recognises that computers can do a lot of things far better than people can.
Computers can only do as well as the people who programmed them, who in turn are only as good as the requirements placed on them by whoever did the systems design and analysis, which in turn is only as good at dealing with hazards as the functional safety analysis done on the system (where system includes the user, training, handbooks, through-life maintenance etc.).
Yes but, and this is the key, when you program it you do so with the benefit of time and a clear, unemotional head. Imagine driving a car where every time you had to make a decision you did so without emotion and with all the time in the world to take the correct and most appropriate action.

Edited by Devil2575 on Monday 4th July 22:32

Devil2575

13,400 posts

189 months

Monday 4th July 2016
NJH said:
If you're talking about AI, I wasted two years of my life doing R&D in that area. The ability of an AI program (i.e. not real AI) to do anything is again limited by its human programming, as that sets the bounds of what the AI program can work with. If safety-critical systems which haven't been human-engineered well enough are terrifying, then the idea of getting an AI program to run a safety-critical function is somewhere well beyond terrifying. For starters it couldn't even be analysed a priori, so everything learned over the past 60 years about Hazard Analysis is thrown away and replaced by trust that the computer will learn how to do it. If you meant something else then you will have to enlighten me.

Note that offline learning, or training of parameters to put into defined algorithms, is just another thing which then has to be safety-analysed as part of a design, expressed in some form of requirements specification and handed to a software development team to code into a real product. Nothing new about that, but what is fundamental is that the end system must be able to be analysed by functional safety experts who can state the system's hazards, causes and consequences, otherwise all bets are off IMHO. This is why basic concepts such as determinism often pop up: if we can't say that a system will do z when x and y conditions happen, then we are pretty much into guesswork.
I think you're missing the point somewhat. The car doesn't need genuine AI, in the same way that a computer can beat a chess master without possessing genuine AI.
Chemical plants these days are run by computers; they don't have AI, but what they do have is the ability to control thousands of different parameters simultaneously while running the plant as hard as possible at the limit of its envelope. Human beings couldn't hope to operate it half as well, and we know this because they used to.
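As a rough illustration of what such a plant computer does (a minimal sketch with invented tags and limits, not real control-system code), the core job is holding many parameters inside fixed bands at once, every scan cycle:

```python
# Minimal sketch of setpoint supervision. Tags and limits are invented.

LIMITS = {
    "reactor_pressure_bar": (0.0, 12.0),
    "reactor_temp_c": (20.0, 180.0),
    "feed_rate_kg_s": (0.0, 3.5),
}

def check_limits(readings):
    """Return a corrective action for every reading outside its band."""
    actions = []
    for tag, value in readings.items():
        low, high = LIMITS[tag]
        if value > high:
            actions.append((tag, "reduce"))
        elif value < low:
            actions.append((tag, "increase"))
    return actions

# One scan over three parameters; a real plant repeats this over thousands,
# far faster and more consistently than a human operator could.
print(check_limits({"reactor_pressure_bar": 12.6,
                    "reactor_temp_c": 150.0,
                    "feed_rate_kg_s": 3.2}))
# [('reactor_pressure_bar', 'reduce')]
```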

GT119

6,834 posts

173 months

Monday 4th July 2016
Starfighter said:
In this case we have a dead driver. Whilst software may have been in error in that case, possibly compounded by the DVD player being spoken of, it was the driver that was killed. Sometime, possibly soon, someone else will be killed by the system. That will make the legal liability issues even more interesting, especially when questions are asked about how the car is programmed to react in terms of 3 Rule compliance.

This is a tragedy for those directly involved, but I do question the amount of faith the driver put in the automated systems. He appears to have been happy using Autopilot outside its intended highway environment and to have set the speed offset to 5 over irrespective of conditions. There may be some Darwin involved in this.

Do we have an actual example of an iPhone software issue causing death? I don't mean people walking out into traffic while suffering screen fixation, or jumping from the factory roof.
At least one person has died when their iPhone caught fire in bed while they were sleeping, I believe. Was this software or hardware? I'm not sure. The fault in this particular accident appears to be the truck driver turning across the path of the car, which had right of way and wasn't speeding. Now if the truck had been fitted with some level of autonomy and it was two computers that caused the crash, I'd understand the outcry. In this case it appears human error was the root cause.

Devil2575

13,400 posts

189 months

Monday 4th July 2016
GT119 said:
At least one person has died when their iPhone caught fire in bed while they were sleeping, I believe. Was this software or hardware? I'm not sure. The fault in this particular accident appears to be the truck driver turning across the path of the car, which had right of way and wasn't speeding. Now if the truck had been fitted with some level of autonomy and it was two computers that caused the crash, I'd understand the outcry. In this case it appears human error was the root cause.
The cause of the outcry is a deep desire from some for this to fail because they don't want it to succeed.

skyrover

12,682 posts

205 months

Monday 4th July 2016
Devil2575 said:
I think you're missing the point somewhat. The car doesn't need genuine AI, in the same way that a computer can beat a chess master without possessing genuine AI.
Chemical plants these days are run by computers; they don't have AI, but what they do have is the ability to control thousands of different parameters simultaneously while running the plant as hard as possible at the limit of its envelope. Human beings couldn't hope to operate it half as well, and we know this because they used to.
Factories are run by computers which are overseen by humans, who keep a close eye on things and sort out the problems which inevitably turn up.

hairyben

8,516 posts

184 months

Monday 4th July 2016
Am I reading the reports right, that the newly roofless car carried on for a bit with a (headless?) driver? That sounds like something out of a dodgy B movie!

RobDickinson

31,343 posts

255 months

Monday 4th July 2016
Yes, it carried on for a bit. Cars do that; it's called momentum. It could even have been driven too.

Was Autopilot still on and still trying to drive? Unlikely, I think, as it went off road pretty quickly.

The driver's foot could have been on the accelerator, or the car could have been freewheeling or skidding. Who knows? Nothing different here to anything else, unless Autopilot didn't disengage at all.

Jader1973

4,046 posts

201 months

Tuesday 5th July 2016
Devil2575 said:
I think you're missing the point somewhat. The car doesn't need genuine AI, in the same way that a computer can beat a chess master without possessing genuine AI.
Chemical plants these days are run by computers; they don't have AI, but what they do have is the ability to control thousands of different parameters simultaneously while running the plant as hard as possible at the limit of its envelope. Human beings couldn't hope to operate it half as well, and we know this because they used to.
A chemical plant monitors set parameters and is programmed to manage those, e.g. pressure not to exceed x, temperature not to exceed y.

Operating a car will require an element of real-time decision making that can't be easily programmed.

For example how to assess the risk associated with and react to:
A dog on its own on the pavement.
A pair of legs moving across the front of a parked car.
Wheelie bins at the side of the road on a windy day.
A car that has just parked, where the driver is likely to get out.

If it simply reacts with "I'm possibly going to hit that" then it will keep stopping.
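A toy sketch of that failure mode (the scene objects and probabilities below are invented for illustration): a planner whose only rule is "I'm possibly going to hit that" stops for everything, while a driver-like judgement grades the risk first.

```python
# Toy version of the "it will keep stopping" problem. Scene objects and
# collision probabilities are invented, not from any real planner.

SCENE = [
    ("dog alone on the pavement", 0.05),
    ("legs moving behind a parked car", 0.30),
    ("wheelie bin on a windy day", 0.10),
    ("just-parked car, driver may get out", 0.40),
]

def naive_planner(scene):
    # Any non-zero possibility of a hit triggers a stop, for everything.
    return "STOP" if any(p > 0.0 for _, p in scene) else "PROCEED"

def judged_planner(scene, threshold=0.25):
    # Grade the worst plausible hazard instead of reacting to all of them.
    worst = max(p for _, p in scene)
    return "SLOW_AND_COVER_BRAKE" if worst >= threshold else "PROCEED"

print(naive_planner(SCENE))   # STOP
print(judged_planner(SCENE))  # SLOW_AND_COVER_BRAKE
```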

A good example is current ACC - it only starts accelerating once the car ahead is fully clear of the sensor. Drivers see the slower car indicate and start to move across, so they start accelerating before the car is fully clear of the lane. The driver makes a judgement based on experience that the computer can't (at the moment).
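A minimal sketch of that ACC gap (the flags are hypothetical and real implementations differ by manufacturer): the system waits for the lane to be fully clear, while the driver acts on the indicator plus the drift across.

```python
# Sketch of the ACC judgement gap. Flags are hypothetical.

def acc_resume(lead_car_in_lane: bool) -> bool:
    """Current-style ACC: accelerate only once the lane is fully clear."""
    return not lead_car_in_lane

def human_resume(lead_car_in_lane: bool, lead_indicating: bool,
                 lead_moving_across: bool) -> bool:
    """A driver anticipates: indicating plus moving across is enough."""
    return (not lead_car_in_lane) or (lead_indicating and lead_moving_across)

# Slower car has indicated and begun moving over, but is not yet clear:
print(acc_resume(True))                # False: ACC keeps waiting
print(human_resume(True, True, True))  # True: the driver goes early
```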

RobDickinson

31,343 posts

255 months

Tuesday 5th July 2016
Yep, driving is complicated.

But even Google isn't trying learning systems so much as programmed responses. They don't want it making decisions on its own that may not be repeatable.

Looking in front and working on that info is easy.

It's the rest that isn't: prediction, looking further ahead and off road, kids playing football, a doddering pensioner, a car at a T-junction, cyclists etc., in so many road conditions (tight old town centre, snow, rain, country roads etc.).

Even Google's cars don't currently work in bad conditions, and they work to precise maps of small areas.

Tesla probably have the most real-world driving/road data of anyone at the moment, though.

anonymous-user

55 months

Tuesday 5th July 2016
RobDickinson said:
Yes, it carried on for a bit. Cars do that; it's called momentum. It could even have been driven too.

Was Autopilot still on and still trying to drive? Unlikely, I think, as it went off road pretty quickly.

The driver's foot could have been on the accelerator, or the car could have been freewheeling or skidding. Who knows? Nothing different here to anything else, unless Autopilot didn't disengage at all.
A witness said it drove for over 300 yards (about 270 metres) after the crash. Another said the roofless car, plus dead body, drove faster than the '85' she was driving. The witness also said it looked like it was still steering before finally hitting a telegraph pole.

'the driver of the tractor-trailer — who was obeying traffic laws at the time of the incident and was not issued any summonses.'


Edited by The Spruce goose on Tuesday 5th July 01:25

98elise

26,761 posts

162 months

Tuesday 5th July 2016
Jader1973 said:
Devil2575 said:
I think you're missing the point somewhat. The car doesn't need genuine AI, in the same way that a computer can beat a chess master without possessing genuine AI.
Chemical plants these days are run by computers; they don't have AI, but what they do have is the ability to control thousands of different parameters simultaneously while running the plant as hard as possible at the limit of its envelope. Human beings couldn't hope to operate it half as well, and we know this because they used to.
A chemical plant monitors set parameters and is programmed to manage those, e.g. pressure not to exceed x, temperature not to exceed y.

Operating a car will require an element of real-time decision making that can't be easily programmed.

For example how to assess the risk associated with and react to:
A dog on its own on the pavement.
A pair of legs moving across the front of a parked car.
Wheelie bins at the side of the road on a windy day.
A car that has just parked, where the driver is likely to get out.

If it simply reacts with "I'm possibly going to hit that" then it will keep stopping.

A good example is current ACC - it only starts accelerating once the car ahead is fully clear of the sensor. Drivers see the slower car indicate and start to move across, so they start accelerating before the car is fully clear of the lane. The driver makes a judgement based on experience that the computer can't (at the moment).
Of course real-time decision making can be programmed. I've worked on fully autonomous weapons platforms that will threat-assess and engage many, many targets at once. That's 1960s technology which went from requirements to delivery in a handful of years.

I'm currently a business analyst, so I write software requirements plus functional and technical specs. Drawing the process model for driving a car and reacting to risk would be reasonably simple. The hard bit is getting it to recognise what other vehicles/people are doing.

It needs to see what we see, and understand it as we do. The decision making from that point is far better handled by a machine.
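A rough sketch of that split (illustrative structure only, not any real stack): the decide stage is simple rules over clean object tracks, while the perceive stage, turning raw sensor data into those tracks, is where the difficulty lives.

```python
# Sketch of "seeing is the hard bit". Illustrative structure only.
from dataclasses import dataclass

@dataclass
class Track:
    kind: str          # "car", "pedestrian", ...
    distance_m: float
    closing_mps: float

def perceive(sensor_frame):
    """The hard part: turn raw sensor data into labelled tracks.
    Decades of research sit behind this one signature."""
    raise NotImplementedError("recognition is the unsolved bit")

def decide(tracks):
    """The easy part: rules over clean tracks are simple to write and test."""
    for t in tracks:
        if t.distance_m / max(t.closing_mps, 0.1) < 2.0:  # time-to-contact < 2 s
            return "BRAKE"
    return "CRUISE"

print(decide([Track("car", 40.0, 30.0)]))   # BRAKE (about 1.3 s to contact)
print(decide([Track("car", 100.0, 5.0)]))   # CRUISE (20 s to contact)
```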

FurtiveFreddy

8,577 posts

238 months

Tuesday 5th July 2016
The Spruce goose said:
A witness said it drove for over 300 yards (about 270 metres) after the crash. Another said the roofless car, plus dead body, drove faster than the '85' she was driving. The witness also said it looked like it was still steering before finally hitting a telegraph pole.
That's not what the witnesses said. As usual, a few comments made to local news by people at the scene become distorted 'facts'.

The woman in the car driving at 85 was passed by the Tesla before it crashed into the truck.

The car ended up about 300 yards from where it crashed, as it might even if the wheels weren't being driven and it wasn't being steered.

The car missed a couple of trees then hit a telegraph pole which brought it to a stop. That proves nothing.

It might be better to wait until the NHTSA have concluded their investigation before deciding who or what was to blame.

anonymous-user

55 months

Tuesday 5th July 2016
skyrover said:
Factories are run by computers which are overseen by humans, who keep a close eye on things and sort out the problems which inevitably turn up.
Exactly. Engineers are in place to clear line faults, with another group employed to conceive and develop the management system. Then there are various third parties for the robots and automation systems. Added to this are line operators, who carry out tasks still not suited to mechanical means.

anonymous-user

55 months

Tuesday 5th July 2016
FurtiveFreddy said:
.
https://www.youtube.com/watch?v=gQNMvHbL3jU

He says he was a witness and spoke to the woman, who said it passed her above the 85 she was doing.


Edited by The Spruce goose on Tuesday 5th July 11:39