Uber driverless car in fatal accident


Mr2Mike

20,143 posts

255 months

Friday 23rd March 2018
Brother D said:
Here is someone driving the same route with a camera actually reflecting real-world light levels vs the crappy dash cam footage released to the police

https://youtu.be/1XOVxSCG8u0
The difference in light levels between that video and the original is staggering. Many people are wondering if the original video has been "adjusted" and I have to admit the evidence is compelling, unless Uber normally fit welding masks to their cameras.

Uber Video just before impact.
Above video at same location.

jonah35

3,940 posts

157 months

Friday 23rd March 2018
Perhaps a view many don't share, and perhaps I'd feel differently if it was my family involved in the crash.

But

Whenever life-changing technology has come about in the past, there has sadly always been risk and loss of life, e.g. building skyscrapers years ago, building railroads, venturing to new lands in ships, air travel and space exploration. It is sadly par for the course.

Am I also the only one who thinks quite a lot of that was the pedestrian's fault?

AreOut

3,658 posts

161 months

Friday 23rd March 2018
jonah35 said:
Am I also the only one who thinks quite a lot of that was the pedestrian's fault?
I think everybody agrees it was the pedestrian's fault as well, that's not the question. The problem is it wasn't only the pedestrian's fault...

captainaverage

596 posts

87 months

Friday 23rd March 2018
jonah35 said:
Perhaps a view many don't share, and perhaps I'd feel differently if it was my family involved in the crash.

But

Whenever life-changing technology has come about in the past, there has sadly always been risk and loss of life, e.g. building skyscrapers years ago, building railroads, venturing to new lands in ships, air travel and space exploration. It is sadly par for the course.

Am I also the only one who thinks quite a lot of that was the pedestrian's fault?
All of those things weren't exactly shoved/forced in your face, e.g. shorter buildings weren't taken away from you for the sake of moving everyone into tall buildings. People who walked into the taller buildings mostly did so of their own accord. With autonomous driverless cars will come the "ban on manually driven cars by 2099 etc. to save lives", just like they've tried to "ban" ICE cars in many countries.

Think of this:
Externally governed autonomous houses are the new technology and they're great; the only problem is your heating is controlled automatically, so if you want a little more heat it won't be possible, because not turning up the heat is helping "save the environment" - and on top of that, manually controlled heating will be banned "to help save the environment".

I agree it's partly the pedestrian's fault too. The other two parties (human and car) could've done better and maybe only hurt her instead of killing her.

Mr2Mike

20,143 posts

255 months

Friday 23rd March 2018
captainaverage said:
All of those things weren't exactly shoved/forced in your face, e.g. shorter buildings weren't taken away from you for the sake of moving everyone into tall buildings. People who walked into the taller buildings mostly did so of their own accord.
I'd assumed Jonah was referring to the high death rate of construction workers on early high-rise buildings. If so then it's not really relevant. Whilst obviously undesirable, someone being killed through working in a high-risk job is rather different to random members of the public being killed by new technology failing, especially when the primary aim of said technology is to reduce accidents. Even when said members of the public appear to possess so little intelligence that they place themselves into obvious danger without a second thought.

That said, assuming visibility was more like the second video posted that someone filmed on a phone:

1) The driver should have been able to see the cyclist well before the Uber footage shows her, if the driver had looked.
2) The driver is there to cope with failures of the autonomy.

In mitigation for the driver: the more reliable the autonomy gets, the less the driver needs to intervene, the more bored/distracted the driver becomes and the less able they will be to cope with any failures. It's a dangerous game.

Gandahar

9,600 posts

128 months

Friday 23rd March 2018
Roll back the complexity a bit and go back to basics.

1. From the first steam engine to your kettle, where you flip the switch and "fire and forget".

2. From the first jet engine to your trip this weekend from Heathrow to Perth, nonstop by Dreamliner.

3. From the first GPS experiments for the military - which you never heard about going wrong (ever) - to your TomTom.

All of the above journeys from experiment to accepted modern life had one thing in common: the envelope-pushing - and pushing the envelope really is a big thing - was done well out of the way. Even the first nuclear pile was built under a Chicago squash court in the '40s, during the war years, when everyone was playing tennis and basketball.

Ok, that is rather tongue in cheek, but they did actually do that; it's just that during WW2 you had to make the odd compromise on "if things went a bit pear-shaped". It was only Chicago!

https://www.uchicago.edu/features/how_the_first_ch...

But that was World War 2, so things had to be rushed. It was finally shipped off to out-of-the-way New Mexico for Tube Alloys and Trinity.

In summary, doing this sort of testing in the 21st-century public eye is a masochist's dream. Chuck Yeager would not be impressed.



Edited by Gandahar on Friday 23 March 11:13

Gandahar

9,600 posts

128 months

Friday 23rd March 2018
jonah35 said:
Am I also the only one who thinks quite a lot of that was the pedestrian's fault?
Let me put a new thought up there.

Imagine if she had an autonomous bicycle.

Would the accident have happened? scratchchin

We'd have an autonomous bicycle, being pushed, hit by an autonomous car whose driver was using a phone.

That might show the technology needs a new firmware revision.

Edited by Gandahar on Friday 23 March 11:25

Tuna

19,930 posts

284 months

Friday 23rd March 2018
98elise said:
Cars don't need AI. It's a rules-based environment. People cannot process unexpected things very well; in fact, when presented with a sudden unexpected event we tend to freeze or panic.

Driving a car can be broken down into a set of rules, and when something is experienced outside of the rules, then stop. A computer can do this without getting tired, enraged, looking at Facebook, doing its makeup etc. It can also have eyes in the back of its head, or eyes that can detect objects outside our range.
There's so much misunderstanding of what the AI does in a car. It's not making choices about who or what to run over, it's making sense of the sensor data and generating the set of choices about how the car can (safely) achieve its goal. AI in these cars is deciding whether the thing at the side of the road is a tree, a bicycle, a road sign or a traffic light. The rules about what you do in each case are indeed quite simple, but first you have to 'know' what the thing is.

The same goes for traffic lanes. Our inbuilt intelligence can look at a road scene and say "yes, that's where I should go to carry straight on", but AI is needed to figure out what is a white line, what is a random road marking and what is some paint spilled by a lorry. Driverless cars are packed with AI technology to interpret what they 'see' - and then the rules are simple. It turns out that distinguishing a road sign from a KFC sign, or from someone wearing a t-shirt with the Ghostbusters logo, is very difficult.
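
To illustrate that split, here's a purely toy sketch (the labels, distances and thresholds are made up for illustration, nothing like what a real car runs):

# Once perception has labelled the object, the rule layer really is almost trivial.
def decide(obj_label, distance_m, closing_speed_ms):
    time_to_impact = distance_m / max(closing_speed_ms, 0.1)
    if obj_label in ("pedestrian", "cyclist") and time_to_impact < 3.0:
        return "emergency_brake"
    if obj_label == "red_traffic_light":
        return "stop_at_line"
    return "proceed"

print(decide("cyclist", 25.0, 17.0))   # -> emergency_brake

The hard part is everything that has to happen before decide() is ever called - turning raw sensor data into "cyclist, 25 metres ahead, closing at 17 m/s".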

swisstoni

16,985 posts

279 months

Friday 23rd March 2018
The cynic in me thinks that there is very little testing going on here and an awful lot of plain old racking up of mileage for promotional purposes.

Jonesy23

4,650 posts

136 months

Friday 23rd March 2018
Tuna said:
There's so much misunderstanding of what the AI does in a car. It's not making choices about who or what to run over, it's making sense of the sensor data and generating the set of choices about how the car can (safely) achieve its goal. AI in these cars is deciding whether the thing at the side of the road is a tree, a bicycle, a road sign or a traffic light. The rules about what you do in each case are indeed quite simple, but first you have to 'know' what the thing is.

The same goes for traffic lanes. Our inbuilt intelligence can look at a road scene and say "yes, that's where I should go to carry straight on", but AI is needed to figure out what is a white line, what is a random road marking and what is some paint spilled by a lorry. Driverless cars are packed with AI technology to interpret what they 'see' - and then the rules are simple. It turns out that distinguishing a road sign from a KFC sign, or from someone wearing a t-shirt with the Ghostbusters logo, is very difficult.
That isn't AI. That's (mostly) Computer Vision. Interesting stuff albeit pretty simple to do these days; OpenCV does almost anything you'd ever want. Sign recognition used to be hard, these days it's an afternoon demo.

Mostly when people talk 'AI' they don't really mean AI. And even if they did mean to use AI in autonomous cars they need to understand what specifically they mean and whether it's a proper solution or just technical wkery because AI is a popular concept right now.
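
To give a sense of scale, the classic "afternoon demo" is something like OpenCV's stock HOG+SVM people detector - a handful of lines (sketch only; the file names are placeholders, and nobody sensible would put this anywhere near a car):

import cv2

img = cv2.imread("frame.jpg")    # a single camera frame (placeholder file)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Bounding boxes for anything the old-school detector thinks is a person.
boxes, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", img)

That's demo-grade Computer Vision, which is rather the point - it isn't AI.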

akirk

5,389 posts

114 months

Friday 23rd March 2018
Jonesy23 said:
That isn't AI. That's (mostly) Computer Vision. Interesting stuff albeit pretty simple to do these days; OpenCV does almost anything you'd ever want. Sign recognition used to be hard, these days it's an afternoon demo.

Mostly when people talk 'AI' they don't really mean AI. And even if they did mean to use AI in autonomous cars they need to understand what specifically they mean and whether it's a proper solution or just technical wkery because AI is a popular concept right now.
exactly - there is so much guff talked in this area - AI is artificial intelligence - i.e. not from a living being
there is no intelligence in any of this - it is simply following pre-programmed logic - if A -> B

as for whether it was the pedestrian's fault
- yes, every human has a responsibility to be careful about how they cross a road
- no, every driver has a bigger responsibility to not kill someone - the pedestrian didn't cause the accident - the driver did - I would expect every civilised society to have an approach where someone in charge of several tonnes of metal should be cautious about how it was used...

on the evidence we have seen so far there is clear-cut responsibility in the Uber / driver / system camp - and therefore, no, it was not the pedestrian's fault...

stanwan

1,895 posts

226 months

Friday 23rd March 2018
Mr2Mike said:
The difference in light levels between that video and the original is staggering. Many people are wondering if the original video has been "adjusted" and I have to admit the evidence is compelling, unless Uber normally fit welding masks to their cameras.

Uber Video just before impact.
Above video at same location.
We've been studying that video frame by frame. It looks like it has been modified - when the car nears the cyclist, the lights don't illuminate her correctly. It looks like a portion of the image has been deliberately masked out or darkened to make it appear that she jumped out in front of the vehicle.

It shouldn't matter if it was pitch black - aren't the infra-red sensors capable of detecting pedestrians anyway?

Tuna

19,930 posts

284 months

Friday 23rd March 2018
Jonesy23 said:
That isn't AI. That's (mostly) Computer Vision. Interesting stuff albeit pretty simple to do these days; OpenCV does almost anything you'd ever want. Sign recognition used to be hard, these days it's an afternoon demo.

Mostly when people talk 'AI' they don't really mean AI. And even if they did mean to use AI in autonomous cars they need to understand what specifically they mean and whether it's a proper solution or just technical wkery because AI is a popular concept right now.
You're right in that I was misusing AI when I should have said ML (machine learning). For sure OpenCV is great for picking out specific object types and fiducial markers, but from what I've read of the research, it's not the driving force in autonomous vehicles, where whole-scene interpretation is needed and you're doing structure from motion etc. as well. Certainly Nvidia are providing custom processors designed to handle neural networks very efficiently in automotive applications for exactly that reason. This is very much ML territory (and yes, if you want to be picky, OpenCV does provide access to the better-known neural-net-based algorithms).

One very specific example: last year Google said it was deprioritising autonomous vehicles as an area of research. It cited problems with interpreting scenes reliably, and picked out robustly detecting traffic lights as something that was still a problem. Afternoon demos are great, but that's not quite what you want to be behind the wheel of a two-tonne vehicle, is it?

So no - computer vision is not a 'solved problem', and nor in this context is it 'easy'.
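
As a rough illustration of that neural-net route (a sketch only - the model and file names below stand in for any pre-trained detector, not what an actual AV stack runs):

import cv2

# Load a pre-trained SSD-style detector through OpenCV's dnn module (placeholder files).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
img = cv2.imread("frame.jpg")

# The network expects a fixed-size, normalised input blob.
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 0.007843,
                             (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()    # shape (1, 1, N, 7): image id, class, confidence, box

for i in range(detections.shape[2]):
    confidence = float(detections[0, 0, i, 2])
    if confidence > 0.5:      # arbitrary threshold for the sketch
        class_id = int(detections[0, 0, i, 1])
        print("object class", class_id, "confidence", round(confidence, 2))

Even here, all the difficulty lives in the trained model and the data behind it; the dozen lines around it tell you nothing about whether it will spot a pedestrian pushing a bike across a dark road.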

havoc

30,062 posts

235 months

Friday 23rd March 2018
98elise said:
Kawasicki said:
Computers are dumb. Either you use amazing sensors and data sharing between vehicles to get around that or you build a computer with as much sense and perception as the driver you intend to replace.

Amazing sensors and data sharing have potential; amazing AI is a long way off. By the time AI is good enough to replace a moderately competent driver, we will have much bigger things to enjoy/fear than self-driving cars!
Cars don't need AI. It's a rules-based environment. People cannot process unexpected things very well; in fact, when presented with a sudden unexpected event we tend to freeze or panic.

Driving a car can be broken down into a set of rules, and when something is experienced outside of the rules, then stop. A computer can do this without getting tired, enraged, looking at Facebook, doing its makeup etc. It can also have eyes in the back of its head, or eyes that can detect objects outside our range.
The road MAY be a rules-based environment, but not entirely - humans are frequently illogical even when adult and sober/unaltered, so kids, drunks, stoners and the mentally impaired have NO hope of 100% compliance with those 'rules'.

...and what you're suggesting is that the potential punishment for breaching the rules is getting hit and killed by an autonomous car, because you - the child/drunk/etc. - were acting 'outside the rules' and it couldn't predict your actions. So I think you've got a fking screw loose...


Oh, and humans ARE good at REACTING to unexpected events. Processing CONSCIOUSLY, maybe less so, but the subconscious routines you've built up will see MOST drivers brake and/or swerve when an obstacle appears in their path. That Uber car did neither.



rxe's post is a very good one too

akirk

5,389 posts

114 months

Friday 23rd March 2018
ash73 said:
akirk said:
Jonesy23 said:
That isn't AI. That's (mostly) Computer Vision. Interesting stuff albeit pretty simple to do these days; OpenCV does almost anything you'd ever want. Sign recognition used to be hard, these days it's an afternoon demo.

Mostly when people talk 'AI' they don't really mean AI. And even if they did mean to use AI in autonomous cars they need to understand what specifically they mean and whether it's a proper solution or just technical wkery because AI is a popular concept right now.
exactly - there is so much guff talked in this area - AI is artificial intelligence - i.e. not from a living being
there is no intelligence in any of this - it is simply following pre-programmed logic - if A -> B
Any system that can learn and take actions to maximise its chances of achieving a goal is AI. It doesn't need to be conscious to be AI. For example a neural net can learn new patterns that weren't directly programmed; if it detects a new pattern it might ask what the object is, or test all the response outcomes itself by scoring them, then it will recognise the object in the future and know what to do. This is also how humans learn.
In fact it has to be NOT conscious to be AI smile
Learning new patterns, though, is not real AI - real AI is the ability to jump logic / to derive logic / to create new logic - adding a new pattern to your data bank and evaluating it against knowns, then scoring based on a known algorithm, is not AI... the reality is that we know so little about how the human brain works that we are a very long way from being able to replicate even a small part of it... however, for autonomous cars, the majority of what is needed is not AI, so it is not an issue - we don't need the car to create original poetry to read to its occupants yet!

TooMany2cvs

29,008 posts

126 months

Friday 23rd March 2018
Digger said:
Pedestrians should always have right of way regardless. It is but a simple fact.
Nobody has "Right of way" ANYWHERE.

Pedestrians often have priority over motorised traffic - but not always. That little red man? That's quite explicitly telling a pedestrian they don't have priority. The green man tells them they do. Most five year olds understand that.

In the US, they're really quite strict about pedestrians not having priority except where they explicitly are allowed to cross. It's called "Jaywalking", and they fine people for doing what she was doing with her bike.

The difference between right of way and priority... If the car had right of way, she would be to blame for causing the collision. Priority means that the car driver has to avoid it if at all possible.

otolith

56,091 posts

204 months

Friday 23rd March 2018
havoc said:
Oh, and humans ARE good at REACTING to unexpected events.
Not always correctly though.

http://www.dailyecho.co.uk/news/9582278.Young_woma...

kev1974

4,029 posts

129 months

Friday 23rd March 2018
TooMany2cvs said:
Digger said:
Pedestrians should always have right of way regardless. It is but a simple fact.
Nobody has "Right of way" ANYWHERE.

Pedestrians often have priority over motorised traffic - but not always. That little red man? That's quite explicitly telling a pedestrian they don't have priority. The green man tells them they do. Most five year olds understand that.

In the US, they're really quite strict about pedestrians not having priority except where they explicitly are allowed to cross. It's called "Jaywalking", and they fine people for doing what she was doing with her bike.
Only illegal in some states (although Arizona appears to be one of them), and even then, rarely enforced.

The only place I have ever been where the locals absolutely refused to cross the road away from proper crossings was Poland. I spent a few months working there and the guys were totally paranoid about being done for jaywalking.

Murph7355

37,708 posts

256 months

Friday 23rd March 2018
kev1974 said:
Only illegal in some states (although Arizona appears to be one of them), and even then, rarely enforced.

The only place I have ever been where the locals absolutely refused to cross the road away from proper crossings was Poland. I spent a few months working there and the guys were totally paranoid about being done for jaywalking.
Germany used to be quite hot on it too. I long ago got a proper bking from a Polizei there for doing that.

akirk

5,389 posts

114 months

Friday 23rd March 2018
ash73 said:
akirk said:
ash73 said:
akirk said:
Jonesy23 said:
That isn't AI. That's (mostly) Computer Vision. Interesting stuff albeit pretty simple to do these days; OpenCV does almost anything you'd ever want. Sign recognition used to be hard, these days it's an afternoon demo.

Mostly when people talk 'AI' they don't really mean AI. And even if they did mean to use AI in autonomous cars they need to understand what specifically they mean and whether it's a proper solution or just technical wkery because AI is a popular concept right now.
exactly - there is so much guff talked in this area - AI is artificial intelligence - i.e. not from a living being
there is no intelligence in any of this - it is simply following pre-programmed logic - if A -> B
Any system that can learn and take actions to maximise its chances of achieving a goal is AI. It doesn't need to be conscious to be AI. For example a neural net can learn new patterns that weren't directly programmed; if it detects a new pattern it might ask what the object is, or test all the response outcomes itself by scoring them, then it will recognise the object in the future and know what to do. This is also how humans learn.
In fact it has to be NOT conscious to be AI smile
Learning new patterns, though, is not real AI - real AI is the ability to jump logic / to derive logic / to create new logic - adding a new pattern to your data bank and evaluating it against knowns, then scoring based on a known algorithm, is not AI... the reality is that we know so little about how the human brain works that we are a very long way from being able to replicate even a small part of it... however, for autonomous cars, the majority of what is needed is not AI, so it is not an issue - we don't need the car to create original poetry to read to its occupants yet!
I disagree. The coding or logic of a neural-net algorithm doesn't need to be modified for it to learn; it learns by recognising and storing new data patterns and identifying the best actions by scoring the outcomes (or by asking its programmer!). As it learns it becomes better at making strategic choices and exceeds the sum of its parts.

Also, if a sufficiently advanced AI became conscious, and even reprogrammed itself, then it would still be artificial intelligence because it's a machine, although it might be afforded the same rights as humans.

Your definition is too narrow, it's like saying life isn't life unless it can do calculus.
We will have to disagree - all that machine-based logic can do sits within its original coding - all it can 'learn' is to make choices based upon what it stores in its data bank - it can never break out of that - real intelligence has non-rational jumps and changes which come from nowhere other than the 'genius' of that brain - current machine logic cannot write something creative - it can randomly assemble words, and it can even analyse other books to determine what is popular, but it can never create something new - something not based on the logic it holds - other than through randomisation - it can never be a Milton or a Bacon, an Austen or a Dickens...

So, current levels of machine 'AI' are assemblers and organisers / prioritisers and logicians - they are never creators...