Tesla and Uber Unlikely to Survive (Vol. 2)

TOPIC CLOSED

NDNDNDND

2,018 posts

183 months

Monday 26th October 2020
quotequote all
Whereas this is a bit of a stshow?

https://youtu.be/wPJc9_gJHtM

He even says he finds using it extremely nerve-wracking, gives anecdotes of having to stop it pulling out in front of cars, has to stop it sideswiping a car on the freeway, and then it clips a median taking a corner at the end of the video.

To be fair, it's pretty much exactly how I'd expect Tesla "FSD" to behave.

Tuna

19,930 posts

284 months

Monday 26th October 2020
quotequote all
gangzoom said:
Perfect weather/lighting, but impressive that the AP system 'saw' pedestrians across on the other side of the road a long way off. 4:45 mark.

https://youtu.be/HH3WV2UVAsI
It's incredibly misleading to focus on one or two scenarios where FSD is unexpectedly good. Writing software (and training neural networks) for situations we think are tricky turns out to be quite easy - on a case-by-case basis. The problem is that identifying every one of those cases, and training for every one of those situations, is a never-ending task.

It's the situations where it fails that are significant, because we need 'nine nines' reliability before we can consider this a safe system to release for general, untrained, unattended use.

And just consider simple things like jaywalking rules in the US. They have completely different protocols for people crossing the street than we do over here - are we going to be delivered software that has been trained on the behaviour of American pedestrians? The problem with the sort of "self-training" networks that Tesla is relying on is that it's very hard to inspect them to know exactly what rules they've learned. We can observe their behaviour, but that doesn't tell us how they'll behave in edge cases. Unfortunately, it's the edge cases that end up causing accidents.
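
To put a rough number on that 'nine nines' point, here's a back-of-envelope sketch (my own illustration, nothing to do with Tesla's actual validation process) of how many failure-free events you'd have to observe before you could claim a given reliability level with any statistical confidence:

import math

# Zero-failure bound ("rule of three" style): observing n consecutive
# failure-free events (e.g. intervention-free manoeuvres) only supports a
# failure rate below p at confidence c once (1 - p)^n <= 1 - c.
def events_needed(max_failure_rate, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

for nines in (3, 6, 9):  # "three nines" up to "nine nines"
    p = 10.0 ** -nines
    print(f"{nines} nines (failure rate <= {p:.0e}): "
          f"~{events_needed(p):,} failure-free events")

That's roughly three billion failure-free events to demonstrate nine nines - and in principle you'd have to do it again every time the software changes.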

andy43

9,705 posts

254 months

Monday 26th October 2020
quotequote all
Sticking a Bluetooth earbud signal scanner on the Tesla nose will cover 99% of UK jaywalkers.
You are right though, it’s that one percent of unknowns - horses, kids on bikes, lunatics - I suspect that’s possibly where the phantom braking comes from that everybody complains about.

Tuna

19,930 posts

284 months

Monday 26th October 2020
quotequote all
andy43 said:
Sticking a Bluetooth earbud signal scanner on the Tesla nose will cover 99% of UK jaywalkers.
You are right though, it’s that one percent of unknowns - horses, kids on bikes, lunatics - I suspect that’s possibly where the phantom braking comes from that everybody complains about.
Please don't suggest the UK has "jaywalkers" - that law is an American oddity we shouldn't import. smile

Phantom braking is a good example. If they cannot completely eliminate that unexpected behaviour without a complete ground up re-write, what other unexpected twitches are hiding in the neural net?

lothianJim

2,274 posts

42 months

Monday 26th October 2020
quotequote all
Subscriptions are starting next year, so that will be a better indicator of how monetisable the evolving product is at any given point in its evolution.

This vid has a drone following. Hope they have a permit for the drone!

https://www.youtube.com/watch?v=iKlpCG367AE

NDNDNDND

2,018 posts

183 months

Monday 26th October 2020
quotequote all
lothianJim said:
Subscriptions are starting next year, so that will be a better indicator of how monetisable the evolving product is at any given point in its evolution.

This vid has a drone following. Hope they have a permit for the drone!

https://www.youtube.com/watch?v=iKlpCG367AE
I'm pretty impressed with that drone - particularly the way it seemed to negotiate around the overhead cables, telegraph poles and trees.

The Tesla nearly drives straight into a parked car, though.

The idea that the general public might have access to this in a couple of months time is fking terrifying, frankly. I don't want to share the roads with these. I sincerely hope Tesla 'FSD' is not permitted in the UK.

lothianJim

2,274 posts

42 months

Monday 26th October 2020
quotequote all
NDNDNDND said:
I'm pretty impressed with that drone - particularly the way it seemed to negotiate around the overhead cables, telegraph poles and trees.

The Tesla nearly drives straight into a parked car, though.

The idea that the general public might have access to this in a couple of months time is fking terrifying, frankly. I don't want to share the roads with these. I sincerely hope Tesla 'FSD' is not permitted in the UK.
Yes that was a close one, 1:30 for the curious

ZesPak

24,427 posts

196 months

Monday 26th October 2020
quotequote all
NDNDNDND said:
The idea that the general public might have access to this in a couple of months time is fking terrifying, frankly. I don't want to share the roads with these. I
sincerely hope Tesla 'FSD' is not permitted in the UK.
While I agree, let's not forget that the public already has access to 2T killing machines.
Don't forget that the US alone has, on average, over 16,000 car crashes a day; given that Tesla has about a 0.3% share, that's roughly 50/day for them.
In those crashes, over 100 people die each day. Again, that's about 2 each week for Tesla.

Now let's halve those numbers.
Can you imagine an Autopilot that causes 25 accidents a day and kills one person each week ever being allowed through?

They'd need to decimate that number before that happens.

On the other hand, there is a lot of data, and if they get close to human numbers, the data is there to get it even lower.
As a future perspective, this is amazing to me. Currently, no thanks.
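
For anyone checking the arithmetic, a quick sketch using the rough, rounded figures above (ballpark numbers, not exact statistics):

us_crashes_per_day = 16_000      # rough figure quoted above
us_road_deaths_per_day = 100     # rough figure quoted above
tesla_share = 0.003              # ~0.3% of the fleet, as assumed above

crashes_per_day = us_crashes_per_day * tesla_share          # ~48, i.e. "about 50/day"
deaths_per_week = us_road_deaths_per_day * tesla_share * 7  # ~2.1, i.e. "about 2/week"

print(f"Tesla-involved crashes per day: {crashes_per_day:.0f}")
print(f"Tesla-involved deaths per week: {deaths_per_week:.1f}")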

Edited by ZesPak on Monday 26th October 12:24

SWoll

18,369 posts

258 months

Monday 26th October 2020
quotequote all
ZesPak said:
NDNDNDND said:
The idea that the general public might have access to this in a couple of months time is fking terrifying, frankly. I don't want to share the roads with these. I
sincerely hope Tesla 'FSD' is not permitted in the UK.
While I agree, let's not forget that the public already has access to 2T killing machines.
Don't forget that the US alone has, on average, over 16,000 car crashes a day; given that Tesla has about a 0.3% share, that's roughly 50/day for them.
In those crashes, over 100 people die each day. Again, that's about 2 each week for Tesla.

Now let's halve those numbers.
Can you imagine an Autopilot that causes 25 accidents a day and kills one person each week ever being allowed through?

They'd need to decimate that number before that happens.

On the other hand, there is a lot of data, and if they get close to human numbers, the data is there to get it even lower.
As a future perspective, this is amazing to me. Currently, no thanks.

Edited by ZesPak on Monday 26th October 12:24
This. It's all a numbers game; it doesn't need to be perfect, just better than the large number of inattentive and poor drivers on the road.

Cheeses of Nazareth

789 posts

51 months

Monday 26th October 2020
quotequote all
SWoll said:
This. It's all a numbers game; it doesn't need to be perfect, just better than the large number of inattentive and poor drivers on the road.
But it does.

The beauty of inattentive poor drivers is that there is no-one else to blame.

But if those people are now relying on your software, then there is someone to blame... faster than you can say 'have you been involved in an accident that wasn't your fault?'

It's not about being 'possible', it's about being able to insure the risk. If it isn't 100% (and it never can be), then the monetary value of that risk is incalculable.

Fair play to Elon for getting people to pay up front for something that doesn't work, and develop it for free, though.

lothianJim

2,274 posts

42 months

Monday 26th October 2020
quotequote all
You are discussing hypotheticals. For the foreseeable future, any driver who buys FSD is fully responsible for supervising the car and correcting errors.

As long as FSD plus a supervisor is safer (which is probably true even with this bug-fest), there is no logical reason to ban it.

NDNDNDND

2,018 posts

183 months

Monday 26th October 2020
quotequote all
lothianJim said:
You are discussing hypotheticals. For the foreseeable future, any driver who buys FSD is fully responsible for supervising the car and correcting errors.

As long as FSD plus a supervisor is safer (which is probably true even with this bug-fest), there is no logical reason to ban it.
Judging by the videos I've seen, the only way this FSD could be safer is that it's so nerve-wracking to use that no one can risk looking at their phone, because it might do something completely unpredictable at any moment...

As ever, it's the psychology of it that'll be dangerous. It's the Joshua Browns of the world, whose utter and unfounded faith in the technology means they're simply not present when it all goes wrong.

If anything, the better the system gets, the less safe it will be. As mistakes become less frequent, complacency creeps in, meaning the car manager is even less engaged with what's going on and even less able to mash the brake in a panic when it decides to drive straight at a parked car.

Edited by NDNDNDND on Monday 26th October 13:33

ZesPak

24,427 posts

196 months

Monday 26th October 2020
quotequote all
I've driven a LOT of km with Autopilot.
In the end it's something you learn to trust and use as a piece of technology.

Just as I trust my car to brake from 160-0 in a certain distance, or the accelerator to respond in a certain way.

lothianJim

2,274 posts

42 months

Monday 26th October 2020
quotequote all
NDNDNDND said:
Judging by the videos I've seen, the only way this FSD could be safer is that it's so nerve-wracking to use that no one can risk looking at their phone, because it might do something completely unpredictable at any moment...

As ever, it's the psychology of it that'll be dangerous. It's the Joshua Browns of the world, whose utter and unfounded faith in the technology means they're simply not present when it all goes wrong.

If anything, the better the system gets, the less safe it will be. As mistakes become less frequent, complacency creeps in, meaning the car manager is even less engaged with what's going on and even less able to mash the brake in a panic when it decides to drive straight at a parked car.

Edited by NDNDNDND on Monday 26th October 13:33
As you say, beta testers are on high alert, so that's likely a big reason why the accident rate is (presumably) lower than average. I say presumably because otherwise it would be banned in no time.

I'm curious why you are so confident that the accident rate will increase as the 'mix' between human alertness and system performance shifts.

More likely, I think, is that the accident rate will stay stable or drop as the balance gradually shifts. If the accident rate increases it's game over for public FSD, so I don't imagine that being allowed to happen.

gangzoom

6,297 posts

215 months

Monday 26th October 2020
quotequote all
NDNDNDND said:
The Tesla nearly drives straight into a parked car, though.

The idea that the general public might have access to this in a couple of months time is fking terrifying, frankly. I don't want to share the roads with these. I sincerely hope Tesla 'FSD' is not permitted in the UK.
It's actually pretty amazing Tesla can get their cars to even attempt a STOP junction with no prior HD mapping (I presume). I would say the logic of even simple STOP junctions in the US is as hard to program as anything we see on UK roads.

What's more amazing is that our 2017 car will be able to do the same, and yet every day - even today - people are going out and buying brand new cars that will never gain any functionality once they leave the showroom.

This is clearly very early-adopter tech, and Tesla haven't even released it to the general public except to a few hand-picked people in the US - most of whom seem to have had a phone call from Tesla explaining what they were getting themselves into. No doubt there was/is plenty of legal disclaimer to get through before Tesla pushed the software to their cars.

How, or whether, the system can improve is the next step - it would be interesting to see how well a Waymo robotaxi would have done on the same stretch of road.

Edited by gangzoom on Monday 26th October 13:57

NDNDNDND

2,018 posts

183 months

Monday 26th October 2020
quotequote all
ZesPak said:
I've driven a LOT of km with Autopilot.
In the end it's something you learn to trust and use as a piece of technology.

Just as I trust my car to brake from 160-0 in a certain distance, or the accelerator to respond in a certain way.
You'll trust it, at least until the next update when it might start doing something completely different:

https://teslamotorsclub.com/tmc/threads/the-cancer...

Given Tesla's track record on this, it doesn't bode well for FSD. It might drive down a certain street without issue a hundred times, and then a software update means it might do anything...

Edited by NDNDNDND on Monday 26th October 14:10

Brother D

3,720 posts

176 months

Monday 26th October 2020
quotequote all
lothianJim said:
NDNDNDND said:
Judging by the videos I've seen, the only way this FSD could be safer is that it's so nerve-wracking to use that no one can risk looking at their phone, because it might do something completely unpredictable at any moment...

As ever, it's the psychology of it that'll be dangerous. It's the Joshua Browns of the world, whose utter and unfounded faith in the technology means they're simply not present when it all goes wrong.

If anything, the better the system gets, the less safe it will be. As mistakes become less frequent, complacency creeps in, meaning the car manager is even less engaged with what's going on and even less able to mash the brake in a panic when it decides to drive straight at a parked car.

Edited by NDNDNDND on Monday 26th October 13:33
As you say, beta testers are on high alert, so that's likely a big reason why the accident rate is (presumably) lower than average. I say presumably because otherwise it would be banned in no time.

I'm curious why you are so confident that the accident rate will increase as the 'mix' between human alertness and system performance shifts.

More likely, I think, is that the accident rate will stay stable or drop as the balance gradually shifts. If the accident rate increases it's game over for public FSD, so I don't imagine that being allowed to happen.
Teslas running Autopilot are already, on average, about 10 times safer than normal cars:

https://cleantechnica.com/2020/08/01/tesla-autopil...

Tuna

19,930 posts

284 months

Monday 26th October 2020
quotequote all
SWoll said:
This. It's all a numbers game; it doesn't need to be perfect, just better than the large number of inattentive and poor drivers on the road.
You are choosing who gets involved in an accident - the classic trolley problem talks about exactly this issue.

What if FSD is particularly bad at spotting small children? Sure, you could say it's reduced the number of accidents, but it may have increased the number of children being hit. Is that acceptable?

It's not just a question of better overall accident rates. Are some accident types more likely now? Is the outcome of accidents worse for those that are involved? Are the accidents that Teslas do get involved in more avoidable by human drivers?
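
To illustrate with purely made-up numbers (none of these are real accident statistics), a system can beat the human average overall while still being worse in one specific category:

# Hypothetical accidents per million miles, invented purely for illustration.
rates = {
    # category:         (human, automated)
    "rear-end":          (2.00, 0.50),
    "lane-change":       (1.00, 0.40),
    "child pedestrian":  (0.05, 0.10),  # worse here, despite a better total
}

human_total = sum(h for h, a in rates.values())
auto_total = sum(a for h, a in rates.values())
print(f"Overall: human {human_total:.2f} vs automated {auto_total:.2f} per million miles")

for category, (h, a) in rates.items():
    verdict = "worse" if a > h else "better"
    print(f"{category}: human {h} vs automated {a} -> automated is {verdict}")

The totals come out in the machine's favour, but whether that particular trade-off is acceptable is exactly the question that has to be answered before release.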

It's really far more complicated than "the software just has to get good enough" - we have to agree on what "good enough" actually means, and have some way of proving that the software consistently delivers it, in all conditions and all locations, and continues to do so every time they change or update it.

"Oh we'll just give it out and see what happens" is not a reassuring approach.

lothianJim

2,274 posts

42 months

Monday 26th October 2020
quotequote all
Human drivers don't perform consistently in all conditions. Yet insurance companies still have to estimate risk for different occupations, ages, engine sizes....

I got a quote for my performance car that was nearly twice the price I was quoted a few days earlier. Turns out buying a 500bhp car with one day's notice was interpreted to mean I was more likely to make impulsive and unpredictable errors than someone who planned weeks ahead.

If anything, I would guess actuaries have an easier time working out premiums for cars that are mostly driven on Autopilot, as even with all its flaws, it's probably a more predictable kind of unpredictability than the average human driver.



TOPIC CLOSED