Tesla and Uber Unlikely to Survive...

TOPIC CLOSED
Author
Discussion

DonkeyApple

55,296 posts

169 months

Tuesday 16th October 2018
On the motorway does it see/sense a car indicating and have a default response?

Twice on the M40 now I have been in the middle lane with an X approaching in the outer lane. I have set my indicator going at the moment the car goes into my blind spot, as I plan to pull out once the Tesla has passed, but each time the Tesla has executed a near emergency stop and disappeared promptly. It made me wonder if the car was spotting my indicator and calculating that to mean I was going to suddenly pull in to the side of it?

AstonZagato

12,704 posts

210 months

Tuesday 16th October 2018
I don't think it is anywhere near that sophisticated. However, I might be wrong.

Burwood

18,709 posts

246 months

Tuesday 16th October 2018
DonkeyApple said:
On the motorway does it see/sense a car indicating and have a default response?

Twice on the M40 now I have been in the middle lane with an X approaching in the outer lane. I have set my indicator going at the moment the car goes into my blind spot, as I plan to pull out once the Tesla has passed, but each time the Tesla has executed a near emergency stop and disappeared promptly. It made me wonder if the car was spotting my indicator and calculating that to mean I was going to suddenly pull in to the side of it?
The M40 Tesla Troll is revealed biggrin

gangzoom

6,300 posts

215 months

Tuesday 16th October 2018
DonkeyApple said:
On the motorway does it see/sense a car indicating and have a default response?
With a resolution of 640x416 I'm amazed it can even 'see' any cars, let alone indicators. You guys are very brave letting a beta-test algorithm 'drive' these 2-tonne+ machines using such low-res camera inputs.

Even on a 4K screen playing something like Forza Horizon 4 on an Xbox One S I struggle to see anything like real-world depth/path of travel.

A 4K screen is roughly 8.3 million pixels versus 0.3 million on a 640x480 feed... I don't think you need 4K-level visual clarity for self-driving, but trying to do it with 640x480 feeds is crazy!! Just think about it: would you drive any car today if the windows were blacked out and all you could see was a 640x480 screen?

But even at 1080p you're looking at roughly 2MB per frame. Say you aim for 60fps of processing across 8 cameras: that's roughly 960MB of raw data coming in PER SECOND. I doubt the current AP 2.0/2.5 CPU can even handle just receiving 8 1080p streams, let alone process and act on them. If I tried to feed that much data to my MacBook Pro I reckon it'll set itself on fire just preparing to handle the data.
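The back-of-envelope bandwidth figure above can be sketched in a few lines (a rough illustration only, assuming one byte per pixel of raw data; real RGB or Bayer feeds would be several times larger):

```python
# Back-of-envelope raw bandwidth for a bank of uncompressed camera feeds.
# Figures are illustrative; assumes 1 byte per pixel (raw greyscale).

def raw_rate_bytes(width, height, fps, cameras, bytes_per_pixel=1):
    """Raw bytes per second arriving from `cameras` identical feeds."""
    return width * height * fps * cameras * bytes_per_pixel

vga = raw_rate_bytes(640, 480, 60, 8)    # ~147 MB/s
hd = raw_rate_bytes(1920, 1080, 60, 8)   # ~995 MB/s, in line with the ~960MB/s above

print(f"VGA x8 @ 60fps:   {vga / 1e6:.0f} MB/s")
print(f"1080p x8 @ 60fps: {hd / 1e6:.0f} MB/s")
```

The jump from VGA to 1080p is roughly a 7x increase in raw input, before any neural-net processing happens at all.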

I reckon Tesla can get there, but the amount of computing power needed isn't going to be something you can just clip in like Musk is suggesting. Nissan did a proper demo of their ProPilot stuff 12 months ago with a Leaf, but the boot/passenger space was literally stuffed full of computers and it was drawing a decent amount of charge when operating... I'm not parting with my money any time soon for Full Self Driving.

Edited by gangzoom on Tuesday 16th October 12:04

gangzoom

6,300 posts

215 months

Tuesday 16th October 2018
Heres Johnny said:
I think the sensor mix just doesn't cut it. You can't talk away cars crashing into fire trucks with 'AutoPilot can't see stationary objects, it's driver error', continually say AutoPilot is beta, and still think the same fundamental technology will cope with all that and much more.
That's purely because the resolution isn't there. Say a stationary fire truck initially occupies just 5% of the frame when first 'seen': on a 640x480 stream that is a tiny amount of data to work out what's going on; the raw data isn't there. There's a good reason VGA resolution has long been superseded.

Once you're up to 1080p even far-away objects become more obvious. We manage fine with our eyes - which have a tiny focal area, a very limited colour field and often all kinds of refraction errors in the lens. The key, though, is the occipital lobe and our brain's ability to process the information at will. This is partly what Google is trying to do with the Pixel 3: improve image quality by adding/manipulating the data in software rather than having 4-5 lenses on the back of a phone.
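The "raw data isn't there" point can be made concrete with a quick pixel count (the 5% figure is the illustrative assumption from above, not a measured value):

```python
# Rough pixel budget for an object filling a given fraction of the frame,
# at the two resolutions discussed above (the 5% figure is illustrative).

def object_pixels(width, height, fraction):
    """Approximate number of pixels an on-screen object occupies."""
    return int(width * height * fraction)

print(object_pixels(640, 480, 0.05))    # 15360 pixels at VGA
print(object_pixels(1920, 1080, 0.05))  # 103680 pixels at 1080p
```

Same object, same distance: the 1080p feed hands the classifier nearly seven times as many pixels to work with.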


Edited by gangzoom on Tuesday 16th October 12:17

anonymous-user

54 months

Tuesday 16th October 2018
gangzoom said:
....I'm not parting with my money any time soon for Full Self Driving.

Edited by anonymous-user on Tuesday 16th October 12:04
Another amazing turnaround from the guy who, not that long ago, was musing about sitting in the back working whilst the car drove him home.......... wink

Heres Johnny

7,229 posts

124 months

Tuesday 16th October 2018
gangzoom said:
That's purely because the resolution isn't there. Say a stationary fire truck initially occupies just 5% of the frame when first 'seen': on a 640x480 stream that is a tiny amount of data to work out what's going on; the raw data isn't there. There's a good reason VGA resolution has long been superseded.

Once you're up to 1080p even far-away objects become more obvious. We manage fine with our eyes - which have a tiny focal area, a very limited colour field and often all kinds of refraction errors in the lens. The key, though, is the occipital lobe and our brain's ability to process the information at will. This is partly what Google is trying to do with the Pixel 3: improve image quality by adding/manipulating the data in software rather than having 4-5 lenses on the back of a phone.


Edited by gangzoom on Tuesday 16th October 12:17
Our eyes move all the time, and our brain works amazingly well, drawing on a lifetime (literally) of judging depth from the stereoscopic image but also on its expectation of what it's seeing - we've all seen the optical illusions that fool the eye. We've also all driven at times where we've had to drop a sun visor, screen our eyes, squint and frankly take pot luck, none of which the cameras can do. But that's what Tesla seems to be reliant on.
Look at the fiasco with the wipers; the low resolution doesn't help.

Anyway... he's just announced that a $5k processor upgrade is required, but it's not clear whether that includes FSD or is an extra.

RobDickinson

31,343 posts

254 months

Tuesday 16th October 2018
gangzoom said:
That's the £9k question for me....

I haven't paid to activate AP on our car, and looking at the crude data from v8.0, glad I didn't. Tesla does, however, seem to be moving very quickly with neural-net development; if I was 100% convinced Tesla could deliver FSD on our 2017-build X I would pay the £9k FSD fee.

But we all know how IT development goes. You can change the CPU, but what about the bus, RAM speed, storage etc etc? Socketed CPUs are around, but how many people actually change a CPU instead of doing a full upgrade?

I think I'll have to try v9.0 before deciding. I'm very happy with everything else our current X does; it has enough speed/range/space for us as the main family car for the foreseeable future.

But I'm still very undecided about AP. FSD is something I would 100% order if truly available... Still, it's nice Tesla gives you the choice of FSD potential on a car built 18 months ago smile.


Edited by gangzoom on Tuesday 16th October 05:53
https://electrek.co/2018/10/16/tesla-neural-net-computer-production-elon-musk/amp/

gangzoom

6,300 posts

215 months

Wednesday 17th October 2018
Heres Johnny said:
We've also all driven at times where we've had to drop a sun visor, screen our eyes, squint and frankly take pot luck, none of which the cameras can do.
Cameras are actually much, much better than our eyes in tricky high-dynamic-range situations. For a start, pointing a camera at the sun doesn't cause irreversible damage, and you can simply limit the range of light you want to process.
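A minimal sketch of what 'limiting the range of light' could mean in software: clamp sensor values to a window of interest and rescale. This is a toy 8-bit illustration, not any real camera pipeline:

```python
def limit_exposure(pixels, lo, hi):
    """Clamp 8-bit pixel intensities to the window [lo, hi] and rescale
    to 0-255. Detail outside the window is simply thrown away, which is
    what lets a camera 'ignore' the sun and keep the scene visible."""
    span = hi - lo
    return [round(255 * (min(max(p, lo), hi) - lo) / span) for p in pixels]

# A dim subject (40) in the same frame as the sun (255): windowing on
# 0-100 leaves the sun clipped to white while the subject keeps contrast.
print(limit_exposure([10, 40, 255], 0, 100))  # [26, 102, 255]
```

Real cameras do this with exposure control and tone mapping rather than a simple clamp, but the principle is the same: the sensor never has to represent the full brightness range at once.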

I took this photo a while back. The camera was pointing straight at the sun, but because I was using an LCD EVF I could still 'see' the helicopter to focus on rather than being blinded.



Your point about 'pot luck' describes exactly what AI and self-driving are about. The challenge isn't the sensors or cameras; it's building a machine that can construct a 3D world from whatever sensor input it has and then navigate it with the intelligence any human would use to get to their destination.

The current AP system is a long, long way from that; adaptive cruise control with lane-keep assist is dumb and dumber. But version 9.0 does seem to have moved things forwards a step, and Tesla is making real, visible progress. Even so, a 2000% boost (or whatever figure Musk quoted) to the current AP CPU might not be enough to achieve true autonomy.

But then again, an AI neural net *might* be able to figure out the challenges of driving in a more efficient and different way from our brains. That is the real excitement of AI: not sensors or processing, but potentially totally new cognitive ways to tackle problems. Frankly I'm thankful to be alive at this time; the potential changes coming to the world are phenomenal.

If Tesla cannot do it, someone like Google will. The photos people are taking with the Pixel 3 already show that better software to improve what current hardware can do is the next big leap in tech heading our way. Why do you need 4 cameras when one, with the support of AI magic, can achieve a better image smile.



Edited by gangzoom on Wednesday 17th October 07:17

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
We have the hardware; the question is making it small, cheap and efficient enough to put into a car, and that needs specialization rather than brute strength.

I know Nvidia is looking at the same/similar thing, possibly next year, using around 500W.

gangzoom

6,300 posts

215 months

Wednesday 17th October 2018
RobDickinson said:
We have the hardware; the question is making it small, cheap and efficient enough to put into a car, and that needs specialization rather than brute strength.
Not sure that is true. Can you reference any papers where people have achieved a neural net good enough for self-driving in any context?

DeepMind is clearly the poster child of AI - getting multiple Nature publications can only be bettered by a Nobel prize (which I'm sure is just a matter of when, not if) - yet I don't think I've seen anything from them about self-driving?? Though I'm happy to be pointed to any literature.

gangzoom

6,300 posts

215 months

Wednesday 17th October 2018
In answer to my own question, it looks like DeepMind are working on navigation - their 5th Nature paper in a few years... I'm in the wrong job frown.

https://www.nature.com/articles/s41586-018-0102-6....

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
We certainly haven't perfected the software, but we have a pretty good idea of the size of the problem and what specialized hardware is needed to run it, to within an order of magnitude anyhow.

Google/Waymo have been doing it for a while with cars stuffed with hardware, and are now shrinking it down to an actual running public service, albeit a very limited one.

A system taking up all of the luggage area and drawing 5000W isn't going to work.

I don't expect it to be switched on overnight for every situation. It'll be slowly rolled out into more situations, and then the edge cases will take more time.

DonkeyApple

55,296 posts

169 months

Wednesday 17th October 2018
gangzoom said:
In answer to my own question, it looks like DeepMind are working on navigation - their 5th Nature paper in a few years... I'm in the wrong job frown.

https://www.nature.com/articles/s41586-018-0102-6....
How are the people working on the AP systems factoring in a car’s response to a pedestrian stepping out in front of it?

For me this is the fundamental issue for urban use: I can see how they can program a system with enough variables to be better than the average driver on the open road, but I cannot fathom how it can drive across many global cities if the fear of harm is removed.

There seems to be a cultural, social and even political hurdle within humanity that is being ignored.

All papers on the subject discuss how AI will cope better than humans in accidental situations, but not in deliberate ones.

It’s simple human nature that if a future outcome or response is known then it can be acted upon by all. Where the outcome is not known, natural risk calculations control us, and while these vary widely, 99% of us operate within a very small range when it comes to risk taking. It is this human relationship with risk that keeps pedestrians on pavements - not rules or laws, but human nature. It’s this subtle but overwhelmingly powerful range within a group of humans that has defined how many movements of people operate. Without it, these current systems will stop working.

Maybe the AI studies on robots hold a clue, in that they all show anthropomorphism to be hugely defining and important in achieving key goals - but you can never make use of that with a car.

An AP car is just a mobile ZX81. Expecting all humans to treat such a vehicle, with its known outcomes, in the same manner as a human-piloted vehicle - with all its risks, flaws and variables, even down to the colour, size or sex of the driver - is seemingly the elephant in the room.

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
What are you wittering on about now? You sound like a pensioner just discovering telephones.

DonkeyApple

55,296 posts

169 months

Wednesday 17th October 2018
RobDickinson said:
What are you wittering on about now? You sound like a pensioner just discovering telephones.
Rob, it’s above your intellect and I don’t have any colouring pens to hand. Leave it to the grown-ups. Why don’t you run along and find some more propaganda to post links to. Good girl.

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
I don't think technology is really your thing, mate. How about leaving it alone and letting people who have a clue deal with it?

DonkeyApple

55,296 posts

169 months

Wednesday 17th October 2018
RobDickinson said:
I don't think technology is really your thing, mate. How about leaving it alone and letting people who have a clue deal with it?
Open your mind. It’s clear that it’s very closed.

Try considering the crowd thinking of humans. Try imagining the wider world and how humans interact with objects, and expand that to how humans interact with objects that behave in a known manner versus those that behave randomly. There’s lots of proven science on the subject. And if you can find an article that discusses this in the context of AI cars, and not just humanoid robots, I will be genuinely impressed.

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
Gosh, yes, those people at Google can't have thought about pedestrians.

RobDickinson

31,343 posts

254 months

Wednesday 17th October 2018
DonkeyApple said:
And if you can find an article that discusses this in the context of AI cars and not just humanoid robots then I will be genuinely impressed.
Predicting pedestrian road-crossing assertiveness for autonomous vehicle control:
https://www.google.com/url?sa=t&rct=j&q=&a...

A Scenario-Adaptive Driving Behavior Prediction Approach to Urban Autonomous Driving:
https://www.google.com/url?sa=t&rct=j&q=&a...

Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction:
https://www.google.com/url?sa=t&rct=j&q=&a...


Edited by RobDickinson on Wednesday 17th October 09:05

TOPIC CLOSED