Tesla unlikely to Survive (Vol. 3)


skwdenyer

Wednesday 24th April
I thought the “man dresses as a driver’s seat” was a Ford stunt?

soupdragon1

Thursday 25th April
skwdenyer said:
soupdragon1 said:
Chasing Potatoes said:
Right - but they reported the entire thing was canned. And it doesn't appear to be. What a Model 2 looks like is another thing entirely.
If anyone views this announcement as anything but 'knee jerk' then I don't know what to say.

It's quite bemusing that, of all the products in the pipeline (Semi truck, Roadster, Model 2, Cybertruck), only the Cybertruck gets to see the light of day. And look at that launch. They're an established business now, but still in start-up mode, which is concerning. Flip-flopping on the product roadmap is a sign of poor leadership and/or a lack of confidence. My underlying take is that they aren't confident that the all-new M2 is the correct direction of travel, so they're going to pull a few bits from the parts bin and make a new model out of those instead. That's what we heard yesterday.
Apple recently canned its car project after spending over $10bn. Is that not "flip flopping" by an established business? Dyson brought a car to pre-production level, spending £500m, then dropped it. Many products, projects and programmes are dropped, delayed, or recast in light of changing economic realities. In the car world, a few notable examples include the Porsche 989, the Range Rover SV Coupe and the 2021 Jaguar XJ, but there are many more. Mazda prepared to launch a whole brand in the US (Amati, to compete against Infiniti, Acura and Lexus), but pulled the plug at the last minute.

Tesla doesn't have an unlimited supply of funds. It has a bit less than $30bn on hand IIRC, but its net debt has crept up quite a lot over recent months as it has been caught in a price war coupled with rapidly-expanding R&D spend. What we heard yesterday is that it intends to offer attractive leasing deals on the M3 (starting at $299 per month) in order to stimulate further demand. But the price war in China is obviously a concern for them. If you look at some of the cheap EVs coming out of China right now, it is implausible that many are actually profitable - there's a battle for market share.

In terms of the Semi, as per yesterday's release (and as confirmed by the drone warriors), work appears to have actually started on the factory intended to produce them at scale. Of course, that's just a building - it can be repurposed if necessary. But there's actual, tangible investment happening.

What I thought more interesting from the earnings call was the de-emphasis of Tesla's own battery projects. 18 months ago, in-house 4680 production was a linchpin of future strategy. Today it (and the wider Tesla cell production effort) is being recast as a hedge against spikes in global cell prices, and the 4680 ramp is spoken of only in terms of leading the Cybertruck ramp. This does make some sense. Just as with the BEV space in general, it was arguably necessary for Tesla to get in and drive progress forward, but over time the market will be able to innovate far faster than Tesla can alone. Panasonic and LG are reportedly ramping 4680 production. But it may also be emblematic of a business biting off more than it can chew.

In terms of accelerating the product roadmap, given the regulatory and production challenges associated with getting a new model to market, it is clearly not just a question of flicking a switch and delivering a new model; recall that the workshop manual is usually the longest lead-time item on any new vehicle programme! So whatever comes down the line is going to need to be a variant of an existing model. Per yesterday, this is said to incorporate some elements of new production methods.

So the question is, what? The logical step would be to take the M3 platform, add in some gigacastings and a structural pack to reduce the BOM further. My gut instinct would be something akin to the original BMW Compact - a truncated, hatchback M3 with one or more gigacastings, and a lower starting price.
Those other companies aren't flip flopping; those are shelved products, same as the M2. However, due to market sentiment around that M2 news, Musk flip flopped, and has decided to build a car from the parts bin.

This change in strategy is investor/share price led, rather than company led. That's the absurdity of the situation. The tail is wagging the dog.

Gone fishing

Thursday 25th April
soupdragon1 said:
Those other companies aren't flip flopping; those are shelved products, same as the M2. However, due to market sentiment around that M2 news, Musk flip flopped, and has decided to build a car from the parts bin.

This change in strategy is investor/share price led, rather than company led. That's the absurdity of the situation. The tail is wagging the dog.
If somebody doesn't want to see it, they don't want to see it. It doesn't matter what logic you lay out.

As for what... smaller battery, rear-wheel drive, remove the rear screen, a few gigacastings, 2 exterior colours, 1 interior colour; pile them high, try and sell them cheap. The China market has gone for Tesla at the cheap end; in the US, thanks to tax credits, they're already cheap, although that bubble could burst; European company car buyers won't be bothered unless it's significantly cheaper on the monthly.

soupdragon1

Thursday 25th April
Gone fishing said:
soupdragon1 said:
Those other companies aren't flip flopping; those are shelved products, same as the M2. However, due to market sentiment around that M2 news, Musk flip flopped, and has decided to build a car from the parts bin.

This change in strategy is investor/share price led, rather than company led. That's the absurdity of the situation. The tail is wagging the dog.
If somebody doesn't want to see it, they don't want to see it. It doesn't matter what logic you lay out.

As for what... smaller battery, rear-wheel drive, remove the rear screen, a few gigacastings, 2 exterior colours, 1 interior colour; pile them high, try and sell them cheap. The China market has gone for Tesla at the cheap end; in the US, thanks to tax credits, they're already cheap, although that bubble could burst; European company car buyers won't be bothered unless it's significantly cheaper on the monthly.
Yes, there are certainly plenty of cost savings to go after: glass roof removed, no heated seats, manual rather than powered seats, to add to yours. We'll have to wait and see what Tesla comes up with.

Your point around company car buyers is interesting. Almost 80% of EVs sold in the UK are company car type sales, with just over 20% of EV purchases coming from private/retail buyers. Typically, most company cars are the low-spec trims, and if this new M2 isn't different enough from the existing cars it won't generate much new volume for Tesla - just fewer M3/MY sales, with volume shifting into the M2 and an overall revenue drop.

On the other hand, plenty of room for Tesla to pull a rabbit out of the hat, but with the recent knee jerk decision making, I don't hold out much hope.

Investor day wasn't that long ago, and the M2 was touted as offering a 50% reduction in build cost and a 40% reduction in factory space footprint with the all-new unboxed method. The biggest change to car manufacturing since Henry Ford came up with the production line, investors were told.

But here we are, again. Battery day promised revolutionary 4680 cells; we're still waiting years later. Investor day promised the unboxed manufacturing method, and FSD has been round the corner for the best part of a decade. I'm surprised that people still have faith in Elon Musk after all these big promises with no tangible results. All investors are really getting these days are Elon's shower thoughts, not hard facts or proofs of concept.


skwdenyer

Thursday 25th April
On 4680 cells, they are here, they’re in production, and multiple suppliers are producing or gearing up to produce them. Tesla themselves seem to be still in “production hell” over them, and clearly they’re a bottleneck for CT production. Industry is adopting the form factor, and that appears to be contributing to a global fall in cell prices.

Apart from not being here fast enough, what’s the beef with 4680?

soupdragon1

Thursday 25th April
skwdenyer said:
On 4680 cells, they are here, they’re in production, and multiple suppliers are producing or gearing up to produce them. Tesla themselves seem to be still in “production hell” over them, and clearly they’re a bottleneck for CT production. Industry is adopting the form factor, and that appears to be contributing to a global fall in cell prices.

Apart from not being here fast enough, what’s the beef with 4680?
Here's a quick summary of battery day. 'Not exactly on track' is a fair assessment.

https://www.theverge.com/2020/9/22/21450840/tesla-...

skwdenyer

Friday 26th April
soupdragon1 said:
skwdenyer said:
On 4680 cells, they are here, they’re in production, and multiple suppliers are producing or gearing up to produce them. Tesla themselves seem to be still in “production hell” over them, and clearly they’re a bottleneck for CT production. Industry is adopting the form factor, and that appears to be contributing to a global fall in cell prices.

Apart from not being here fast enough, what’s the beef with 4680?
Here's a quick summary of battery day. 'Not exactly on track' is a fair assessment.

https://www.theverge.com/2020/9/22/21450840/tesla-...
That summary doesn’t put timelines around anything.

The “$25k car” was a 2018 ambition. In 2025 money that is, in real terms, a $33k car. It seems to me very likely they can come up with something based off the M3 to reach that price point, without needing as much of the revolutionary tech the M2 was supposed to include.
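
As a quick sanity check on that real-terms figure, here's a minimal sketch (the ~4%-a-year inflation rate is an assumption for illustration, not an official CPI series):

```python
# Real-terms check of "a $25k car in 2018 is roughly a $33k car in 2025".
# The inflation rate is an assumed average, purely illustrative.
price_2018 = 25_000
inflation = 1.04 ** 7                      # 2018 -> 2025 at ~4%/yr

print(f"${price_2018 * inflation:,.0f}")   # ~$32,900, i.e. roughly $33k
```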

Obviously one of the big drivers is battery costs. 4680 was in large part touted as a way of cutting battery costs by 56% in real terms by 2025. That would take the target price from $156/kWh (in 2020) down to about $95/kWh in 2025. That's a global average. I don't have current relative prices, but in 2022 US and EU prices were 24% and 32% higher than in China - which is of course a problem for anyone not sourcing in China.

Battery tech is a really interesting problem. When Tesla started with the original Roadster, people laughed at them for using "laptop batteries" rather than "proper" traction batteries. Demonstrating how that worked kick-started a lot of the BEV revolution. But the *spec* of batteries necessary was rather different, hence the partnership with Panasonic, the original Gigafactory, and so on.

As uptake and demand grew, so did price and supply issues. 4680 is a way of improving both price and density. Tesla "invented" the 4680 cell and started getting into production. By doing so, they kicked off another small revolution. A number of vendors have geared up for production, and other manufacturers are starting to use the cells.

From Tesla's PoV, job done. They can and will continue to produce the newer 2nd-generation 4680s used in the CT, but they don't need to produce everything they use, and market forces will likely continue to push down the price.

Battery tech is now really starting to hot up. Look no further, say, than Samsung's announcements, e.g. https://www.theregister.com/2024/04/25/samsung_bat... - 80% charge in a few minutes, a 20-year life, and a viable solid-state option.

Tesla no longer needs to be driving cell development; the market is now mature enough to deliver really serious improvements all by itself.

If anyone thought that, long term, Tesla's USP / distinguishing feature / whatever was going to be in battery tech, they were blind to the obvious realities of the market. What Tesla needed to do was enough to ensure the market moved fast enough for their needs, and to demonstrate that the company's performance wasn't simply in thrall to market forces.

For some this may look like flip-flopping, or false promises, or whatever. But we're not talking about the car industry of yore here; this is a fast-moving tech business, in which stuff that looks impossible today may already be old hat in three years' time. Tesla has ridden the curve very well indeed.

But the future is, as Musk says, autonomy. That's where Tesla is still in a really interesting position. Mobileye (recall, Tesla's original partner) was bought by Intel for $17bn, amidst promises of their own self-driving revolution. A limited IPO (Intel sold just 5%) valued the business at the same $17bn. It's had a bit of a rollercoaster ride since, and is now valued at about $24bn, but seems to face a pretty tough market.

The recent past is littered with autonomy dead ends. Ford gave up on L4 and wrote off a $2.7bn investment in Argo AI in 2022 (it had partnered with VW). Uber sold its self-driving business to Aurora Innovation in 2020 (whose stock has lost nearly 80% of its value since 2022). Didi spun theirs out to Xpeng, who likewise have lost most of their share value.

Mobileye are still in the game, of course, recently announcing a project with VW to deliver L4 autonomous driving on the ID.Buzz platform from 2026. Their Mobileye SuperVision product seems to promise essentially the same sort of performance as we've seen from Mercedes recently - point-to-point automatic navigation on highways and urban expressways. Their next product is Mobileye Chauffeur (which will be used on the VW ID.Buzz), which adds a bunch of redundant systems to replace the driver's eyes in monitoring what's happening. It still isn't what we'd call "Full Self Driving", however; it is a combination of mapping, other data sources, cameras, sensors, etc., and it is reliant upon the quality of mapping and other data available from infrastructure operators.

Mobileye Drive is the analogue in the urban "robotaxi" market - very detailed mapping of defined geographical areas.

That mapping is absolutely key. To deliver self-driving, Mobileye literally have to map the world. Now, I've actually run a start-up project aimed at delivering this sort of mapping (for different purposes). It is unbelievably difficult to deliver anything that's actually reliable enough to use for autonomous navigation. Not at all impossible, and I hugely admire them for trying to do it. But it isn't delivering what most would call "real" self-driving.

Now, that 2026 timeline seems aggressive. In 2018 they said EyeQ4 was launching then and would allow "eyes off" autonomy, whilst EyeQ5 would launch in 2020 and offer "mind off" driving. That obviously didn't happen.

In 2023, they said EyeQ6 would deliver all the good stuff, and it would launch in 2024/25. Now it is another year, and they've split their offering into "EyeQ6 Lite" and "EyeQ6 High" (only the latter will deliver the good stuff; only the former seems to have lots of orders).

They've just reported Q1 2024 revenues down 48% YoY. Their main revenues right now are in more "humble" ADAS systems. It is clever stuff all on its own, and really does perform. But it is ultimately a standard Tier 2 type of business, which ships hardware to Tier 1s to sell to manufacturers to bolt into cars, and is susceptible to competition.

As a Tier 2 supplier, it is also interesting that Mobileye concedes its SuperVision systems (which are more advanced) will be *less* profitable for it than its regular ADAS business.

Why's all this relevant? Because a lot of people love to hate on Tesla for seeming to promise stuff they don't deliver. But that's just the same everywhere else:

2016, BMW and Mobileye announce partnership to deliver "fleets of self-driving vehicles to the streets by 2021": https://www.eetimes.com/why-intel-got-inside-bmw-m... FCA joined the deal in 2017: https://www.press.bmwgroup.com/global/article/deta...

The goals were pretty punchy for that one.

ArsTechnica said:
In a statement, Mobileye co-founder Amnon Shashua said, "we look forward to work together with our partners at BMW and Intel and share our experiences and knowhow in artificial intelligence, machine learning and tool chains for validation and testing as part of the series development for the 2021 launch. We believe the 2021 will mark the beginning of a transformative change starting with level-5 ride sharing in geo-fenced areas and moving forward, given the ability to crowd source high-definition map making, to a level-5 “everywhere” activated level-5 experience."
2018, VW and Mobileye announce robotaxis launching in Israel end of 2021, full commercialisation in 2022: https://web.archive.org/web/20181101095534/https:/... which also didn't happen. And so on.

Mobileye purchased Moovit, announced they'd partner with Sixt to deliver robotaxis in Munich under the Moovit co-brand, and then in 2023 seemed to drop the idea: https://seekingalpha.com/article/4618522-mobileye-...

So how does this relate to Tesla? The primary difference is that none of Mobileye's roadmap solves the generalised self-driving problem. It is entirely dependent upon very high resolution, constantly-updated, highly-detailed mapping.

Now, for Mobileye's business, this is good. It provides vendor lock-in, enabling it not only to keep selling hardware to its customers, but to bind them to its software and services. Ironically, there's actually rather less commercial imperative for them to solve the generalised self-driving problem.

And despite their map-based approach, they've consistently failed to meet any of the targets they've promised.

Tesla, on the other hand, is trying to do something different. It is in effect attempting to bypass the need to map the world at any resolution greater than Google offers, remove the drain on revenues associated with Mobileye's service (and the potential for price gouging down the line), and take ownership of the tech stack.

That Tesla's FSD now (version 12.3.5, I think) seems to be able to meet or exceed the capabilities of Mobileye's platform despite (a) many years' less development, and (b) a total absence of the sort of high-resolution mapping Mobileye requires, is a pretty remarkable achievement.

Mobileye are, of course, now downplaying the mapping angle. Check out this recent video showing off their system's ability to navigate around Shanghai: https://www.youtube.com/watch?v=svo3QN5y4F0 Note how they claim that the system is "noticing" the left-turn lane, when in fact their system already knows which lane is which (that's the point of the HD mapping). It is an impressive system, sure, but it is not notably better than Tesla's latest system. The fact that it is shipping in the Zeekr 001 is no more impressive than Teslas shipping with FSD. And remember, Mobileye are doing this with cameras, lidar and radar (which is, again, natural for them - they want to sell more units of hardware...).

Now, that doesn't mean I think Tesla is doing everything right. But I think some of the observations about Tesla's strategy choices are a bit off and reflect a lack of understanding of (a) what they're trying to do, and (b) how quickly a lot of things are changing.

soupdragon1

Friday 26th April
You quite rightly point out the flaws in others' autonomous driving efforts. No dispute here with what you said.

You'll struggle to convince me that Tesla's vision-only approach is the best one, though.

Eg:

[image]

Now, a human here will pull the visor down and/or put on sunglasses. And this instance brings us to a general point about the dynamic range of a camera vs the human eye.

Even a high-end camera lens at £10k is hugely inferior to the human eye, not to mention the high-end sensor in the DSLR behind it. In high-end camera set-ups, users still need to take 3 pictures at different exposures (exposure bracketing) to get a picture that resembles what we see with our eyes. And that's a still picture.

It's not actually possible to do bracketed exposures in video unless you have 3 camera set-ups in each camera location. With real-time video, you have 1 exposure setting: you either see the darker shadows well, or the bright parts well, but not both together. The video camera MUST throw away some of the image, just like the image above. The bright area contains lots of detail; you just aren't able to see it due to the limitations of the camera.

Ever see a football match on TV in bright sunshine where the stadium shadow covers a large portion of the pitch? The cameraman has a nightmare trying to broadcast a good image. Yet, if you're sitting in the stadium, you can see all the players perfectly well, thanks to the human eye's superior dynamic range. That's a mild dynamic range problem; then you've got more aggressive cases like the picture above. Vision only hasn't a hope of working correctly in such scenarios. You can't throw extra compute at a physics problem.
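
To put rough numbers on that dynamic-range gap: a "stop" is a doubling of luminance, so dynamic range in stops is just log2 of the contrast ratio. The figures below are commonly-cited ballpark values, not measurements:

```python
# Ballpark dynamic-range comparison, in stops (log2 of contrast ratio).
from math import log2

def stops(contrast_ratio: float) -> float:
    return log2(contrast_ratio)

print(f"eye, static scene:   {stops(10_000):.1f} stops")     # ~13
print(f"eye, fully adapted:  {stops(1_000_000):.1f} stops")  # ~20
print(f"typical CMOS sensor: {stops(4_000):.1f} stops")      # ~12
```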

And I'm talking about large, high-end cameras with very large, sophisticated multi-lens glass set-ups. Exchange that big stuff for compact lenses with their accompanying smaller sensors and you are left with even lower dynamic range, plus vignetting, barrel distortion, chromatic aberration and grain/noise.

Tesla are walking into a dead end with Robotaxi. I believe it's possible in optimal conditions, but that's not a recipe for success. Working sometimes is not a realistic business plan.

I'll order my Robotaxi for the morning trip to work. No problem sir, you're booked in, let's just hope it's not too sunny/foggy etc.

skwdenyer

Friday 26th April
soupdragon1 said:
You quite rightly point out the flaws in others' autonomous driving efforts. No dispute here with what you said.

You'll struggle to convince me that Tesla's vision-only approach is the best one, though.

Eg:

[image]

Now, a human here will pull the visor down and/or put on sunglasses. And this instance brings us to a general point about the dynamic range of a camera vs the human eye.

Even a high-end camera lens at £10k is hugely inferior to the human eye, not to mention the high-end sensor in the DSLR behind it. In high-end camera set-ups, users still need to take 3 pictures at different exposures (exposure bracketing) to get a picture that resembles what we see with our eyes. And that's a still picture.

It's not actually possible to do bracketed exposures in video unless you have 3 camera set-ups in each camera location. With real-time video, you have 1 exposure setting: you either see the darker shadows well, or the bright parts well, but not both together. The video camera MUST throw away some of the image, just like the image above. The bright area contains lots of detail; you just aren't able to see it due to the limitations of the camera.

Ever see a football match on TV in bright sunshine where the stadium shadow covers a large portion of the pitch? The cameraman has a nightmare trying to broadcast a good image. Yet, if you're sitting in the stadium, you can see all the players perfectly well, thanks to the human eye's superior dynamic range. That's a mild dynamic range problem; then you've got more aggressive cases like the picture above. Vision only hasn't a hope of working correctly in such scenarios. You can't throw extra compute at a physics problem.

And I'm talking about large, high-end cameras with very large, sophisticated multi-lens glass set-ups. Exchange that big stuff for compact lenses with their accompanying smaller sensors and you are left with even lower dynamic range, plus vignetting, barrel distortion, chromatic aberration and grain/noise.

Tesla are walking into a dead end with Robotaxi. I believe it's possible in optimal conditions, but that's not a recipe for success. Working sometimes is not a realistic business plan.

I'll order my Robotaxi for the morning trip to work. No problem sir, you're booked in, let's just hope it's not too sunny/foggy etc.
You're quite right about dynamic range. Your chosen photo is quite interesting, because in that *precise* situation a driver should stop (although they don't). I was actually driving in just such a scenario yesterday afternoon; with the sun lower than the sun visor could cope with, it wasn't safe for me to drive faster than a few mph. The human eye isn't infinitely capable in such circumstances!

The human eye is indeed remarkable, delivering fully 20+ stops of adjustment. Furthermore, the eye is able to perform (in effect) dynamic range compensation because the eye is constantly moving and reacting. Our brain's idea of what our eyes see is in fact mostly out of whack with what the eye actually sees.

Bracketing doesn't require quite the 3-camera set-up you describe. So long as the frame rate is sufficient, and the camera equipment good enough, multiple frames can be taken at different exposures. This is commonly done by, say, smartphone cameras, and is referred to as "HDR". The problem has been studied extensively, and its use in autonomous vehicles has appeared in the published literature, for instance here: https://www.sciencedirect.com/science/article/abs/...
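
For illustration, here's a minimal sketch of that multi-frame fusion idea using OpenCV's Mertens merge - the generic technique, not Tesla's or Mobileye's actual pipeline, and the file names are placeholders:

```python
# Fuse three frames of the same scene shot at different exposures
# (e.g. -2 EV / 0 EV / +2 EV), as a bracketing pipeline might grab them.
import cv2
import numpy as np

frames = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion blends the best-exposed regions of each frame and needs
# no exposure metadata, which suits a fixed, fast camera pipeline.
fused = cv2.createMergeMertens().process(
    [f.astype(np.float32) / 255.0 for f in frames]
)
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```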

Continuing with the smartphone analogy, the progress in that space has been remarkably rapid. This is a trivial example of what's "standard" in pretty cheap cameras:

[image]

Now, this is where the needs of a camera for human viewing and one for machine viewing diverge. We don't necessarily care whether the image looks beautiful, but we do care if it has enough information. That opens the door to tech that you won't see in your smartphone (because it doesn't produce beautiful images).

Here's a flyer for a commercial solution: https://dce9ugryut4ao.cloudfront.net/LUCID-TritonH...

And some examples of what is achievable:

[image]

Now, I'm not saying that Tesla or others have solved every problem in that space. But the technology exists to - for many if not most practical purposes - mitigate the issue you raise.

RichardM5

Friday 26th April
From personal experience with the 'Full Self Driving' option, I can categorically say with 100% certainty that the current hardware is not capable of achieving this.

There are far too many conditions under which it just can't work with cameras alone. If the conditions are perfect, then maybe, but all the time under all conditions, not a hope in hell.

- It's dark on a road with no street lights. Cameras blocked, FSD disabled.
- The sun is low, below about 45 degrees. Cameras blocked, FSD disabled.
- It's raining. Cameras blocked, FSD disabled.
- It's foggy, or even slightly misty. Cameras blocked, FSD disabled.
- Snowing. Cameras blocked, FSD disabled.

For God's sake, they can't even get the wipers working correctly 50% of the time: they wipe when there's not a cloud in the sky, and they don't wipe when the windscreen is dangerously obscured. How do they expect to get FSD working?

The rear camera is a joke when conditions are 'dirty'. There is no protection or cleaning; within a relatively short distance it can be so covered in dirt that you can barely see anything useful.

I find the car exceedingly frustrating. For many things it's great at getting from A to B (I have other cars for petrolhead thrills), but my god, the things that don't work, yet so easily could with 'conventional' technology, drive you nuts.

If they are relying on FSD to put them ahead of the competition, they are doomed.


TheRainMaker

Friday 26th April
RichardM5 said:
From personal experience with the 'Full Self Driving' option, I can categorically say with 100% certainty that the current hardware is not capable of achieving this.

There are far too many conditions under which it just can't work with cameras alone. If the conditions are perfect, then maybe, but all the time under all conditions, not a hope in hell.

- It's dark on a road with no street lights. Cameras blocked, FSD disabled.
- The sun is low, below about 45 degrees. Cameras blocked, FSD disabled.
- It's raining. Cameras blocked, FSD disabled.
- It's foggy, or even slightly misty. Cameras blocked, FSD disabled.
- Snowing. Cameras blocked, FSD disabled.

For God's sake, they can't even get the wipers working correctly 50% of the time: they wipe when there's not a cloud in the sky, and they don't wipe when the windscreen is dangerously obscured. How do they expect to get FSD working?

The rear camera is a joke when conditions are 'dirty'. There is no protection or cleaning; within a relatively short distance it can be so covered in dirt that you can barely see anything useful.

I find the car exceedingly frustrating. For many things it's great at getting from A to B (I have other cars for petrolhead thrills), but my god, the things that don't work, yet so easily could with 'conventional' technology, drive you nuts.

If they are relying on FSD to put them ahead of the competition, they are doomed.
I said this years ago when we had loons on here saying, "Next year, the cars will drive themselves." I just can't see it working with cameras alone. Until the cars are talking to each other, this is all a dead end IMO.

skwdenyer

Friday 26th April
RichardM5 said:
From personal experience with the 'Full Self Driving' option, I can categorically say with 100% certainty that the current hardware is not capable of achieving this.

There are far too many conditions under which it just can't work with cameras alone. If the conditions are perfect, then maybe, but all the time under all conditions, not a hope in hell.

- It's dark on a road with no street lights. Cameras blocked, FSD disabled.
- The sun is low, below about 45 degrees. Cameras blocked, FSD disabled.
- It's raining. Cameras blocked, FSD disabled.
- It's foggy, or even slightly misty. Cameras blocked, FSD disabled.
- Snowing. Cameras blocked, FSD disabled.

For God's sake, they can't even get the wipers working correctly 50% of the time: they wipe when there's not a cloud in the sky, and they don't wipe when the windscreen is dangerously obscured. How do they expect to get FSD working?

The rear camera is a joke when conditions are 'dirty'. There is no protection or cleaning; within a relatively short distance it can be so covered in dirt that you can barely see anything useful.

I find the car exceedingly frustrating. For many things it's great at getting from A to B (I have other cars for petrolhead thrills), but my god, the things that don't work, yet so easily could with 'conventional' technology, drive you nuts.

If they are relying on FSD to put them ahead of the competition, they are doomed.
I'm not suggesting that the current setup can achieve this. I wouldn't be at all surprised if Tesla announced that the next gen of hardware (which Musk said on the earnings call is going into production next year) is ultimately required to achieve everything that FSD should deliver.

But Hardware 4 supports HDR shooting at 10-bit and 40 FPS - whether that camera capability is used or not depends upon the available compute - per: https://www.notateslaapp.com/news/679/tesla-s-fsd-...

If the camera itself is sufficiently capable, is there any reason why software and/or firmware can't introduce bracketing to deliver sufficient dynamic range? Though I can imagine this might require the Hardware 5 processor.
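
As a toy illustration of the trade-off (assumed numbers; not how Tesla's firmware actually works): alternating exposures frame-by-frame divides the effective output rate by the bracket size, which a 40 FPS sensor could absorb:

```python
# Effective HDR frame rate if every `bracket` raw frames are fused
# into one output frame. Figures are illustrative only.
def fused_fps(sensor_fps: float, bracket: int) -> float:
    return sensor_fps / bracket

print(fused_fps(40, 2))  # 20.0 fused frames/s with a 2-exposure bracket
print(fused_fps(40, 3))  # ~13.3 with a 3-exposure bracket
```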

Mobileye have been using HDR for some time: https://fleetsafe.com.au/wp-content/uploads/2019/0... - note that's the (now-discontinued) retrofit system.

I do know that the supply chain is offering such sensors to manufacturers, for instance this from 2019: https://site.eettaiwan.com/events/iovev2019/dl/02_...

It is important to note that Tesla is not the only team working on camera-only (or primarily-camera) autonomy. For instance, Horizon Robotics' Matrix Superdrive solution uses a camera-first approach: https://en.horizon.cc/driving-solutions/

I also wouldn't be surprised if Tesla does implement one or more low-cost LiDAR scanners. But I suspect it will try to solve the FSD problem with cameras, and then add LiDAR to handle secondary object detection. Solving the driving problem in a single domain (vision) is ultimately much more viable for the *general case* (i.e. no HD maps) than trying to do sensor fusion (which adds massive complexity and damages the prospect of convergence in the current NN-based approach).

Out of interest, which hardware and software do you have experience with on Tesla?

Durzel

Friday 26th April
Hardware 4 is very new. What does that mean for cars built before it started being fitted? That they won't realistically achieve full self-driving, despite Tesla's proclamations to the contrary?

skwdenyer

Friday 26th April
Durzel said:
Hardware 4 is very new. What does that mean for cars built before it started being fitted? That they won't realistically achieve full self-driving, despite Tesla's proclamations to the contrary?
On further investigation, it seems that HW3 also has HDR capability per: https://zhuanlan.zhihu.com/p/641289190

And Tesla provided (IIRC) free upgrades for customers with HW2 and HW2.5 to HW3. So there's a minimum hardware level out there for all practical purposes.

It may be that the better HW4 platform can be used to collect training data that is easier to auto-label, in turn creating a model that can be deployed onto HW3. That would correlate with other known data.

AFAIK HW3 can't be upgraded to HW4 (different physical form-factors).

I'd strongly suggest that HW5 is going to be needed (at the least) for a "Robotaxi" offering. With luck, there will be an upgrade path from HW4 to HW5. And if/when things get more robust, an upgrade path from HW3 to HW5 might be offered - the installed base of HW3 vehicles whose owners might want to take up a subscription will be so large as to make it commercially viable.

Without true under-the-hood and/or insider knowledge, it is hard to say for sure what's happening or is going to happen.

Gone fishing

Friday 26th April
I certainly wouldn't believe anything Notateslaapp post - they know nothing; all they ever do is reprint any tweet they see, and they often embellish it. They went big on the front bumper camera, for instance.

But back on FSD... this is why it will never be L5. This was a few days ago - chilly overnight, but by 10am a sunny day... and one camera could only see this:

[image]

There's a heap of resilience that needs to be added to make it even L3.

soupdragon1

Friday 26th April
skwdenyer said:
You're quite right about dynamic range. Your chosen photo is quite interesting, because in that *precise* situation a driver should stop (although they don't). I was actually driving in just such a scenario yesterday afternoon; with the sun lower than the sun visor could cope with, it wasn't safe for me to drive faster than a few mph. The human eye isn't infinitely capable in such circumstances!

The human eye is indeed remarkable, delivering fully 20+ stops of adjustment. Furthermore, the eye is able to perform (in effect) dynamic range compensation because the eye is constantly moving and reacting. Our brain's idea of what our eyes see is in fact mostly out of whack with what the eye actually sees.

Bracketing doesn't require quite the 3-camera set-up you describe. So long as the frame rate is sufficient, and the camera equipment good enough, multiple frames can be taken at different exposures. This is commonly done by, say, smartphone cameras, and is referred to as "HDR". The problem has been studied extensively, and its use in autonomous vehicles has appeared in the published literature, for instance here: https://www.sciencedirect.com/science/article/abs/...

Continuing with the smartphone analogy, the progress in that space has been remarkably rapid. This is a trivial example of what's "standard" in pretty cheap cameras:

[image]

Now, this is where the needs of a camera for human viewing and one for machine viewing diverge. We don't necessarily care whether the image looks beautiful, but we do care if it has enough information. That opens the door to tech that you won't see in your smartphone (because it doesn't produce beautiful images).

Here's a flyer for a commercial solution: https://dce9ugryut4ao.cloudfront.net/LUCID-TritonH...

And some examples of what is achievable:

[image]

Now, I'm not saying that Tesla or others have solved every problem in that space. But the technology exists to - for many if not most practical purposes - mitigate the issue you raise.
I was fully aware of HDR video (I enjoy my HDR home cinema set-up), but I'd dismissed it as unusable due to latency. However, I didn't know about the Lucid and Sony developments. That's very impressive.

I'm not sure how this all fits in with Tesla, to be honest, as according to Elon the HW3-and-above cars can be robotaxis without any of this tech. On the one hand you are saying the tech is getting better and will overcome some of the big vision-only issues (and on a preliminary reading, I can see that pathway and agree with you on the opportunity it could unlock), but then how do we reconcile that with HW3 cars being robotaxis as well?

That's what Elon is telling investors. Now, I might trust him a little more if he talked about this kind of tech. I have no problem switching away from my negative thesis when presented with such information, but I'm sure you can understand why I remain very sceptical about the words that exit Elon's mouth. HW3 doesn't cut it for me, for the reasons we've just discussed. HW5 and above? OK, I've certainly got a more open mind about that now.

Thanks for the links. I'll read them fully at some point, but it seems it's a great development on the path to autonomy.

durbster

Friday 26th April
How much is it going to cost to cover the car in high-end cameras with sophisticated lenses?

A top-end smartphone camera must account for £600-800 of the retail price of the phone, and you'd need at least, what, eight of them to even begin to achieve anything usable in different conditions.

Then there's the on-board computing power you need to process these images in real time. We're talking about eight or more 4K video streams, which is serious computing power, even before you get to the image analysis.
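
To put a number on that raw throughput (back-of-envelope only; the resolution and frame rate are assumed for illustration):

```python
# Raw pixel throughput of eight 4K streams at 30 fps, before any analysis.
cams, fps = 8, 30
px_4k = 3840 * 2160

print(f"{cams * px_4k * fps / 1e9:.2f} Gpix/s")  # ~1.99 Gpix/s
```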

So these cars will need significantly more sophisticated hardware to achieve FSD via image processing, yet Tesla have spent the last few years removing stuff, not upgrading it.

I'm not buying it. It doesn't add up.

TheDeuce

Friday 26th April
durbster said:
How much is it going to cost to cover the car in high-end cameras with sophisticated lenses?

A top-end smartphone camera must account for £600-800 of the retail price of the phone, and you'd need at least, what, eight of them to even begin to achieve anything usable in different conditions.

Then there's the on-board computing power you need to process these images in real time. We're talking about eight or more 4K video streams, which is serious computing power, even before you get to the image analysis.

So these cars will need significantly more sophisticated hardware to achieve FSD via image processing, yet Tesla have spent the last few years removing stuff, not upgrading it.

I'm not buying it. It doesn't add up.
Are you serious? The bulk-buy price of the best smartphone cameras is probably less than $20.

Agree that the compute power and AI needed to sort out the input is a leap, but the camera hardware cost is very little.

And as per my previous post a few pages back, the first 95% of FSD was just 5% of the work required to make it truly dependable - they're light years off actually useful FSD, for sure.

RichardM5

Friday 26th April
skwdenyer said:
Out of interest, which hardware and software do you have experience with on Tesla?
Hardware 3, latest UK software.

I'm not necessarily saying that a vision-only system won't work, but that it won't work with the hardware currently installed in any current Tesla.

Of course, advances in camera technology, with multiple sensors targeted at different light intensities, or even frequencies, will make cameras more versatile than the human eye. But the cameras in current Teslas are cheap and cheerful things, more akin to the webcam in your laptop than a high-end device designed for near-simultaneous very low and very bright light conditions.

skwdenyer

Saturday 27th April
RichardM5 said:
skwdenyer said:
Out of interest, which hardware and software do you have experience with on Tesla?
Hardware 3, latest UK software.

I'm not necessarily saying that a vision-only system won't work, but that it won't work with the hardware currently installed in any current Tesla.

Of course, advances in camera technology, with multiple sensors targeted at different light intensities, or even frequencies, will make cameras more versatile than the human eye. But the cameras in current Teslas are cheap and cheerful things, more akin to the webcam in your laptop than a high-end device designed for near-simultaneous very low and very bright light conditions.
The only thing I believe I currently know is that both HW3 & HW4 cameras are HDR capable.

It turns out that you need surprisingly little resolution and optical fidelity to do object detection. Most people starting out in this field feed their detection algorithms images of far too high a quality, resulting in massive spikes in compute requirements for no actual benefit. We humans like to see nice shiny 4K images, but the detection algo really doesn't care.

The secret sauce is in the optimisation of the image pipeline for the intended purpose, the inference engine, and the general compute algorithms.

So long as the cameras can see enough to get the detail they require, that’s about it.

There are some cool tricks you can use if you do have a higher-resolution sensor, such as foveation (sketched below). But all the cool stuff you see in many demos I could broadly replicate for you with a simple HD camera, a Raspberry Pi and a (now) pretty cheap GPU, or something like a Coral TPU.
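
A hedged sketch of that foveation trick (the detector and the box coordinates are placeholders; the point is only the cheap-pass/full-resolution-crop structure):

```python
# Foveation: run detection on a cheap downscaled frame, then revisit a
# region of interest at the sensor's native resolution.
import cv2

frame = cv2.imread("road_4k.jpg")         # full-resolution sensor frame
small = cv2.resize(frame, (640, 360))     # cheap pass for the detector

# ... run a detector on `small`, get a box in small-frame coords ...
x, y, w, h = 320, 180, 80, 60             # illustrative detection box

sx = frame.shape[1] / small.shape[1]      # scale factors back to full res
sy = frame.shape[0] / small.shape[0]
fovea = frame[int(y * sy):int((y + h) * sy), int(x * sx):int((x + w) * sx)]
# `fovea` can now be fed to a finer model at native resolution.
```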

I’m talking stuff like this from MobilEye:



(I might need a touch more compute to do that at 40 FPS, to be fair, but not by much)

That colour coding is just a simple segmentation algorithm that needs surprisingly cheap (nowadays) hardware to run; a minimal sketch with off-the-shelf parts follows below. Object detection (putting boxes around stuff and identifying it) is another (today) trivial problem. Figuring out the extents of the road and so on is also straightforward in compute terms.
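
That colour-coded output really can be reproduced with off-the-shelf parts; here's a minimal sketch using torchvision's pretrained DeepLabV3, chosen purely as an illustration (its classes are Pascal VOC, not driving-specific, and it has nothing to do with Mobileye's actual stack):

```python
# Per-pixel semantic segmentation with an off-the-shelf pretrained model.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = prep(Image.open("road.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    out = model(img)["out"]        # (1, num_classes, H, W) logits
labels = out.argmax(1).squeeze(0)  # per-pixel class id, the "colour coding"
```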

The secret sauce is the training. Not training for every possible case (impossible), but training to know what a road looks like, where the edges are, what the likely behaviour of a moving vehicle is, and so on.

That’s what Tesla is throwing masses of effort at right now with Dojo. And where they differ fundamentally from MobilEye - who instead rely on HD mapping to tell the car where the road is, and then use the onboard sensors to find where reality today differs from the map.

In terms of the obscured camera, you're right, that's a problem. For FSD, just asking the human to clean the cameras is easy. For robotaxis, not so much.

The mistake many make is in assuming that LiDAR is any better. It isn’t. If the sensor is obscured by mud or whatever, it is dead.

Radar might be better, but that has a bunch of other issues to worry about.

Ironically, the optimum solution might be an Optimus robot sitting in the driver's seat :)