Guy Martin

Author
Discussion

SystemParanoia

14,343 posts

199 months

Sunday 26th November 2017
quotequote all
ash73 said:
Not impressed with this Devbot team at all; they didn't understand Guy's feedback that it needed to be smoother and just turned the wick up.

"It didn't do what the code said it should do"... yeah right.

You can keep the Tesla, I'll take the Volvo.
A set of sensors to monitor tyre temps wouldn't have taken a lot to integrate either.

Jonesy23

4,650 posts

137 months

Sunday 26th November 2017
quotequote all
The way that thing worked was incredibly crude; it was hard to tell where the dev time had gone.

For a start, if they'd done some integration with existing COTS traction and stability algorithms they'd have been a lot further along!

Also applying control inputs smoothly and appropriately is basic old stuff.

Plus it was a good demo of why people don't usually engineer control systems the way they did, and why they stay a mile away from AI concepts - it's usually a good idea to understand how the thing behaves, so it doesn't randomly go wrong and leave you saying 'we don't know what's happened'.
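
Smoothing control demands really is old, well-understood stuff. As a rough illustration only (a made-up Python sketch, nothing to do with the actual Devbot code), a basic slew-rate limiter on a throttle demand looks something like this:

# Illustrative only: limit how fast a control demand is allowed to change per tick,
# so the output ramps smoothly instead of jumping straight to the new target.
def slew_limit(previous, target, max_step):
    delta = target - previous
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return previous + delta

throttle = 0.0
for raw_demand in [1.0, 1.0, 1.0, 0.2, 0.2]:  # planner suddenly asks for full throttle, then backs off
    throttle = slew_limit(throttle, raw_demand, max_step=0.1)  # at most a 10% change per control tick
    print(round(throttle, 2))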

Adrian W

13,897 posts

229 months

Sunday 26th November 2017
quotequote all
I was recently told by someone who knows that AI cars are a long, long way away and the technology required doesn't exist yet: the software cannot recognise a singularity and make a decision. The example given was: if someone is standing by a zebra crossing, is he going to cross, is he walking by, or is he just standing there? The only safe thing to do is stop, so the only solution currently available is to remove all zebra crossings.

ecsrobin

17,147 posts

166 months

Monday 27th November 2017
quotequote all
On the other hand, I was very impressed with the kit that plugged straight into his van, including the lightning-quick gear change.

768

13,711 posts

97 months

Monday 27th November 2017
quotequote all
Jonesy23 said:
The way that thing worked was incredibly crude; it was hard to tell where the dev time had gone.
Yeah, it was like all it did was learn the track layout and then use binary controls.

It seemed to have zero understanding of grip levels, let alone handling at the limit of it, which left me entirely unsurprised it had crashed before and crashed again. They just upped a single value until it fell off the track. I'd struggle to even call that AI.
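
To put "binary controls" into concrete terms, the difference being described is roughly bang-bang control versus even the simplest proportional control - a toy Python sketch (illustrative names and numbers, not the actual Devbot software):

# Illustrative only: steering as an on/off decision versus effort scaled to the error.
def bang_bang_steer(lateral_error, full_lock=1.0):
    # binary control: always full lock, only the direction changes
    return full_lock if lateral_error > 0 else -full_lock

def proportional_steer(lateral_error, gain=0.5, full_lock=1.0):
    # proportional control: steering effort scales with how far off the line the car is
    return max(-full_lock, min(full_lock, gain * lateral_error))

for error in [0.1, 0.5, 2.0, -0.3]:
    print(bang_bang_steer(error), proportional_steer(error))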

I'd imagine, given Guy's feedback, that they've not worked with a driver either. I'm not surprised the engineers didn't want a go in it!

Guy is very impressive though, really enjoying the TV he's putting together.

anonymous-user

55 months

Monday 27th November 2017
quotequote all
Adrian W said:
The example given was: if someone is standing by a zebra crossing, is he going to cross, is he walking by, or is he just standing there? The only safe thing to do is stop, so the only solution currently available is to remove all zebra crossings.
And how does a human make this decision? If we can do it (and drivers pass pedestrians on zebra crossings every day) then a computer can do it. The difference? The computer can make much more rational judgments, as it doesn't care about being "late for that meeting" or "moron in the van who just cut me up" etc etc!



Gargamel

15,015 posts

262 months

Monday 27th November 2017
quotequote all
Max_Torque said:
And how does a human make this decision? If we can do it (and drivers pass pedestrians on zebra crossings every day) then a computer can do it. The difference? The computer can make much more rational judgments, as it doesn't care about being "late for that meeting" or "moron in the van who just cut me up" etc etc!
We are reading body language and other very subtle visual clues: are they looking at a phone or at the car, are they moving away or towards us, quickly or slowly, are they focused or distracted?

We read and assimilate that, based on experience, in a tiny, tiny amount of time, and very accurately too. It is not impossible for a computer to "learn" how, but there is a huge range of variables, and even humans don't always get this right.




King Herald

23,501 posts

217 months

Monday 27th November 2017
quotequote all
Gargamel said:
We are reading body language and other very subtle visual clues: are they looking at a phone or at the car, are they moving away or towards us, quickly or slowly, are they focused or distracted?

We read and assimilate that, based on experience, in a tiny, tiny amount of time, and very accurately too. It is not impossible for a computer to "learn" how, but there is a huge range of variables, and even humans don't always get this right.
When you see a woman on one side of a main road waving to urge a small child in school uniform to run across the road to her, I doubt a computer would be able to anticipate the carnage about to ensue, start to brake, and be able to stop comfortably in time, and then wind a window down to inform the stupid cow how stupid a cow she is.

Like I had to do this morning. ragerage

Mark Benson

7,523 posts

270 months

Monday 27th November 2017
quotequote all
Gargamel said:
Max_Torque said:
And how does a human make this decision? If we can do it (and drivers pass pedestrians on zebra crossings every day) then a computer can do it. The difference? The computer can make much more rational judgments, as it doesn't care about being "late for that meeting" or "moron in the van who just cut me up" etc etc!
We are reading body language and other very subtle visual clues: are they looking at a phone or at the car, are they moving away or towards us, quickly or slowly, are they focused or distracted?

We read and assimilate that, based on experience, in a tiny, tiny amount of time, and very accurately too. It is not impossible for a computer to "learn" how, but there is a huge range of variables, and even humans don't always get this right.
Where AI will take the greatest steps is where we can show it examples of scenarios and it can 'learn' for itself. We're a way off that in terms of driverless cars, I feel.

For instance, there are great steps being made in the analysis of medical scans: an AI program can be shown thousands of scans of, say, hearts in various stages of health or disease, and it can look for similarities. Given enough examples the program can then reliably spot issues on scans it's given to analyse.
We're not far off the day when hospital radiology departments can upload their scans in real time and only those the program flags up will be sent to a clinician to investigate.

But heart scans are very specific and consistent, so the 'learning' process is easier to control and thousands of classified examples are readily available to feed the process. In the zebra crossing instance above, the programmers would need thousands of examples of thousands of people near thousands of zebra crossings for the machine to be able to 'learn' similarities in the way people behave before they exhibit a certain action - much harder to obtain examples to feed the program, and infinitely more variables to consider.
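
As a rough picture of what "learning from thousands of classified examples" means in code, here's a toy supervised classifier trained on made-up data (a scikit-learn sketch with random numbers standing in for scan features - nothing to do with any real radiology system):

# Illustrative only: train on labelled examples, then classify an unseen one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 20))   # 500 "healthy" examples, 20 features each
diseased = rng.normal(loc=0.8, scale=1.0, size=(500, 20))  # 500 "diseased" examples, shifted distribution

X = np.vstack([healthy, diseased])
y = np.array([0] * 500 + [1] * 500)                        # labels: 0 = healthy, 1 = diseased

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)                                            # the "learning from examples" step

new_scan = rng.normal(loc=0.8, scale=1.0, size=(1, 20))    # an unseen example
print(model.predict(new_scan), model.predict_proba(new_scan))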

My feeling is that fully autonomous vehicles are a way off yet - I have no doubt that's where we're heading but AI has far more useful contributions to make to life before it's a tool for driving me safely home pissed from a night out.

Biker 1

7,746 posts

120 months

Monday 27th November 2017
quotequote all
I thought that AI meant a machine not just self-learning, but becoming self-aware. I don't think we're there by a long way.
The vehicles in this show were all very impressive, but they were responding to programming & certainly not self-aware.

Adrian W

13,897 posts

229 months

Monday 27th November 2017
quotequote all
Max_Torque said:
Adrian W said:
The example given was: if someone is standing by a zebra crossing, is he going to cross, is he walking by, or is he just standing there? The only safe thing to do is stop, so the only solution currently available is to remove all zebra crossings.
And how does a human make this decision? If we can do it (and drivers pass pedestrians on zebra crossings every day) then a computer can do it. The difference? The computer can make much more rational judgments, as it doesn't care about being "late for that meeting" or "moron in the van who just cut me up" etc etc!
This is the guy who was explaining it to me. http://www.csap.cam.ac.uk/network/william-webb/

But if you know better!!

youngsyr

14,742 posts

193 months

Monday 27th November 2017
quotequote all
King Herald said:
Gargamel said:
We are reading body language and other very subtle visual clues: are they looking at a phone or at the car, are they moving away or towards us, quickly or slowly, are they focused or distracted?

We read and assimilate that, based on experience, in a tiny, tiny amount of time, and very accurately too. It is not impossible for a computer to "learn" how, but there is a huge range of variables, and even humans don't always get this right.
When you see a woman on one side of a main road waving to urge a small child in school uniform to run across the road to her, I doubt a computer would be able to anticipate the carnage about to ensue, start to brake, and be able to stop comfortably in time, and then wind a window down to inform the stupid cow how stupid a cow she is.

Like I had to do this morning. ragerage
People will adapt their behaviour as pedestrians to lower their risk around autonomous cars.

A good example of this is my local high street: the council dropped the speed limit to 20 mph and installed cobblestones instead of tarmac to slow cars down. My estimate is that this has at least doubled the number of people who will cross in front of moving cars, despite there being three traffic-light-controlled crossings within 400 yds. Humans are very capable of assessing risk and mitigating it to an acceptable standard; similarly, they will accept a certain amount of risk if it saves them time and effort.

I guarantee you people will pay more attention before crossing the road if they know there's no human behind the wheel to allow for their lack of attention. Similarly, parents will be much more concerned about teaching their kids the Green Cross Code.



youngsyr

14,742 posts

193 months

Monday 27th November 2017
quotequote all
Gargamel said:
We are reading body language and other very subtle visual clues: are they looking at a phone or at the car, are they moving away or towards us, quickly or slowly, are they focused or distracted?

We read and assimilate that, based on experience, in a tiny, tiny amount of time, and very accurately too. It is not impossible for a computer to "learn" how, but there is a huge range of variables, and even humans don't always get this right.
Agreed, but really there's no need for a computer to be able to read body language. All it needs to know is: "If there's a human near a crossing, slow down and prepare to stop. If there is a human at a crossing, stop."

That really is it.
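
As a toy sketch of that rule (hypothetical function and inputs, purely to show how little logic is actually involved):

# Illustrative only: the simple crossing rule described above, as a speed demand in mph.
def crossing_speed_demand(pedestrian_near_crossing, pedestrian_at_crossing, cruise_speed=30):
    if pedestrian_at_crossing:
        return 0                       # human at the crossing: stop
    if pedestrian_near_crossing:
        return min(cruise_speed, 10)   # human near the crossing: slow down, be ready to stop
    return cruise_speed                # otherwise carry on as normal

print(crossing_speed_demand(pedestrian_near_crossing=True, pedestrian_at_crossing=False))  # 10
print(crossing_speed_demand(pedestrian_near_crossing=True, pedestrian_at_crossing=True))   # 0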

Humans themselves will very quickly become aware of how AI cars behave and will adapt their behaviour to lower their own risk. They will very quickly learn to pay attention around crossings, even if it takes a near miss or even a collision for them to learn the lesson.

Let's not forget that human drivers hit pedestrians on crossings quite often too - I've witnessed one accident and a school friend was involved in another - both entirely the fault of the pedestrian running out in front of the moving car.

I suspect the second collision would have been avoided by an AI car - the child tore out from the driver's blind spot on a scooter. An AI car doesn't have blind spots.


Edited by youngsyr on Monday 27th November 15:30

anonymous-user

55 months

Monday 27th November 2017
quotequote all
youngsyr said:
Humans themselves will very quickly become aware of how AI cars behave and will adapt their behaviour to lower their own risk. They will very quickly learn to pay attention around crossings, even if it takes a near miss or even a collision for them to learn the lesson.
They won't; they will still make the same stupid mistakes as now, walking around in their own world with their attention on their smartphone and music drowning out any engine noise - noise which will be even less noticeable with electric cars. I'd be amazed if the number of pedestrian-injuring accidents doesn't go up.

fatandwheezing

415 posts

159 months

Monday 27th November 2017
quotequote all
I too was quite stunned at how basic the Roborace systems were. They are meant to be racing in packs of 20 this time next year, aren't they?
I've had "debates" with people at work about the attraction of an autonomous car race with no human element, but I didn't realise that it appears to be a single team developing the algorithm. Surely 20 cars all running to the exact same driving algorithm will just result in either utter carnage or an Abu Dhabi-style snake of cars never overtaking.

Do want to make my own crappy Raspberry Pi tea machine though.

AppleJuice

2,154 posts

86 months

Monday 27th November 2017
quotequote all
768 said:
Yeah, it was like all it did was learn the track layout and then use binary controls.
Didn't BMW do something similar with an E90 330i in 2007?

Progress or what?

Edited by AppleJuice on Monday 27th November 15:59

curlie467

7,650 posts

202 months

Monday 27th November 2017
quotequote all
I totally forgot he had that Volvo.
I'd watch an episode all about that thing!

Six Fiend

6,067 posts

216 months

Monday 27th November 2017
quotequote all
Firstly - love the Volvo! What a noise smile

It seemed to me that an F1 car is more advanced in terms of traction control and tyre temp etc. Shouldn't that all be applied to the AI car from the start?

How can they have not had a driver working on the project? Guy's input in what was presumably a day or two highlighted some very basic shortcomings of the systems. Not for one minute suggesting I could do a better job than those chaps but they did seem to have missed some very obvious points.

Enjoyed the teasmade being able to make a cuppa based on questions but as someone has already said it's not true learning AI.

I like watching Guy as a presenter. His no faffing about attitude is refreshing. His garage was well equipped too!

The Dangerous Elk

4,642 posts

78 months

Monday 27th November 2017
quotequote all

youngsyr

14,742 posts

193 months

Monday 27th November 2017
quotequote all
jsf said:
youngsyr said:
Humans themselves will very quickly become aware of how AI cars behave and will adapt their behaviour to lower their own risk. They will very quickly learn to pay attention around crossings, even if it takes a near miss or even a collision for them to learn the lesson.
They won't; they will still make the same stupid mistakes as now, walking around in their own world with their attention on their smartphone and music drowning out any engine noise - noise which will be even less noticeable with electric cars. I'd be amazed if the number of pedestrian-injuring accidents doesn't go up.
And I'd be amazed if they didn't go down - AI cars don't have blind spots, don't fall asleep, don't drive drunk, don't speed, and don't panic or get distracted.

On top of that, most people will realise that AI cars can't predict their actions as well as humans can, so they will be more careful as pedestrians, just as they are when they go abroad and aren't sure of the local traffic.