See the World Through the Eyes of a Tesla

AlphaAtlas

Tesla made some big changes to their autopilot with the v9 update. The neural net can now use all the car's cameras to recognize things on the road, and one particular Tesla owner managed to overlay the computer's metadata on top of the camera feeds.

Check out the footage here, or watch a more problematic video where the Tesla failed to recognize some road debris here.
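For context on what an overlay like that involves: the frames come off the cameras, and the autopilot's detection output gets drawn on top of them after the fact. Below is a minimal sketch of just the drawing step, using OpenCV and a made-up detection format; the actual work of pulling this data off the autopilot computer is far more involved, and the real record layout, file names, and coordinates here are assumptions for illustration only.

Code:
# Minimal overlay sketch (assumed data layout, not Tesla's actual format):
# each detection is (label, confidence, x, y, w, h) in pixel coordinates.
import cv2

def draw_detections(frame, detections):
    """Draw bounding boxes and labels onto a camera frame, in place."""
    for label, conf, x, y, w, h in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 6),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

# Example: one hypothetical "car" detection drawn onto a frame read from disk.
frame = cv2.imread("main_camera_frame.png")
frame = draw_detections(frame, [("car", 0.91, 420, 310, 180, 120)])
cv2.imwrite("main_camera_overlay.png", frame)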

Well, the great (I think) v8.1 autopilot footage release was somewhat marred by the v9 update that quickly followed, where Tesla improved their offering quite a bit (and has taken away some of the new things since then). More importantly, they started using all the cameras, and that created quite a few problems in how to capture those feeds, especially without Tesla actually cooperating. The solution I chose was to just limit the framerate on all the cameras but the main one (the storage driver on ape can handle at most about 80MB/sec). 9fps is used for the side cameras and 6fps for the fisheye (the car actually gets 36fps feeds from all cameras except backup, where it gets 30fps), and even that tends to overwhelm the drive from time to time, which is visible as still frames here and there.

One problem is that the backup camera is actually quite unreliable, and in many runs there's no output captured from it at all, so I decided not to collect it for now. (You know it's not working when a car pulls up real close behind you at a traffic light and nothing shows on the IC; surprisingly, the CID backup cam display still works, so Tesla decided to just paper over the old "freeze frame" issue but not the autopilot problem of the same.) It's also notable that different cameras appear to get different detection rates, and since I cannot predict which ones are which, detections sometimes seem a little bit off - I know it's most likely a sampling artifact.
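To put those framerate choices in perspective, here is a back-of-the-envelope estimate of the capture write load. The exact camera list and the per-frame size are assumptions made for illustration (the post doesn't specify either); only the 80MB/sec ceiling and the 36/9/6fps figures come from the description above.

Code:
# Rough capture-bandwidth estimate for the setup described above.
# FRAME_BYTES is a placeholder guess; the post doesn't give the on-disk frame size.
FRAME_BYTES = 1_000_000   # assume ~1 MB per stored frame
BUDGET_MB_S = 80          # storage write ceiling quoted in the post

def write_load(streams_fps):
    """Total write load in MB/s for a dict of camera -> frames per second."""
    return sum(streams_fps.values()) * FRAME_BYTES / 1e6

# Hypothetical camera set: main at full rate, four side cameras throttled, fisheye throttled.
throttled = {"main": 36, "left_pillar": 9, "right_pillar": 9,
             "left_repeater": 9, "right_repeater": 9, "fisheye": 6}
unthrottled = {cam: 36 for cam in throttled}   # every camera at the full 36 fps

print(f"unthrottled: {write_load(unthrottled):.0f} MB/s, "
      f"throttled: {write_load(throttled):.0f} MB/s, budget: {BUDGET_MB_S} MB/s")

With these assumed numbers the unthrottled feeds would be well over 200 MB/s, while the throttled setup lands right around the 80 MB/s budget, which fits with the drive still getting overwhelmed from time to time.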
 
That's pretty nuts, seeing just how much it's keeping track of at the same time.
 
Vision systems are always neat to watch. The nerd in me always thinks of Terminator vision.
 
I was watching this and realized I recognized the place. This is downtown Knoxville, TN.
 
That's only a fraction of it, likely. But to put it in context, the human mind does this subconsciously. So just imagine the amount of data we're actually processing from our sensors at any given moment.

It is impressive what the human mind can process, but autonomous vehicles are far more advanced. We have only two visual inputs, plus memory and audio inputs, to intelligently guess/assume what's not in our visual field of view.

Autonomous vehicles actually know what's around the vehicle at all times with lidar (Waymo/GM), 360-degree cameras (Tesla), and radar.
 
That's only a fraction of it, likely. But to put it in context, the human mind does this subconsciously. So just imagine the amount of data we're actually processing from our sensors at any given moment.

We surpassed what our brains could do a long time ago. And with the average person's reflexes being around 100ms, we beat that as well. We learn and adapt and think in a different way, but we certainly don't process faster. Not to mention our data recording is terrible. And we are easily distracted, de-prioritizing important, life-threatening things for something flashy like a billboard or a text message.

Just look at how long it takes a person to decide what to eat for dinner. :p
 
That's only a fraction of it, likely. But to put it in context, the human mind does this subconsciously. So just imagine the amount of data we're actually processing from our sensors at any given moment.

Our vision isn't all that impressive, and tech has long since surpassed it. What's impressive is how much we are able to do with this info thanks to our brains (which we are FAR from being able to replicate).
 
It is impressive what the human mind can process, but autonomous vehicles are far more advanced. We have only two visual inputs, plus memory and audio inputs, to intelligently guess/assume what's not in our visual field of view.

Autonomous vehicles actually know what's around the vehicle at all times with lidar (Waymo/GM), 360-degree cameras (Tesla), and radar.

Unless the target is a lady pushing a bicycle in the dark. The last report I read said the car saw her in plenty of time to avoid her, and it spent several seconds switching between a target ID of person, bicycle, and basically WTF instead of simply steering around an unknown target.

Another advantage humans have is that our sensors are passive. One human or 10,000, our sensors don't interfere with each other. I'm not sure we will be able to say the same once we have a thousand cars in an urban street grid all transmitting lidar and sonar pulses that bounce off the people, other cars, and buildings.
 
We surpassed what our brains could do a long time ago. And with the average person's reflexes being around 100ms, we beat that as well. We learn and adapt and think in a different way, but we certainly don't process faster. Not to mention our data recording is terrible. And we are easily distracted, de-prioritizing important, life-threatening things for something flashy like a billboard or a text message.

Just look at how long it takes a person to decide what to eat for dinner. :p
That's why AI in video games is so good?
 
That's why AI in video games is so good?

Completely different thing. The processor is controlling the environment, sound placement, NPCs, all the graphics. Meanwhile, the player's brain is letting drool fall out of the side of his mouth while he's only using his thumbs and maybe a couple of fingers. On top of that, aim correction is added to assist the player so they don't get discouraged.

We were talking about the capability, not what is scripted for a video game.

Our creativity is what separates us and puts us above a computer. Our processing power is not.
 
Computer vision is definitely very impressive, and it may very well excel beyond human vision at processing the particular types of inputs it was trained for. But I definitely disagree that computer vision at this current moment is "more complex" or "processes more data" than human vision. The entirety of human vision, from processing primitive shapes and lines to being able to recognize and classify an enormous number of objects and use that as feedback to look at the original image again, is impressive.

Show the average human a random photo of a scene and he's going to be able to extrapolate and deduce SO MUCH MORE information from that photo than any computer vision system. Things like motion can be deduced from a still photo because of the amount of information a human can get from it (guy riding a bicycle? you can probably guess roughly how fast he is going and in which direction). See some weird contraption? You can probably start picking out individual parts of it and deducing what it might do based on the parts you see.

There is a lot of "RawInput <-> Basic shapes <-> Gradient/lighting and stereoscopic vision <-> 3D object modeling <-> Object classification and deconstruction <-> Reanalyze image based on new object data" processing going on, and it is not linear like I wrote it; it goes back and forth between one step and another to provide amazing amounts of information.

You can probably argue much of what I'm describing isn't vision, but I think that it's so intertwined in humans it's hard to say where vision stops and where other brain processes begin.
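To make that back-and-forth idea a bit more concrete, here is a toy sketch of such a feedback loop, where a low-confidence classification sends the scene back through the earlier stages for another, closer look. The stage idea follows the description above, but everything here is a made-up placeholder; it is not a model of how human vision or any real vision system actually works.

Code:
# Toy sketch of feedback-driven recognition: if the classifier isn't confident,
# re-run the earlier feature-extraction stages at a higher level of detail.

def extract_features(region, detail):
    # Stand-in for the shapes / lighting / depth stages at a given detail level.
    return {"region": region, "detail": detail}

def classify(features):
    # Stand-in classifier: pretend that more detail yields more confidence.
    return ("bicycle", min(0.25 * (features["detail"] + 1), 0.95))

def recognize(region, max_passes=3, threshold=0.8):
    """Loop back through earlier stages until the label is confident enough."""
    for detail in range(1, max_passes + 1):
        label, confidence = classify(extract_features(region, detail))
        if confidence >= threshold:
            break
    return label, confidence, detail

print(recognize("left half of frame"))   # -> ('bicycle', 0.95, 3)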

edit: Disclaimer: I am not a vision expert in either machine learning or human cognitive abilities.
 
Did anyone notice how much the car drifted into the other lane at 5:30? It also seems that it had no idea it needed to slow down that much on the exit.
 
That's what you got out of all that?
I mean, if I was in the other lane and it decided to do that, I would not be that happy. It is pretty neat to see the overlays on the cameras and watch it function.
 
Unless the target is a lady pushing a bicycle in the dark. The last report I read said the car saw her in plenty of time to avoid her, and it spent several seconds switching between a target ID of person, bicycle, and basically WTF instead of simply steering around an unknown target.

Another advantage humans have is that our sensors are passive. One human or 10,000, our sensors don't interfere with each other. I'm not sure we will be able to say the same once we have a thousand cars in an urban street grid all transmitting lidar and sonar pulses that bounce off the people, other cars, and buildings.

I never imagined I'd be defending the concept of autonomous cars, but it's clear that Uber's system isn't living up to the potential here in so many ways. It's still pretty cool to see this video with augmented data from the Tesla systems; it demonstrates the potential of object sensing and tracking in all directions in a way that's somewhere between hard and impossible for a human to do consistently at this level, although I'm far from ready to trust something like this to drive for me. I'd love to see a similar video from Waymo; I think they have made a lot of different decisions on what to look for, and it would be pretty cool to see the difference.
 
I never imagined I'd be defending the concept of autonomous cars, but it's clear that Uber's system isn't living up to the potential here in so many ways. It's still pretty cool to see this video with augmented data from the Tesla systems; it demonstrates the potential of object sensing and tracking in all directions in a way that's somewhere between hard and impossible for a human to do consistently at this level, although I'm far from ready to trust something like this to drive for me. I'd love to see a similar video from Waymo; I think they have made a lot of different decisions on what to look for, and it would be pretty cool to see the difference.

You know what though...I will trust it to drive as long as I can override it anytime I want. I will not trust a computer to drive for me w/out that ability unless they invent a way to transfer my consciousness into a new body postmortem.
 
Unless the target is a lady pushing a bicycle in the dark. The last report I read said the car saw her in plenty of time to avoid her, and it spent several seconds switching between a target ID of person, bicycle, and basically WTF instead of simply steering around an unknown target.

Another advantage humans have is that our sensors are passive. One human or 10,000, our sensors don't interfere with each other. I'm not sure we will be able to say the same once we have a thousand cars in an urban street grid all transmitting lidar and sonar pulses that bounce off the people, other cars, and buildings.

One pedestrian death out of how many million miles with Uber's inferior technology? How about Waymo: 0 deaths out of 10 million autonomous miles. Tesla: 0 out of 1 billion miles.
 
I was watching this and realized I recognized the place. This is downtown Knoxville, TN.

Yep...I was like, this is cool...then, wait a minute, that's Summit Hill Dr....all the way out to the West Hills exit and to West Town Mall.
 
I never imagined I'd be defending the concept of autonomous cars, but it's clear that Uber's system isn't living up to the potential here in so many ways. It's still pretty cool to see this video with augmented data from the Tesla systems; it demonstrates the potential of object sensing and tracking in all directions in a way that's somewhere between hard and impossible for a human to do consistently at this level, although I'm far from ready to trust something like this to drive for me. I'd love to see a similar video from Waymo; I think they have made a lot of different decisions on what to look for, and it would be pretty cool to see the difference.

I am not against the concept of autonomous cars. I was addressing a post that implied such cars have everything around them properly identified all the time. According to the report I read, the Uber car didn't. The sad thing is that that death was probably avoidable. The car DID detect the lady + bicycle in plenty of time to avoid her. The code put identifying the target at a higher priority than avoiding the collision, and it used all of the time between the initial detection, several seconds before impact, and the impact itself trying to figure out what it was going to hit. Complex targets like a lady with a bicycle, or a homeless person in a shaggy blanket pushing a shopping cart full of crap with a dog in tow, are things these cars are going to have to identify and avoid.
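To illustrate the design distinction being argued here, below is a hedged sketch of "avoid first, classify later" logic: anything in the predicted path and close to impact triggers a response, even while its label is still unsettled. This is purely an illustration of the argument, not a description of Uber's or any vendor's actual planning stack; all the names and thresholds are made up.

Code:
# Sketch of "avoid first, classify later": react to anything in the predicted
# path, whether or not the classifier has settled on a label.
from dataclasses import dataclass

@dataclass
class Track:
    label: str              # "person", "bicycle", "unknown", ...
    confidence: float       # classifier confidence in the label
    in_path: bool           # does the object's predicted trajectory cross ours?
    time_to_impact: float   # seconds until predicted collision

def plan_action(track, reaction_budget=2.0):
    """An in-path object near impact always triggers braking/steering,
    even if its label is still 'unknown'."""
    if track.in_path and track.time_to_impact <= reaction_budget:
        return "brake_and_steer_around"
    return "continue_monitoring"

# The scenario described above: detected seconds out, label still unsettled.
print(plan_action(Track("unknown", 0.40, in_path=True, time_to_impact=1.8)))
# -> brake_and_steer_around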
 
Kinda waited to see if it started marking people as "threats".
 
That is some awesome programming, to be sure! Of course, replacing the Model 3 windshield is $1800! My buddy found this out with a 2018 Audi A4: cracked windshield (covered), but the ADAS re-calibration by Audi took 8 hours and cost $850!

So I'm not so sure people are prepared for the cost of maintaining these systems. The human equivalent costs a coffee and a doughnut. :)
 
That is some awesome programming, to be sure! Of course, replacing the Model 3 windshield is $1800! My buddy found this out with a 2018 Audi A4: cracked windshield (covered), but the ADAS re-calibration by Audi took 8 hours and cost $850!

So I'm not so sure people are prepared for the cost of maintaining these systems. The human equivalent costs a coffee and a doughnut. :)

What would your buddy's A4 windshield replacement have to do with a Model 3?
 
That is actually cool. You could say it still has some quirks in it, but the technology really has come a long way.
 