Tesla made some big changes to Autopilot with the v9 update. The neural net can now use all of the car's cameras to recognize things on the road, and one particular Tesla owner managed to overlay the computer's metadata on top of the camera feeds. Check out the footage here, or watch a more problematic video where the Tesla failed to recognize some road debris here.

Just as the great (I think) v8.1 Autopilot footage was released, it was somewhat overshadowed by the v9 update that quickly followed, in which Tesla improved their offering quite a bit (though they have taken away some of the new features since then). More importantly, they started using all the cameras, and that created real problems in capturing the feeds, especially without Tesla actually cooperating. The solution I chose was to limit the framerate on every camera but the main one, since the storage driver on ape can handle at most 80 MB/sec. I capture the side cameras at 9 fps and the fisheye at 6 fps (the car itself gets 36 fps feeds from all cameras but backup, where it gets 30 fps), and even that tends to overwhelm the drive from time to time, which is visible in stalled frames here and there.

One of the problems is that the backup camera is actually quite unreliable, and in many runs there's no output captured from it at all. As such, I decided not to collect it for now. (You know the camera isn't working when, at a traffic light, a car pulls up real close behind yours and nothing shows on the IC; surprisingly, the CID backup-camera display still works, so Tesla apparently decided to paper over the old "freeze frame" issue there but not the matching Autopilot problem.)

It's also notable that different cameras appear to get different detection rates, and since I cannot predict which frames are which, detections sometimes seem a little off; that is most likely a sampling artifact.
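To see why the per-camera framerate caps matter, here's a back-of-the-envelope bandwidth check. The 80 MB/sec budget and the fps figures come from the setup above; the per-frame sizes and camera list are purely illustrative assumptions, not actual Tesla camera specs:

```python
# Rough write-bandwidth budget for capturing multiple camera feeds.
# The ~80 MB/s limit is the storage driver's ceiling mentioned above;
# the per-frame sizes below are assumed for illustration only.

BUDGET_MB_S = 80  # max sustained write rate of the storage on ape

# (name, captured frames per second, assumed MB per frame)
cameras = [
    ("main",    36, 1.2),   # main camera kept at full rate
    ("side_l",   9, 1.2),   # side cameras capped at 9 fps
    ("side_r",   9, 1.2),
    ("fisheye",  6, 1.2),   # fisheye capped at 6 fps
]

def total_rate(cams):
    """Aggregate write rate in MB/s across all captured feeds."""
    return sum(fps * mb for _, fps, mb in cams)

rate = total_rate(cameras)
print(f"total: {rate:.1f} MB/s of {BUDGET_MB_S} MB/s budget "
      f"({'OK' if rate <= BUDGET_MB_S else 'over budget'})")
```

Under these assumed frame sizes the capped setup squeaks in under the budget, while capturing every camera at the car's native 36 fps would blow well past it, which is roughly the trade-off described above.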