Computer-driven cars are incapable of handling anything that the programmer didn't first anticipate. They're nowhere near the point of never getting it wrong yet.
Not exactly. While there is probably some classic if-then, switch, and looping algorithmic code in these vehicles, the heavy computational load is most likely handled by neural-network architectures. Take a look at
http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html for a pretty good basic-level description of these. At a fundamental level, the idea is to create a computational structure that attempts to replicate a brain-like arrangement of interconnected neurons, although with orders of magnitude fewer artificial neurons than a real human brain. There are several different training algorithms, but they all involve presenting massive amounts of data to the network along with the expected outcome for each example. The network “learns” how to respond to the data during the training phase, and the results are verified by presenting other data, which the network did not see in training, and checking whether it behaves properly. For example, a network can be trained to identify faces in a picture by giving it many (usually thousands of) pictures containing faces, with the locations of the faces identified.
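To make the train-then-verify cycle concrete, here is a minimal sketch using a single artificial neuron (a perceptron) rather than a full network. The "faces" are just 2-D points labeled 1 when x + y > 1 — a toy stand-in for real labeled training data, invented for illustration.

```python
# A minimal sketch of the training/verification cycle described above,
# using one artificial neuron (a perceptron) and toy 2-D data.

def step(z):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if z > 0 else 0

def train(data, lr=0.1, epochs=20):
    """Classic perceptron learning rule: nudge weights toward each error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y), target in data:
            error = target - step(w[0] * x + w[1] * y + b)
            w[0] += lr * error * x
            w[1] += lr * error * y
            b += lr * error
    return w, b

def accuracy(data, w, b):
    hits = sum(step(w[0] * x + w[1] * y + b) == t for (x, y), t in data)
    return hits / len(data)

# Training set: the network "learns" from these labeled examples.
training = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.3, 0.3), 0),
            ((0.7, 0.9), 1), ((0.2, 0.1), 0), ((0.8, 0.6), 1)]
# Verification set: examples the network never saw during training.
held_out = [((0.1, 0.1), 0), ((0.9, 0.9), 1), ((0.4, 0.2), 0), ((0.6, 0.8), 1)]

w, b = train(training)
print(accuracy(training, w, b), accuracy(held_out, w, b))  # 1.0 1.0
```

Real networks have millions of weights instead of three, but the loop is the same idea: adjust weights against known outcomes, then check behavior on data held back from training.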
The limitations are several:
1. Typically, the engineers don’t really know how the network is making its decisions. (See the limitations section in the linked article)
2. The quality of the results depends on the input data. For instance, suppose the faces in the training set in my example all had blue eyes because the engineer didn’t think to check for that. Instead of recognizing faces, the network may really only be finding blue eyes, and a test with faces of brown-eyed people would fail. The key to a robust network is a robust set of training data.
3. The network may be learning something entirely different from what we think it is. It will always get the correct results on the training set, and may even get the correct results on the verification set, but may get completely wrong results in the real world if the training and verification datasets aren’t large and varied enough.
4. The larger and more complex the network, the larger the data sets required to train and verify it.
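Limitations 2 and 3 can be demonstrated with the blue-eyes example from above. In this toy sketch each "picture" is reduced to two made-up features, eye blueness and a noisy face-shape score. Because every face in the biased training set happens to have blue eyes, blueness is the only cleanly separable signal, so the network learns that shortcut instead of faces:

```python
# Toy demonstration of a biased training set: the perceptron learns the
# "blue eyes" shortcut, not faces, because the shape scores overlap.

def step(z):
    return 1 if z > 0 else 0

def train(data, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (blue, shape), target in data:
            error = target - step(w[0] * blue + w[1] * shape + b)
            w[0] += lr * error * blue
            w[1] += lr * error * shape
            b += lr * error
    return w, b

def predict(sample, w, b):
    blue, shape = sample
    return step(w[0] * blue + w[1] * shape + b)

# Biased training set: every face (label 1) has blue eyes (first feature 1),
# while the shape scores overlap between faces and non-faces.
training = [((1, 0.9), 1), ((0, 0.8), 0), ((1, 0.4), 1),
            ((0, 0.3), 0), ((1, 0.7), 1), ((0, 0.6), 0)]
w, b = train(training)

# Training accuracy is perfect — but brown-eyed faces (blueness 0) fail:
brown_eyed_faces = [(0, 0.9), (0, 0.5)]
print([predict(s, w, b) for s in brown_eyed_faces])  # [0, 0] — both misclassified
```

The network scores 100% on its training data while being completely wrong about the thing we thought it learned, which is exactly why engineers often can't tell what a network is really keying on.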
For driverless cars, one huge issue is creating the massive set of training data. The network itself also has to be large enough to handle the huge variety of situations it will encounter in the real world. Likely there are multiple networks in a car.
There’s probably one which identifies road boundaries and lanes. There’s likely one (or more) which are used to classify objects the sensors are seeing. And there is probably one taking these pieces of data and making decisions on what to do. Likely there is also some classic if-then algorithmic code involved.
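As a purely hypothetical sketch of that multi-network pipeline: each function below stands in for a separate trained network (lane finder, object classifier), and the final decision layer is classic if-then code sitting on top of their outputs. The object labels and braking rule are invented for illustration, not any manufacturer's actual design.

```python
# Hypothetical self-driving pipeline: network stand-ins feeding rule-based code.

def find_lane(sensor_frame):
    # Stand-in for a lane/boundary-detection network.
    return sensor_frame["lane"]

def classify_objects(sensor_frame):
    # Stand-in for one or more object-classification networks.
    return [(obj["kind"], obj["lane"]) for obj in sensor_frame["objects"]]

def decide(sensor_frame):
    # Classic algorithmic if-then code consuming the networks' outputs.
    our_lane = find_lane(sensor_frame)
    for kind, lane in classify_objects(sensor_frame):
        if lane == our_lane and kind in ("pedestrian", "cyclist"):
            return "brake"
    return "continue"

frame = {"lane": 2,
         "objects": [{"kind": "cyclist", "lane": 2},
                     {"kind": "car", "lane": 1}]}
print(decide(frame))  # brake
```

Note how the final decision is only as good as the classifier's label: if the network in the middle mislabels a pedestrian, the correct if-then rule downstream never fires.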
It’s an extremely complex and challenging problem, which requires massive amounts of known good data for training.
Complete speculation here, but for example, what if the Uber car were trained to recognize pedestrians only with data sets where the pedestrian wasn’t with a bike? Or perhaps trained to recognize bikers as well? Maybe the neural network classified the victim as a biker rather than a pedestrian, and had been trained to expect that a biker would be out of the way in time. It’s possible that it misclassified the situation.