A new study from King's College London reveals that driverless car systems have a bias problem. The study examined eight AI-powered pedestrian detection systems used in autonomous driving research and found that they were significantly better at detecting adult pedestrians than children, and better at detecting light-skinned pedestrians than dark-skinned ones. The systems also struggled to spot dark-skinned people in low-light settings, making them less safe at night. The researchers attribute these biases to the data used to train the AI, which predominantly features adults and light-skinned individuals.
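The kind of disparity the study measures can be illustrated with a toy calculation. The sketch below compares per-group miss rates over a handful of invented detection outcomes; the group labels and numbers are made up for illustration and are not the study's figures:

```python
# Hypothetical detection outcomes: each record is (group, detected),
# one per pedestrian the system should have found.
outcomes = [
    ("adult", True), ("adult", True), ("adult", True), ("adult", False),
    ("child", True), ("child", False), ("child", False), ("child", True),
]

def miss_rate(records, group):
    """Fraction of pedestrians in `group` that the detector failed to spot."""
    detections = [detected for g, detected in records if g == group]
    return detections.count(False) / len(detections)

adult_miss = miss_rate(outcomes, "adult")  # 1 miss out of 4 -> 0.25
child_miss = miss_rate(outcomes, "child")  # 2 misses out of 4 -> 0.50
disparity = child_miss - adult_miss        # 0.25: children missed twice as often
```

A gap like this in miss rates across demographic groups is the core fairness signal the researchers report, here reduced to its simplest form.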
According to Dr. Jie Zhang, one of the study's authors, fairness in AI systems means treating privileged and underprivileged groups equally. The study suggests that autonomous vehicle manufacturers are likely to face the same bias issues because they rely on similar open-source systems for pedestrian detection. Although the researchers did not test the exact software used by driverless car companies, the findings raise safety concerns as autonomous cars become more prevalent. Notably, companies like Waymo and Cruise have already faced accidents and protests in San Francisco over their driverless car operations.
The researchers highlight that biases in AI algorithms often stem from biases in the training data and in the minds of those who create the systems. The same pattern appears in other AI technologies: facial recognition software consistently exhibits lower accuracy for women, dark-skinned individuals, and Asian people, and has already wrongly implicated innocent Black individuals in arrests. Despite these concerns, the embrace of AI technology continues.