Traffic-sign recognition is one of the key technologies required for autonomous vehicles. Because road signs are standardized, the task is well suited to machine-learning and deep-learning systems that identify images.
It runs into difficulties, however, when signs are intentionally defaced to trick algorithms into reading them differently. With a few strategically placed pieces of tape, a person can trick an algorithm into viewing a stop sign as if it were a 45 mile-per-hour speed limit sign, according to researchers. No serious incidents of this type have been identified outside of laboratory or test environments.
The threat of such attacks poses a huge issue for Bosch, which develops vehicle components such as sensors and cameras for tasks including traffic-sign recognition. The company is researching countermeasures, said Michael Bolle, its chief technology officer.
Rather than pulling back on AI, Mr. Bolle said that the solution has been to double down on it. The company has introduced a parallel AI process that uses computer vision, where algorithms seek to emulate human visual-processing systems. The idea is to analyze an object from two different perspectives and compare them against each other.
One system, for instance, uses deep-learning algorithms to identify the road sign and determine what it is telling the driver. This process can be fooled by manipulation. But by taking what Mr. Bolle calls a multipath approach, a second computer algorithm using computer vision analyzes the same information differently. It effectively acts as a check on the results of the first. If there is a discrepancy between the two, that can signal that someone is attempting to spoof the system.
The Bosch defense is based on the fact that this type of attack is designed to foil a particular element of the autonomous vehicle, in this case the neural network that is trained to identify images such as stop signs. The remedy involves the use of a separate system, computer vision, that wasn’t targeted by the malicious actors.
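The multipath idea described above can be sketched in a few lines of code. This is a hypothetical illustration only: Bosch has not published its implementation, and the two "classifiers" here are toy stand-ins for a trained neural network and an independent computer-vision pipeline.

```python
# Hypothetical sketch of a "multipath" cross-check: two independent readers
# examine the same image, and a disagreement flags possible tampering.
# The classifier internals are stand-ins; the real systems are not public.

def cross_check(image, deep_net, cv_pipeline):
    """Return (label, suspicious) by comparing two independent readings."""
    label_a = deep_net(image)      # e.g. a trained neural network
    label_b = cv_pipeline(image)   # e.g. shape/color-based computer vision
    if label_a == label_b:
        return label_a, False      # both paths agree: trust the reading
    return None, True              # discrepancy: possible spoofing attempt

# Toy stand-ins for demonstration only.
honest_net = lambda img: "STOP"
fooled_net = lambda img: "SPEED LIMIT 45"   # tricked by tape on the sign
cv_reader  = lambda img: "STOP"             # unaffected by the same tape

print(cross_check("clean_sign.png", honest_net, cv_reader))  # → ('STOP', False)
print(cross_check("taped_sign.png", fooled_net, cv_reader))  # → (None, True)
```

The design choice mirrors the article's point: the attack is crafted against one specific component (the neural network), so a second reader built on different principles is unlikely to be fooled in the same way, and the mismatch itself becomes the alarm.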
The defense is aimed at deliberate attacks. Ordinary dust or dirt on a sign is unlikely to interfere with the AI's functioning, but strategically placed tape would.
“This isn’t a random result—it’s not like you put a sticker up there and all of a sudden something unexpected happens. This is targeted,” said Darren Shou, head of technology at software company NortonLifeLock Inc. The company, which sells Norton antivirus software and LifeLock identity-theft-protection products, was previously called Symantec Corp. The name change came when Symantec agreed to sell its enterprise-security business to Broadcom Inc. in November.
The hacking methods in question represent a new kind of cyberattack, in which hackers compromise the data fed into an algorithm rather than the algorithm itself. This low-tech form of hacking differs from traditional methods of attack, such as penetrating complex information-technology systems.
“When we talk about cybersecurity, we talk about hackers who come in our systems and change code and harm our systems. In the area of machine learning and AI, products and machines learn from data, and so the data itself can be part of the attack surface,” Mr. Shou said.