State and local authorities from New Hampshire to San Francisco have begun banning the use of facial recognition technology because the algorithms still make frequent mistakes. Even if the technology becomes more accurate, facial recognition could unleash an invasion of privacy that makes anonymity in public impossible. Unfortunately, bans on its use by local governments have done little to curb adoption by businesses, from start-ups to large corporations.
Automated facial recognition programs do have advantages, such as their ability to turn a person’s unique appearance into a biometric ID that can let phone users unlock their devices with a glance and allow airport security to quickly confirm travelers’ identities. To train such systems, researchers feed a variety of photographs to a machine-learning algorithm, which learns the features that are most salient to matching an image with an identity. The more data they amass, the more reliable these programs become.
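The training-and-matching loop described above can be illustrated with a minimal sketch. Here each "photograph" is a stand-in numeric feature vector, and matching is simple nearest-neighbor search; the names (`identify`, `gallery`) and the matching method are illustrative assumptions, not any real product's algorithm. Real systems learn features with deep networks, but the principle is the same: the more labeled faces amassed, the more reliable the match.

```python
import numpy as np

# Illustrative sketch only: each face is a feature vector, and
# matching a new image to an identity is done by nearest neighbor.
# (An assumption for demonstration, not any specific system's method.)

rng = np.random.default_rng(0)

# Enrolled identities: each person contributes several "photographs"
# (noisy copies of a base feature vector) -- the training data.
identities = {"alice": rng.normal(0, 1, 8), "bob": rng.normal(0, 1, 8)}
gallery = []  # (feature_vector, name) pairs
for name, base in identities.items():
    for _ in range(5):
        gallery.append((base + rng.normal(0, 0.1, 8), name))

def identify(probe):
    """Return the enrolled name whose stored features are closest."""
    dists = [(np.linalg.norm(probe - vec), name) for vec, name in gallery]
    return min(dists)[1]

# A new "photo" of alice should match her enrolled identity.
probe = identities["alice"] + rng.normal(0, 0.1, 8)
print(identify(probe))
```

Feeding the gallery more photographs per person makes the nearest-neighbor match more robust to noise, which mirrors the passage's point that reliability grows with the amount of training data.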
Too often, though, the algorithms are deployed prematurely. In London, for example, police have begun using artificial-intelligence systems to scan surveillance footage in an attempt to pick out wanted criminals as they walk by—despite an independent review that found this system labeled suspects accurately only 19 percent of the time. An inaccurate system could falsely accuse innocent citizens of being miscreants, earmarking law-abiding people for tracking and harassment.
Some companies are attempting to improve their systems by feeding them more faces—but they are not always doing it in ethical ways. Google contractors in Atlanta, for example, have been accused of exploiting homeless people in the company’s quest for faces, buying their images for a few dollars, and the start-up Clearview AI violated social media platforms’ terms of service to scrape users’ images without their consent. Such stories suggest that some companies are treating ethics as an afterthought instead of addressing the problem responsibly.
Thus, federal regulations are clearly needed. They should require the hundreds of existing facial-recognition programs, many created by private companies, to undergo independent review by a government task force. The tech must meet a high standard of accuracy, and even if it meets this criterion, humans, not algorithms, should check a program’s output before taking action on its recommendations.
Automated facial recognition systems are trained by________.
A. generating biometric IDs for phone users
B. confirming travelers’ identities at airports
C. inputting photographs to the algorithm
D. matching images with identities manually

Answer: C