
IBM will no longer offer facial recognition technology for mass surveillance

IBM said yesterday that it will no longer offer or develop general-purpose facial recognition technology, a move intended to encourage the responsible use of technology by law enforcement. The company has been a major player in the field for years, offering several solutions.

In a letter to Congress, IBM's CEO, Arvind Krishna, addressed the deaths of George Floyd, Ahmaud Arbery, and Breonna Taylor, and said the company would like to work with lawmakers to advance racial equality.

Krishna called for key policy changes around police reform, the responsible use of technology, and broader access to skills and educational opportunities. To that end, he said IBM won't offer general-purpose facial recognition technology because it could be used for mass surveillance:

IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.

Facial recognition used for surveillance has long worried privacy experts. Earlier this year, a New York Times report exposed Clearview AI, a firm that built a facial recognition system by scraping images from millions of websites and sold it to hundreds of law enforcement agencies. The firm's overstated accuracy claims could have led to numerous false positives.

In March, India admitted that it used facial recognition tech to identify people who took part in riots in the capital, New Delhi, while US President Donald Trump was visiting the country. Last year, the Indian Express reported that Delhi police recorded footage of protests against India's controversial Citizenship Amendment Act (CAA) and ran it through facial recognition software. China, meanwhile, is well known for monitoring its citizens through numerous surveillance technologies, including facial recognition tools.

In 2019, the National Institute of Standards and Technology (NIST) published a study finding that facial recognition systems produce a higher rate of false positives for Asian and African-American faces than for Caucasian faces.

All these examples point to the potential for bias, the danger of mass surveillance, and targeted profiling through facial recognition. Last year, IBM released a large dataset of diverse faces sourced from Flickr to reduce bias in AI, but an NBC report found that the company had failed to notify the people in those photos that their images were being used for this purpose.

While IBM's withdrawal from general-purpose facial recognition in the name of racial equality is a commendable step, the company will have to do much more than that. Plenty of companies offer alternatives to IBM's solutions, so the tech giant will need to actively participate in reducing bias and in shaping policy that stops facial recognition tech from being used for surveillance.

Source: TheNextWeb
