Thoughts on bias in facial recognition technologies
Problem Statement
With the onset of new Artificial Intelligence (AI) technologies, one of the most pressing problems facing countries is their indiscriminate use on their own people. This deployment represents a seismic shift in how governments police and surveil their citizens. Beyond being a surveillance tool, it brings more immediate issues such as bias in facial recognition. According to Joy Buolamwini, an MIT researcher who studies AI technologies, facial recognition systems are more likely to misidentify people of color, which is a particular concern for a country like India. Colorism in India was historically linked to the caste system of the medieval Indian subcontinent and was further fueled by British colonial rule. The use of AI technologies therefore poses a monumental challenge, with the potential to create new biases or reinforce existing ones.
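To make the concern concrete, here is a minimal sketch of how such a disparity could be measured on a labelled evaluation set. The group labels, identifiers, and data below are purely illustrative and are not drawn from Buolamwini's study.

```python
# Minimal sketch: compute misidentification rates per demographic group
# on a labelled evaluation set. Records and group names are illustrative.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted_id, true_id in records:
        totals[group] += 1
        if predicted_id != true_id:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: a large gap between groups signals a biased model.
evaluation = [
    ("lighter-skinned", "A17", "A17"), ("lighter-skinned", "B02", "B02"),
    ("darker-skinned", "C31", "C44"), ("darker-skinned", "D09", "D09"),
]
print(error_rates_by_group(evaluation))
# {'lighter-skinned': 0.0, 'darker-skinned': 0.5}
```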
The potential benefits of facial recognition are too great for governments and organizations to pass up. With understaffed offices and a shortage of people with the required expertise, facial recognition and other AI technologies can be a major factor in revamping the workforce and the economy. But the under-representation of people of color affected by colorism in training datasets poses a challenge in building AI systems that are equitable. The stakes are especially high because being misidentified by automated systems may mean the difference between receiving or missing out on government benefits or employment opportunities, or something worse, like being incorrectly charged with a crime by law enforcement agencies.
Proposed Solution
I propose a multi-step approach to solving this issue. First, policymakers need to enforce policies that require government bodies to adopt privacy and security principles (including, but not limited to: avoid creating or reinforcing unfair bias; be socially beneficial) at each step of any process involving the adoption or procurement of AI technologies, failing which the work may not proceed further.
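As a rough illustration of what such a gate could look like in practice, the sketch below checks a proposal against a list of required principles before it is allowed to proceed. The principle names beyond the two mentioned above are my own placeholders, not an official checklist.

```python
# Illustrative procurement gate: a proposal advances only if every required
# principle has documented evidence of compliance. Principle names other than
# the two cited above are hypothetical.
REQUIRED_PRINCIPLES = [
    "avoids creating or reinforcing unfair bias",
    "is socially beneficial",
    "protects privacy and security of personal data",
]

def may_proceed(proposal_evidence: dict) -> bool:
    """proposal_evidence maps each principle to True/False (evidence supplied)."""
    return all(proposal_evidence.get(p, False) for p in REQUIRED_PRINCIPLES)

proposal = {
    "avoids creating or reinforcing unfair bias": True,
    "is socially beneficial": True,
    "protects privacy and security of personal data": False,
}
print(may_proceed(proposal))  # False: the work may not proceed further
```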
Second, the government needs to leverage the data on the Aadhaar platform while keeping privacy and security concerns in mind. According to the latest figures as of 31 May 2020, the total number of Aadhaar enrolments stands at 1.25 billion, over 90% of the current Indian population. It is an immense database of facial information covering a diverse set of facial features across the Indian mainland. Sourcing such a diverse dataset independently would be prohibitively expensive for individuals and organizations, which puts the government in a unique position to offer it as a platform for sourcing training data.
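As a back-of-the-envelope check of that coverage figure (assuming an estimated 2020 population of roughly 1.38 billion, which is an external estimate and not a UIDAI number):

```python
# Rough coverage check; population estimate is an assumption, not UIDAI data.
aadhaar_enrolments = 1.25e9
estimated_population_2020 = 1.38e9
coverage = aadhaar_enrolments / estimated_population_2020
print(f"Aadhaar coverage: {coverage:.1%}")  # roughly 90.6%
```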
Individuals and organizations can be granted access to training data using an AI technique called federated learning. Wikipedia describes federated learning as a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging those samples. Research suggests that this technique can be applied at the scale India requires while preserving privacy.
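To give a feel for how this works, here is a minimal sketch of federated averaging, the core idea behind the system design in the Bonawitz et al. paper cited below: each client trains on its own local data, and only model weights, never raw samples, are sent back and averaged. A toy linear model and synthetic data stand in for a real face recognition network; this is a sketch of the technique, not of any actual Aadhaar system.

```python
# Minimal federated-averaging sketch: clients train locally and share only
# model weights, never their raw data. Toy linear model, synthetic data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Average the locally updated weights, weighted by client data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients, each holding its own private data samples.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without any client sharing raw samples
```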
Major Obstacles/Implementation Challenges
There always exists the possibility that this unprecedented access to facial data could be misused. Legislation is therefore required to gate access behind robust, multi-tiered authorization levels. Policymakers need to determine where the lines of appropriate use should be drawn and, furthermore, enforce policies with checks and balances that prevent misuse.
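One way such multi-tiered authorization could be expressed is sketched below; the tier names and permitted operations are illustrative assumptions on my part, not drawn from any existing Aadhaar policy.

```python
# Illustrative sketch of multi-tiered authorization for training-data access.
# Tier names and permitted operations are hypothetical.
from enum import IntEnum

class AccessTier(IntEnum):
    PUBLIC = 0          # aggregate statistics only
    ACCREDITED = 1      # federated training; no raw data leaves the platform
    REGULATOR = 2       # audit access under judicial/parliamentary oversight

REQUIRED_TIER = {
    "aggregate_statistics": AccessTier.PUBLIC,
    "federated_training": AccessTier.ACCREDITED,
    "bias_audit": AccessTier.REGULATOR,
}

def authorize(requester_tier: AccessTier, operation: str) -> bool:
    """Allow an operation only if the requester's tier is high enough."""
    return requester_tier >= REQUIRED_TIER[operation]

print(authorize(AccessTier.ACCREDITED, "federated_training"))  # True
print(authorize(AccessTier.ACCREDITED, "bias_audit"))          # False
```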
India, with its quasi-federal form of government, has state and local governments that may not support policies drafted at the central level because of differing local needs and challenges. Local governments, in consultation with activists in their respective regions, can legislate policies that serve as a testing ground for broader implementation at the central level.
Admittedly, this is not the approach central authorities would ideally prefer, but it is the one with the broadest potential support, largely thanks to discourse at the local level. By working towards a society that is inclusive of all of its citizens, our community is strengthened and all people are afforded their constitutional right to equal opportunity.
References
Buolamwini, Joy (2018). "Study finds gender and skin-type bias in commercial artificial-intelligence systems." MIT News. http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
Shankar, Ravi (2007). "Fair skin in South Asia: an obsession?" Journal of Pakistan Association of Dermatologists, 17: 100–104. http://jpad.com.pk/index.php/jpad/article/viewFile/695/668
Mishra, Neha (2015). "India and Colorism: The Finer Nuances." Washington University Global Studies Law Review. https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1553&context=law_globalstudies
UIDAI Aadhaar Dashboard. Retrieved 31 May 2020. https://uidai.gov.in/aadhaar_dashboard/index.php
Bonawitz, K., et al. (2019). "Towards Federated Learning at Scale: System Design." arXiv. https://arxiv.org/pdf/1902.01046