AI in facial recognition technology creates ethical dilemmas.
“If we succeed in creating effective AI, it could be the greatest thing that ever happened to our civilization. Or it could be the worst thing that ever happened to us. We don’t know.” Stephen Hawking issued that warning about artificial intelligence in 2017, and it still hangs in the air: will AI be a game-changer for humanity or a harbinger of terrible doom? According to my Columbia University colleague Grace Hamilton, the truth, as with most disruptive technologies, lies somewhere in between.
AI has undoubtedly ushered in a golden age of innovation. From the lightning-fast analysis of big data to the eerie prescience of predictive analytics, AI algorithms are transforming industries at breakneck speed. From facial recognition at airports to ChatGPT drafting a cohesive essay in seconds, AI is everywhere, silently shaping our daily lives.
This rapid integration has lulled many into a false sense of security. We have grown accustomed to the invisible hand of AI, viewing it as the unavoidable domain of tech giants, or simply inevitable. Laws are badly outdated and struggling to keep pace with these fast-moving technologies. Let us sound a warning: AI’s impact on human rights is neither new nor inevitable. We must remember that technology has a long and checkered history of both defending and challenging our fundamental rights.
The rise of AI mirrors the explosive growth of the internet in the ’90s, when a laissez-faire approach helped give rise to tech giants like Amazon and Google. Thriving in an unregulated wilderness, these companies amassed mountains of user data, the lifeblood of AI development. Today, the result is an environment dominated by powerful algorithms, some of which are so sophisticated they can make fully autonomous decisions. This has revolutionized healthcare, finance, and e-commerce, but it has also opened a Pandora’s box of privacy and discrimination concerns.
Ultimately, AI algorithms are only as good as the data they are trained on. Biased data produces biased outcomes and perpetuates existing inequalities. Moreover, AI companies’ insatiable appetite for personal information raises serious privacy red flags. Balancing technological advances with protecting human rights will be the defining challenge of our time.
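The point that biased data produces biased outcomes can be made concrete with a toy sketch. The example below is entirely synthetic (the group names and numbers are invented for illustration): a naive model that simply learns the majority pattern in an imbalanced training set looks accurate in aggregate while failing the underrepresented group completely.

```python
# Synthetic illustration of "biased data produces biased outcomes".
# With 95% of training examples from group A, a model that just echoes
# the dominant pattern scores well on data like its training set while
# failing group B entirely. All data here is invented for demonstration.

from collections import Counter

training_labels = ["group_A"] * 95 + ["group_B"] * 5
majority_label, _ = Counter(training_labels).most_common(1)[0]

def naive_model(_example):
    # Ignores its input and returns whatever dominated the training data.
    return majority_label

# A balanced test set exposes the failure the skewed training set hid.
test_set = [("a", "group_A")] * 50 + [("b", "group_B")] * 50
correct = sum(naive_model(x) == y for x, y in test_set)
print(f"balanced-test accuracy: {correct / len(test_set):.0%}")  # 50%
```

The model never "chose" to discriminate; it faithfully reproduced the skew it was trained on, which is precisely the mechanism at issue.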
Consider facial recognition technology. In the 1960s, the FBI’s COINTELPRO program weaponized surveillance against Martin Luther King Jr., a chilling example of technology used to silence dissent. Then, in January 2020, Robert Williams answered a knock on his front door. Williams, a Black man from Detroit, didn’t expect the sight that greeted him: police officers were there, ready to arrest him for a crime he didn’t commit. The charge? Stealing a collection of luxury watches from a high-end store. The evidence? A blurry CCTV image run through imperfect facial recognition software.
This wasn’t just a case of a false positive. It was a clear example of how AI, and facial recognition in particular, can perpetuate racial bias with dire consequences. The images used by police were of poor quality, and an algorithm likely trained on an imbalanced dataset misidentified Williams. As a result, Williams was separated from his family and languished in jail for 30 hours; his reputation was tarnished, and his trust in the system was shattered.
But Williams’ story became more than just a personal injustice. He spoke out publicly, pointing out that “many black people are not so lucky” and that “no one should have to live with such fear.” With the support of the ACLU and the University of Michigan’s Civil Rights Litigation Initiative, he filed a lawsuit against the Detroit Police Department for violating his Fourth Amendment rights.
Williams’s case is not an isolated one but a chilling reminder of the inherent dangers of relying on biased AI, especially in a task as critical as law enforcement. According to a 2016 study, Williams is one of 117 million people (nearly half of all American adults) whose images are stored in facial recognition databases used by law enforcement.
In the vastness of facial recognition databases, bias is amplified. Studies have shown that facial recognition algorithms have higher error rates when identifying people of color; error rates are highest for darker-skinned women, up to 34 percentage points higher than for lighter-skinned men.
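Disparities like these only show up when accuracy is measured per demographic group rather than in aggregate, a practice sometimes called disaggregated evaluation. Here is a minimal sketch of that idea; the match outcomes below are invented to mirror the rough magnitudes reported in the studies above, not real benchmark data.

```python
# Hypothetical illustration: an aggregate error rate can hide large
# per-group disparities. The outcome lists are synthetic, constructed
# only to mirror the rough magnitudes discussed in the text.

def error_rate(results):
    """Fraction of incorrect matches in a list of booleans (True = correct)."""
    return sum(1 for correct in results if not correct) / len(results)

# Simulated match outcomes per demographic group.
outcomes = {
    "lighter-skinned men":  [True] * 99 + [False] * 1,   # ~1% error
    "darker-skinned women": [True] * 66 + [False] * 34,  # ~34% error
}

overall = [r for group in outcomes.values() for r in group]
print(f"overall error rate: {error_rate(overall):.1%}")   # looks moderate
for group, results in outcomes.items():
    print(f"{group}: {error_rate(results):.1%}")          # reveals the gap
```

A single "overall" number of 17.5% obscures the fact that one group experiences errors 34 times as often as the other, which is why audits of these systems report per-group figures.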
But there is a silver lining: Decentralized Autonomous Organizations (DAOs) like Decentraland offer a glimpse into the future of transparent, community-driven governance. Leveraging blockchain technology, DAOs empower token holders to participate in decision-making, fostering a more democratic and inclusive approach to technology governance.
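The token-holder voting described above can be sketched in a few lines. This is a simplified illustration, not how any particular DAO is implemented: real DAOs encode this logic in on-chain smart contracts, and the class names and token balances below are hypothetical.

```python
# A minimal sketch of token-weighted DAO voting. All names (Proposal,
# cast_vote, the balances) are illustrative; real DAOs implement this
# in on-chain smart contracts, not in Python.

from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

    def cast_vote(self, token_balance: int, support: bool) -> None:
        # Voting power is proportional to tokens held. This is what makes
        # governance "token-weighted" -- and what lets large holders dominate.
        if support:
            self.votes_for += token_balance
        else:
            self.votes_against += token_balance

    def passed(self) -> bool:
        return self.votes_for > self.votes_against

balances = {"alice": 500, "bob": 300, "carol": 250}  # hypothetical holdings
p = Proposal("Adopt an AI impact-assessment policy")
p.cast_vote(balances["alice"], support=True)
p.cast_vote(balances["bob"], support=False)
p.cast_vote(balances["carol"], support=False)
print(p.passed())  # False: alice's 500 tokens lose to 550 against
```

Note the double edge of the design: voting is transparent and open to every token holder, but influence scales with wealth, so concentration of tokens concentrates power.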
Yet DAOs are not without their flaws: major security breaches in 2022 exposed the vulnerability of user data and highlighted the privacy risks inherent in their decentralized structure, and the lack of centralized oversight can also make them a breeding ground for discriminatory practices.
The proposed US Algorithmic Accountability Act (AAA) is a step in the right direction, aiming to shed light on the opaque world of AI algorithms. By requiring companies to assess and report potential biases, the AAA would foster a more transparent and accountable AI ecosystem. Technological solutions are also emerging: diverse datasets and regular ethics audits are being adopted to promote fairness in AI development.
The path forward requires a multi-pronged approach. Rigorous regulatory and ethical frameworks are essential to uphold human rights. DAOs should embed human rights principles into their governance structures and conduct regular AI impact assessments. Applying strict warrant requirements to all data, including internet activity, is essential to protect intellectual privacy and democratic values.
Legal systems must address the chilling effect of AI on free speech and intellectual pursuits. Regulating discriminatory uses of AI is paramount. Facial recognition technology should only be used as supporting evidence with built-in safeguards to ensure systemic bias is not perpetuated.
Finally, slowing the rapid development of AI is essential to allow regulation to keep up. A national council dedicated to AI legislation could ensure that human rights frameworks evolve alongside technological advances.
The bottom line? Transparency and accountability are essential. Companies must disclose biases, and governments must set best practices for ethical AI development. They must also ensure unbiased data sources, with diverse datasets assembled with individual consent. Only by addressing these challenges can we harness AI’s immense potential and protect fundamental rights. The future depends on our ability to walk this tightrope and ensure technology serves humanity, not the other way around.
