Like a Black Mirror plot line come to life, artificial intelligence is pouring into our lives whether we know it or not. AI continues to evolve and gain traction across industries: according to research by Gartner, AI implementation grew 37 percent during 2018, and 270 percent over the last four years. The success stories of AI and data-driven machine learning range from the light-hearted to the life-saving. Google's DeepMind researchers develop machines modeled loosely on the thought processes of the human brain. In 2016, they created AlphaGo, an AI that plays Go, an ancient and complex Chinese strategy game; it beat the reigning world champion, Lee Sedol, four games to one. More recently, China's AI startup Infervision has taught AI to detect cancerous lung cells from images, producing a diagnostic report in about 30 seconds; the system has already been deployed in 280 hospitals around the world. As more data is collected and algorithms advance, we will only see an increase in AI's impact on our daily lives.
Unfortunately, there's one huge problem: AI is biased.
How does AI bias occur?
Most modern AI is built on deep learning, in which artificial neural networks, loosely inspired by the human brain, learn from large amounts of data. Bias can surface at many points in that process. From framing the problem to collecting and preparing the data, decisions are made by data scientists who may lack the context, or the diverse data sources, needed to capture the full picture. Data science is also typically framed in mathematical terms, such as balancing the false positive and false negative rates of a prediction system. But when it comes to ethnicity, religion, sexual orientation, and gender, mathematical "balance" and predictive systems cannot be perfectly applied to such fluid, human concepts.
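To see why "balancing" a system's error rates in aggregate is not enough, here is a minimal sketch using invented numbers for a hypothetical hiring model. The overall false positive and false negative rates look acceptable, yet one group bears nearly all of the false negatives:

```python
# Minimal sketch with hypothetical data: aggregate error rates can look
# "balanced" while per-group rates diverge sharply.

def rates(records):
    """Compute (false positive rate, false negative rate) from a list of
    (actual, predicted) boolean label pairs."""
    fp = sum(1 for actual, pred in records if not actual and pred)
    fn = sum(1 for actual, pred in records if actual and not pred)
    negatives = sum(1 for actual, _ in records if not actual)
    positives = sum(1 for actual, _ in records if actual)
    return fp / negatives, fn / positives

# Invented predictions, as (qualified?, recommended?) pairs per applicant.
group_a = [(True, True)] * 40 + [(False, True)] * 2 + [(False, False)] * 8
group_b = [(True, True)] * 10 + [(True, False)] * 5 + [(False, False)] * 35

fpr_a, fnr_a = rates(group_a)
fpr_b, fnr_b = rates(group_b)
fpr_all, fnr_all = rates(group_a + group_b)

print(f"overall: FPR={fpr_all:.2f}, FNR={fnr_all:.2f}")  # looks fine
print(f"group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")      # no missed candidates
print(f"group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")      # 1 in 3 missed
```

In this sketch the system overall misses fewer than 10 percent of qualified candidates, but every one of those misses falls on group B, a disparity the aggregate numbers hide entirely.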
Who are the data scientists?
The data scientists overseeing the information being collected and analyzed are drawn from a narrow slice of the population, not the broad spectrum that makes up the human race. Stanford recently announced a new AI institute to guide its research and ethics. According to the university, "designers of AI must be broadly representative of humanity." Yet of the 120 faculty and tech leaders partnering on the initiative, not a single member of this "representative" group appeared to be black. The lack of diversity in AI research doesn't end with universities. Google, Facebook, Amazon, IBM, and Microsoft launched the Partnership on AI to study and formulate best practices for AI technologies, yet no black board member appears on the organization's site, and the majority of its board is made up of men. All of this mirrors an unfortunate trend in the AI industry: minorities and women are severely underrepresented.
The damage of AI bias
The frightening part of AI bias is that it has the power to truly disrupt our lives, and not for the better. Unregulated AI has already made its way into surveillance, the criminal justice system, recruiting, education, the financial sector, and transportation. In 2017, Amazon abandoned an AI recruiting tool it had developed after discovering the tool was not gender neutral: because most of the applicants in its training data were men, the system taught itself to favor male candidates over female ones. Law enforcement agencies have begun using face recognition systems to identify suspects and decide where to deploy officers. And in 2018, a study led by MIT Media Lab researcher Joy Buolamwini found that gender classification systems sold by IBM, Microsoft, and Face++ had error rates as much as 34.4 percentage points higher for darker-skinned females than for lighter-skinned males. These highly skewed results should outrage us all.
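Audits like Buolamwini's boil down to a simple measurement: break a classifier's results out by subgroup, compute each group's error rate, and report the gap. This sketch uses invented counts (not the study's actual data) chosen only to illustrate the shape of such an audit:

```python
# Hypothetical audit in the spirit of the study described above.
# The subgroup labels and counts are invented for illustration.
from collections import defaultdict

def error_rates(results):
    """results: list of (subgroup, correct?) pairs -> error rate per subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, correct in results:
        totals[subgroup] += 1
        if not correct:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

results = (
    [("lighter-skinned male", True)] * 99
    + [("lighter-skinned male", False)] * 1
    + [("darker-skinned female", True)] * 65
    + [("darker-skinned female", False)] * 35
)

per_group = error_rates(results)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)
print(f"error-rate gap: {gap * 100:.1f} percentage points")
```

The point of publishing such audits is that the gap is undetectable from a single aggregate accuracy number; it only appears once results are disaggregated by subgroup.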
What can we do?
As AI advisory boards and councils are being assembled across corporations and startups, we need to encourage company leaders to include board members from all races and backgrounds. We need to advocate for schools and companies to recruit diverse talent as professors and researchers. Lastly, companies need to be held accountable for the mistakes, hidden biases, and blind spots in their technologies.
If you're a startup founder or CEO, before jumping on the latest AI integration that comes your way, take the time to investigate who developed the algorithms and how diverse the underlying data is. Diverse teams and datasets offer a more comprehensive approach to deep learning and deliver better, more impartial, and more impactful results.