The EqualAI Framework
We help companies reduce bias in their AI by addressing it at every touchpoint, from the talent pipeline through data, testing, and deployment.
Invest in the pipeline
Research suggests that homogeneous teams, like those that make up many tech giants' engineering corps, are more likely to generate biased algorithms than diverse teams. A lack of diversity in tech creation can become a life-or-death issue given the ultimate uses of these technologies: self-driving cars, determining who can access ventilators or other critical health care during a crisis, or deciding an individual's fate in the criminal justice system.
Many reputable organizations are working to ensure that our next generation of coders and tech executives represents a broader cross-section of our population. If you are serious about making better AI and ensuring more diversity in tech creation and management, support these forward-thinking organizations (e.g., AI4All, Girls Who Code, Code.org, Black Girls Code, Kode with Klossy).
Hire and promote with your values
To create and sustain a diverse workplace, and to produce better AI, you must train your employees to recognize and address implicit bias in hiring, evaluation, and promotion decisions. Your AI programs must be in sync with those values, or they could impede your efforts. AI used for hiring, evaluations, and terminations can itself be infused with bias and requires routine checks: it constantly learns new patterns and may produce inequitable employment decisions.
- Ensure you have a broad cross-section of diversity in each candidate panel.
- Ensure humans remain in decision-making processes and constantly check for biased patterns or outcomes.
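One concrete way to "constantly check for biased patterns" in hiring decisions is to compare selection rates across groups, for example against the EEOC's four-fifths rule of thumb for adverse impact. The sketch below uses hypothetical decision records and a hypothetical 80% threshold; it is an illustration, not a compliance tool.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag whether each group's selection rate is at least 80% of the
    highest group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Hypothetical hiring decisions: (group label, hired?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)   # A: 2/3, B: 1/3
print(four_fifths_check(rates))      # {'A': True, 'B': False}
```

Here group B's selection rate is half of group A's, so the check flags it for human review, which is exactly where the humans in the loop come in.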
Evaluate your data
Find out what you can about the datasets on which your AI was built and trained. Try to identify the gaps so that they can be rectified, addressed, or, at a minimum, clarified for users. See the EqualAI Checklist© as a starting place to evaluate your datasets, and consider FTC guidance, including an evaluation to determine:
- How representative is the data set?
- Does the data model account for biases?
- How accurate are the predictions?
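The first question, how representative the dataset is, can be made measurable by comparing each group's share of the data to its share of a reference population. This is a minimal sketch with hypothetical counts and reference shares; in practice you would use your own demographic categories and a credible population baseline.

```python
def representation_gaps(dataset_counts, population_shares):
    """Compare each group's share of the dataset to its share of a
    reference population; the gap shows under- or over-representation."""
    total = sum(dataset_counts.values())
    return {
        g: dataset_counts.get(g, 0) / total - population_shares[g]
        for g in population_shares
    }

# Hypothetical dataset of 1,000 records vs. census-style reference shares.
counts = {"group_x": 700, "group_y": 250, "group_z": 50}
reference = {"group_x": 0.50, "group_y": 0.35, "group_z": 0.15}
gaps = representation_gaps(counts, reference)
# group_x is over-represented by 20 points; group_y and group_z are
# each under-represented by 10 points.
```

Gaps like these are the ones to rectify, address, or at a minimum disclose to users.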
Test your AI
As with AI used in employment decisions, your AI programs, particularly customer-facing systems, should be checked for bias on a routine basis. AI learns new patterns as it is fed new data and can adopt new biases. See the FTC blog and the EqualAI Checklist© for additional guidance, and enlist experts, such as ORCAA, to help you test your AI.
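Because a deployed system can adopt new biases as it ingests new data, routine testing can include a drift check: compare each group's outcome rate in the latest audit against a recorded baseline and flag movement beyond a tolerance. The rates, groups, and 5-point tolerance below are all hypothetical; this sketches the monitoring idea, not a complete audit.

```python
def rate_drift(baseline_rates, current_rates, tolerance=0.05):
    """Flag groups whose positive-outcome rate has moved more than
    `tolerance` since the baseline audit -- a sign the model may have
    learned a new biased pattern."""
    return {
        g: abs(current_rates[g] - baseline_rates[g]) > tolerance
        for g in baseline_rates
    }

# Hypothetical per-group positive-outcome rates from two audits.
baseline = {"A": 0.42, "B": 0.40}
current = {"A": 0.43, "B": 0.31}
print(rate_drift(baseline, current))  # {'A': False, 'B': True}
```

A flag like the one on group B is a trigger for deeper review, whether by your own team or outside auditors.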
Redefine the team
AI products should be tested prior to release for, and by, those under-represented in their creation or in the underlying datasets. The diversity of your current team and of your datasets is not your ultimate fate. Once you have completed steps 3 and 4, you can identify potential end users, or those who could be impacted downstream, who were insufficiently represented on your team and in your datasets. Ensuring you have done sufficient testing before your product goes to market is critical.
Artificial intelligence (AI) is transforming our society, enabling important and exciting developments that were unimaginable just a few years ago. With these immense benefits comes great responsibility.
EqualAI is a nonprofit organization, and a movement, focused on illuminating and reducing unconscious bias in the development and use of artificial intelligence.
Together with leaders and experts across business, technology, and government, we are developing standards and tools, as well as identifying regulatory and legislative solutions, to increase awareness and reduce bias.
Together we can create #EqualAI
As AI goes mainstream, help us remove unconscious biases and create it equally.