Practical Approaches for Fairness and Inclusion in AI

Much of the AI fairness debate focuses on outcome testing and statistical fairness measures for specific groups; however, these mathematical approaches delay meaningful outcomes for people who are subjected to harm or simply unconsidered in AI solution development. From a theoretical perspective, much less attention is paid to the role of epistemology in framing “positive” AI solutions, which limits the ability of these algorithms to achieve equitable and inclusive outcomes. In practical terms, data representation remains a significant challenge if AI outputs are to have value for historically and digitally excluded groups. My new research focus includes a few areas:

  • Social and racial justice impact framework for AI and technical research

  • Data representation methods and approaches for people in the margins and the masses

  • Data trust mechanisms to support individual human rights, privacy and benefit

In addition to these areas, I am interested in exploring womanism as a design framework for AI, environmental justice issues that may be addressed through technology, and empowering and participatory approaches to address digital inequality.
