Document Type

Essay

Publication Title

Emory Law Journal Online

Abstract

This Essay describes what we call “the Black Nazi Problem,” a shorthand for the sometimes-jarring text and images produced by AI, ranging from the incongruous (such as female Indian popes) to the outrageous (such as depictions of minorities as their own historical oppressors, including Black Nazis). These images were the result of overzealous efforts by AI developers to correct for a lack of diverse representation in the training data used to create Generative AI models. All text-to-image AI models are plagued by the overrepresentation of white, fully abled, Western men in images of high-status categories; the invisibility of women, people of color, and the disabled, except in low-status categories; and the near-complete absence of realistic, non-sexualized images of women. We argue that both the striking lack of diverse representation in the training data and the sometimes clumsy overcompensation for that bias lay bare social problems rather than technological ones. The problem is not with AI technology as such; the problem is us. AI training data reflects an accumulation of historical biases as well as our current inequalities. Four elements of the AI creation process explain the Black Nazi Problem and expose broader problems about society: our history, the structure of society, our sometimes contradictory aspirations, and the aggregating process of AI image production. Understanding those aspects of the AI creation process reveals that AI’s foibles are a symptom of our ongoing struggle with the ramifications of past inequality and the difficulty of balancing inherently conflicting goals, such as aspirational diversity and historical accuracy. We draw out the cultural, technological, policy, and legal implications of this problem. Altogether, the Black Nazi Problem gives us a window into other intractable socio-technical problems we need to confront in AI.

First Page

1

Publication Date

8-14-2024
