MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.
In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.
Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development: the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more: a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.
That’s ultimately the goal of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.