Software, Algorithms and AI

Scholars & Thought Leaders

Joy Buolamwini 

A scientist, activist, and founder of the Algorithmic Justice League, Joy Buolamwini examines racial and gender bias in facial analysis software. As a graduate student at MIT, she found that an AI system detected her face more reliably when she wore a white mask, which prompted her research project Gender Shades. The project uncovered bias built into commercial gender classification systems, showing that facial analysis technology performs markedly better on lighter-skinned male faces than on darker-skinned female faces.
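
The method behind Gender Shades is disaggregated evaluation: instead of reporting one overall accuracy number, a classifier is scored separately for each demographic subgroup so that disparities become visible (the audit found error rates of up to roughly 35% for darker-skinned women versus under 1% for lighter-skinned men on some commercial systems). Below is a minimal illustrative sketch of the idea in Python; the records and subgroup labels are invented for the example, not data from the actual audit.

```python
# Illustrative sketch of disaggregated evaluation, the methodology behind
# Gender Shades: compare a classifier's error rate per demographic subgroup
# instead of reporting a single overall accuracy. These records are
# hypothetical; the real audit used a labeled benchmark of photographs.

from collections import defaultdict

# (true_gender, predicted_gender, subgroup) -- hypothetical audit records
records = [
    ("female", "female", "darker-skinned women"),
    ("female", "male",   "darker-skinned women"),
    ("female", "male",   "darker-skinned women"),
    ("male",   "male",   "lighter-skinned men"),
    ("male",   "male",   "lighter-skinned men"),
    ("male",   "male",   "lighter-skinned men"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true_label, predicted, group in records:
    totals[group] += 1
    if predicted != true_label:
        errors[group] += 1

# A single aggregate accuracy would hide the gap that the per-group
# breakdown makes obvious.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```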

Virginia Eubanks

Virginia Eubanks is an Associate Professor of Political Science at the University at Albany, SUNY. Her writing about technology and social justice has appeared in Scientific American, The Nation, Harper’s, and Wired. For two decades, Eubanks has worked in community technology and economic justice movements.

Meredith Broussard

Data journalist Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University. Her academic research focuses on artificial intelligence in investigative reporting and ethical AI, with a particular interest in using data analysis for social good. She appeared in the 2020 documentary Coded Bias, an official selection of the Sundance Film Festival that was nominated for an NAACP Image Award.

Christina Dunbar-Hester 

Professor Dunbar-Hester conducts research on the politics of technology and is interested in supervising research on the social and cultural aspects of science and technology. She worked to establish the PhD certificate in Science & Technology Studies at USC and is a faculty affiliate with the Center on Science, Technology & Public Life, where she convenes a research group on urban ecosystems with researchers across southern California.

Mutale Nkonde 

Mutale Nkonde is the founding CEO of AI For the People (AFP), a nonprofit communications agency. AFP’s mission is to eliminate the under-representation of black professionals in the American technology sector by 2030. Prior to this, Nkonde worked in AI governance; during that time she was part of the team that introduced the Algorithmic Accountability Act, the DEEP FAKES Accountability Act, and the No Biometric Barriers to Housing Act to the US House of Representatives.

Rediet Abebe 

Rediet Abebe is a computer scientist working in the fields of algorithms and AI, with a focus on equity and justice concerns. Abebe is a Junior Fellow at the Harvard Society of Fellows. She also co-founded and co-organizes Mechanism Design for Social Good (MD4SG) and Black in AI.

Safiya Umoja Noble

Dr. Safiya U. Noble is an Associate Professor at the University of California, Los Angeles (UCLA) in the Department of Information Studies, where she serves as the Co-Founder and Co-Director of the UCLA Center for Critical Internet Inquiry (C2i2). She holds appointments in African American Studies and Gender Studies. She is a Research Associate at the Oxford Internet Institute at the University of Oxford and has been appointed as a Commissioner on the Oxford Commission on AI & Good Governance (OxCAIGG).

Sendhil Mullainathan

Sendhil Mullainathan is an American professor of Computation and Behavioral Science at the University of Chicago Booth School of Business. His current research uses machine learning to understand complex problems in human behavior, social policy, and especially medicine, where computational techniques have the potential to uncover biomedical insights from large-scale health data. He currently teaches a course on Artificial Intelligence.

Rashida Richardson

Rashida Richardson is a Visiting Scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law, where she specializes in race, emerging technologies, and the law. She researches the social and civil rights implications of data-driven technologies, including artificial intelligence, and develops policy interventions and regulatory strategies addressing those technologies, government surveillance, racial discrimination, and the technology sector.

Francesca Tripodi

Dr. Francesca Tripodi is a sociologist and media scholar whose research examines the relationship between social media, political partisanship, and democratic participation, revealing how Google and Wikipedia are manipulated for political gain.

Film & Video

How I’m Fighting Bias in Algorithms (2017)

MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face — because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.”

How to Stop Artificial Intelligence from Marginalizing Communities? (2018)

Timnit Gebru, Stanford alum and co-founder of Black in AI, shares insights into how artificial intelligence is influencing thinking and decision-making in ways we didn’t imagine and must counter before it further marginalizes people…

How Biased Are Our Algorithms? (2014)

What do our algorithms say about our society? In this talk, social scientist Safiya Umoja Noble investigates the bias revealed in search engine results and argues why we have to be skeptical of the algorithms we rely upon every day.

Can Algorithms Reduce Inequality? (2018)

When Rediet Abebe arrived in the United States from Ethiopia to study, she was surprised to learn of the educational inequalities facing black and low-income students in the shadow of some of the world’s most prestigious universities. This talk discusses how AI can help reduce inequality, as well as new initiatives in AI to effect positive social change.

Coded Bias (2020)

*Full Film Available on Netflix. 1hr 25min* What does it mean when the technology that surrounds our lives is built on systemic racial and gender-based prejudices? This is the truth about the invisible forces that decide everyday human potential.

 

Voicing Erasure – A Spoken Word Piece Exploring Bias in Voice Recognition Technology  (2020)

Is it okay for machines of silicon and steel or flesh and blood to erase our contributions? Is it okay for machines to portray women as subservient? Is it okay for Google and others to capture data without our knowledge? These questions and new research led by Allison Koenecke inspired the creation of “Voicing Erasure”: a poetic piece recited by champions of women’s empowerment and leading scholars on race, gender, and technology.

Source: Joy Buolamwini

Books & Articles

There’s Something Strange about TikTok recommendations (2020)

An AI expert recently spotted an unusual detail when trying to follow new people on TikTok.

By Rebecca Heilweil

Source: Vox

Biased Algorithms Are Easier to Fix Than Biased People (2019)

Racial discrimination by algorithms or by people is harmful — but that’s where the similarities end.

By Sendhil Mullainathan 

Source: New York Times

Who Is Making Sure the AI Machines Aren’t Racist? (2021)

When Google forced out two well-known artificial intelligence experts, a long-simmering research controversy burst into the open.

Source: New York Times

She’s Taking Jeff Bezos to Task (2021)

“Big Brother is watching. And he’s biased. M.I.T. computer scientist and digital activist Joy Buolamwini proved that the facial recognition technologies that are becoming ubiquitous often fail when it comes to women and people of color. She’s taken on big tech players that profit off facial recognition technologies, like Amazon, IBM and Microsoft. And she’s also concerned about government, particularly given the increased use of these technologies by the police.

Today on “Sway,” Buolamwini and Kara Swisher discuss taking Big Tech down a notch, why the mysterious company Clearview AI should concern anyone on social media, and how far the U.S. is from a China-style surveillance state.”

Source: New York Times | Opinion

Wrongfully Accused by an Algorithm (2020)

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit. A nationwide debate is raging about racism in law enforcement. Across the country, millions are protesting not just the actions of individual officers, but bias in the systems used to surveil communities and identify people for prosecution.

Source: New York Times

Pattern Discrimination (2019)

Algorithmic identity politics reinstate old forms of social segregation—in a digital world, identity politics is pattern discrimination. It is by recognizing patterns in input data that artificial intelligence algorithms create bias and practice racial exclusion, thereby inscribing power relations into media. How can we filter information out of data without reinserting racist, sexist, and classist beliefs?

 

By Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl

Google blocks advertisers from targeting BLM YouTube videos (2021)

“Black power” and “Black Lives Matter” couldn’t be used to find videos for ads, but “White power” and “White lives matter” were just fine.

By Leon Yin and Aaron Sankin

Source: The Markup

Data Feminism (2020)

A new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism.

By Catherine D’Ignazio & Lauren F. Klein

Machine Bias: Investigating Algorithmic Injustice

“Machine Bias” is a series of articles published by ProPublica investigating algorithmic injustice and the formulas that influence our lives. Date range: 2015-2019

ProPublica

Amazon Fired Its Resume-Reading AI for Sexism

Algorithms are often pitched as superior to human judgment, taking the guesswork out of decisions ranging from driving to writing an email. But they are still programmed by humans and trained on the data that humans create, which means they are tied to us, for better or worse. Amazon found this out the hard way when the company’s AI recruitment software, trained to review job applications, turned out to discriminate against women applicants.

By David Grossman

Source: Popular Mechanics
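
The failure mode described in the entry above is easy to reproduce in miniature: a model trained on historically biased hiring decisions learns whatever proxy features separate past hires from past rejections. The sketch below is a hypothetical illustration, not Amazon’s system; the resumes and labels are invented, and scikit-learn stands in for whatever tooling was actually used (per press reports, the real tool penalized resumes containing the word “women’s”).

```python
# Hypothetical sketch of how a resume screener trained on biased historical
# decisions can learn to penalize a proxy token like "women". The tiny
# synthetic dataset encodes past hiring that favored resumes without that
# term; the trained model then reproduces the bias.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of women's chess club, software engineer",
    "women's coding society lead, python developer",
    "chess club captain, software engineer",
    "coding society lead, python developer",
]
hired = [0, 0, 1, 1]  # biased historical labels: only the last two were hired

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the proxy token is negative: the model has
# absorbed the historical bias, not any signal about job skill.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

A real audit would also have to check correlated proxies (hobbies, colleges, word choices) rather than a single token, since removing one biased feature leaves the others intact.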

One Month, 500,000 Face Scans: How China Is Using AI to Profile a Minority (2019)

“The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps. Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.”

Source: New York Times

By Paul Mozur

Inside China’s Dystopian Dreams: AI, Shame and Lots of Cameras (2018)

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

Source: New York Times

By Paul Mozur

The algorithmic rise of the “alt-right” (2018)

As with so many technologies, the Internet’s racism was programmed right in—and it has quickly fueled the spread of White supremacist, xenophobic rhetoric throughout the western world.

By Jessie Daniels