Learning from Timnit Gebru

This blog post includes a reflection on a recorded lecture by Dr. Timnit Gebru, proposes a question for her based on this lecture and background research, and reflects on her talk to the wider Middlebury campus.
Categories: blog, Machine Learning

Author: Madeleine Gallop

Published: April 29, 2023

Introduction

Next Monday, Dr. Timnit Gebru is coming to Middlebury to visit our class and give a talk to the wider campus community. I am excited to have the opportunity to learn from Dr. Gebru, as I found her research, advocacy, persistence, and honesty in sharing her personal experiences incredibly inspiring (even reading through her Wikipedia page made me a little emotional!).

Dr. Gebru is a well-recognized voice in artificial intelligence and its ethical implications. She was born in Ethiopia and attended Stanford, where she worked on projects such as using Street View imagery and deep learning to predict demographic characteristics of communities from the cars visible in different neighborhoods, and later worked at Google. Dr. Gebru experienced discrimination throughout her early life and professional career, and noticed a lack of Black researchers (and especially Black women) in AI. She co-founded Black in AI to provide a community for Black researchers in the field.

In 2019, Gebru and other researchers called out Amazon's facial-recognition system for discriminating against darker-skinned women, and in 2020, Google dismissed Gebru, then the co-lead of its Ethical Artificial Intelligence team, for refusing to withdraw an unpublished research paper she co-authored on the dangers of large language models (i.e., bias, costs, and deception). The incident became a public controversy: over 7,000 people signed a letter condemning her firing.

In December of 2021, Dr. Gebru launched the Distributed Artificial Intelligence Research Institute to research the effects of AI on marginalized communities.

Dr. Gebru’s 2020 Talk

In 2020, Dr. Gebru gave a talk as part of a Tutorial on Fairness, Accountability, Transparency, and Ethics (FATE) in Computer Vision at the Conference on Computer Vision and Pattern Recognition (CVPR).

In this talk, I really appreciated Dr. Gebru's honest description of the lack of diversity and representation in artificial intelligence. I also found it interesting that she feels there have been improvements in diversity in the machine learning field, but that computer vision is not quite there yet. As a woman in STEM, I have definitely experienced being one of only a few women in a class, at a conference, or at other events. Even in my Conservation Planning class this semester (a technical geography course based in Python), I am one of two women in the class. While I have experienced sexism, I have obviously never experienced intersectional discrimination, and I value Dr. Gebru's candor in describing her experiences.

After speaking about her own experiences, Dr. Gebru discusses what computer vision is and how it has the potential to disproportionately harm marginalized communities. She describes the racially biased ways facial recognition is used to screen potential employees. She urges us to consider questions we have already engaged with a bit in this course, such as who produces these technologies and which groups they serve or leave out. She brings up an important quote from Mimi Onuoha, which describes how "Every data set that involves people implies subjects and objects… it is imperative to remember that on both sides we have human beings." She discusses how, in ML and a variety of other fields, we do not see diversity in datasets, but she notes that even in cases where we do, diverse datasets do little good if we do not think critically about how we acquire them.

My main takeaway, and one of the most important points Dr. Gebru discusses, is that making something fair is not equivalent to making it the same for everyone. Fairness is not about math or statistics, but about society and social structure. Even tools that are inclusive of multiple communities can still do harm, and the tools we rely on, such as those in computer vision, disproportionately harm marginalized communities on a national scale.

Question

At the end of your talk on fairness in computer vision at CVPR, you spoke about technology that harms and marginalizes various communities, as well as examples of refusal as a strategy for combating these harms. Since you gave this talk, have you seen improvements in diversity in the field of computer vision, or further examples of resistance to harmful technologies in the AI world?

Part 2: Dr. Gebru’s Talk “At” Middlebury

On April 24th, Dr. Gebru gave a talk, "Eugenics and the Promise of Utopia through Artificial General Intelligence," in which she explained and problematized the vision of "utopia" promised by AGI and called for more accountability in the field.

Dr. Gebru opened her talk by discussing Artificial General Intelligence (AGI) and its connection to 20th-century eugenics. She began by giving examples of the ways people at the forefront of this technology are trying to create an all-powerful god, and went on to describe second-wave eugenics as the desire to improve the human stock by engineering more intelligent people and children. Dr. Gebru then explained some of the primary tenets of TESCREAL, focusing on transhumanism, the aim to transcend humanity altogether in order to fulfill our predestined potential. Beginning in the 1990s, AI research built on this idea, with some scientists aiming to create "posthumans," a new, superior species, leaving "legacy humans" behind, or perhaps destroying them for the greater good. Dr. Gebru shared tweets and papers explaining the views of proponents of AGI and its connections to eugenics, mainly through the idea of a utopia reserved for the best and brightest.

After discussing some of the letters of TESCREAL, Dr. Gebru explained the history of AGI and the extent of its current funding (over $10 billion), and how this funding has led large companies to create larger and larger language models, each one claiming it will get us to this AGI utopia. Importantly, she discussed some of the hidden, negative aspects of this kind of technology: data theft, the use of other models such as Google Translate to generate results (equivalent to training on the test data), and the scraping of numerous other datasets without authentic sourcing. These practices divert funding away from smaller organizations that are doing good work and specialize in a particular area (African languages, for example). She also talked about the environmental costs of these large models, and the human cost for the workers abroad who moderate horrific content for little pay, all to help concentrate power in the hands of a few.

Dr. Gebru concluded her talk by noting that when AGI scientists, billionaires, and other supporters speak about the impending utopia, they discuss it as if it were science fiction, with machines becoming "all-powerful" and escaping human control. She warned the audience that these machines are not hypothetical and uncontrollable; real people build them and deploy them. By framing the AGI craze as a fantastical revolution, we take accountability away from the people and corporations responsible for the technology and for the harm it inflicts. She ultimately called for more regulation of the tech field, for smaller, more focused models, and for collective action.

Overall, I agree with Dr. Gebru's points. Before this talk, I knew nothing about either the negative human and environmental effects of large models or AGI and its utopian promises. I found Dr. Gebru's inclusion of tweets and scientific papers describing AGI's promises rather frightening, and while I had hoped for a little more clarification of the links between 20th-century eugenics and AI, her conclusions definitely resonated with me. I agree that, beyond speculation about whether we are heading toward a future where only the brightest will survive as transhumans, the large corporations profiting from the increasing ubiquity of their models must be held accountable for the real and serious negative effects of their work.

Part 3: Reflect on the Process

I really enjoyed interacting with Dr. Gebru and learning about her work. I learned a lot about AI from her evening presentation, and even more about the industry and equity during our in-class conversation with her earlier in the day. Excitingly, I had the opportunity to ask Dr. Gebru the question I outlined above, about whether equity in computer vision has increased and about further examples of resistance to harmful tech. Dr. Gebru answered the first part of the question with a definitive no. She shared an anecdote about how the machine learning conference now known as NeurIPS used to be called the NIPS conference, with nips.com leading people to a porn site. She talked about rampant harassment at this conference, and how people received death threats after calling for a name change. She said that because every change in the field is so contentious, it sometimes seems hardly worth it to fight for one. I found the history of the conference appalling, and it saddens me that even someone as inspiring and driven as Dr. Gebru feels that change is almost impossible to achieve in this field.

Nonetheless, Dr. Gebru spoke about some initiatives of resistance to harmful technologies, such as Glaze, a tool built by a group at the University of Chicago to protect artists' work from being mimicked by big AI models. I am curious to learn about more companies doing good with computer vision and AI; if I eventually work in these fields, I hope to work for a company that promotes social justice and resistance.

Finally, like many of my classmates, I was inspired by Dr. Gebru's reflection on imposter syndrome. She expressed her frustration with the endless ability of unqualified men to achieve, noting that "imposter syndrome" is the wrong term for the feeling of underqualification that women and minorities experience. Instead, this feeling stems from the conscious efforts of people who do not value your work or expertise, perpetuated through systems of sexism and racism. This answer served as a reminder to trust my experience and knowledge, and to always seek out supportive communities.

Thank you Dr. Gebru!

Link to the post on GitHub: https://github.com/madgallop/madgallop.github.io/blob/main/posts/gebru_blog/GebruBlogpost.ipynb