
Generating Hate: Uncovering Bias in AI Language Models

Speaker Series
Assistant Research Director, Center for Technology and Society, ADL
A recent report by the Center for Technology and Society (CTS) at the Anti-Defamation League (ADL) revealed concerning findings about bias in leading large language models (LLMs). The study, "Generating Hate: Anti-Jewish and Anti-Israel Bias in Leading Large Language Models," evaluated four prominent AI systems—OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Meta's Llama—and found potential anti-Jewish and anti-Israel biases in all of them. Notably, Meta's Llama exhibited the most pronounced biases, providing unreliable and sometimes false responses on sensitive topics. The report highlights inconsistencies in how these models handle political and historical subjects, including a higher rate of refusal to answer questions about Israel and a troubling failure to reject antisemitic tropes and conspiracy theories. As these AI systems become embedded in classrooms, workplaces, and society, the findings underscore the urgent need for developers to implement stronger safeguards against bias.
This talk will be hosted via Zoom. Please register to receive the link.
About the Speaker
Dr. Morgan Clark is the Associate Director of Research, Policy, and Advocacy at ADL's Center for Technology and Society. She earned her Ph.D. in Sociology from Northwestern University in 2023, where her dissertation focused on violence against women online. Dr. Clark’s research sits at the intersection of technology, hate, and inequality. At ADL, she leads research that examines how online platforms enable the spread of antisemitism, conspiracy theories, and other forms of identity-based hate.