Amir Tohidi

Research

As a computational social scientist, I study how people interact with information and how these interactions shape public opinion and behavior. My work combines causal inference, machine learning, and experimental design to examine modern information systems—especially the role of media bias. I use large language models (LLMs) to detect and quantify media bias at scale and to design experiments that evaluate its effects on public perception.



Working Papers

Language Models Reveal the Persuasive Power of Biased News
Amir Tohidi, Samar Haider, Duncan Watts

Mainstream media, with its broad reach, plays a central role in shaping public opinion and thus warrants close scrutiny. Subtle forms of media bias—such as selective fact presentation and tone—can meaningfully influence public attitudes, even when reporting remains factually accurate.
We introduce a novel framework that leverages large language models (LLMs) to generate synthetic news articles by systematically varying the selection and tone of content while holding factual accuracy constant.
In a large pre-registered randomized experiment (N = 2,141), we find that selective presentation of accurate information can significantly shift individuals’ policy views and emotional responses across diverse topics. Effects are stronger for negative framings and among less-informed individuals.
Our findings highlight the persuasive power of subtle media bias and demonstrate the potential of LLMs for scalable, controlled investigations of media effects.
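The core causal-inference logic of a randomized experiment like this one can be sketched in a few lines: with random assignment, the difference in mean outcomes between treatment and control estimates the average treatment effect. The sketch below is purely illustrative; the scores and function are made up and are not the paper's actual data or analysis code.

```python
def difference_in_means(treated, control):
    """Estimate the average treatment effect (ATE) as the
    difference in mean outcomes between the two arms."""
    return sum(treated) / len(treated) - sum(control) / len(control)

# Hypothetical post-treatment policy-support scores (0-100) for
# respondents who read a selectively framed vs. a neutral article:
treated = [62, 58, 70, 66]
control = [55, 60, 57, 52]

ate = difference_in_means(treated, control)  # here: 64.0 - 56.0 = 8.0
```

Random assignment is what licenses this simple comparison: it balances observed and unobserved respondent characteristics across arms in expectation.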


Falsehoods Offer No Persuasive Advantage Over Selective Facts
Jennifer Allen, Amir Tohidi, Samar Haider, David Rothschild, Duncan Watts

Concerns about false information masquerading as news are widespread because falsehoods are seen as especially effective at misleading the public. However, factually accurate but selectively framed information can also shape beliefs and attitudes.
We conduct a randomized experiment using LLMs to synthetically generate news articles that vary systematically in frame (positive vs. negative) and veracity (selective facts, exaggerations, or fabrications).
While falsehoods influence opinions, their effects are no greater than those of selective factual presentation. Our results challenge the idea that lies are uniquely persuasive and underscore the significant impact of biased framing using real facts.


The Media Bias Detector: A Framework for Annotating and Analyzing the News at Scale
Samar Haider, Amir Tohidi, Jenny Wang, Timothy Dorr, David Rothschild, Chris Callison-Burch, Duncan Watts

Mainstream media influences public opinion not only through content but also through editorial choices about what to cover and how to frame it.
We introduce a large-scale, near real-time dataset and computational framework for systematically measuring selection and framing bias in news coverage.
Our pipeline combines LLMs with large-scale news scraping to produce structured annotations—including political lean, tone, topic, and event tagging—for hundreds of articles each day.
With over 150,000 articles processed in 2024 alone, this resource enables analysis at the sentence, article, and publisher levels. An accompanying interactive dashboard supports exploration of the data.
The Media Bias Detector establishes a reusable methodology for scalable media bias research and opens new possibilities for academic inquiry and public accountability.
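To make the annotation step concrete, here is a minimal sketch of how an LLM can be prompted for structured article annotations and how its reply can be parsed into a typed record. The schema, field names, and prompt wording are illustrative assumptions for exposition, not the Media Bias Detector's actual pipeline or output format; the model reply is stubbed rather than produced by a real API call.

```python
import json
from dataclasses import dataclass

# Hypothetical annotation schema (illustrative, not the project's real one).
@dataclass
class ArticleAnnotation:
    political_lean: str  # e.g. "left", "center", "right"
    tone: float          # e.g. -1.0 (negative) to 1.0 (positive)
    topic: str

PROMPT_TEMPLATE = (
    "Annotate the following news article. Reply with JSON containing "
    '"political_lean" (left/center/right), "tone" (a number from -1.0 '
    'to 1.0), and "topic".\n\nArticle:\n{article}'
)

def build_prompt(article: str) -> str:
    """Fill the annotation prompt with the article text."""
    return PROMPT_TEMPLATE.format(article=article)

def parse_annotation(llm_response: str) -> ArticleAnnotation:
    """Parse the model's JSON reply into a typed annotation record."""
    data = json.loads(llm_response)
    return ArticleAnnotation(
        political_lean=data["political_lean"],
        tone=float(data["tone"]),
        topic=data["topic"],
    )

# Stubbed model reply standing in for a real LLM call:
reply = '{"political_lean": "center", "tone": -0.3, "topic": "economy"}'
annotation = parse_annotation(reply)
```

Requesting JSON and parsing it into a fixed schema is what makes this kind of annotation aggregable across hundreds of thousands of articles.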