As a computational social scientist, I study how people interact with information and how these interactions shape public opinion and behavior. My work combines causal inference, machine learning, and experimental design to examine modern information systems—especially the role of media bias. I use large language models (LLMs) to detect and quantify media bias at scale and to design experiments that evaluate its effects on public perception.
Publications
- Trends of Violence in Movies During the Past Half Century
JAMA Pediatrics
Babak Fotouhi, Amir Tohidi, Rouzbeh Touserkani, Brad Bushman
Link
- Divergence Between Predicted and Actual Perception of Climate Information
PNAS Nexus
Amir Tohidi, Stefano Balietti, Sam Fraiberger, Anca Balietti
Link
- Designing and Implementing a Tool for Real-Time Selection and Framing Bias Analysis in News Coverage
ACM Conference on Human Factors in Computing Systems (CHI)
Jenny Wang, Samar Haider, Amir Tohidi, Anushkaa Gupta, Yuxuan Zhang, Chris Callison-Burch, David Rothschild, Duncan Watts
Link
- Habits in Consumer Purchases: Evidence from Store Closures
Under review, Journal of Consumer Research
Amir Tohidi, Dean Eckles, Ali Jadbabaie
Link
- Native Ads and the Credibility of Online Publishers
Revise and Resubmit, Journal of Interactive Marketing
Amir Tohidi, Manon Revel, Dean Eckles, Adam Berinsky, Ali Jadbabaie
Link
Conferences and Seminars
- IC2S2, Norrköping, Sweden, July 2025
- Social and Political Economy Conference: Polarization in the Age of AI and the Post-Truth Era, Johns Hopkins University, May 2025
- Conference on Digital Experimentation at MIT (CODE), Oct 2024
- IC2S2, Philadelphia, July 2024
- INFORMS Annual Meeting, Phoenix, Oct 2023
- Marketing Colloquia, Wharton School, Nov 2022
- INFORMS Annual Meeting, Indianapolis, Oct 2022
- Marketing Seminar, MIT, Feb 2022
- Behavioral Research Lab Seminar, MIT, Feb & April 2022
- Social Analytics Lab, MIT, Feb & April 2022
- IC2S2, ETH Zurich, July 2021
- Marketing Seminar, MIT, Sep 2020
- IC2S2, MIT, July 2020
- Political Methodology (PolMeth) Conference, June 2019
Working Papers
Language Models Reveal the Persuasive Power of Biased News
Amir Tohidi, Samar Haider, Duncan Watts
Mainstream media, with its broad reach, plays a central role in shaping public opinion and thus warrants close scrutiny. Subtle forms of media bias—such as selective fact presentation and tone—can meaningfully influence public attitudes, even when reporting remains factually accurate.
We introduce a novel framework that leverages large language models (LLMs) to generate synthetic news articles by systematically varying the selection and tone of content while holding factual accuracy constant.
In a large pre-registered randomized experiment (N = 2,141), we find that selective presentation of accurate information can significantly shift individuals’ policy views and emotional responses across diverse topics. Effects are stronger for negative framings and among less-informed individuals.
Our findings highlight the persuasive power of subtle media bias and demonstrate the potential of LLMs for scalable, controlled investigations of media effects.
Falsehoods Offer No Persuasive Advantage Over Selective Facts
Jennifer Allen, Amir Tohidi, Samar Haider, David Rothschild, Duncan Watts
Concerns about false information masquerading as news are widespread because falsehoods are seen as especially effective at misleading the public. However, factually accurate but selectively framed information can also shape beliefs and attitudes.
We conduct a randomized experiment using LLMs to synthetically generate news articles that vary systematically in frame (positive vs. negative) and veracity (selective facts, exaggerations, or fabrications).
While falsehoods influence opinions, their effects are no greater than those of selective factual presentation. Our results challenge the idea that lies are uniquely persuasive and underscore the significant impact of biased framing using real facts.
The Media Bias Detector: A Framework for Annotating and Analyzing the News at Scale
Samar Haider, Amir Tohidi, Jenny Wang, Timothy Dorr, David Rothschild, Chris Callison-Burch, Duncan Watts
Mainstream media influences public opinion not only through content but also through editorial choices about what to cover and how to frame it.
We introduce a large-scale, near real-time dataset and computational framework for systematically measuring selection and framing bias in news coverage.
Our pipeline combines LLMs with large-scale news scraping to produce structured annotations—including political lean, tone, topic, and event tagging—for hundreds of articles each day.
With over 150,000 articles processed in 2024 alone, this resource enables sentence-level, article-level, and publisher-level analysis. An accompanying interactive dashboard supports exploration.
The Media Bias Detector establishes a reusable methodology for scalable media bias research and opens new possibilities for academic inquiry and public accountability.