
Luke Hewitt


I’m a computational cognitive scientist researching persuasion and influence. Most recently I’ve worked on evaluating AI persuasion for OpenAI, UK AISI, and Transluce. For my PhD I studied generative models and political persuasion at MIT, then worked on simulating human experiments at Stanford.

I’m currently organizing the first Workshop on AI, Manipulation and Information Integrity at IASEAI 2026. Submit an abstract by Jan 10, or join the Apart Research Hackathon on Jan 9!


Research

Recent highlights: Predicting results of social science experiments using large language models; The levers of political persuasion with conversational AI; How experiments help campaigns persuade voters

→ DeliberationBench: A normative benchmark for the influence of LLMs on users’ views Hewitt et al. (IASEAI, 2026)

→ The levers of political persuasion with conversational AI Hackenburg et al. (Science, 2025)

→ How will advanced AI systems impact democracy? Summerfield et al. (Nature Human Behaviour, 2025)

→ Outcome-based Reinforcement Learning to Predict the Future Turtel et al. (TMLR, 2025)

→ Predicting results of social science experiments using large language models Hewitt*, Ashokkumar* et al. (in review)

→ Encouraging vaccination using the creativity and wisdom of crowds Tappin et al. (in review)

→ Large language models are more persuasive than incentivized human persuaders Schoenegger et al. (in review)

→ The impact of AI message-testing on public discourse Hewitt (IASEAI 2025)

→ Quantifying the returns to persuasive message-targeting using a large archive of campaigns’ own experiments Tappin, Hewitt, Coppock (APSA, 2024)

→ GPT-4o System Card: Persuasion OpenAI (2024)

→ How experiments help campaigns persuade voters: evidence from a large archive of campaigns’ own experiments Hewitt et al. (APSR, 2024)

→ Using survey experiment pre-testing to support future pandemic response Tappin and Hewitt (PNAS Nexus, 2024)

→ Listening with generative models Cusimano et al. (Cognition, 2024)

→ Quantifying the persuasive returns to political microtargeting Tappin et al. (PNAS, 2023)

→ Emotion prediction as computation over a generative Theory of Mind Houlihan et al. (Phil. Trans. A, 2023)

→ DreamCoder: growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning Ellis et al. (Phil. Trans. A, 2023)

→ Rank-heterogeneous effects of political messages: Evidence from randomized survey experiments testing 59 video treatments Hewitt et al. (working paper)

→ Hybrid memoised wake-sleep: Approximate inference at the discrete-continuous interface Le et al. (ICLR, 2022)

→ DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning Ellis et al. (PLDI, 2021)

→ Estimating the Persistence of Party Cue Influence in a Panel Survey Experiment Tappin et al. (JEPS, 2021)

→ Learning to learn generative programs with memoised wake-sleep Hewitt et al. (UAI, 2020)

→ Inferring structured visual concepts from minimal data Qian et al. (CogSci, 2019)

→ Learning to infer program sketches Nye et al. (ICML, 2019)

→ The Variational Homoencoder: Learning to learn high capacity generative models from few examples Hewitt et al. (UAI, 2018)

→ Auditory scene analysis as Bayesian inference in sound source models Cusimano et al. (CogSci, 2017)

CV

  • Research Fellow, Transluce
  • AI safety consulting, OpenAI
  • AI safety consulting, UK AI Security Institute
  • Co-founder, Rhetorical Labs; Fellow, Future of Life Foundation; Member, South Park Commons
  • Senior Research Fellow, Stanford
  • Research data scientist, Swayable (persuasion measurement and national opinion polling)
  • PhD in Computational Cognitive Science, MIT; MEng in Mathematical Computation, UCL