Luke Hewitt
I’m a computational cognitive scientist. My research measures what influences people’s beliefs, with applications to AI safety, effective public communication, and social science methodology.
Currently:
- AI safety consultant, UK AI Security Institute (evaluating AI persuasion capabilities)
- Research Fellow, Transluce (understanding AI persuasion & manipulation behaviors)
- Senior Research Fellow, Stanford (using AI to simulate human experiments)
Previously:
- AI safety consultant, OpenAI (GPT-4o persuasion evaluation)
- Research data scientist, Swayable (persuasion measurement & national opinion polling)
- Co-founder, Rhetorical Labs; Fellow, Future of Life Foundation; Member, South Park Commons
- PhD in AI / Cognitive Science, MIT; MEng in Mathematical Computation, UCL
Research
→ The levers of political persuasion with conversational AI Hackenburg et al. (in review)
→ Encouraging vaccination using the creativity and wisdom of crowds Tappin et al. (in review)
→ Outcome-based Reinforcement Learning to Predict the Future Turtel et al. (working paper)
→ Large language models are more persuasive than incentivized human persuaders Schoenegger et al. (in review)
→ The impact of AI message-testing on public discourse Hewitt (IASEAI, 2025)
→ Quantifying the returns to persuasive message-targeting using a large archive of campaigns’ own experiments Tappin, Hewitt, Coppock (APSA, 2024)
→ How will advanced AI systems impact democracy? Summerfield et al. (in review)
→ Predicting results of social science experiments using large language models Hewitt*, Ashokkumar* et al. (in review)
→ GPT-4o System Card: Persuasion OpenAI (2024)
→ How experiments help campaigns persuade voters: evidence from a large archive of campaigns’ own experiments Hewitt et al. (APSR, 2024)
→ Using survey experiment pre-testing to support future pandemic response Tappin and Hewitt (PNAS Nexus, 2024)
→ Listening with generative models Cusimano et al. (Cognition, 2024)
→ Quantifying the persuasive returns to political microtargeting Tappin et al. (PNAS, 2023)
→ Emotion prediction as computation over a generative Theory of Mind Houlihan et al. (Phil. Trans. A, 2023)
→ DreamCoder: growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning Ellis et al. (Phil. Trans. A, 2023)
→ Rank-heterogeneous effects of political messages: Evidence from randomized survey experiments testing 59 video treatments Hewitt et al. (working paper)
→ Hybrid memoised wake-sleep: Approximate inference at the discrete-continuous interface Le et al. (ICLR, 2022)
→ DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning Ellis et al. (PLDI, 2021)
→ Estimating the Persistence of Party Cue Influence in a Panel Survey Experiment Tappin et al. (JEPS, 2021)
→ Learning to learn generative programs with memoised wake-sleep Hewitt et al. (UAI, 2020)
→ Inferring structured visual concepts from minimal data Qian et al. (CogSci, 2019)
→ Learning to infer program sketches Nye et al. (ICML, 2019)
→ The Variational Homoencoder: Learning to learn high capacity generative models from few examples Hewitt et al. (UAI, 2018)
→ Auditory scene analysis as Bayesian inference in sound source models Cusimano et al. (CogSci, 2017)