Luke Hewitt

I work on computational and experimental tools for measuring what changes people’s beliefs and attitudes, with applications to public health communication, effective advocacy, social science methodology, and AI safety. My research combines RCTs, LLMs, expert forecasting, and hierarchical Bayesian models.

Currently:

  • I’m a Senior Research Fellow at Stanford PASCL, where I study the capacity of Large Language Models to predict treatment effects in social/behavioral sciences.
  • I’m co-director of Rhetorical Labs, a research collective that uses RCT experiments and machine learning to help public communication campaigns improve the impact of their messaging.
  • I’m co-PI for the SSRC Mercury Project team on Combatting health misinformation with community-crafted messaging.

Previously:

  • PhD in AI / Cognitive Science at MIT
  • Masters in Mathematical Computation at UCL
  • Research data scientist at Swayable (on RCT experiment/analysis methodology)

Academic research by topic

Persuasion / communication

Political persuasion

  • How experiments help campaigns persuade voters: evidence from a large archive of campaigns’ own experiments (Hewitt et al. 2024)
  • Quantifying the persuasive returns to political microtargeting (Tappin et al. 2022)
  • Rank-heterogeneous effects of political messages: Evidence from randomized survey experiments testing 59 video treatments (Hewitt et al. 2022)
  • Estimating the Persistence of Party Cue Influence in a Panel Survey Experiment (Tappin et al. 2021)

Public health

  • Using in-survey randomized controlled trials to support future pandemic response (Tappin & Hewitt 2024)

Machine learning

Deep generative models

  • Leveraging Large Language Models to Predict Results of Experiments in the Social Sciences (Hewitt*, Ashokkumar* et al., in prep.)
  • The Variational Homoencoder: Learning to learn high capacity generative models from few examples (Hewitt et al. 2018)

Structured generative models

  • Hybrid memoised wake-sleep: Approximate inference at the discrete-continuous interface (Le et al. 2022)
  • Learning to learn generative programs with memoised wake-sleep (Hewitt et al. 2020)

Program synthesis

  • DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning (Ellis et al. 2021)
  • Learning to infer program sketches (Nye et al. 2019)

Cognitive science

Emotion

  • Emotion prediction as computation over a generative Theory of Mind (Houlihan et al. 2023)

Perception

  • Bayesian auditory scene synthesis explains human perception of illusions and everyday sounds (Cusimano et al. 2023)
  • Auditory scene analysis as Bayesian inference in sound source models (Cusimano et al. 2017)

Concept learning

  • Inferring structured visual concepts from minimal data (Qian et al. 2019)