Schedule

This page lists the schedule for all meetings, along with the topics, slides, and readings for each.

Here is the general outline.

  • Week 1–2: Emergent phenomena in LLMs
  • Week 3–4: Mathematical structures of LLMs
  • Week 5–6: Statistical techniques for LLMs
  • Week 7: Case studies in domain applications

Legend:
- 📄 Reading, 🖥️ Slides, 🧪 Notebook/code, 👥 Discussion initiators

Weeks

Week | Date   | Topic                                                              | Materials | Discussants
1    | Jan 20 | Intro + transformers basics                                        | 🖥️ L01    | Matthias Katzfuß and Maja Waldro
1    | Jan 22 | Emergent abilities, prompting and in-context learning              | 🖥️ L02    | Jack Sperling and Brendan Joyce
2    | Jan 27 | Out-of-distribution generalization, induction heads                | 🖥️ L03    | Samuel Yeh and Eva Song
2    | Jan 29 | Chain-of-thought reasoning, reinforcement learning                 | 🖥️ L04    | Paul Kantor and Zhiqi Gao
3    | Feb 3  | Linear representation hypothesis, feature superposition            | 🖥️ L05    | Ishita Kakkar and Sam Baumohl
3    | Feb 5  | Sparsity and low-rankness                                          | 🖥️ L06    | Yupeng Zhang and Peter Zhao
4    | Feb 10 | Layerwise structures of embeddings                                 | 🖥️ L07    | Bofeng Cao and Shixiao Liang
4    | Feb 12 | Reasoning trace and self-reflection                                | 🖥️ L08    |
5    | Feb 17 | PCA and factor analysis (steering, model editing, interpretability)|           |
5    | Feb 19 | Dictionary learning, SAE (feature interpretability)                |           |
6    | Feb 24 | Causal tracing and circuits (attribution, interpretability)        |           |
6    | Feb 26 | Leave-one-out, influence functions (robustness, sensitivity analysis) |        |
7    | Mar 3  | Genomics foundation models                                         |           |
7    | Mar 5  | Watermarking, memorization                                         |           |

Notes

  • The schedule may shift; the table above will be updated frequently.
  • Some illustrative figures and tables in the slides are AI-generated; please use with caution.