Schedule

This page lists the schedule for all meetings; topics, slides, and readings are posted here.

The general outline:

  • Week 1–2: Emergent phenomena in LLMs
  • Week 3–4: Mathematical structures of LLMs
  • Week 5–6: Statistical techniques for LLMs
  • Week 7: Case studies in domain applications

Legend:
- 📄 Reading, 🖥️ Slides, 🧪 Notebook/code, 👥 Discussion slides

Weeks

| Week | Date | Topic | Materials | Discussants |
|------|--------|-------|-----------|-------------|
| 1 | Jan 20 | Intro + transformer basics | 🖥️ L01 | Matthias Katzfuß and Maja Waldro |
| 1 | Jan 22 | Emergent abilities, prompting and in-context learning | 🖥️ L02 | Jack Sperling and Brendan Joyce |
| 2 | Jan 27 | Out-of-distribution generalization, induction heads | 🖥️ L03 | Samuel Yeh and Eva Song |
| 2 | Jan 29 | Chain-of-thought reasoning, reinforcement learning | 🖥️ L04 | Paul Kantor and Zhiqi Gao |
| 3 | Feb 3 | Linear representation hypothesis, feature superposition | 🖥️ L05 | Ishita Kakkar and Sam Baumohl |
| 3 | Feb 5 | Sparsity and low-rankness | 🖥️ L06 | Yupeng Zhang and Peter Zhao |
| 4 | Feb 10 | Layerwise structures of embeddings | 🖥️ L07 👥 D07 | Bofeng Cao and Shixiao Liang |
| 4 | Feb 12 | Reasoning trace and self-reflection | 🖥️ L08 | Dian Jin and Keran Chen |
| 5 | Feb 17 | PCA and factor analysis (steering, model editing, interpretability) | 🖥️ L09 | Fuxin Wang and Jiaqi Tang |
| 5 | Feb 19 | Dictionary learning, SAE (feature interpretability) | 🖥️ L10 | Sifan Tao and Jin Mu |
| 6 | Feb 24 | Causal tracing and circuits (attribution, interpretability) | 🖥️ L11 | Terence Wang and Jiaxin Ye |
| 6 | Feb 26 | Sensitivity analysis, influence functions (perturbation, visualization) | 🖥️ L12 | Quoc Viet Le and Shien Zhu |
| 7 | Mar 3 | Genomics foundation models | 🖥️ L13 | Zhexuan Liu |
| 7 | Mar 5 | Proof formalization, singular learning theory | | Lark Song and Cheng Chen |

Notes

  • The schedule may shift; the table above will be updated frequently.
  • Some illustrative figures and tables in the slides are AI-generated; please use them with caution.