456 Episodes

  1. Deep Learning is Not So Mysterious or Different

    Published: 6/25/2025
  2. AI Agents Need Authenticated Delegation

    Published: 6/25/2025
  3. Probabilistic Modelling is Sufficient for Causal Inference

    Published: 6/25/2025
  4. Not All Explanations for Deep Learning Phenomena Are Equally Valuable

    Published: 6/25/2025
  5. e3: Learning to Explore Enables Extrapolation of Test-Time Compute for LLMs

    Published: 6/17/2025
  6. Extrapolation by Association: Length Generalization Transfer in Transformers

    Published: 6/17/2025
  7. Uncovering Causal Hierarchies in Language Model Capabilities

    Published: 6/17/2025
  8. Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers

    Published: 6/17/2025
  9. Improving Treatment Effect Estimation with LLM-Based Data Augmentation

    Published: 6/17/2025
  10. LLM Numerical Prediction Without Auto-Regression

    Published: 6/17/2025
  11. Self-Adapting Language Models

    Published: 6/17/2025
  12. Why in-context learning models are good few-shot learners?

    Published: 6/17/2025
  13. Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina

    Published: 6/14/2025
  14. The Logic of Machines: The AI Reasoning Debate

    Published: 6/12/2025
  15. Layer by Layer: Uncovering Hidden Representations in Language Models

    Published: 6/12/2025
  16. Causal Attribution Analysis for Continuous Outcomes

    Published: 6/12/2025
  17. Training a Generally Curious Agent

    Published: 6/12/2025
  18. Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q’s

    Published: 6/12/2025
  19. Strategy Coopetition Explains the Emergence and Transience of In-Context Learning

    Published: 6/12/2025
  20. Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

    Published: 6/11/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.