456 Episodes

  1. Inference-Time Alignment: Coverage, Scaling, and Optimality (Published: 4/3/2025)
  2. Sharpe Ratio-Guided Active Learning for Preference Optimization (Published: 4/3/2025)
  3. Active Learning for Adaptive In-Context Prompt Design (Published: 4/3/2025)
  4. Visual Chain-of-Thought Reasoning for Vision-Language-Action Models (Published: 4/3/2025)
  5. On the Biology of a Large Language Model (Published: 4/1/2025)
  6. Async-TB: Asynchronous Trajectory Balance for Scalable LLM RL (Published: 4/1/2025)
  7. Instacart's Economics Team: A Hybrid Role in Tech (Published: 3/31/2025)
  8. Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework (Published: 3/31/2025)
  9. Why MCP won (Published: 3/31/2025)
  10. SWEET-RL: Training LLM Agents for Collaborative Reasoning (Published: 3/31/2025)
  11. TheoryCoder: Bilevel Planning with Synthesized World Models (Published: 3/30/2025)
  12. Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI) (Published: 3/29/2025)
  13. Expert Demonstrations for Sequential Decision Making under Heterogeneity (Published: 3/28/2025)
  14. TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization (Published: 3/27/2025)
  15. MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks (Published: 3/27/2025)
  16. RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models (Published: 3/27/2025)
  17. Inductive Biases for Exchangeable Sequence Modeling (Published: 3/26/2025)
  18. InverseRLignment: LLM Alignment via Inverse Reinforcement Learning (Published: 3/26/2025)
  19. Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting (Published: 3/26/2025)
  20. Alignment from Demonstrations for Large Language Models (Published: 3/25/2025)


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.