Best AI papers explained
A podcast by Enoch H. Kang
456 Episodes
- LLM Feedback Loops and the Lock-in Hypothesis
  Published: 4/27/2025
- Representational Alignment Drives Effective Teaching and Learning
  Published: 4/27/2025
- Adaptive Parallel Reasoning with Language Models
  Published: 4/27/2025
- AI: Rewiring the Flow of Ideas and Human Knowledge
  Published: 4/27/2025
- Learning and Equilibrium with Ranking Feedback
  Published: 4/27/2025
- Designing Human-AI Collaboration: A Sufficient-Statistic Approach
  Published: 4/27/2025
- GOAT: Generative Adversarial Training for Human-AI Coordination
  Published: 4/27/2025
- π0.5: Generalization in Robotic Manipulation via Diverse Data
  Published: 4/27/2025
- NoWag: Unified Compression for Large Language Models
  Published: 4/26/2025
- Optimal Tool Calls in Language Model Reasoning
  Published: 4/26/2025
- Data Selection for Empirical Risk Minimization
  Published: 4/26/2025
- LoRe: Low-Rank Reward Modeling for Personalized LLMs
  Published: 4/26/2025
- ParaPO: Reducing Language Model Verbatim Reproduction
  Published: 4/26/2025
- Test-Time RL: Self-Evolving LLMs via Majority Voting Rewards
  Published: 4/25/2025
- Tina: Tiny LoRA Reasoning Models
  Published: 4/25/2025
- Evaluating large language models in theory of mind tasks
  Published: 4/25/2025
- QUEST: Quality Sampling for Machine Translation
  Published: 4/24/2025
- Offline Preference Learning via Simulated Trajectory Feedback
  Published: 4/24/2025
- Reasoning Elicitation in Language Models via Counterfactual Feedback
  Published: 4/24/2025
- Eliciting Human Preferences with Language Models
  Published: 4/24/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.