A collection of research papers that shaped how I think about AI agents, LLMs, and distributed systems.
I read papers to understand the fundamentals, not just follow trends. Each paper here taught me something I still use when building agents today.
I started this habit recently and try to read three papers each month.
- Voyager: An Open-Ended Embodied Agent with Large Language Models
LLM-powered agent that writes and stores code as reusable skills. Relevant to sandbox agents.
- ReAct: Synergizing Reasoning and Acting in Language Models
The foundation of how modern coding agents like Claude Code work: interleave reasoning traces with tool-using actions, feeding each observation back into the next step (see the loop sketch after this list).
- Context Engineering 2.0: The Context of Context Engineering
Framework for understanding how machines comprehend human situations and intentions through context.
- Dynamo: Amazon's Highly Available Key-value Store
The paper that popularized eventual consistency and inspired DynamoDB, Cassandra, and Riak.
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Eliciting step-by-step reasoning in the prompt dramatically improves LLM performance on complex tasks (see the prompt example at the end of this list).
- Toolformer: Language Models Can Teach Themselves to Use Tools
How LLMs can learn to use external tools through self-supervised learning.
- Attention Is All You Need
The paper that started it all: introduced the Transformer architecture that modern LLMs are built on.
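
The ReAct pattern is small enough to sketch. Below is a minimal illustration, not the paper's implementation: `call_llm` is a stub standing in for a real model call, and the `calculator` and `lookup` tools are hypothetical stand-ins for whatever tools an agent actually exposes. The point is the Thought → Action → Observation loop, with each observation appended to the transcript before the next model call.

```python
import re

# Hypothetical tools -- stand-ins for whatever a real agent exposes.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only, not for untrusted input
    "lookup": lambda query: f"(stubbed search result for: {query})",
}

def call_llm(prompt: str) -> str:
    """Stub for a real model call; always returns the same canned Thought/Action here."""
    return "Thought: I should compute 12 * 7 before answering.\nAction: calculator[12 * 7]"

def react_step(transcript: str):
    """One ReAct iteration: get a Thought + Action, run the action, return the observation."""
    output = call_llm(transcript)
    match = re.search(r"Action:\s*(\w+)\[(.*)\]", output)
    if match is None:
        return output, None  # no action means the model answered directly
    tool, arg = match.group(1), match.group(2)
    return output, TOOLS[tool](arg)

if __name__ == "__main__":
    transcript = "Question: What is 12 * 7?\n"
    for _ in range(3):  # cap the loop so a stuck model cannot run forever
        output, observation = react_step(transcript)
        transcript += output + "\n"
        if observation is None:
            break
        transcript += f"Observation: {observation}\n"
    print(transcript)
```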
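
Chain-of-thought, by contrast, is just a prompt-format change; the model call itself stays the same. The questions below are illustrative, not the paper's exact benchmark items:

```python
# Direct prompting: ask for the answer with no worked example.
direct_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?\n"
    "A:"
)

# Chain-of-thought prompting: prepend a worked example whose answer spells out its reasoning,
# nudging the model to reason step by step before committing to an answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?\n"
    "A:"
)
```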