Cornell AI History Lecture

  • Date & context: Guest lecture at Cornell, May 2, 2025, focusing on AI as a new form of leverage.
  • Perception of change: Humans easily miss slow, cumulative shifts; because AI’s progress, though rapid, unfolds over years, it is often underestimated.
  • Working definition of leverage: Any mechanism where a small (or unchanged) input yields a disproportionately larger output.
  • Three classical leverage types (Naval Ravikant):
    • Human labor – hiring more people.
    • Capital – using money to control bigger assets (e.g., mortgages; see the worked example after these notes).
    • Code / Media – software or content that scales at near-zero marginal cost.
  • Competitive erosion: Once a leverage source becomes commonplace (e.g., YouTube channels today), excess returns shrink; new leverage waves create the next outsized opportunities.
  • AI as compound leverage:
    • Acts like human labor (agents do tasks for you) and like code (replicable at near-zero marginal cost).
    • Represents a rare “fresh” leverage class with huge, still-uncrowded upside.
  • Individual-level impact:
    • Learning tutor: GPT-style models tailor explanations, collapsing barriers to mastering new fields.
    • Skill scarcity shifts: When learning is cheap, curiosity and the discipline to explore become the scarce, valuable traits.
  • Team & startup dynamics: Super-powered individuals + AI agents let tiny teams create enterprise-level output, reducing the need for large headcounts and the coordination drag they bring.
  • Societal/scientific leverage:
    • Today’s science is bottlenecked by complexity and fragmented expertise.
    • AI can “wrap” disparate specialist knowledge, synthesizing insights we’ve never combined; this untapped potential is an existing knowledge overhang.
    • Future models with stronger reasoning may generate novel hypotheses and experiments, becoming a 24/7 research engine.
  • Call to action: Re-examine how large the coming shift could be; many still underestimate AI’s leverage and the opportunities (or risks) that follow.
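
For the capital bullet above, here is a quick worked illustration of the “small input, larger output” definition. The numbers are hypothetical (not from the lecture) and ignore interest and fees:

```latex
% Hypothetical mortgage arithmetic (illustrative numbers, not from the lecture):
% a down payment E = \$100{,}000 controls a home worth A = \$500{,}000.
% (Requires amsmath for \text.)
\[
  \text{leverage} = \frac{A}{E} = \frac{500{,}000}{100{,}000} = 5
\]
% Moves in the asset are amplified 5x on the equity (ignoring interest and fees):
% a 10\% rise in the home's value is a 50\% return on the down payment.
\[
  \text{equity return} = \text{leverage} \times \text{asset return} = 5 \times 10\% = 50\%
\]
```

The same shape, a fixed input commanding a multiplied output, is what the lecture argues AI now offers for labor and code simultaneously.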

Presenter

  • Hyung Won Chung (정형원) – Research Scientist at OpenAI specializing in reasoning and AI agents.
  • Foundational contributor to o1-preview (Sep 2024), o1 (Dec 2024), and Deep Research (Feb 2025).
  • Formerly at Google Brain, where he worked on large-scale training systems (T5X), PaLM, and the Flan-T5/Flan-PaLM model families; he earned his PhD at MIT.
  • Originally from South Korea; currently based in Mountain View, CA.