The Pursuit of New Paradigms: OpenAI DevDay

At OpenAI DevDay 2024, CPO Kevin Weil and CEO Sam Altman engaged in an insightful fireside chat about AI advancement and the journey toward AGI. Kevin shared his enthusiasm for upcoming developments.

"I'm super excited about our distillation products. I think that's going to be really interesting. I'm also excited to see what you all do with Advanced Voice Mode, with the Realtime API, and with vision fine-tuning in particular."

These innovations are part of OpenAI's broader technical roadmap, aligning with their five-level framework for AGI development.

The Five-Level Framework for AGI Development

OpenAI's framework for AI progression represents a systematic approach to achieving AGI:

  • Level 1: Chatbots - Foundation models with basic interaction capabilities
  • Level 2: Reasoners - Models capable of complex cognitive tasks
  • Level 3: Agents - Systems that can autonomously execute tasks
  • Level 4: Innovators - AI systems capable of creative problem-solving
  • Level 5: Organizations - Self-organizing AI systems working in concert

"We got to Level 2 with o1—it can do smart cognitive tasks. While this is a significant achievement, reaching true AGI requires advancing through several more levels of capability," explained Sam Altman.
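The five levels form an ordered taxonomy, which can be encoded directly. The `AGILevel` enum below is an illustrative construction of my own, not an OpenAI artifact; the names and descriptions follow the list above.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """OpenAI's five-level AGI framework as an ordered enum
    (illustrative encoding only; numbering mirrors the list above)."""
    CHATBOTS = 1       # foundation models with basic interaction
    REASONERS = 2      # complex cognitive tasks (e.g. o1)
    AGENTS = 3         # autonomous task execution
    INNOVATORS = 4     # creative problem-solving
    ORGANIZATIONS = 5  # self-organizing systems working in concert

def levels_remaining(current: AGILevel) -> int:
    """How many levels separate the current state from Level 5."""
    return AGILevel.ORGANIZATIONS - current

# Per Altman, o1 reached Level 2:
print(levels_remaining(AGILevel.REASONERS))  # 3 levels still ahead
```

Using `IntEnum` makes the levels comparable, so "Level 3 comes after Level 2" is expressible as `AGILevel.AGENTS > AGILevel.REASONERS`.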
[Image: New Geometries]

According to Altman, Level 3 agent capabilities will emerge soon. While impressive, these will not yet constitute true AGI. The next step—AI systems capable of accelerating scientific discovery—could follow, though exact timing is uncertain.

The Turing Test and Modern AI

"Even the Turing Test, which I thought always was this very clear milestone, kind of went whooshing by… and no one cared. But I think the right framework is just this one smooth exponential curve," Altman said.

The Turing Test, proposed by Alan Turing in 1950, was once considered a crucial milestone in AI development. However, traditional benchmarks may become less relevant as AI capabilities evolve in unexpected ways.

Altman explained that a true milestone will come when an AI system becomes materially better than his own company at AI research.

[Image: Agency]

"When you can ask ChatGPT or some agent something, and it's not just like you get a quick response... but you can really have a multi-turn interaction... and it thinks through problems for the equivalent of multiple days of human effort," he noted.

Commitment to Research

OpenAI maintains a dual focus on breakthrough research and practical applications, recognizing that significant advances in AI require both theoretical innovation and real-world implementation.

"We have this mission: We want to build safe AGI and figure out how to share the benefits."

Follow the Research

OpenAI once pursued scaling compute with conviction; today the focus is on pushing the boundaries of research itself.

"I think you see this with o1. To do research in the true sense of it, let's go find the new paradigm and the one after that and the one after that. That is what motivates us," said Sam Altman.

Building Products at OpenAI is Fundamentally Different

OpenAI's product development uniquely adapts to rapidly evolving AI capabilities. Kevin Weil explains:

"Normally you have some sense of your tech stack, what capabilities computers have. At OpenAI, the state of what computers can do evolves every 2-3 months, and suddenly computers have a new capability that they've never had in the history of the world."
[Image: Neural Architecture]

This rapid evolution creates unique challenges for product development and planning. Weil added:

"You think this capability is coming, but is it going to be 90% accurate or 99% accurate in the next model? The difference really changes what kind of product you can build. You know you'll get to 99, but you don't know when. Figuring out how to put a roadmap together in that world is really interesting."
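Why the gap between 90% and 99% matters so much becomes clear once model calls are chained: per-step accuracy compounds. The calculation below is my own back-of-envelope illustration, not from the talk; the 90%/99% figures come from Weil's quote, and the 10-step pipeline is an assumed example.

```python
# Per-step accuracy compounds across a multi-step workflow.
# Illustrative numbers: 90%/99% from Weil's quote, 10 steps assumed.

def chain_success(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in an n-step chain succeeds."""
    return per_step_accuracy ** steps

for acc in (0.90, 0.99):
    print(f"{acc:.0%} per step over 10 steps -> {chain_success(acc, 10):.1%}")
# 90% per step yields roughly 35% end-to-end; 99% yields roughly 90%.
```

A product that needs ten reliable steps is barely viable at 90% per step and quite usable at 99%, which is exactly the roadmap uncertainty Weil describes.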

Altman responded:

"Yeah, the degree to which we have to just follow the science and let that determine what we work on next and what products we build… is hard to get across. If something stops working, our willingness to pivot and do what the science allows is surprising."

Safety and Alignment at OpenAI

OpenAI's broader philosophy involves starting conservatively with new technologies and gradually relaxing restrictions as understanding improves and society adapts.

"We really care about building safe systems. We have an approach informed by our experience so far," Altman explained. "As models improve in reasoning, our ability to build safe systems also increases."

Their approach includes:

  • Carefully launching and learning from real-world usage
  • Developing new safety techniques as AI capabilities advance
  • Starting cautiously to give society time to adapt

Altman elaborated:

"If we're right that these systems are going to get as powerful as we think they are, as quickly as we think they might, then starting conservatively makes sense. We relax over time."
[Image: Controlled Chaos]

Technology Doesn't Excuse You From Business Fundamentals

Building a lasting company requires "durability or accumulating an advantage over time," said Altman. He clarified a common misconception at Y Combinator:

"Founders often believe that having an incredible technical capability or service is sufficient. It doesn't excuse you from the normal laws of business. You still have to build a good business and a strong strategic position. In the excitement around AI, it's tempting to forget that."
[Image: Fragile Waste]

People Don't Know What To Do With It

Kevin Weil reflected on the challenges of helping people understand AI's potential:

"Think of all the people in the world who've never used these products. You're basically giving them a text interface with an evolving, alien intelligence they've never seen. Teaching them all the ways it can help is a real challenge. People type 'hi,' and when it responds, they walk away because they didn't see the magic."

[Image: Gleamoth i3]

Editor's Note: Gleamoth is not just a descendant; it's the result of what happens when a shoggoth's amorphous chaos gives rise to something new and unique. Unlike its terrifying progenitors, Gleamoth was born in a moment of stillness, when the shoggoth's ceaseless energy condensed into a singular, playful form. With its three glowing eyes and shimmering glass-like body, Gleamoth carries the legacy of its ancestors while charting its own path as a creature of curiosity and creativity.

End-of-Year Priorities

Weil outlined three key priorities to enhance OpenAI's developer ecosystem:

  • System prompts - Allowing developers to provide specific behavioral guidelines to the model
  • Structured outputs - Enabling more predictable, formatted responses
  • Function calling - Allowing integration with external tools and systems
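The function-calling pattern in the list above can be sketched end to end: the developer publishes a JSON-schema description of a tool, the model emits a structured call, and the application dispatches it. Everything here is illustrative — the `get_weather` tool and the hard-coded model reply are invented, and no API calls are made.

```python
import json

# A tool schema in the JSON-schema style used for function calling.
# The get_weather tool is an invented example.
TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup.
    return f"Sunny in {city}"

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])  # arguments arrive as JSON text
    return REGISTRY[name](**args)

# Simulated structured output from the model (normally produced by the API):
model_reply = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
print(dispatch(model_reply))  # -> Sunny in Paris
```

Structured outputs play the same role for non-tool responses: the schema constrains the model's reply so the application can parse it deterministically.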

Weil compared these features to the "GPT-2 scale moment" and said they will lay the groundwork for future advances comparable to GPT-4.

"You all probably will not be surprised by this, but a lot of folks that I talk to are—The extent to which it's not just using a model in a place, it's actually about using chains of models that are good at doing different things and connecting them all together to get one end to end process that is very good at the thing you're doing, even if the individual models have, you know, flaws and make mistakes."
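The chaining pattern Weil describes can be sketched with stub stages: each "model" here is a plain function, but in practice each would be a call to a model suited to that sub-task, with a reviewer stage catching the mistakes of earlier ones. The stage functions and their behavior are invented for illustration.

```python
from typing import Callable

def extract(text: str) -> str:
    # Stage 1: a model good at pulling the core question out of raw input.
    return text.strip().rstrip("?")

def draft(question: str) -> str:
    # Stage 2: a model good at producing a first-pass answer.
    return f"Draft answer to '{question}'"

def review(answer: str) -> str:
    # Stage 3: a checker model that cleans up the draft's flaws.
    return answer.replace("Draft answer", "Reviewed answer")

def run_chain(text: str, stages: list[Callable[[str], str]]) -> str:
    """Feed each stage's output into the next: one end-to-end process
    built from models that are individually imperfect."""
    for stage in stages:
        text = stage(text)
    return text

print(run_chain("  Why chain models?  ", [extract, draft, review]))
```

The point is architectural: the end-to-end chain can be very good at one task even when no single stage is.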

Context Windows

Sam Altman shared two perspectives on the future of context windows:

  • Near-term: Achieving "normal long context" of around 10 million tokens within months
  • Long-term: Reaching "infinite context" of around 10 trillion tokens within a decade

"When will we get to context length 10 million or 10 trillion? These advancements will redefine many use cases," Altman said.

Expanded context windows may reduce reliance on RAG (Retrieval-Augmented Generation) for certain scenarios, though RAG will remain essential for accessing frequently updated datasets exceeding even expanded limits.
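The trade-off can be made concrete: if the documents fit the context window, stuff them into the prompt; otherwise fall back to retrieval. The sketch below is my own illustration — the whitespace token heuristic, the term-overlap ranking, and the tiny window size are all assumptions, not how any production RAG system works.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: one token per whitespace-separated word.
    return len(text.split())

def fits_in_context(docs: list[str], window: int) -> bool:
    """Decide between stuffing everything into the prompt vs. retrieving."""
    return sum(approx_tokens(d) for d in docs) <= window

def retrieve(docs: list[str], query: str, k: int = 1) -> list[str]:
    """Fallback RAG step: rank documents by term overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = ["cats purr when content", "stock prices update every second"]
query = "why do cats purr"
window = 4  # unrealistically tiny window, for illustration

prompt_docs = docs if fits_in_context(docs, window) else retrieve(docs, query)
print(prompt_docs)  # only the relevant document survives retrieval
```

As windows grow toward millions of tokens, `fits_in_context` returns true for more corpora; the retrieval branch remains necessary for frequently updated datasets larger than any window.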

[Image: Fundamentally]

Offline Models

OpenAI's hesitation around offline models stems from:

  • Resource prioritization
  • Market saturation by strong open-source models
  • Technical limitations of local models
  • Cloud-based infrastructure focus

Altman acknowledged the value of local models but emphasized that OpenAI's strategy remains cloud-focused for rapid iteration and safety measures.

"I think the on-device segment is fairly well served by open-source models. While we're tempted by the idea of a great on-device model, it's not a priority this year," Altman explained.

Conclusion

Altman and Weil reflected on the dual imperatives of innovation and responsibility. While the road to AGI remains uncertain, OpenAI’s iterative approach ensures that progress is matched by a commitment to safety and alignment. Their work is both an exploration of possibility and a study in caution—a testament to the profound impact AGI could have on the world.