Diving into the New AI Reasoning Models


If you want to dive deeper into the implications of reasoning models like OpenAI's o3-mini (released over the weekend) and DeepSeek R1, I suggest reading Ethan Mollick's Substack post from today.

In the post, he not only delves into the potential impact of these reasoning models but also looks closely at how specialized research agents are starting to show significant promise.

Some highlights:

💡 Reasoning models (reasoners) represent a key breakthrough in two ways:

  • They can be trained on examples of good problem-solving to reason more effectively

  • Their performance improves with longer "reasoning time" rather than only with bigger models

💡The development of AI agents is progressing along two paths:

  • General-purpose agents (like OpenAI's Operator), which show promise but still face significant limitations

  • Narrow, specialized agents that are already proving valuable for specific tasks

💡 OpenAI's new Deep Research demonstrates the power of specialized agents:

  • Can produce PhD-student-level analysis of complex topics (versus university-student-level output from Google's Deep Research)

  • Accurately cites and engages with academic literature that is publicly available

  • Shows "curiosity-driven" research behavior

💡 The future implications are significant:

  • While these tools won't replace experts, they will shift their role toward orchestrating and validating AI-generated work

  • Current narrow applications are already capable of performing work that previously required teams of highly paid experts

  • Classrooms at all levels need to adapt to the rapidly increasing capabilities of GenAI models and tools

Check out Mollick’s complete post on the topic here.
