Forget Faster, AI Is Thinking: What o3, Gemini 2.5 & Your Mindset Really Mean

A stylized image featuring a human brain and an abstract AI circuit board subtly merging together. A bright lightbulb icon shines above the connection point, symbolizing insight and collaborative thinking.

So, the AI world got another jolt this past week. OpenAI unleashed its o3 and o4-mini models, talking up their "reasoning" skills. Google countered, highlighting Gemini 2.5 Pro as a premier "thinking model." Suddenly, "reasoning" and "agentic AI" are the buzzwords echoing from Silicon Valley.

It feels different this time, doesn't it? Not just faster text generation, but something… more cognitive? More… teammate-like?

A fascinating new working paper, "The Cybernetic Teammate," based on a large field experiment run by researchers from Harvard Business School, Wharton, and P&G, landed around the same time, adding fuel to this fire. It suggests AI can replicate key aspects of human teamwork.

So, what’s the reality here? Is this just the next hype cycle, or are we genuinely entering an era where AI acts less like a tool and more like a collaborator? What did OpenAI and Google actually do that matters to us?

That’s what we’re breaking down today.


THE STORY SO FAR

AI's evolution has been dizzying. We went from basic chatbots to powerful text and image generators like GPT-4o. But the focus was largely on output.

Now, the narrative is shifting:

  • OpenAI (April 16): Launched o3 and o4-mini, emphasizing their ability to "think for longer," reason through steps, use other tools (web search, Python) autonomously, and even incorporate images into their reasoning process.

  • Google (Late March/April): Introduced Gemini 2.5 Pro as a "thinking model" that excels at breaking down complex problems, showcasing state-of-the-art benchmark performance requiring advanced reasoning.

Why the sudden emphasis on thinking and reasoning? Because AI is bumping up against the limits of just predicting the next word. To tackle truly complex, multi-step problems, it needs to strategize, plan, and adapt – more like a human partner.


THE SHIFT: AI AS TEAMMATE (VALIDATED?)

Here’s where the "Cybernetic Teammate" study gets really interesting (check out my video here). The researchers found that AI (specifically GPT-4) didn't just make individual professionals faster; it changed the dynamics of work:

  • Performance Replication: Individuals using AI produced solutions matching the quality of two-person human teams working without AI. It truly augmented their capabilities.

  • Expertise Democratization: AI helped people operate effectively outside their core expertise. An R&D specialist could craft a commercially viable plan, bridging knowledge gaps usually requiring diverse human teams.

  • Surprisingly Positive Social Impact: Contrary to fears of tech-induced isolation, professionals using AI reported more positive emotions and less frustration than when working alone. The conversational nature made it feel more like a supportive interaction.

This research suggests the "AI teammate" idea isn't just a metaphor; AI can fulfill some functional and even social roles traditionally held by human collaborators.



HOW ARE THESE NEW MODELS 'THINKING'?

It’s not magic. These "reasoning" models work by being more methodical. Instead of jumping straight to an answer, they often internally (or sometimes explicitly, if asked) break the problem into smaller steps, evaluate possibilities, maybe even self-correct along the way. Think "Show Your Work" (like we discussed in prompt frameworks) but baked into the model's process.
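To make that "Show Your Work" idea concrete, here is one illustrative way to phrase such a prompt. The wording below is my own example, not an official template from OpenAI or Google:

```python
# An illustrative "Show Your Work" style prompt: it nudges a model to
# decompose the problem, reason in steps, and self-check before answering.
# The structure and wording are an example of the pattern, nothing more.
prompt = (
    "Before giving your final answer, work through the problem in steps:\n"
    "1. Restate the goal in your own words.\n"
    "2. List what you know and what is missing.\n"
    "3. Reason through each sub-problem one at a time.\n"
    "4. Check your reasoning for mistakes.\n"
    "Then give your final answer on its own line, prefixed 'ANSWER:'.\n\n"
    "Problem: A project has three phases lasting 4, 6, and 5 weeks. "
    "If phase two starts 2 weeks before phase one ends, "
    "how long does the whole project take?"
)
print(prompt)
```

The newer "reasoning" models effectively bake this decomposition step into their own process, so you get it even without spelling it out, but being explicit still helps steer what they reason about.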

When OpenAI says o3 can use tools agentically, it means the model can decide during its reasoning process that it needs external info (web search) or data analysis (Python) and then initiate that action itself to continue solving the original problem.


THE CATCH (IT ALWAYS COMES BACK TO US)

So, smarter, "thinking" AI that acts like a teammate? Sounds great. But nothing's free. The catch is that these advanced capabilities place greater demands on us, the human users.

  • Garbage In, Garbage Out (Amplified): A reasoning model given a vague goal will meticulously reason its way to a useless answer. Clear Tasks and rich Context are more critical than ever.

  • The "Agent" Needs a Manager: An AI that can act autonomously still needs oversight. We can't treat it like a "Big Red Button" you push and forget. We need to Refine its work, question its steps, and ultimately own the outcome. Trust, but verify – especially regarding potential hallucinations, bias, or data privacy concerns.

  • Interaction is Key: These models are designed for deeper collaboration. Sticking to simple, "Google Brain"-style commands means leaving their advanced capabilities untapped. We need to engage conversationally, iterate, provide feedback, and use multimodal inputs (like uploading images for the AI to "see").


WHAT THIS MEANS FOR ALL OF US

  1. Your Interaction Skills Are Now Your Superpower: Forget just "prompt engineering." Your ability to define goals clearly, provide context, give constructive feedback, and iterate effectively is what unlocks the power of these new models. Human communication skills are paramount.

  2. AI Can Genuinely Broaden Your Expertise: Lean into AI's ability to break down silos. Use it to understand adjacent fields, get different perspectives, and pressure-test your ideas from angles you hadn't considered (validated by the P&G study).

  3. Treating AI Like a Teammate Isn't Just Cute, It's Effective: The research and the models' design point in the same direction. Engaging AI conversationally, providing feedback (like saying "Thanks, that part was great, but refine this section"), and thinking of it as a collaborator ("Auggie," if you like) improves results and makes the process less frustrating.


A speech bubble flows into a stylised computer monitor. An arrow points from the monitor to a bursting lightbulb, signifying insight gained.

THE BOTTOM LINE

The AI landscape is shifting. Models are moving beyond simple execution towards more complex reasoning and agency. Research is showing AI can function like a teammate in surprising ways.

But the ultimate key isn't the model's IQ; it's our adaptation. Are we ready to move beyond basic commands? Are we willing to engage these tools with the clarity, context, and iterative feedback we'd give a human colleague?

The power isn't just in the AI; it's in the synergy between the AI and a human user who knows how to collaborate effectively. The tools are evolving. Now it's our turn.

Keep building that mindset,

Conor
