
Is AI making us dumber (and by ‘us’, we mean ‘your agency’)?

Jeff Bridges, Senior Application Developer on Aug 5, 2025

Illustration of a robot posed as Rodin's The Thinker

The promise of AI seems too good to be true: instant expertise, effortless content creation, and solutions to problems we didn't even know we had. Creative professionals can now generate campaigns in minutes, analysts can produce insights at lightning speed, and strategists can craft comprehensive plans with a few well-placed prompts.

Are we engineering our own intellectual decline? (MIT says ‘maybe’) 

But in our rush to embrace artificial intelligence, are we engineering our own intellectual decline?

MIT thinks so. In June 2025, researchers conducted a study comparing the critical thinking abilities of ChatGPT users with those of search engine users and a brain-only control group. Over four months, the ChatGPT users “consistently underperformed at neural, linguistic, and behavioural levels” compared to the other participants.

Scary stuff.

That fear extends to agency clients, who assume that hiring a marketing agency means paying knowledgeable professionals for novel creative solutions and expert strategic deployment.

They expect that intelligence to be authentically human. They're paying for the kind of deep thinking, cultural intuition, and creative problem-solving that emerges from years of experience and genuine expertise. But with AI democratizing access to seemingly-sophisticated output, the line between authentic insight and algorithmic imitation has become dangerously blurred.

All of this creates a crisis of trust. How can clients distinguish between agencies that leverage AI as a powerful tool to amplify human creativity, and those that dress up machine-generated content as original thinking? Especially when anyone with access to ChatGPT can produce work that appears professionally crafted to the untrained eye.

3 existential challenges ad agencies (and their clients) are facing

The transformation goes far beyond workflow changes. We're confronting what researchers identify as ‘cognitive atrophy’, where over-reliance on AI systems systematically weakens our capacity for independent thought and complex reasoning (as the MIT study posits). Each time we outsource thinking to machines, we may be diminishing the cognitive muscles that drive breakthrough creative work.

At the same time, we're experiencing (or rather, inviting) disempowerment as decision-making authority slowly shifts from human judgment to algorithmic efficiency. This isn't about robots taking over; it's about the subtle erosion of human agency as AI proves "better" at making choices in domains we once considered uniquely human.

Perhaps most troubling is the emerging value lock-in problem, where the biases and limitations of today's AI systems become permanently embedded in our creative processes. We know AI is imperfect. Unchallenged, the worldviews encoded in current AI models risk becoming the foundation for how we approach strategy, creativity, and client solutions.

Redefining original work in the AI era

For advertising agencies and their clients, these challenges fundamentally alter what we mean by ‘original work’. The traditional definition of originality as ‘created without assistance’ is giving way to something more nuanced: work that is meaningfully directed, refined, and elevated by human intelligence, regardless of the tools involved.

This shift actually increases the value of humans who understand how to effectively harness AI while maintaining creative integrity. The most valuable professionals aren't those who avoid AI, or those who rely on it completely. They're the ones who can critically evaluate AI output, identify its limitations, and transform machine-generated content into something authentically insightful.

The most valuable professionals aren't those who avoid AI. Or those who rely on it completely.

The challenge is way bigger than plagiarism. AI systems are often ‘confidently wrong’, producing polished content that can be factually incorrect, culturally tone-deaf, or strategically misguided. The human ability to spot these flaws, understand context, and apply judgment becomes exponentially more valuable, not less.

Show and Tell's approach: Transparent, thoughtful AI governance 

The path forward for agencies isn't outlawing AI use, nor a total embrace. It's thoughtful integration guided by clear policies that prioritize transparency and human oversight.

At Show and Tell, we've developed comprehensive AI guidelines that serve dual purposes: they ensure our team maintains intellectual rigor, while providing clients with transparency into (and confidence in) our process. 

Show and Tell's AI guidelines provide clients with transparency into (and confidence in) our process

Our approach centres on disclosure and human accountability. We're open about when and how we use AI tools, establishing clear boundaries around what we delegate to machines versus what requires human insight. This transparency doesn't diminish our value; it enhances it by demonstrating our commitment to authentic creative work at the outset of this AI revolution.

Our policy framework includes mandatory human review of all AI-generated content, specific protocols for fact-checking and accuracy verification, and requirements that final outputs reflect genuine human strategic thinking rather than algorithmic automation. We've found that clients appreciate this transparency because it addresses their underlying concern: they want assurance that they're receiving authentic expertise, not dressed-up automation.

Clients want authentic expertise.
Not dressed-up automation.

Most importantly, our AI governance emphasizes skill development — rather than skill replacement. We use AI primarily to handle routine tasks, freeing our team to focus on the conceptual thinking that drives creative solutions. This approach recognizes that AI's greatest value lies not in replacing human creativity — but in amplifying it.

And that keeps our team’s brains nice and healthy. Ready to answer client questions and address their concerns, without having to plug in.

The human intelligence premium

Using AI responsibly places a new premium on distinctly human capabilities: the ability to understand cultural nuance, navigate complex relationships, identify insights from disparate sources, and make creative leaps that surprise and delight. These skills are more valuable, not less, in an AI-augmented industry. 

Understanding nuance. Navigating relationships. Identifying insights. These skills are more valuable, not less, in an AI-augmented industry.

The agencies that will thrive are those that view AI as a creative collaborator rather than a replacement for human thinking. They'll develop new evaluation frameworks that consider the quality of human judgment and decision-making, not just output volume or speed. They'll create collaborative systems where AI-assisted employees continue developing their cognitive skills rather than allowing them to erode. And they’ll continue to earn client trust, because their creative output prioritizes the brilliance of their team members.

Your agency partner needs to walk a tightrope between leveraging AI's efficiency and preserving the human intelligence that drives authentic innovation. They need to be able to reap the benefits of AI while also being able to answer your questions and speak to their recommendations — in real-time conversations, and without machine input.

In other words: You need your agency team to be smart. Smart enough to know over-reliance on AI might make them dumb. And smart enough to know not to risk it with your work.

Thinking about how thoughtful, ethical AI governance can strengthen rather than compromise your creative intelligence? Working on your own AI guidelines for your in-house team, or external partners? Contact us to talk to Show and Tell's resident AI experts (including our in-house Perplexity fellowship holder #humblebrag).