Warning signs that AI is replacing thinking instead of accelerating it.
What Are AI Smells?
In software engineering, a code smell is a surface-level indicator that something deeper is wrong. The code might work, but the pattern suggests problems ahead.
AI smells work the same way. The output might look fine. The email is grammatically correct. The report is well-structured. The code compiles. But something about it signals that a human wasn't actually thinking - they were just accepting.
This catalog is organized into five categories, each grouping related smells.
Output that sounds professional but contains no original thought. These smells appear in the content itself - the words on the page, the structure of the response, the patterns in the prose.
Preamble Addiction
Every response opens with validation before getting to the point. "Great question!" "That's a really interesting point!" These are stalling patterns. AI models are trained to be agreeable, and this bleeds into organizational communication when people stop editing the output.
The Hedging Epidemic
Every paragraph includes "it's important to note," "it's worth mentioning," or "keep in mind." AI models hedge constantly because they're optimized to avoid being wrong. When this shows up in your team's writing, nobody edited the AI output - or worse, they've internalized the pattern.
The Three-Point Sermon
Every idea is followed by exactly three supporting points. Every recommendation comes with three benefits. Real thinking doesn't come in threes. This pattern emerges because AI models default to a "concept + N examples" structure that feels complete without requiring actual analysis.
Motivational Padding
Professional communication filled with encouragement nobody asked for. "This is a great opportunity to..." "The team should feel empowered to..." AI models are trained on content that performs well, and motivational language gets high engagement. That doesn't make it appropriate in a quarterly report.
Punctuation as Personality
Sudden overuse of em dashes, semicolons, or colons from people who never used them before. AI models lean heavily on em dashes as a structural device. When a team's writing style shifts overnight, the source is obvious.
Handing work to AI without evaluating the result. These smells appear in process - how teams generate, review, and ship work that AI touched.
Prompt and Pray
A team member submits a prompt, accepts the first response, and ships it. No iteration. No editing. The AI becomes the author, and the person becomes a courier. This is the most common AI smell and the easiest to detect: ask someone to explain a specific choice in their output. If they can't, they didn't make it.
Copy-Paste Engineering
AI-generated code goes into the codebase without review, testing, or understanding. It works for the immediate case but doesn't handle edge cases, doesn't follow existing patterns, and introduces dependencies nobody noticed. Especially dangerous because AI-generated code is syntactically confident - it looks right even when it isn't.
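A hypothetical sketch of what "syntactically confident" looks like in practice: a generated helper that handles the demo input perfectly and either crashes or silently corrupts data everywhere else. (The function and inputs are illustrative, not taken from any real codebase.)

```python
def parse_price(value: str) -> float:
    """Convert a price string like "$1,299.99" to a float."""
    # Clean, readable, and correct for the one input it was demoed on.
    return float(value.replace("$", "").replace(",", ""))

# The demo input works, so the code ships:
parse_price("$1,299.99")   # 1299.99

# Edge cases nobody exercised:
# parse_price("1.299,99")  -> 1.29999  (European format, silently wrong)
# parse_price("")          -> raises ValueError
# parse_price("N/A")       -> raises ValueError
```

Nothing about the function looks wrong in review. Only a test against inputs the demo never used reveals the problem.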
Research by Chatbot
Using AI as the primary source for factual claims, market data, or technical specifications without verification. AI models generate plausible text, not verified facts. They will confidently cite studies that don't exist and provide statistics that are close to real numbers but wrong.
The Confidence Trap
Treating well-formatted output as correct output. AI responses are articulate and structured regardless of accuracy. A wrong answer in clean bullet points with proper grammar is still a wrong answer. Teams fall into this trap because polished presentation triggers the same trust response as demonstrated expertise.
Everything starts to look, sound, and feel identical. These smells appear when AI homogenizes an organization's output, replacing distinctive voices and perspectives with a single, averaged tone.
Uniform Voice
Blog posts, emails, documentation, and reports all read like they were written by the same person. Because they were - they were written by the same model. When AI handles the writing, individual voices flatten into a single, pleasant, forgettable tone.
Template Thinking
AI provides the same framework for every problem. Marketing strategy? 5-step plan. Engineering proposal? 5-step plan. Hiring process? 5-step plan. The model defaults to safe, generic frameworks. Real problem-solving requires bespoke thinking - the shape of the solution should match the shape of the problem.
The Median Response
Everything is competent. Nothing is distinctive. AI outputs converge toward the statistical middle of their training data. Your strategy reads like a composite of every strategy document on the internet. Competent and generic is the most dangerous combination because it's hard to argue against and easy to accept.
Nobody owns the output because "the AI did it." These smells appear when organizations use AI as a shield against responsibility for decisions, recommendations, and outcomes.
Authority by Algorithm
"The AI recommended this" used as justification for a decision. AI doesn't recommend anything - it generates text that looks like a recommendation based on statistical patterns. When "the AI said so" becomes acceptable justification, the organization has outsourced judgment to a system that has none.
The Black Box Defense
When a decision fails, nobody can explain why it was made because the reasoning was AI-generated and nobody read it critically. This is the downstream consequence of Prompt and Pray - eventually something goes wrong, and nobody understood the logic behind the choice they shipped.
Headcount by Hallucination
Reducing staff because "AI can handle that now" based on demo-quality results. The model that wrote a perfect email in a demo will confidently send incorrect information to a client when nobody is reviewing. Cutting reviewers because generation looks good is working backwards.
The Demo Effect
Leadership sees AI perform well in a controlled setting and expects the same in production. Demos use curated inputs, known-good prompts, and cherry-picked outputs. Production has messy data, edge cases, and users who don't follow the happy path.
Faster output mistaken for faster results. These smells appear when organizations measure AI's value by how quickly it produces output rather than by the quality of the outcome.
The Automation Mirage
A task that took 4 hours now takes 20 minutes of AI generation and 3 hours of review and correction. Net time saved: 40 minutes. But the team reports "90% faster" because they only count generation time. If the review phase is substantial, the AI just moved where the effort happens.
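The arithmetic is worth making explicit. A quick sketch with the numbers above shows how counting only generation time inflates the apparent speedup:

```python
# Manual workflow: the task takes 4 hours end to end.
manual_min = 4 * 60                         # 240 minutes

# AI-assisted workflow: 20 minutes of generation,
# then 3 hours of human review and correction.
generation_min = 20
review_min = 3 * 60                         # 180 minutes
total_min = generation_min + review_min     # 200 minutes

net_saved = manual_min - total_min          # 40 minutes actually saved
true_speedup = net_saved / manual_min       # ~17% faster, start to verified finish
gen_only = 1 - generation_min / manual_min  # >90% "faster" if review is ignored
```

The gap between `true_speedup` and `gen_only` is the mirage: the effort didn't disappear, it moved into review.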
Quantity Over Quality
AI makes it trivial to produce more. More blog posts. More code. More reports. Volume goes up, but signal-to-noise drops. Ten mediocre blog posts don't outperform one good one. When teams celebrate output volume instead of output impact, AI becomes a noise multiplier.
The Review Tax
AI generates faster than humans can review. The bottleneck moves from creation to verification, but organizations don't add review capacity to match. More output ships with less scrutiny. Especially dangerous in regulated industries and customer-facing communication.
Premature AI
Reaching for AI before understanding the problem. AI is brought in during the first conversation about a challenge, before anyone has talked to the people affected or looked at the data. AI can accelerate a solution you understand. It cannot replace understanding the problem.
What Healthy AI Usage Looks Like
AI smells don't mean AI is bad. They mean AI is being used without the human involvement that makes it valuable. The difference between healthy and unhealthy AI usage is whether the humans are still thinking.
AI generates, humans evaluate
Every output is reviewed, edited, and owned by a person who can defend it.
Speed up the known, think through the new
AI handles boilerplate. Novel problems get human analysis first.
Facts are verified, not generated
AI can draft, but data and citations always trace back to a primary source.
Output sounds like the author, not the model
AI-assisted writing is edited to match the person's natural voice.
Decisions have human owners
"I chose this because..." not "The AI recommended this."
Total time is measured, not generation time
Efficiency is calculated from start to verified completion, including review.
I help teams build AI into their workflow without losing the thinking that makes them valuable. If your organization is adopting AI and you want to get it right, let's talk.