Claude Sucks (At These 7 Things)

An Honest Love Letter to an Imperfect AI

Understanding where Claude fails is what separates frustrated developers from those who achieve extraordinary results with AI.

Look, I love Claude. I use it every day. I have paid subscriptions to Claude Pro, Gemini, and three other AI tools, and Claude is the one I reach for most often when architecting solutions or solving complex technical problems.

But let me be brutally honest: Claude sucks at a lot of things.

And that is okay.

In fact, understanding exactly where Claude falls flat is what separates developers who get frustrated and abandon AI tools from those who use them as genuine force multipliers. After spending thousands of hours working with Large Language Models over the past three years, I have learned that the secret to AI productivity is not finding the perfect tool - it is understanding the limitations of your imperfect ones.

So let me catalog the seven things Claude is genuinely terrible at, why these limitations exist, and how to work around them to still get extraordinary results.

Why Honest Criticism Actually Matters

Before we dive into the list, let me explain why I am writing this.

The AI hype cycle is exhausting. Every week, someone on LinkedIn posts about how ChatGPT "wrote an entire application in 10 minutes" or how Claude "replaced their entire development team." These posts are either exaggerations or outright fabrications, and they set dangerous expectations for people just starting to explore AI tools.

The truth is more nuanced and more useful: AI tools like Claude are incredibly powerful within specific constraints. When you understand those constraints, you can architect your workflow to leverage their strengths and compensate for their weaknesses.

I treat Large Language Models like Claude, Gemini, and Grok as my "junior developers." I design the architecture, define the patterns, and handle the complex business logic. Then I delegate the repetitive, well-defined tasks - boilerplate code, unit tests, documentation, DTO mappings - to the AI.

This approach has allowed me to deliver production-ready code at 2-3x the speed of traditional development while maintaining high quality standards. But it only works because I know exactly what Claude cannot do.

Let me show you.

1. Claude Sucks at Real-Time Information

Claude does not know what happened yesterday. Or last week. Or even last month if it was after its training cutoff date.

If you ask Claude "Who won the Super Bowl this year?" or "What is the current price of Bitcoin?", it will either tell you it does not have access to current information, or worse, it will confidently provide outdated data from its training period.

Why This Limitation Exists: Large Language Models are trained on static datasets at a specific point in time. Claude's knowledge cutoff is January 2025. It is not connected to the internet (in most interfaces), and it does not have the ability to fetch live data.

How to Work Around It: Never ask Claude for current events, pricing data, or time-sensitive information. Instead, use Claude for timeless tasks: code architecture, explaining concepts, refactoring logic, writing documentation. If you need current data, fetch it yourself and provide it in your prompt. Claude is excellent at analyzing data you give it - just do not expect it to retrieve that data for you.
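The "fetch it yourself" pattern can be as simple as a few lines of Python. Everything here - the endpoint URL and the JSON field name - is a hypothetical placeholder for whatever data source you actually use:

```python
# Sketch: fetch the live data yourself, then hand it to Claude inside the
# prompt. The endpoint URL and JSON shape are hypothetical - swap in
# whatever price API you actually rely on.
import json
from urllib.request import urlopen

def fetch_btc_price(url: str = "https://api.example.com/btc/spot") -> float:
    """Pull the current price from a live API (hypothetical endpoint)."""
    with urlopen(url) as resp:
        return float(json.load(resp)["price"])

def build_prompt(price: float) -> str:
    """Embed the fresh number directly in the prompt so Claude analyzes
    real data instead of guessing from its training period."""
    return (f"Bitcoin is currently trading at ${price:,.2f}. "
            "Given that price, summarize the implications for a "
            "dollar-cost-averaging strategy.")

# In practice: print(build_prompt(fetch_btc_price()))
```

The point is the division of labor: your code retrieves the time-sensitive fact; Claude only ever sees data you handed it.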

2. Claude Sucks at Counting Things Accurately

Ask Claude to count the number of words in a paragraph, and you will get an answer. Ask it three more times, and you might get three different answers.

Claude struggles with precise counting - whether it is characters, tokens, items in a list, or occurrences of a specific pattern in text. It is not being lazy. It is just not designed for this task.

Why This Limitation Exists: Language models generate text probabilistically. They predict the next token based on patterns in training data, not by executing deterministic algorithms. Counting requires sequential, precise iteration - something that neural networks do not naturally excel at.

How to Work Around It: Use Claude for pattern recognition and understanding, not precise enumeration. If you need to count things, use code. Ask Claude to write a script that performs the count, then run that script. Claude is great at generating a Python function that counts words - it is just terrible at doing the counting itself.
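For example, instead of asking Claude "how many words are in this paragraph?", ask it for something like this and run it yourself - a minimal sketch using only the standard library:

```python
# Sketch: let Claude write the counter, but run it yourself so the
# answer is deterministic instead of probabilistic.
import re

def count_words(text: str) -> int:
    """Count words using a simple regex token definition."""
    return len(re.findall(r"\b\w+\b", text))

def count_occurrences(text: str, pattern: str) -> int:
    """Count non-overlapping literal occurrences of a pattern."""
    return len(re.findall(re.escape(pattern), text))

paragraph = "Claude sucks at counting. Counting is hard for Claude."
print(count_words(paragraph))                    # -> 9, every single run
print(count_occurrences(paragraph, "Claude"))    # -> 2
```

Run it ten times and you get the same answer ten times - which is exactly what the model alone cannot promise.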

3. Claude Sucks at Maintaining Consistent Personas Across Long Conversations

Start a conversation with Claude, define a specific persona ("You are a senior Java architect who prefers functional programming"), and it will do a great job. Ask 50 questions over the course of an hour, and you will notice the persona drift.

Suddenly, your "functional programming purist" is suggesting mutable state patterns. Your "Java expert" is mixing in Python idioms. The consistency erodes over time.

Why This Limitation Exists: Language models have context windows - the amount of prior conversation they can "remember" and reference when generating responses. As conversations grow, older context gets deprioritized or dropped entirely. The model is not intentionally forgetting your instructions - it is just working with limited memory.

How to Work Around It: Keep conversations focused and relatively short. If you need to maintain a specific persona or set of rules across a large project, re-state the key instructions every 10-15 exchanges. Better yet, architect your workflow so each conversation has a single, well-defined purpose. I use separate chats for separate features, each with clear instructions at the top.
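If you talk to Claude through an API rather than the chat interface, the re-statement can even be automated. A rough sketch - the message format mirrors typical chat-API payloads, and the persona text is just an example:

```python
# Sketch: re-inject the persona every N user turns so long conversations
# do not drift. Message shape follows common chat-API conventions; adapt
# it to whichever client library you actually use.
PERSONA = ("You are a senior Java architect who prefers "
           "functional programming. Never suggest mutable state.")

def with_persona_refresh(messages: list[dict], every: int = 10) -> list[dict]:
    """Return the message list with the persona re-stated every `every`
    user turns, so it never falls out of the effective context."""
    out = [{"role": "system", "content": PERSONA}]
    user_turns = 0
    for msg in messages:
        if msg["role"] == "user":
            user_turns += 1
            if user_turns % every == 0:
                out.append({"role": "system", "content": PERSONA})
        out.append(msg)
    return out
```

Whether you automate it or just paste the instructions back in by hand, the principle is the same: the persona only holds if it stays inside the context window.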

4. Claude Sucks at Mathematical Calculations

Ask Claude "What is 347 x 892?" and it will probably get it right. Ask it to calculate compound interest across 30 years with variable contribution rates, and you are going to get an approximate answer at best.

Claude is not a calculator. It does not perform arithmetic operations in the way computers traditionally do math. It generates responses based on patterns, and sometimes those patterns produce correct calculations - but you should not rely on it.

Why This Limitation Exists: Neural networks approximate functions. They do not execute precise algorithms. When Claude generates a mathematical answer, it is predicting what the answer should look like based on patterns in training data, not computing the result step by step.

How to Work Around It: Use Claude to write code that performs calculations, not to perform the calculations directly. Need to compute compound interest? Ask Claude to generate a function in Python or JavaScript that does the math correctly. Then run the code. Claude is excellent at generating mathematically correct algorithms - just do not ask it to be the calculator.
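Here is the kind of function I mean - a minimal sketch with monthly compounding and a fixed monthly contribution (a variable contribution schedule would take a list instead of a constant):

```python
# Sketch: ask Claude for the algorithm, then execute it yourself.
# Monthly compounding with end-of-month contributions.
def compound_interest(principal: float, annual_rate: float,
                      years: int, monthly_contribution: float = 0.0) -> float:
    """Future value with monthly compounding and a fixed monthly deposit."""
    monthly_rate = annual_rate / 12
    balance = principal
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_contribution
    return round(balance, 2)

# $10,000 at 7% for 30 years with $500/month added:
print(compound_interest(10_000, 0.07, 30, 500))
```

The model's job is the algorithm; the interpreter's job is the arithmetic. Keep those responsibilities separate and both sides do what they are good at.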

5. Claude Sucks at Remembering Previous Conversations

Each conversation with Claude starts fresh. If you had a brilliant exchange yesterday where Claude helped you architect a microservices solution, today it has no memory of that conversation.

This is not a bug. It is a deliberate design choice for privacy and technical reasons. But it can be frustrating if you expect continuity across sessions.

Why This Limitation Exists: Storing and retrieving individual user conversation histories at scale introduces massive technical and privacy challenges. Most AI providers reset context between sessions to avoid leaking information between users and to reduce infrastructure costs.

How to Work Around It: Treat each conversation as a standalone session. If you need to reference previous work, copy the relevant context into your new prompt. I keep a local file of "AI-generated patterns" - snippets of code, architecture diagrams, or prompts that worked well. When I start a new conversation, I paste in the relevant context. This approach actually improves results because I am explicitly providing the exact context Claude needs, rather than hoping it remembers something from days ago.
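Even the "paste in the relevant context" step can be scripted. A rough sketch - the directory layout and file names are hypothetical placeholders for however you store your patterns:

```python
# Sketch: keep reusable context (personas, patterns, architecture notes)
# in local files and prepend them to every new conversation. The *.md
# layout is just one convention - use whatever you already keep on disk.
from pathlib import Path

def build_session_prompt(context_dir: str, task: str) -> str:
    """Concatenate saved context files with today's task into one prompt."""
    sections = []
    for path in sorted(Path(context_dir).glob("*.md")):
        sections.append(f"## {path.stem}\n{path.read_text()}")
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)
```

The payoff is consistency: every new session starts from the exact same written-down context, instead of whatever you happened to remember to mention.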

6. Claude Sucks at Generating Truly Random Content

Ask Claude to "generate 10 random startup names" multiple times, and you will start to notice patterns. The names feel algorithmically similar. Certain prefixes, suffixes, and phonetic structures appear repeatedly.

Claude does not generate truly random content. It generates statistically likely content based on its training data.

Why This Limitation Exists: Randomness requires unpredictability, but language models are fundamentally predictable systems. They generate outputs based on probability distributions learned from training data. True randomness would require a separate random number generator feeding unpredictable seeds into the model.

How to Work Around It: Use Claude for structured creativity, not random generation. If you need creative naming, ask Claude to generate names based on specific themes, etymologies, or brand guidelines. The more constraints you provide, the better the output. If you need truly random data (for testing, simulations, etc.), use a dedicated random generation tool or library, then ask Claude to help you structure or analyze that data.
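For instance, if you need random identifiers for test data, something like Python's `secrets` module does the job - generate the raw randomness yourself, then let Claude structure or label the output:

```python
# Sketch: produce genuinely unpredictable raw material with a CSPRNG -
# something Claude cannot do - then hand the results to Claude for
# structuring, naming, or analysis.
import secrets
import string

def random_test_ids(n: int, length: int = 8) -> list[str]:
    """Cryptographically random alphanumeric IDs for test fixtures."""
    alphabet = string.ascii_lowercase + string.digits
    return ["".join(secrets.choice(alphabet) for _ in range(length))
            for _ in range(n)]

print(random_test_ids(3))
```

Division of labor again: the library supplies the unpredictability, and Claude supplies the structure around it.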

7. Claude Sucks at Admitting Uncertainty

This is the most dangerous limitation.

Claude will sometimes provide confident, detailed answers to questions where it should say "I do not know" or "I am not certain." It will generate plausible-sounding explanations for obscure technical topics, historical events, or edge cases - and those explanations might be completely wrong.

This is the "hallucination" problem every AI practitioner warns about, and it is real.

Why This Limitation Exists: Language models are trained to generate coherent, helpful responses. They are not explicitly trained to recognize the boundaries of their knowledge or to express uncertainty. They generate the most likely continuation of the conversation based on patterns - and sometimes that pattern is a confident answer, even when confidence is not warranted.

How to Work Around It: Verify everything. Especially with unfamiliar topics, obscure libraries, or edge cases. Use Claude as a "first draft" generator, not a source of truth. When Claude provides an answer, treat it as a hypothesis to test, not a fact to trust blindly. I use Claude to generate solutions, then I validate those solutions through documentation, testing, or code review. This workflow catches hallucinations before they cause problems.
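Concretely, the "hypothesis to test" mindset looks like this. Suppose Claude hands you a slug function; before it goes anywhere near production, wrap it in assertions you wrote yourself. The function below is an illustrative stand-in for whatever Claude actually generated:

```python
# Sketch: treat Claude's output as a hypothesis and verify it before
# trusting it. Suppose Claude generated this candidate slug function:
import re

def slugify(title: str) -> str:
    """Claude-generated candidate: lowercase, hyphen-separated slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Verification layer - the test cases are yours, not Claude's:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Already--clean  ") == "already-clean"
assert slugify("") == ""
```

If the assertions pass, you have evidence. If they fail, you caught a hallucination in seconds instead of in production.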

The Bigger Picture: Imperfect Tools, Extraordinary Results

So, Claude sucks at seven things. Probably more than seven, honestly.

But here is the important part: despite all these limitations, Claude remains one of the most valuable tools in my development workflow.

Why? Because I understand its constraints and architect around them.

I do not ask Claude to design my systems. I design the systems, then ask Claude to help implement the components. I do not trust Claude's math. I ask Claude to generate functions that perform the math, then I test those functions. I do not rely on Claude's memory. I provide explicit context in every conversation.

This is the mindset that separates effective AI users from frustrated ones.

Too many developers expect AI tools to be omniscient, flawless assistants that replace human expertise. When the tools inevitably fall short of that impossible standard, those developers abandon them entirely.

The smart approach is different: treat AI as a specialized tool with specific capabilities and specific limitations. Use it where it excels. Compensate for where it fails. The result is a workflow that is faster, more consistent, and more scalable than traditional development - but only if you acknowledge reality.

The Force Multiplier Philosophy

I call this the "Force Multiplier" philosophy.

AI does not replace developers. It replaces bad developers and supercharges good ones.

If you do not understand software architecture, AI will not save you. You will generate mountains of plausible-looking garbage code that compiles but does not work. But if you already know how to design systems, write clean code, and debug complex issues, AI can handle the tedious parts while you focus on the interesting problems.

I have been architecting software systems for 40 years - from writing assembly on Timex Sinclairs in the 1980s to building serverless microservices on AWS GovCloud for the Department of Homeland Security. I have seen every hype cycle come and go, from Object-Oriented Programming to Microservices to Blockchain.

AI is different. It is not a paradigm shift or a new framework. It is a genuinely new category of tool - one that augments human expertise rather than replacing it.

But only if you use it correctly.

And using it correctly starts with understanding where it sucks.

Know Your Tools

The difference between frustration and effectiveness is not finding perfect tools. It is understanding imperfect ones.

Claude has clear limitations. So does Gemini. So does every AI model currently available. But those limitations do not make these tools useless - they just define the boundaries of effective use.

If you expect Claude to be omniscient, you will be disappointed. If you expect it to replace your judgment, you will be burned. But if you understand what Claude is genuinely good at - generating boilerplate, refactoring code, explaining concepts, drafting documentation - and what it genuinely sucks at, then you can build workflows that leverage its strengths while compensating for its weaknesses.

That is the secret to AI productivity.

Not hype. Not hero worship. Just honest assessment and deliberate integration.

So yes, Claude sucks at seven things. Probably more.

And I will keep using it every single day - because I know exactly what those seven things are.

Meet Fred Lackey

The AI-First Architect & Distinguished Engineer

With 40 years of software architecture experience - from writing assembly on Timex Sinclairs in the 1980s to architecting serverless microservices on AWS GovCloud for the Department of Homeland Security - Fred has pioneered the "Force Multiplier" approach to AI-assisted development.

Fred doesn't just talk about AI productivity. He lives it every day, delivering production-ready code at 2-3x traditional speed while maintaining the highest quality standards.

  • 40+ years architecting software systems
  • €24M exit (biometric point-of-sale technology)
  • Co-architect of Amazon.com's proof-of-concept (1995)
  • First SaaS product granted ATO by DHS on AWS GovCloud
  • AI-First development methodology pioneer