When code speaks the business language
I've been working with AI coding assistants for months now, and there's a recurring pattern that initially frustrated me but eventually became enlightening. You ask Claude (other AI tooling exists) to help with a method called processData(), and it generates comprehensive tests for data validation and transformation. The tests are perfectly written and completely wrong – because the method actually handles user authentication.
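A minimal sketch of the kind of mismatch I mean – the class name and token check here are invented for illustration, not taken from any real codebase:

```typescript
// Hypothetical example: the method name promises data transformation...
class AccountService {
  private validTokens = new Set(["token-abc"]);

  // ...but the body actually performs authentication.
  // A tool reading only the name would generate data-validation tests;
  // only the body reveals it is a token check.
  processData(token: string): boolean {
    return this.validTokens.has(token);
  }
}
```

Anyone – human or AI – who reads just the signature will reason about the wrong problem.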
My first reaction was to blame the AI for missing context. But the more I thought about it, the more I realized the AI was doing exactly what I would do encountering that method name in an unfamiliar codebase. It was making reasonable assumptions based on the only information available.
AI assistants don't play the translation game that we've all mastered as developers. They take our code names at face value, which turns out to be surprisingly revealing about how poorly we actually name things.
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." – Frank Zappa
Why we've become excellent liars
Every developer has navigated codebases where UserManager classes manage everything except users, where validate() methods silently modify data, and where business logic hides behind names like handleRequest() that reveal nothing about actual purpose.
I've gotten surprisingly good at this translation layer over the years. You develop an intuition for reading between the lines – understanding what methods actually do versus what they claim to do. It becomes second nature to build institutional knowledge about the gap between names and reality.
AI assistants operate differently. They're refreshingly literal in a way that initially frustrated me. When your method is called calculateDiscount() but actually sends promotional emails, Claude will suggest optimizations for discount calculations rather than email delivery performance. When your class is named DataProcessor but handles user sessions, you'll get advice about data transformation instead of authentication patterns.
What I eventually realized is that this literalness isn't a limitation – it's feedback. The AI is showing you exactly how unclear your naming actually is, without the years of context and institutional knowledge that let you navigate the semantic gaps.
How business language changes everything
Consider what happens when you approach a subscription feature differently. Instead of jumping straight into code, imagine starting with a simple Gherkin scenario – not out of BDD zealotry, but to think through the business logic before getting lost in implementation details.
Given a user with an expired subscription
When they attempt to access premium content
Then they should be redirected to the renewal page
This one scenario does something valuable: it gives you a vocabulary that actually describes what the system should do. Not a technical vocabulary of handlers and processors, but business terms that map directly to user needs.
The resulting code would practically name itself: SubscriptionService.isExpired() for checking status, PremiumContentGuard.checkAccess() for authorization, RenewalRedirectHandler.redirect() for handling the renewal flow. (The last one still sounds a bit technical, but at least it tells you what gets redirected where.)
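Here's one way that design could look in code. This is a sketch under assumptions – the Subscription shape, constructor wiring, and renewal URL are all placeholders, not a prescribed implementation:

```typescript
// Assumed minimal shape for a subscription record.
interface Subscription { expiresAt: Date; }

class SubscriptionService {
  // Checks status: a subscription is expired once its end date has passed.
  isExpired(sub: Subscription, now: Date = new Date()): boolean {
    return sub.expiresAt.getTime() <= now.getTime();
  }
}

class PremiumContentGuard {
  constructor(private subscriptions: SubscriptionService) {}

  // Authorization: premium access requires a non-expired subscription.
  checkAccess(sub: Subscription): boolean {
    return !this.subscriptions.isExpired(sub);
  }
}

class RenewalRedirectHandler {
  // Handles the renewal flow; the URL pattern is a placeholder.
  redirect(userId: string): string {
    return `/account/${userId}/renew`;
  }
}
```

Each class maps one-to-one onto a line of the Gherkin scenario, so the vocabulary of the code and the vocabulary of the business stay aligned.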
If you then asked Claude to help optimize this hypothetical code, something interesting would happen. Instead of generic performance advice, you'd get domain-specific suggestions: cache subscription lookups by user ID, batch premium content authorization checks, pre-calculate renewal redirect URLs for common subscription types. The AI would understand the business context because the code actually represented the business context.
Contrast this with the typical approach, where everything lives in UserHandler classes with process() methods. The AI suggestions in that scenario would be about as useful as asking for directions to "that place over there."
When code actually represents reality
I sometimes work on projects where the developers clearly cared about naming. These are the codebases where OrderService.calculateShippingCost() actually calculates shipping costs, and InventoryRepository.reserveItems() actually reserves inventory items. Basic stuff, but surprisingly rare.
The difference in AI assistance on these projects is remarkable. When you ask for help optimizing checkout flows, you get precise suggestions: cache shipping calculations by zip code, batch inventory reservations to reduce database calls, pre-calculate tax rates for common locations. The AI understands the business domain because the code actually represents the business domain.
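To make "cache shipping calculations by zip code" concrete, here's a hedged sketch: the OrderService name echoes the example above, but the rate lookup and counter are invented for illustration:

```typescript
class OrderService {
  private shippingCache = new Map<string, number>();
  calculations = 0; // counts uncached rate computations, for illustration

  calculateShippingCost(zipCode: string): number {
    // Shipping rates rarely change per zip, so repeat lookups hit the cache.
    const cached = this.shippingCache.get(zipCode);
    if (cached !== undefined) return cached;

    this.calculations++;
    // Stand-in for a real carrier rate lookup.
    const cost = zipCode.startsWith("9") ? 12.5 : 8.0;
    this.shippingCache.set(zipCode, cost);
    return cost;
  }
}
```

The point isn't the caching itself – it's that a suggestion like this is only possible when the method name tells the assistant it is dealing with shipping costs keyed by location.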
Compare this to the typical enterprise setup where checkout logic is scattered across RequestHandler, BusinessLogicProcessor, and GenericServiceImpl classes. Ask an AI to optimize that structure and you'll get generic advice about connection pooling that completely misses the business context.
The irony is that we often spend more time explaining our code to AI assistants than we would have spent naming it clearly in the first place. Every conversation that starts with "this method doesn't actually process data, it handles payment authorization" is time that could have been spent on more useful problems.
The compounding effect of clear names
I've found that thoughtful naming pays compound interest when working with AI assistants. The time you spend considering whether something should be PaymentProcessor rather than DataHandler comes back to you repeatedly in better AI suggestions.
When your code accurately describes what it does, several things happen:
- AI suggestions match your actual domain rather than generic patterns
- Generated tests cover business scenarios instead of just code paths
- Refactoring recommendations understand the business context
- You eliminate explanation overhead in AI conversations
The AI can focus on how to solve problems because the what is already clear. It's like the difference between asking someone to "fix the widget" versus "repair the brake caliper on the front left wheel" – one gets you confused stares, the other gets you actionable help.
Practical steps that actually work
I'm not suggesting anyone rewrite legacy systems for AI compatibility – that would be both impractical and career-limiting. But I have found value in what I think of as opportunistic naming improvements.
When you're already touching code – fixing bugs, adding features, refactoring – you can ask yourself: "Does this name reflect what this actually does in business terms?" If you can improve it without breaking dependencies, why not take the thirty seconds?
For new features, I sometimes write simple Gherkin scenarios first. Not because I've converted to BDD evangelism, but because it forces me to think in business terms before getting lost in implementation details. This approach has been more useful than I initially expected.
The temptation to call something DataHandler and move on is real – naming is genuinely difficult, deadlines are pressing, and there are always other priorities. But I've found that spending a moment to call it PaymentProcessor or NotificationScheduler pays dividends every time an AI assistant encounters that code later.
What this actually accomplishes
The goal isn't perfect domain modeling or achieving some theoretical clean code ideal. It's about reducing cognitive load – that translation layer between what your code claims to do and what it actually does. This benefits both the humans maintaining the code and the AI assistants trying to help with it.
When an AI assistant struggles to understand your code, it's worth considering whether the problem is with the AI or with how clearly the code expresses its intent. The literalness that initially frustrated me has become a useful debugging tool for unclear naming.
In a world where AI assistance is becoming standard practice, writing code that clearly expresses its business purpose isn't just good practice – it's increasingly practical. The AI can focus on helping you solve problems rather than deciphering what you're trying to accomplish.
This was as much about learning to work more effectively with AI tools as it was about improving code clarity. Just don't expect it to revolutionize your development workflow overnight – sometimes the most useful improvements are the ones that quietly remove friction from daily work.