We recently got together to talk about what's really happening with AI and software development. Not the hype, not the fear, just what we're seeing on the ground.
Don't Harsh Our Vibes
The "vibe" label sells short what's happening with coding agents. "Vibe" implies this technology is a toy—just faster autocomplete or smoother Stack Overflow. When you ship production software, you throw the vibes away and use your same old workflow.
Our experience is different. Agents are meaningfully changing how engineers build production software. Companies are reorganizing around agentic coding—this isn't a toy.
Our friends at Assignar are using agentic coding workflows to build complex construction finance management software. GitHub Copilot coding agent, Cursor Bugbot, and Claude Code /security-review all play a role on the team. -Jake
But the Vibes Aren't Immaculate
We're on the agentic bandwagon, but we can't just wait for big companies to throw billions at their models and call it a day.
Here's why: these models run at an unimaginable scale, but they remain constrained by the data that they have access to. They learn from code, documentation, Stack Overflow posts—anything written down. But if something isn't in the text, the model doesn't know about it.
And so much of what makes software engineering hard was never written down. The crucial context lives in engineers' heads, in Slack threads that get buried, in whiteboard sessions that were never photographed. The models can't learn what was never captured.
Code Is Just an Approximation of Intent
Our codebase isn't the truth—it's just what happens to run. The actual truth is the intent: what we wanted the software to do, what constraints we faced, what tradeoffs we made. When something breaks, you need this intent to fix it properly. Without it, you might fix the bug but break the use case, optimize the wrong thing, or undo a critical workaround.
I’ve used agentic coding tools to build entire new features, like the markdown editor inside BearClaude, only to have it create a regression in the Chat History view (a completely unrelated part of the codebase). When “vibing,” you typically rely on manual smoke tests to spot regressions. But when we're doing software engineering with AI assistance, we need something more thoughtful. -Jake
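One concrete version of "more thoughtful": when a regression like that surfaces, pin it with an automated test before moving on, so neither you nor the agent can quietly reintroduce it. Here's a minimal, hypothetical Python/pytest sketch with made-up names; it's an illustration, not BearClaude's actual code:

```python
# regression_pin.py -- hypothetical example of pinning a regression with a test
# instead of relying on a manual smoke test to catch it next time.
import json
from pathlib import Path


def load_sessions(history_dir: Path) -> list[dict]:
    """Load saved chat sessions from disk, oldest first (stand-in for the real loader)."""
    return [json.loads(p.read_text()) for p in sorted(history_dir.glob("*.json"))]


def test_history_still_lists_sessions(tmp_path: Path) -> None:
    """The regression was an empty history view, so pin the expected titles."""
    (tmp_path / "2024-01-01.json").write_text(json.dumps({"title": "First"}))
    (tmp_path / "2024-01-02.json").write_text(json.dumps({"title": "Second"}))

    sessions = load_sessions(tmp_path)

    assert [s["title"] for s in sessions] == ["First", "Second"]
```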
The False Revolution: New Panes of Glass
As engineers, our natural tendency is to solve problems with new layers of abstraction. Let the agents handle the messy details underneath while we work at a higher level. Ambitious projects like plain-lang—which lets you code in plain English sentences—are attempting precisely this, defining a natural language abstraction that simplifies everything below.
It's a compelling vision. But abstraction layers rarely eliminate complexity—they just relocate it.
At Sym, we fell into this trap. We created elegant devtools for just-in-time AWS access. You could request permissions through our clean interface: "I need production database access." Beautiful.
Except first, teams had to untangle their existing AWS permissions. "We have a role called 'WebAppProd' with... who knows what permissions? Created three years ago, modified seventeen times, attached to twelve services." Before using our "simplification," they needed to understand both IAM and our abstraction of IAM. We'd added another pane of glass, not removed the complexity underneath. -Jon
The same pattern awaits AI coding abstractions. As these natural language systems grow complex, they'll need all the same tools any codebase needs. Natural language debuggers. Natural language version control. Natural language merge conflicts. We end up reinventing the wheel at a new level of abstraction, now maintaining two layers of complexity instead of one.
The Real Revolution: A New Abundance
The middle way is paradoxical: it's revolutionary precisely because it's not trying to be revolutionary. It's about finally doing what we always knew we should do: Documentation. Tests. Clear specifications. Design discussions. Architecture decisions. Code reviews that review thinking, not just syntax.
We skip these because they're too expensive. But AI creates a new abundance. When documentation becomes nearly free, you stop rationing it. When test writing takes seconds, not hours, you stop skipping it.
Now when I finish working on something, I ask the agent: "Update the tests and docs." "Can you help me DRY up this code?" This often takes seconds and sometimes uncovers an issue I missed. These tasks used to be the TODOs that haunt a codebase—things we'd "definitely get to in V2." The difference isn't that I'm working faster; it's that I'm actually completing the work. The boring, essential parts that make code maintainable are finally getting done. -Jon
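For a sense of what those prompts tend to produce, here's an illustrative Python example (not code from Jon's projects): two near-duplicate validators collapsed into one documented helper, with a test updated to match.

```python
# Before: signup and invite flows each had their own copy of email validation.
# After "DRY up this code" and "update the tests and docs", one helper remains.
import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_email(address: str, *, context: str) -> str:
    """Return a normalized email address or raise ValueError.

    `context` ("signup" or "invite") only affects the error message, which is
    why the two original copies could be merged into a single helper.
    """
    normalized = address.strip().lower()
    if not EMAIL_PATTERN.match(normalized):
        raise ValueError(f"Invalid email for {context}: {address!r}")
    return normalized


def test_validate_email_normalizes_and_rejects() -> None:
    assert validate_email("  Jon@Example.COM ", context="signup") == "jon@example.com"
    try:
        validate_email("not-an-email", context="invite")
    except ValueError as err:
        assert "invite" in str(err)
    else:
        raise AssertionError("expected ValueError for a malformed address")
```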
What Actually Changes
What changes:
Code review becomes intent review—PRs without documentation get rejected (a sketch of one such check follows these lists)
Context persists across time and team changes
Standards rise as costs drop—incomplete work becomes unacceptable
The "nice-to-haves" become table stakes
What doesn't:
Humans specifying intent—this gets more important, not less
The complexity of understanding existing systems
Software evolution and changing requirements
Bugs, edge cases, and the fundamental challenges of building software
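That first item, rejecting PRs that change code without touching the documented intent, can be enforced mechanically rather than by vigilance. Here's a rough pre-merge check in Python; the src/, docs/, and adr/ paths are assumptions you'd adapt to your own repo layout, not a prescription:

```python
#!/usr/bin/env python3
"""Hypothetical pre-merge check: fail when source changed but docs/intent did not."""
import subprocess
import sys

# Assumption: application code lives under src/, intent and decisions under docs/ and adr/.
CODE_PREFIXES = ("src/",)
INTENT_PREFIXES = ("docs/", "adr/")


def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed on this branch relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def main() -> int:
    files = changed_files()
    code_changed = any(f.startswith(CODE_PREFIXES) for f in files)
    intent_changed = any(f.startswith(INTENT_PREFIXES) for f in files)
    if code_changed and not intent_changed:
        print("Code changed but docs/ and adr/ did not. Update the intent, then re-run.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```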
What We Still Need to Figure Out
Feedback Loops
Encoding intent at project start isn't enough. When you explain to an AI what you're building, when you iterate on approaches, when you ask it to explain existing code—all that context usually evaporates. What if it didn't?
What if these artifacts could be validated and refined over time? Fix a bug, update the intent. A new team member discovers gaps; they fill them in. This creates a virtuous cycle instead of entropy. We’ve learned from the agile revolution that overly precise up-front specification doesn’t leave room for learning and iterating. So the best intent-driven workflow needs to allow people to work iteratively and stay in flow while still capturing each new learning and decision.
This is a big part of why we’ve built SpecStory as a connector to existing AI coding tools. It allows developers to stay focused on the task at hand, defining and delivering great software, while recording a clean log of that definition. This way, 2 months from now, when you need to remember “why did I decide to go with a JSON field in my database instead of multiple columns?” you can ask that exact question of your own AI chat history. -Jake
The Real Vibes Will Not Be Televised
The televised version is what gets clicks—programming languages that look like English, YouTube demos of full apps built in an afternoon, breathless threads about replacing all engineers.
But the real revolution happens in the unglamorous daily practice. It's in the PR rejected because the documentation doesn't match the intent. It's in the feedback loop that catches drift between code and docs. It's in finally having time to write architecture decision records because agents handle the boilerplate.
Coding agents give us the resources to do all the things we always should have done. If we can encode our intent, decisions, and standards into something agents understand, we can truly "make it so."
How are you refining your simple daily practices to take advantage of the real power of AI-assisted software development?
This piece grew out of a conversation about how AI is actually being used in software development today. We'd love to hear what you're seeing in your own work.
Jon is a founding engineer at Distill, a platform for learning about people and companies. Previously, he co-founded Sym, a devtool for security engineers, and led engineering teams in health tech, infrastructure, and enterprise software.
Jake is the co-founder of SpecStory, a tool that helps developers preserve and share their AI coding sessions. Before SpecStory, Jake led product teams at Docker and DigitalOcean.
Watch the conversation here: