Claude Code Replicates a Year of Engineering Effort in an Hour, Says Google Engineer Jaana Dogan

Claude Code, in Jaana Dogan’s view, is a strong piece of engineering, and seeing it succeed has made her more eager to push her own work further.

Reading Time: 4 minutes

One of Google’s senior engineers has offered a brief account that says something larger about the present state of software work. She described how a coding system from Anthropic, called Claude Code, produced in about an hour a functioning solution to a problem her team had been grappling with since the previous year.

The engineer, Jaana Dogan, who oversees parts of the Gemini API, explained that the work concerned the management of distributed AI agents: programs whose purpose is to organize and supervise other programs.

Why Distributed AI Agents Were So Hard to Design

Within Google, she said, the problem had been approached from several directions. Each method had its defenders, but no single design had ever commanded full agreement. What made the episode notable was the simplicity of the test. The instructions she gave the system were short and unadorned, amounting to only a few paragraphs. 

Image credit: Jaana Dogan / X

She later clarified that the description was intentionally stripped down, relying only on ideas already known outside the company, since internal details could not be shared. Even so, the result closely resembled, in structure and intent, the system her team had spent many months trying to assemble.

Limits, Access Restrictions, and Industry Competition

Jaana Dogan is careful to add that the result was not flawless and would still need human attention. The system, she says, does not remove the need for judgment. For those who doubt the value of coding agents, her advice is simple: try them where you already know the ground well enough to see both their strengths and their limits.

When asked whether Google makes use of Claude Code, she replied that its use is restricted to open-source work and barred from internal projects. Another reader pressed her on when Gemini might reach a similar standard. Her answer was brief and direct: the effort is underway, both in the models themselves and in the surrounding systems that support them.

She also rejected the idea that progress in this field must come at someone else’s expense. The industry, according to her, has never operated as a zero-sum game. Recognizing good work, even when it comes from rivals, is simply honest. Claude Code, in her view, is a strong piece of engineering, and seeing it succeed has made her more eager to push her own work further.

AI Coding Tools Advanced Faster Than Expected

Looking back, she traced how quickly AI-assisted programming has moved. In 2022, such tools could manage little more than single lines of code. A year later, they were filling out whole sections. By 2024, they could move across files and assemble small applications. Now, in 2025, they are capable of shaping and reorganizing entire codebases.

What stands out most is how wrong earlier expectations proved to be. In 2022, she did not believe that what arrived over the next two years could ever be made reliable at scale. In 2023, the present moment still seemed far in the future. The gains in quality and speed, she concluded, have gone well beyond what most people thought possible.

Claude Code: Workflow Lessons From Its Creator

At around the same time, Boris Cherny, the engineer behind Claude Code, set down his own account of how the tool ought to be used. His chief point was a simple one: the system works better when it is given some means of checking itself. A simple mechanism for verification, he wrote, can multiply the usefulness of the result, often by a wide margin.
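Cherny’s point about verification can be made concrete with a small, entirely hypothetical example: a fast, deterministic check script that exits zero on success gives a coding agent a machine-readable signal for whether its last edit actually worked. The function and tests below are invented for illustration, not his setup; only the shape of the loop matters.

```python
# Illustrative sketch of a self-check an agent can run after each edit.
# The slugify function and its checks are hypothetical examples; the
# point is the machine-readable exit code (0 = pass, 1 = fail).

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def run_checks() -> bool:
    """Run a few fast, deterministic assertions on the code under edit."""
    checks = [
        slugify("Hello World") == "hello-world",
        slugify("  spaced   out  ") == "spaced-out",
        slugify("") == "",
    ]
    return all(checks)

if __name__ == "__main__":
    # An agent (or CI) can run this file and branch on the exit status.
    raise SystemExit(0 if run_checks() else 1)
```

Because the signal is binary and cheap to compute, the agent can re-run it after every change and keep iterating until it passes, rather than waiting for a human to judge the result.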

Boris Cherny advises beginning most sessions not with code, but with a plan. The user should go back and forth with the system until this outline is clear and settled. Once that groundwork is done, the machine is usually able to complete the task in a single, uninterrupted run. For work that repeats itself, he relies on short commands and specialized sub-agents, each assigned a narrow duty such as cleaning up code or running tests.
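Custom slash commands in Claude Code are plain Markdown files kept under `.claude/commands/`, where the file name becomes the command name. A minimal sketch of the kind of repeatable, narrow-duty command Cherny describes might look like this; the name and contents are illustrative, not his actual setup:

```markdown
<!-- .claude/commands/cleanup.md — run in a session as /cleanup -->
Review the current diff and clean it up:

1. Remove dead code, unused imports, and leftover debug statements.
2. Make names consistent with the surrounding code.
3. Run the test suite and report any failures before finishing.
```

Sub-agents follow a similar pattern: a short definition scoped to one duty, such as running tests, so the main session can delegate that work without diluting its own context.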

When the job grows longer, his approach becomes more methodical. He sets background agents to review the work after it is finished, looking for errors or weak points. At the same time, he often runs several instances in parallel, each handling a different part of the problem. By default, he works with the Opus 4.5 model, which he finds steady enough to support this kind of disciplined division of labor.

How Claude Code Fits Into Everyday Engineering & Why Experience Still Matters

In day-to-day work, Cherny’s team makes a habit of treating the tool as another participant rather than a distant aid. During code reviews, they call on it directly within colleagues’ pull requests, asking it to supply missing documentation or clarify intent. He also notes that the system does not operate in isolation: it connects readily with outside tools such as Slack for communication, BigQuery for examining data, and Sentry for tracing errors, so it can be drawn into the ordinary flow of engineering work.
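Integrations of this kind typically go through the Model Context Protocol (MCP), which Claude Code can read from a project-level `.mcp.json` file. A minimal sketch of what wiring up one such tool might look like; the server package name here is invented for illustration, not a real published package:

```json
{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": ["-y", "sentry-mcp-server"],
      "env": { "SENTRY_AUTH_TOKEN": "${SENTRY_AUTH_TOKEN}" }
    }
  }
}
```

Once a server is listed, its tools become available inside a session, which is what lets the assistant pull error traces or query data without leaving the terminal.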

After her remarks drew more attention than she had expected, Jaana Dogan returned to explain what she had meant more clearly. Her aim, she said, was to strip away the excitement and state the facts. Over the past year, Google has produced several versions of the same underlying idea. Each came with its own compromises, and none clearly surpassed the others. When the strongest ideas that endured this process are presented to a coding agent, it can assemble a respectable demonstration version in roughly an hour.

Dogan was careful to stress that this speed does not come from nowhere. It takes years to absorb experience, test ideas against real products, and discover patterns that will hold up over time. That slow accumulation of understanding is the hard part. Once it exists, the act of building becomes comparatively easy. “It’s totally trivial today,” she writes, “to take your knowledge and build it again, which wasn’t possible in the past.” Because the work can begin again from the ground up, the results are cleaner, unburdened by the weight of old decisions.

Final Words

What, then, should we make of software capable of re-creating a year of work in a lunch break? Maybe this: it was never the typing that was the trick. The engineer who spent twelve months wrestling with distributed agents was not wasting time; she was learning what to ask. Claude Code simply received those questions already refined, with the false starts and dead ends that make discovery so costly stripped away.

The irony is rich. Google, arguably the pioneer of this entire AI revolution, now watches an external tool do in sixty minutes what its own groups toiled over for seasons. But Dogan’s response, not bitterness but a shrug and a “let’s build better,” may be the most reasonable answer that can be given.