Introduction
The Pearl Health Engineering team has been using AI for nearly three years now. Our early experiments with GitHub Copilot turned into a broader system that’s core to how we operate. This post describes what's working, what we've learned, and where we're headed.
History of AI Coding Agents at Pearl
Early Adoption
Pearl Health was an early adopter of AI. When GitHub Copilot launched in 2022, we immediately offered seats to anybody who wanted one. The only rules were:
- You had to actually use it. No idle seats.
- You had to share what you learned, either in Slack or through a live demo.
When Cursor arrived in 2023, we took the same approach. Eventually, we told the team that we’d fund any AI tool they wanted, as long as they followed those two rules. Our thinking was that it was too early to dictate a single option. The technology was evolving at an incredible pace, and every AI tool felt like a winner. Over time, the list grew to include Windsurf, Claude Code, JetBrains Junie, Gemini Code Assist, and more.
Today’s Setup
In June 2025, we looked at our overall AI adoption, and we saw indicators that our ultra-flexible approach wasn’t serving us anymore.
First, the proliferation of tools meant that engineers weren’t sharing best practices. This included prompting techniques (because everyone was using a different AI model), rule files, and tool-specific tips and tricks. Second, we saw that new hires were confused about which tools to choose, which slowed down adoption.
We made the decision to align everyone around Claude Code, with the option to use Cursor Pro for folks who preferred an IDE-driven experience. This has been extremely effective. AI adoption shot up across the team, and it’s now an essential part of how we operate. In fact, as of December, the “suggestion accept rate” for Claude Code is a whopping 96%. In other words, when Claude Code suggested a code change, engineers hit “accept” 96% of the time and actually used that code.

Centralizing on Claude Code also increases the future value that we’ll get from investing in tooling, like Claude Code plugins, skills, and sub-agents.
Beyond Coding
AI-assisted engineering isn’t just about writing code. We’re finding AI, and Claude Code in particular, to be a helpful tool throughout our development lifecycle.
Architecture and Design
Every project benefits from up-front design documentation. But it takes discipline to do this consistently. First, the lead engineer needs to actually write the document, which takes time. Next, it needs to be reviewed. The number of reviewers (and their seniority) depends on how complex the project is. There might even be live discussion, which requires meetings. Of course, the reviewers will have feedback (that’s why we do reviews in the first place!) and that feedback leads to additional work.
All of this is good practice, and it leads to better outcomes. But it’s a non-trivial investment. We’ve found that AI can streamline the process by serving as a peer architect during the design phase. With that in mind, in May 2025 we rolled out a new policy: using AI is mandatory for design documents.
In some cases, AI simply helps with completeness. For example, we maintain a template for High-Level Design documents (HLDs). Engineers are encouraged to upload their HLDs to AI along with the template, and have the AI point out anything that’s missing or under-specified. But the really big wins happen when we use AI as a thought partner. We can ask it questions like:
- What other approaches should we consider, and why?
- Which observability KPIs should we include?
- What are the biggest risks, and how might we mitigate them?
The benefit is obvious: engineers spend less time writing documents. But more importantly, the actual designs are more robust and complete! This has a force-multiplying effect: it means fewer iterations, which means less time overall for the individual and for the reviewers. And of course, better designs mean higher-quality systems that make us more effective as a team.
Repository Documentation
AI is great at paying down “documentation debt”.
Some of our code dates back to the early days of Pearl. Three years is a long time in the world of software. We have some critical, complex systems that haven’t been modified, because they “just work”. However, when it’s time to enhance or refactor this code, engineers might spend hours, or even days, absorbing the design and understanding the decisions that were made. We’ve found that Claude Code is excellent at generating comprehensive design documentation, explaining how something works, and even making educated guesses about why it works that way.
Claude Code generates Markdown files that we submit as PRs. These PRs are reviewed by a human, and then merged in. Once that’s done, we have well-written guides that a human or an AI can use as the starting point for their next round of changes.
I’ve found that this approach works especially well for refactoring. Let’s say you want to modify a complex repository or module. You could skip the documentation process, and simply ask the AI to make the changes. It would then need to pass through multiple steps:
- Understand the existing setup.
- Generate a plan.
- Execute the plan.
However, if you make step 1 its own prompt and capture its output as reusable Markdown files, you get better results for steps 2 and 3.
Subject-Matter Expertise
Thanks to our excellent documentation and CLAUDE.md files, Claude Code is really good at answering difficult questions – in particular, questions that used to require live conversations with SMEs. This has a force-multiplying effect. Engineers move faster, because they can independently make good decisions, and our Staff/Principal engineers are able to scale further.
We’ve found this to be especially helpful for frameworks that are shared across teams. One example is infrastructure-as-code (IaC). At Pearl, we use Terraform for every piece of our infrastructure. There’s a centralized DevPlatform team that owns the core modules and guardrails, but each team is accountable for writing their own service-specific modules and deploying their own code. The DevPlatform team writes excellent documentation (with the help of AI, of course). But when someone needs additional help, they usually head to Slack. Recently, however, we’ve seen that Claude Code is really good at being the “front-line support” for IaC. It can answer complex questions about why something works the way it does. When there are multiple ways to solve a problem, Claude Code explains the tradeoffs.
You may be wondering how this differs from your experience with online chatbots. After all, anyone can get Python or AWS advice from ChatGPT. But when you want to know about your team’s internal frameworks, you need AI that has direct access to your code.
Future Plans
We see some really exciting opportunities in 2026 and beyond. Our goal is to increase the level of complexity that AI can handle, without compromising on quality. The more we can offload to AI, the faster we can move as a team, and the more we can accomplish.
Some of the items on our list include:
- Incorporating MCP into our standard processes. Right now, teams are experimenting with MCP servers for Confluence, ClickUp, Figma, AWS, and other tools. We’d like to establish clear playbooks around using these integrations to accelerate the SDLC.
- Developing customized Claude Skills. These get especially interesting when you combine them with MCP. For example, a bug-fixing skill could be taught to pull ticket descriptions from ClickUp, combine them with observability data from Honeycomb, identify root causes, and then suggest possible fixes.
- Investing in customized code review agents that take our Pearl-specific standards into account. We’ve observed that AI code reviews with homegrown, “opinionated” prompts can significantly outperform the various AI review tools on the market today.
- Migrating from our CLAUDE.md files to the emerging AGENTS.md standard. In practice, this just means re-naming some files. But it keeps us aligned with where the industry is headed, and positions us to leverage future innovations based on the standard.
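The rename itself is easy to script. Here's a minimal sketch, assuming a POSIX shell and that we want a backward-compatible symlink so tools that still look for CLAUDE.md keep working (the directory layout below is illustrative, not Pearl's actual repo structure):

```shell
# Hypothetical sketch: adopt the AGENTS.md name, keeping a CLAUDE.md symlink
# for compatibility. Uses a throwaway directory as a stand-in for a real repo.
set -eu
repo=$(mktemp -d)
mkdir -p "$repo/services/billing"
echo "# Project guidance" > "$repo/CLAUDE.md"
echo "# Billing guidance" > "$repo/services/billing/CLAUDE.md"

# -type f skips symlinks, so re-running the script is a no-op.
find "$repo" -name CLAUDE.md -type f | while read -r f; do
  dir=$(dirname "$f")
  mv "$f" "$dir/AGENTS.md"          # rename to the emerging standard
  ln -s AGENTS.md "$dir/CLAUDE.md"  # old name keeps resolving via symlink
done
```

The symlink step is optional; once every tool in the chain reads AGENTS.md natively, the links can be deleted.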
We’re also looking at bigger, more experimental shifts.
The first is running multiple agents in parallel. I’m convinced that the industry is headed in this direction. If we get this right, it’ll be a huge unlock. Our initial experiments might use Git worktrees. But over time, we’ll likely explore cloud-based systems like OpenAI Codex (or a Claude-based equivalent).
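Worktrees are appealing here because each agent gets an isolated checkout of the same repository, so parallel edits never collide. A minimal sketch of the mechanics, using a throwaway repo and illustrative branch names:

```shell
# Hypothetical sketch: one Git worktree per agent, each on its own branch,
# all backed by a single underlying repository.
set -eu
repo=$(mktemp -d); sandboxes=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"

# Each worktree is a separate working directory: agent A and agent B can
# modify files simultaneously without stepping on each other's changes.
git worktree add "$sandboxes/agent-a" -b agent/a
git worktree add "$sandboxes/agent-b" -b agent/b
git worktree list
```

Merging the agents' branches back together afterward is ordinary Git; the hard part, and the open question for us, is orchestrating the agents and reviewing their combined output.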
The second is allowing non-engineers to submit “low-risk” PRs using AI. We’re piloting this approach with our UX designers, and the results are very promising. We’re interested to see whether we can expand to Product Managers. It would require very strict guardrails, and the AI would need to reliably generate high-quality PRs that don’t require extensive back-and-forth with engineers.
Finally, the AI labs will keep producing better models. Each iteration will handle increasingly complex tasks “out of the box”, and we aim to take full advantage of every improvement as it arrives.
Want to join us?
We’re looking for candidates who are excited about the opportunity to leverage AI even further. If you think this describes you, and you want to make a difference in U.S. health care, then come work for us! We’re actively hiring for engineering roles, and we’d love to meet you.