The Intern Left. We Needed Another Way.
We lost our intern. Offshoring didn't work. So we built something else — an AI agent that now handles deal underwriting reconciliation in minutes instead of days. Here's exactly what we built.
Last year we were looking hard at ways to improve our deal turnaround. We had experimented with offshoring some of the analyst work for a while — it helped at the margins, but it wasn’t the answer. Over the summer we had an excellent intern who did genuinely strong work, producing models quickly and accurately. When that ended, we were back to the same question: hire another full-time analyst, or find a different way to close the gap.
Preston, our acquisitions lead, prompted me to look into AI. He knows my background — I spent a decade as a design engineer before moving into real estate — and he probably wondered why I wasn’t already pursuing these tools more proactively. Fair point.
The problem
The problem we were trying to solve was underwriting throughput. Specifically, every deal maps to our model a little differently. The seller’s chart of accounts is never the same twice. Their rent roll format, how they treat one-time items, whether payroll is one line or four — all of it varies. That variability is exactly what makes traditional automation fail. Macros and templates work when the input is consistent. Real estate deals aren’t. I had tried rule-based approaches before and hit the same wall every time: the moment a deal came in with an unfamiliar structure, the automation broke and a human had to take over.
What agentic AI changes is the ability to handle that variability through pattern recognition rather than rigid rules. More importantly, an agent can be trained on your specific workflow and learns from mistakes as you coach it. When it maps a category incorrectly, you correct it. It usually doesn’t make that mistake again.
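To make the coaching loop concrete, here is a minimal sketch of how a corrected mapping can be persisted so the agent reuses it on the next deal. The file name, schema, and function names are illustrative assumptions, not the firm's actual implementation:

```python
import json
from pathlib import Path

MAPPING_FILE = Path("coa_mappings.json")  # hypothetical persisted store of learned mappings

def load_mappings():
    """Load previously learned seller-label -> our-category mappings."""
    if MAPPING_FILE.exists():
        return json.loads(MAPPING_FILE.read_text())
    return {}

def map_account(seller_label, mappings):
    """Exact match on a normalized label; None means 'needs human review'."""
    return mappings.get(seller_label.strip().lower())

def record_correction(seller_label, our_category, mappings):
    """A human correction becomes a persistent rule the agent reuses."""
    mappings[seller_label.strip().lower()] = our_category
    MAPPING_FILE.write_text(json.dumps(mappings, indent=2))
```

The point is not the lookup itself but the feedback path: every correction lands somewhere durable, so the same mistake does not recur.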
The initial work was building that learning environment — establishing the structure of our model clearly enough that the agent had something to learn from. That setup takes real effort. But once it exists, training becomes iterative: run a deal through, review the output, correct what is wrong, run another. Each round the agent performs better. The improvement compounds in a way that is genuinely similar to onboarding a junior analyst — steady growth that accelerates once the foundational understanding is in place.
What we built
We built an agent — using Claude Code — that ingests the full document set for a deal: offering memorandum, operating statement, rent roll. It loads and reconciles the data across those sources, translates the seller’s chart of accounts to ours, and runs the same reconciliation process on the rent roll — normalizing unit types, flagging vacancy inconsistencies, identifying loss-to-lease. Each source is cross-checked against the others. Discrepancies get flagged before they reach the model.
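A cross-check of this kind reduces to comparing the same quantity as reported by two sources and flagging when they diverge beyond a tolerance. A toy version, with function and parameter names of my own choosing:

```python
def flag_discrepancies(rent_roll_total, op_statement_income, tolerance=0.02):
    """Flag when rent roll and operating statement disagree by more than
    `tolerance` (as a fraction of the operating statement figure)."""
    if op_statement_income == 0:
        return ["operating statement income is zero"]
    diff = abs(rent_roll_total - op_statement_income) / op_statement_income
    if diff > tolerance:
        return [f"rent roll vs operating statement off by {diff:.1%}"]
    return []  # empty list: sources reconcile within tolerance
```

The agent runs many checks like this across the document set; anything returned gets surfaced before the numbers reach the model.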
That’s not simple work. It requires holding the full picture in context simultaneously — which is exactly what large context windows now make possible, and exactly what breaks under deadline pressure when done manually.
On a recent 150-unit value-add deal, the agent caught three material issues on the first pass: an occupancy history that didn’t reconcile with the rent roll, a one-time insurance recovery buried in operating income, and stabilized NOI overstated by approximately 18% in the seller’s underwriting. What would have taken the better part of two days took about 20 minutes.
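The NOI check in particular is simple arithmetic once one-time items are tagged: recompute NOI with those items excluded and compare to the seller's stabilized figure. A sketch with illustrative numbers (not the actual deal's), using a made-up `(amount, is_one_time)` schema:

```python
def noi_overstatement(seller_noi, income_items, expense_items):
    """Recompute NOI excluding items tagged as one-time, then express the
    seller's figure as a fractional overstatement of the adjusted NOI.
    Items are (amount, is_one_time) tuples -- an illustrative schema."""
    income = sum(amt for amt, one_time in income_items if not one_time)
    expenses = sum(amt for amt, one_time in expense_items if not one_time)
    adjusted_noi = income - expenses
    overstatement = (seller_noi - adjusted_noi) / adjusted_noi
    return adjusted_noi, overstatement
```

A one-time insurance recovery buried in operating income is exactly the kind of item this strips out before the comparison.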
Why this wasn’t possible 18 months ago
I looked at this problem in 2023. The tools weren’t there.
What changed: in March 2024, context windows expanded to 200,000 tokens — enough to hold an entire property package in a single pass. In November 2024, Anthropic released the Model Context Protocol, a standard that lets AI agents connect reliably to external files, databases, and systems without fragile custom code. In February 2025, Claude Code made it practical for a firm without a software development team to build and deploy working agent systems. We built ours over a handful of weekends. It’s in production, processing real deals.
More recently, advances in agentic capabilities — reliable tool use, skills orchestration, the ability to hand off between tasks without losing state — put the remaining pieces in place. The context window got us in the room. The agent architecture made it work.
What doesn’t change
The agent doesn’t replace an analyst. It changes what the analyst does.
Someone still needs to manage it — running deals through, reviewing outputs, catching errors, continuing to coach it when something is mapped incorrectly. That oversight isn’t optional. The agent’s reliability is a direct product of the attention given to training and quality control.
What it handles is the data work: loading, reconciling across sources, translating, flagging. What it doesn’t handle is what to do with what the numbers reveal. Market judgment, broker relationships, the read on a seller’s motivation — none of that changes. What changes is that the analyst who previously spent hours on data reconciliation now applies that time to the work that actually requires them.
Next: how the same infrastructure problem shows up on the investor relations side, and what we built to solve it there.

Eric - this passage from you: "What it doesn't handle is what to do with what the numbers reveal. Market judgment, broker relationships, the read on a seller's motivation — none of that changes."
It reminds me that "people do business with people."
Your circumstance here is such an excellent example of how one aspect of the work speeds up, and will get done perhaps even better without a human, but at the front end of the process, and the back end of the process, are human interactions - people interacting with people, atop a foundation of relatedness. I have a hard time imagining that part going away. I have an easy time imagining how much more important it will be.
Thank you for your insights here 🙏