AI Finance

Domain Beats Syntax: Why Operators Are Winning the AI Era

Saul Mateos

An attorney, a cardiologist, and a roads worker walk into a coding hackathon.

Sounds like the setup to a joke. It's not.

Thirteen thousand people applied. Five hundred were selected. Three of the five winners weren't software engineers. The attorney won first place. He automated California's housing permit process because he'd spent years watching it grind people down. The cardiologist built a patient communication tool because he'd watched patients forget 80% of what he told them in appointments. The roads worker built infrastructure planning software for Uganda because he'd sat in the budget meetings where bad data killed real projects.

None of them knew how to code. All of them knew exactly what was broken.

Boris Cherny, head of Claude Code, said it plainly at the event: "Coding is largely solved."

The new question isn't "can you code?" It's "do you know what needs to exist?"

What the Hackathon Actually Proved

The results weren't a fluke or a feel-good story. They were a signal.

The winners didn't beat engineers by learning to code faster. They won because they came in with a decade of operational frustration and a clear mental model of the solution. The attorney didn't need to understand database schemas. He understood permit workflows. He knew which fields were redundant, which approvals were ceremonial, which handoffs caused three-week delays. That knowledge was the product. The AI was just the execution layer.

That's the inversion that most people are still processing.

For the past 30 years, having a technical idea wasn't enough. You needed either years of programming experience or a substantial budget to hire engineers who would spend weeks just understanding the domain before writing a single line of code. The knowledge gap between "I know what's broken" and "I can fix it" was enormous. Now it's not.

I Am Not an Engineer. I Build Software Anyway.

I run finance, HR, marketing, and technology at a company. My background is accounting and financial strategy, not computer science. And over the past year, I've built more functional software than I ever expected to build in my career.

Not toy demos. Actual tools that my team uses.

I built a financial dashboard that pulls real-time case data from Salesforce, runs variance analysis against budget, and flags anomalies before a human would catch them. I built an automated workflow that syncs provider agreements from our CRM into a Notion database, runs a compliance check, and routes exceptions for review. I built a marketing site from scratch, including the design system and deployment pipeline. I built HR documentation generators that take raw policy notes and produce formatted, reviewed documents. I built a portfolio analyzer that ingests lien data and calculates expected recovery values.

None of this required me to become a developer. It required me to know exactly what I needed.

How Vibe Coding Actually Works

The term "vibe coding" sounds like something a tech influencer invented. The practice is more grounded than that.

You describe what you need in plain language. The AI writes the code. You run it, see what it does, and tell the AI what's wrong or what's missing. You iterate. After a few passes, you have something working.

The skill isn't typing prompts. The skill is knowing when the output is right. That requires domain knowledge, not technical knowledge.

When I built the Salesforce dashboard, I wasn't debugging JavaScript. I was reviewing the output and saying "this variance calculation is wrong, it should be comparing actuals to the prior-year period, not the budget midpoint." The AI fixed it immediately. A junior developer who didn't understand financial reporting would have made the same mistake and might not have caught it.
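To make that correction concrete, here's a minimal sketch of the fixed logic in Python with pandas. The column names, figures, and the 10% flag threshold are illustrative assumptions, not the dashboard's actual schema.

```python
import pandas as pd

# Illustrative monthly figures; column names are assumptions, not the
# real Salesforce schema.
df = pd.DataFrame({
    "month": ["2025-01", "2025-02", "2025-03"],
    "actual": [118_000, 121_500, 109_300],
    "prior_year_actual": [102_000, 115_000, 112_400],
})

# The fix I described: measure variance against the same period last
# year, not against a budget midpoint.
df["variance"] = df["actual"] - df["prior_year_actual"]
df["variance_pct"] = df["variance"] / df["prior_year_actual"]

# Flag anomalies: any month that swings more than 10% year over year.
df["flagged"] = df["variance_pct"].abs() > 0.10

print(df)
```

The point isn't the code. It's that the only line that matters, the one defining what "variance" means, is a business decision, not an engineering one.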

The domain expert is the quality control layer. That's the job that AI hasn't automated.

There's also a practical element that makes this work: you don't have to understand everything the AI writes. You have to understand enough to know if the result is doing what you intended. In finance, that means I can evaluate a cash flow model output even if I can't explain the recursive function that generated it. The business logic is what I own. The implementation is what the AI owns.

Why CFOs and COOs Are Unusually Well-Positioned

Finance and operations leaders have something that most engineers don't have: a cross-functional view of where things break.

An engineer embedded in the product team sees product problems. A CFO sees the intersection of finance, HR, legal, sales, and operations, often in the same week. They sit in the meeting where the VP of Sales promises revenue that finance hasn't modeled. They see the HR system that doesn't connect to the payroll system. They see the reporting process that takes four days because three people manually reconcile spreadsheets.

Every one of those friction points is a buildable solution.

The CFO who spent ten years watching the monthly close take 12 days knows exactly what a 3-day close would require. They know which reconciliations are just matching two exports. They know which approval step is a bottleneck because one person is always traveling. They know which data transformation happens manually every single month with copy-paste.
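To show how small some of those reconciliations really are, here's a sketch of matching two exports in Python with pandas. The file names and columns are hypothetical stand-ins for whatever your systems produce.

```python
import pandas as pd

# Two hypothetical exports; file names and columns are illustrative.
gl = pd.read_csv("gl_export.csv")      # columns: txn_id, amount
bank = pd.read_csv("bank_export.csv")  # columns: txn_id, amount

# One outer merge surfaces all three cases at once: matched rows,
# GL-only rows, and bank-only rows.
recon = gl.merge(bank, on="txn_id", how="outer",
                 suffixes=("_gl", "_bank"), indicator=True)

# Matched IDs with different amounts: the rows someone eyeballs by
# hand every month.
mismatched = recon[(recon["_merge"] == "both") &
                   (recon["amount_gl"] != recon["amount_bank"])]

# Rows that exist in only one export.
unmatched = recon[recon["_merge"] != "both"]

print(mismatched)
print(unmatched)
```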

That knowledge is worth more than knowing React.

The Failures Were Part of It

I want to be honest about the path here, because the hackathon winners make it look clean and it wasn't.

My first attempt at automation was Make.com. I built workflows that connected email to Slack to spreadsheets. They broke constantly. Every time a field name changed or an API updated, the whole chain collapsed. I spent more time maintaining the automations than I spent doing the work manually.

Then I tried ChatGPT for financial analysis. It hallucinated numbers. Confidently. I once asked it to build a revenue projection based on historical data I pasted in, and it invented a growth rate that had no relationship to the inputs. I caught it because I knew what the number should be. Someone without that domain knowledge wouldn't have caught it.
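The defense against that failure mode is cheap: recompute the headline number directly from the inputs before trusting any projection built on it. A minimal sketch, with made-up revenue figures standing in for the pasted-in history:

```python
# Made-up annual revenue figures; replace with your own history.
revenue = [1_200_000, 1_380_000, 1_560_000, 1_790_000]

# Compound annual growth rate implied by the actual inputs.
years = len(revenue) - 1
cagr = (revenue[-1] / revenue[0]) ** (1 / years) - 1

print(f"Growth rate implied by the inputs: {cagr:.1%}")
# If a model's projection assumes a rate far from this one, it
# invented the number.
```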

Then I found Claude. Then Claude Code. The difference wasn't that Claude was smarter (though the reasoning was better). The difference was that Claude Code let me iterate in real time. I could see the output, correct it, and watch the fix happen in seconds. The feedback loop was tight enough that my domain knowledge became the bottleneck, not the technology.

That progression, from broken automations to hallucinating chatbots to a tool that actually matched my working speed, took about eight months. The hackathon winners compressed a similar journey into a weekend because the tools had caught up. But the principle is the same: the person who knows the problem will eventually find the right tool. The person who only knows the tool will keep building the wrong thing.

The Hiring Question Nobody Wants to Answer

If a domain expert with AI can now produce in a week what used to take a developer a month, how does that change hiring?

The honest answer is that it changes it a lot, and most organizations haven't thought it through.

For finance teams specifically, the calculus is shifting. The question used to be "do we hire another analyst or contract a developer?" Now it's "do we hire someone who deeply understands the business and is willing to experiment with AI tools?" Because that person, given six months, will build more than a developer who needs six months just to understand the domain.

I'm not arguing that engineers are unnecessary. Complex systems, security-sensitive infrastructure, and high-scale platforms still need people who understand the underlying architecture. But the broad middle ground (internal tools, reporting automation, workflow optimization, data pipelines for internal use) is increasingly accessible to domain experts.

The gap between "I know what's broken" and "I can fix it" keeps shrinking. Hiring strategies should reflect that.

Document First, Automate Second

One principle I've learned from building things myself: you cannot automate a process you don't understand at the manual level.

Before I built the Salesforce variance dashboard, I spent a week pulling the data manually and building the logic in Excel. I wanted to understand every edge case before I handed it to an AI. Which fields were null? Which records had conflicting dates? Which business rules had exceptions?
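That manual week translates directly into checks you can run on an export before automating anything. Here's a rough sketch of that profiling pass in Python with pandas; the file and column names are assumptions for illustration.

```python
import pandas as pd

# Load the same export you've been pulling by hand.
df = pd.read_csv("case_export.csv")

# Which fields are null, and how often?
print(df.isna().mean().sort_values(ascending=False))

# Which records have conflicting dates? Coerce bad values to NaT so
# they don't crash the comparison.
opened = pd.to_datetime(df["opened_date"], errors="coerce")
closed = pd.to_datetime(df["closed_date"], errors="coerce")
print(f"{(closed < opened).sum()} records close before they open")

# Which business rules have exceptions? List the statuses you expect
# and inspect whatever falls outside them.
expected = {"open", "pending", "closed"}
print(df.loc[~df["status"].isin(expected), "status"].value_counts())
```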

When I finally went to build the automated version, I had a specification that was detailed enough to produce a working first draft in an afternoon.

The operators who will get the most out of AI tools are the ones who are rigorous about the manual process first. Not because you'll always do it manually, but because automating a half-understood process just produces wrong answers faster.

This is actually a finance skill. Every good financial model starts with understanding the business logic before building the formula. The same discipline applies here.

The Competitive Advantage That's Available Right Now

Here's what I keep coming back to after the hackathon results.

The competitive advantage in this moment isn't technical skill. It's operational knowledge plus the willingness to experiment.

Every organization has a list of "someday" projects: the report that someone manually builds every month, the integration that two systems don't have, the dashboard that the board keeps asking for. That list exists because building things used to require a budget and a roadmap and a developer with capacity. Now it requires someone who understands the problem and is willing to spend a few days iterating with an AI.

Finance leaders who recognize this and start building, even imperfectly, even slowly, are accumulating a compounding advantage. Each tool they build teaches them what works. Each iteration makes the next project faster. The learning curve is steep at the beginning and then flattens quickly.

The people who wait for "more mature" AI tools or for their IT department to greenlight something are ceding ground to the people who are already building.

The hackathon wasn't just a competition. It was a preview of who wins the next ten years of work. The winners weren't the people who knew the syntax. They were the people who knew the problem.

That's an operator. That's you.

P.S. If you're a CFO, COO, or finance leader who's been curious about building your own tools but not sure where to start, the best first step is the most boring one: write down one manual process your team runs every month and document every step. That document is your prompt. The rest follows from there.

Want to talk about your finance function?

I spend 30 minutes with CFOs and finance leaders every week discussing how AI fits into their operations. No pitch, just a conversation.

Book a 30-Minute Conversation

or email us at hello@strategiq.so
