
AI and the Alternative Lending Infrastructure Gap

What 75 Operators Actually Said — and What It Means for COOs, CFOs, and Credit Leaders

Field research, Q2 2025 – Q1 2026. Based on 75 interviews across private credit, CRE bridge, hard money, revenue-based financing, asset-based lending, revenue factoring, and financial technology.

Deals die from friction, not math.

Jarred B. · Senior Real Estate Underwriter & Credit Analyst · 15 years in private lending, $500M+ underwritten

How to Read This Paper

75 interviews across a lot of different functions means not everything in here lands the same way for everyone. Before you dig in, here’s where your section is.

  • COOs and operations leaders — Start with Insight 1 (The 80/20 Ceiling) and Insight 3 (The Cliff Handoff). The bottom-line prescription was written for you.
  • CFOs and credit professionals — Insight 2 (The Sales-Underwriting War) and Insight 5 (AI as Policy Enforcement Layer) are where the panel spoke most directly to your seat.
  • CTOs and technology leaders — The Strategic Landscape section handles the build-vs-buy question and the integration failures your peers kept naming as actual deal-killers.
  • CRE brokers and relationship-first originators — Insight 4 (The Relationship Paradox) is written specifically for operations where the relationship is the product.
  • Revenue-based financing operators — There’s a dedicated revenue-based financing sub-section inside Insight 3. We treat revenue-based financing as its own operating model, not a footnote to private credit. Because it is.

The Market Got Big. The Operations Never Grew Up.

I’ve been on enough of these calls to recognize the pattern about ten minutes in.

The COO is running a $70M shop. Pipeline's fuller than it's ever been. Capital's actually available. And then they say some version of the same thing: we're just stuck doing it all by hand.

That’s the gap. Right there.

Private credit is sitting at $1.5 to $2 trillion in AUM, projected to hit $2.6 trillion by 2029. Morgan Stanley dedicated a full outlook to it. The broader U.S. alternative financing market is tracking toward $105 billion by 2029 at 13.2% annual growth. Non-bank lenders now originate 67.5% of new mortgage loans — a share that once belonged entirely to regulated depositories.

The deal flow is real. The capital is real. The growth is real.

The operations behind it never grew up.

Freddie Mac’s 2024 Cost to Originate Study put average loan origination costs at roughly $11,600 per loan — up 35% over three years — with 67% of that cost going to labor. Then $957 billion in CRE loans matured in 2025, a historic wave of transitional assets needing bridge capital at exactly the moment conventional lenders pulled back. More files. More complexity. More manual work. Flowing into shops that never fundamentally changed how the work gets done.

For the COO running a $50M to $500M shop, the $2 trillion private credit boom doesn't feel like a tailwind. It feels like more volume entering a process that was already breaking.

There’s no calendar in this business. There’s whatever’s in the pipeline and whatever fire needs to get put out before lunch.

Ayson S. is a partner at a boutique CRE fund focused on single-tenant retail. He describes his day as “max rep every time” — 9 AM to 8 PM, calling landlords by hand because in his words there is no secret sauce. Joshua P. is a VP at a boutique CRE fund working $3M to $15M apartment buildings. He wakes up at 6 AM on Mondays and hammers through weekend emails in 45 uninterrupted minutes before the day starts. His rule: every month spent working a deal has to return six months of income. Otherwise he won’t touch it.

That’s the texture of this business. Hustle. Discipline. Zero room.

And while operators are running this hard, the systems underneath are doing the opposite.

Here’s what actually happens inside these shops all day. Analysts and underwriters spend most of their time functioning as data entry clerks. There’s no standardized chart of accounts in this industry. So every underwriter is stuck in what the people I interviewed literally called PDF Hell. Scanned PDFs. QuickBooks exports. Mobile screenshots of rent rolls. Lance C., who worked inside a bridge lending operation, described borrowers submitting statements in every imaginable format with no intake standard — meaning every underwriter solves the same normalization problem from scratch, every single deal. Zachary S., who does advanced modeling on value-add multifamily, confirmed it from the analyst chair: teams are rebuilding the wheel for every deal, copy-pasting into one-off spreadsheets, running the same process over and over.
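The normalization problem Lance and Zachary describe is, at bottom, a mapping problem: every source format has to land in one target schema before any underwriting logic runs. A minimal sketch, with hypothetical field names standing in for the two source formats:

```python
from dataclasses import dataclass

# Illustrative target schema for a normalized statement line item.
# Field names are hypothetical; the point is that every source format
# maps into ONE shape, so the normalization is solved once, not per deal.
@dataclass
class StatementLine:
    date: str          # ISO 8601
    description: str
    amount_cents: int  # signed; negative = debit

def from_quickbooks(row: dict) -> StatementLine:
    """QuickBooks-style CSV export: one signed decimal amount column."""
    return StatementLine(
        date=row["Date"],
        description=row["Memo"].strip(),
        amount_cents=round(float(row["Amount"]) * 100),
    )

def from_bank_pdf(extracted: dict) -> StatementLine:
    """OCR'd bank PDF: separate debit/credit columns, amounts as strings."""
    debit = extracted.get("debit") or "0"
    credit = extracted.get("credit") or "0"
    return StatementLine(
        date=extracted["posting_date"],
        description=extracted["detail"].strip(),
        amount_cents=round(float(credit) * 100) - round(float(debit) * 100),
    )
```

One adapter per intake format, one schema downstream. Everything after this point, spreading, covenant checks, monitoring, runs against `StatementLine` and never sees the mess.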

Highly paid people doing the lowest-leverage work in the building. And that’s exactly where human errors enter the credit decision.

[02]

The Algorithmic Shift

What AI Is Also Doing to Your Portfolio

Nobody in this space wants to say this out loud. So I will.

AI isn’t just an opportunity for alternative lenders. It’s a threat to their portfolios. And those two things are happening at the same time.

In February 2026, private credit markets got rattled. Analysts started warning that AI was compressing margins and weakening debt-service coverage on software borrowers. Software represents 17 to 25% of private credit deals by count — that’s not a niche. UBS modeled an aggressive disruption scenario where U.S. private credit default rates climb to 13%, nearly double the stressed estimate for leveraged loans. Fitch recorded a 9.2% default rate across private credit in 2025. The New York Times ran the headline: “Once the Hottest Bet on Wall St., Private Credit Has Started to Crack.”

The stakes here aren’t abstract.

If you’re sitting in the C-suite of an alternative finance shop right now, the question isn’t whether you use AI anymore. That debate is over. The question is whether you use it to fix your operations before it cracks your portfolio.

That’s what this research is actually about.

9.2% · Private credit defaults, 2025 (Fitch)
17–25% · Of deals exposed to software borrowers
[03]
Insight 1

The 80/20 Ceiling

Highest relevance: Private credit · CRE bridge · Hard money · Asset-based lending · Revenue-based financing. Universal pattern across all segments.

Seventy-five interviews. Different segments, different shop sizes, different workflows. One pattern showed up every single time.

Every operator who tried to automate something hit a ceiling at around 80% of the workflow. Every one.

Matthew K. spent six years at Deloitte and another six building financial technology. Here’s how he put it:

You can automate 80% of the data processing and give the underwriter a massive head start — and more consistent. But the system has to be designed to flag the 20% that needs human judgment instead of guessing at it. The automation that tries to replace the underwriter fails. The automation that makes the underwriter faster works.

Matthew K. · CTO, Private Credit Operator · Six years at Deloitte; six years building financial technology

That’s not a one-off. Bank statement parsing. Covenant monitoring. Stacking detection. The first version looks good in the demo. Production breaks it. I’ve seen this over and over.

Zachary S. walked through his portfolio monitoring build — a system designed to pull operating statements and rent rolls into standard templates, auto-calculate DSCR and debt yield, and flag covenant breaches. The first version fell down in practice because the inputs weren’t standardized. Different property managers. Different file formats. Missing fields. His conclusion was the one I kept hearing across these interviews:

You have to clean up and standardize the data and the processes first. Otherwise the automation just breaks — and once trust is gone, the project is dead.

Zachary S.

Right. And once trust is gone, good luck getting the team back on board.

The architecture that actually works is augmentation, not replacement. The agent handles the data-entry work that consumes roughly 70% of every file and surfaces risk signals a human would miss in manual review. The underwriter still makes the credit decision — because that decision depends on context, relationship, and qualitative judgment that doesn’t appear in the bank statements. That part doesn’t automate. It shouldn’t.
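A minimal sketch of that division of labor, in the spirit of Zachary's monitoring build: compute DSCR and debt yield when the inputs are clean, and route anything incomplete to the underwriter instead of guessing. The function name, covenant level, and flag wording are all illustrative:

```python
from typing import Optional

def spread_and_flag(noi: Optional[float],
                    annual_debt_service: Optional[float],
                    loan_amount: Optional[float],
                    dscr_covenant: float = 1.25) -> dict:
    """Auto-calculate DSCR and debt yield from a normalized operating
    statement. Anything incomplete is flagged for human review rather
    than guessed at -- the design Matthew K. describes above.
    (Illustrative threshold; covenant levels vary by credit policy.)"""
    flags = []
    if noi is None or annual_debt_service is None:
        flags.append("missing inputs: DSCR not computable, route to underwriter")
        return {"dscr": None, "debt_yield": None, "flags": flags}
    dscr = noi / annual_debt_service
    debt_yield = noi / loan_amount if loan_amount else None
    if dscr < dscr_covenant:
        flags.append(f"covenant breach: DSCR {dscr:.2f} < {dscr_covenant}")
    return {
        "dscr": round(dscr, 2),
        "debt_yield": round(debt_yield, 4) if debt_yield else None,
        "flags": flags,
    }
```

The credit decision still happens on the other side of this function. The automation's only jobs are the arithmetic and the honesty about what it couldn't compute.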

The heatmap below shows where in the loan lifecycle this plays out most sharply. Bank statement spreading and servicing/monitoring are the two stages with the highest combination of AI readiness, operational pain, and lowest human judgment requirement. Those are your near-term targets.

Fig D. AI Opportunity Heatmap Across the Loan Lifecycle (score 1–5 per dimension)

Stage                    | AI Readiness | Operational Pain | Human Judgment | Automation Attempted
Lead & Owner Research    |      4       |        3         |       2        |          3
Deal Screening & Intake  |      4       |        4         |       2        |          3
Bank Stmt Spreading      |      5       |        5         |       1        |          4
Underwriting Decision    |      2       |        3         |       5        |          1
Committee Review         |      1       |        2         |       5        |          1
Closing & Docs           |      3       |        3         |       3        |          2
Servicing & Monitoring   |      4       |        5         |       2        |          2
Renewal & Workout        |      3       |        5         |       3        |          1

Scale: 1 = low, 3 = mid, 5 = high. Source: StarterStack.ai Field Research (2026)

What does the ROI math actually look like? McKinsey’s 2025 Global Banking Annual Review found that AI could bring gross cost reductions of as much as 70% in certain categories — but recommends modeling for net cost reductions of only 15 to 20% because of offsetting technology costs and the irreducible need for human oversight in high-judgment workflows. The Citizens Bank 2025 CFO survey found that midsize companies report an average 35% ROI on AI investments — though that number should be treated as directional, not prescriptive for alternative lending specifically. Same survey found 61% of CFOs agree AI has made financial processes easier, up from 38% in 2024.
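To make the gross-versus-net distinction concrete, here is the arithmetic on a single loan, using the Freddie Mac per-loan cost and labor share cited earlier. The automatable fraction and the technology-cost offset are assumptions chosen only to show the shape of the calculation, not field findings:

```python
# Illustrative gross-vs-net savings model for one originated loan.
# The $11,600 per-loan cost and 67% labor share are from the Freddie Mac
# study cited above; the other two inputs are assumptions.
cost_per_loan = 11_600
labor_share = 0.67
automatable_share = 0.70   # assumed: share of labor work automation can absorb
tech_offset_share = 0.70   # assumed: gross savings consumed by tech + oversight

gross_savings = cost_per_loan * labor_share * automatable_share
net_savings = gross_savings * (1 - tech_offset_share)

print(f"gross per-loan savings: ${gross_savings:,.0f}")  # ~$5,440
print(f"net per-loan savings:   ${net_savings:,.0f}")    # ~$1,632
```

Under these assumptions the net figure lands near 14% of total per-loan cost, which is roughly the neighborhood of McKinsey's 15 to 20% net guidance. The lesson is in the gap between the two lines, not the exact numbers.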

The 80/20 ceiling is real. The ROI math is starting to work. The architecture that produces those returns is augmentation, not replacement.

[04]
Insight 2

The Sales-Underwriting War

Highest relevance: Revenue-based financing · Private credit · CRE bridge · Asset-based lending. All segments where sales and credit functions are organizationally separate; the four above showed the sharpest version of this conflict.

The most important thing I heard across these interviews wasn’t technical. It was organizational.

Ryan C. is a credit and AR professional at an alternative lender. He walked me through an attempted rollout of an automated email process — a system that would ask existing clients to refresh their documentation once a year so the company could reassess credit risk. Standard portfolio review. The C-suite supported it. The sales team killed it.

Sales argued the process would put unnecessary strain on existing clients and damage their position of trust. The compromise was a manual selection process. Sales got effective veto power over documentation requests on lower-risk accounts. The automation the credit team designed to standardize portfolio review became a system where the salespeople whose compensation depends on keeping the borrower happy got to decide which borrowers received the documentation request.

More than 60% of the lenders I interviewed described some version of this. The exact mechanism varied. The pattern didn’t.

Here’s why this conflict is structural, not personal — and this is the part most people miss.

Sales teams and risk teams are paid to do different jobs. Sales gets paid to bring in volume and keep the relationship intact. Risk gets paid to keep the portfolio clean and avoid losses. Both are doing their jobs correctly. The conflict between them isn’t a bug in the org chart. It’s the org chart working as designed.

AI automation breaks the truce. Automation forces transparency on every account, runs the same process every time, enforces policy by default. The flexibility the sales team relies on to keep deals alive is the same flexibility the risk team has been trying to remove for years. Automation forces that trade-off into the open, where everyone has to deal with it.

William M. is a Chartered Financial Analyst with 25 years in private credit. He said it with the kind of clarity that only comes from watching this play out for two decades:

People need to be forced to follow policy. Emotion always gets in the way — including greed and fear.

William M. · Chartered Financial Analyst · 25 years in private credit

Before you deploy any policy enforcement system, three questions have to be answered. Who is authorized to override a flag? What documentation is required for the override to be valid? Where does that documentation live so a regulator or auditor can find it? If you don’t have answers to all three before you go live, you don’t have a policy enforcement system. You have a list of rules nobody’s accountable for.

Xeina, a credit professional with experience inside bridge lending operations, named the failure mode directly:

An exception that isn’t documented isn’t a judgment call — it’s a liability waiting for an auditor to find it.

Xeina · Credit professional, bridge lending operations

The vendors building these systems love to describe the override capability as “flexibility with accountability.” What they’ve actually built is a system that produces a paper trail of how many times the policy was overridden. That’s only useful if someone’s reading the trail and acting on it. Most shops aren’t.

The regulatory environment is closing the gap between “we did our best” and “you violated policy” faster than most operators realize. The CFPB has stated directly that there are no exceptions to federal consumer financial protection laws for new technologies. Courts have gone further: an institution’s decision to use algorithmic or machine-learning decision tools can itself constitute a policy that produces bias under the disparate impact theory of liability.

For alternative lenders, this creates a real squeeze from both sides. Manual processes carry the risk of inconsistency and the documentation gaps that come from sales-driven exceptions. Poorly built automation creates a different liability — the algorithm itself becomes evidence of systemic bias. The lenders who’ve navigated this successfully stopped trying to align the sales and risk teams through culture work. They changed what the automation does. They built systems where they can show a regulator and their investors exactly why every decision was made and which inputs drove it.

One note on regulatory applicability: CFPB exposure varies significantly by product type. Consumer mortgage faces the most direct scrutiny. Commercial revenue-based financing and CRE bridge lending operate under a different regulatory framework. Verify your specific exposure before treating this section as universal compliance guidance.

[05]
Insight 3

The Cliff Handoff and the Back-Office Investment Gap

Highest relevance: Private credit · CRE bridge · Hard money. The revenue-based financing version follows in the sub-section below.

Everyone wants to talk about the front end. The intake. The underwriting automation. The document extraction. That’s where the demos live, so that’s where the attention goes.

The most expensive problem in this space is sitting on the other side of the close. And almost nobody’s funding it.

Dave H. is an actuary with more than 20 years in insurance and finance. He named it directly:

The AI focus tends to be on the front end rather than the back end. We haven’t properly mapped out the investment on the back end and what it can actually truly do to our front end. That’s a real knowledge gap right now.

Dave H. · Actuary · 20+ years in insurance and finance

That’s the gap. And it’s costing these shops more than they know.

Afnan A. is a CFO at an alternative lender. He described what he calls the Cliff Handoff between origination and servicing — and when he explained it I remember thinking, yeah, I’ve seen this exact thing. The full underwriting file sits in the origination system. The bank statement spread, the borrower’s tax returns, the covenant calculations, the underwriter’s notes from the borrower call — all of it stays locked there. The team running the loan for the next three years doesn’t get any of it.

A payment is missed or a covenant trips, and the portfolio team starts from scratch. They reconstruct the credit picture by hand from documents the borrower has to resend. Lance C. tried to fix this directly — connecting financial spreading to servicing so covenant compliance could be tracked automatically. It didn’t work. Poor data quality. Borrowers submit statements in every imaginable format. The technology to standardize unstructured documents existed. His company just hadn’t funded the work to build a system that could handle the variation real borrowers actually submit.

The Cliff Handoff doesn’t just lose the data — it loses the decision. And you cannot manage a portfolio you don’t understand.

Afnan A. · CFO, alternative lender

I haven’t heard a cleaner way to say it.

Here’s what the fix actually requires. The Cliff Handoff isn’t a communication problem. It’s a systems architecture problem. Three things have to be true before any monitoring automation can function.

  • Underwriter annotations have to be structured fields tied to specific covenants and risk variables — not free-text comments buried in email threads. A note that says “borrower mentioned renovation running behind” is not monitorable. A structured field that tags a draw-timeline covenant with a named risk flag is.
  • The origination system has to have a defined data contract with the servicing system — not a one-time export at close. The handoff should be a live data relationship, not a file transfer.
  • The monitoring trigger schema has to be designed at origination, before close — so the loan structure is monitorable by design, not retrofitted after the fact. If you’re retrofitting, you’ve already lost.
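The distance between a free-text note and a monitorable annotation can be shown in a few lines. Every name in this sketch is hypothetical; it illustrates the shape of the data contract, not any particular vendor's schema:

```python
from dataclasses import dataclass, field

# A free-text note ("borrower mentioned renovation running behind") is not
# monitorable. A structured annotation tied to a named covenant is.
@dataclass
class CovenantAnnotation:
    covenant_id: str       # e.g. "draw-timeline"
    risk_flag: str         # e.g. "renovation-delay"
    threshold: float       # trip point, defined at origination
    metric: str            # which servicing metric to watch
    underwriter_note: str  # the context that travels with the loan

@dataclass
class LoanHandoff:
    loan_id: str
    annotations: list = field(default_factory=list)

    def triggers(self):
        """What servicing must watch -- defined before close, not retrofitted."""
        return [(a.metric, a.threshold, a.risk_flag) for a in self.annotations]

loan = LoanHandoff("L-0001")
loan.annotations.append(CovenantAnnotation(
    covenant_id="draw-timeline",
    risk_flag="renovation-delay",
    threshold=0.5,              # e.g. share of budget drawn by month 6
    metric="draw_pct_month_6",
    underwriter_note="Borrower flagged renovation running behind at close.",
))
```

The servicing system subscribes to `triggers()` rather than receiving a static spreadsheet, which is the difference between a live data relationship and a file transfer.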

Once origination context travels to servicing, the next step is using that context to continuously re-score the loan against the original underwriting thesis. Alex V. is a portfolio professional with advanced AI modeling experience. He framed the distinction in a way that stuck with me:

A delinquency flag tells you the risk already happened. A monitoring system worth building tells you the underwriting thesis is no longer true — before the borrower knows it either.

Alex V. · Portfolio professional with advanced AI modeling experience

That’s the shift. Risk deteriorates before payments are missed. A monitoring system built on the Cliff Handoff fix is the foundation for re-underwriting a loan in real time rather than waiting for the covenant to trip. A loan structure that can’t be monitored isn’t a credit decision. It’s a deferred workout.

So why does the back office stay chronically underfunded? Dave H. was direct about it:

The technology is out there. The money is out there. The problem is the shareholders and stakeholders are requesting very high return on equity, so it kind of crunches the expenses.

Dave H. · Actuary · 20+ years in insurance and finance

Capital flows to the parts of the business that produce visible volume next quarter. Back-office investment in standardizing data and connecting systems produces no visible volume next quarter, so it doesn’t get funded. The same cycle repeats. The AI project runs on incomplete data, underdelivers, and the board uses the result to cut the next AI budget. I’ve watched this loop play out more times than I can count.

McKinsey said it directly in their work on the next era of private credit: machine learning and AI can improve underwriting decisions and enable more effective portfolio monitoring — but only if the data foundation is already in place. The foundation is the work. The AI is what you build after.

The Revenue-Based Financing Version

Highest relevance: Revenue-based financing operators.

Revenue-based financing is not a smaller version of private credit. The deal volume, the advance duration, the remittance structure, the syndication mechanics, the UCC filing workflow — none of it maps onto a CRE bridge loan framework. Treating them as equivalent is one of the fastest ways to build the wrong thing.

A COO at a leading revenue-based financing firm described the unit economics problem to me and it’s one of those things that sounds insane until you realize how common it is. His shop has no standardized dashboard connecting acquisition cost per advance to default rates and net portfolio returns. The reports that would answer what a December cohort actually returns get assembled by hand whenever the CFO asks — at a firm with a board, a CFO, and a real ROE target. The single most important number in the entire business gets rebuilt from scratch every time someone asks for it. Every time.

Three dynamics specific to revenue-based financing are directly addressed by the Cliff Handoff insight.

  • The ISO layer is its own data quality and unit economics problem. ISO-originated deals carry embedded acquisition economics that sit outside the standard origination system. If those costs aren’t captured at intake, the unit economics calculation is permanently broken and you won’t know it until it’s too late.
  • Daily remittance data is a continuous monitoring signal that most revenue-based financing operators aren’t using that way. Revenue-based financing has a daily cash flow data point from every active advance. That signal, properly structured, is the earliest possible indicator of business health deterioration — weeks before a missed payment. Most shops use it only as a collections trigger. That’s leaving the most valuable signal in the building on the table.
  • Syndication position tracking — knowing what your December paper actually cost you by March, across ISO channels, syndication splits, and default curves simultaneously — is the source-of-truth problem specific to revenue-based financing that no standard tool currently solves. It’s the most commercially valuable dataset the sector hasn’t built yet.
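The daily-remittance dynamic (early warning, not collections trigger) can be sketched as a trailing-window comparison against the revenue level the advance was underwritten at. The window length and ratio thresholds here are illustrative assumptions, not industry standards:

```python
def remittance_health(daily_remits: list,
                      baseline_daily: float,
                      window: int = 14,
                      alert_ratio: float = 0.75) -> str:
    """Compare a trailing window of daily remittances to the underwritten
    baseline. Thresholds are illustrative -- the point is that deterioration
    shows up in this ratio weeks before a payment is actually missed.
    Returns 'healthy', 'watch', or 'deteriorating'."""
    if len(daily_remits) < window:
        return "watch"  # not enough signal yet; don't guess
    trailing = sum(daily_remits[-window:]) / window
    ratio = trailing / baseline_daily
    if ratio >= alert_ratio:
        return "healthy"
    if ratio >= 0.5:
        return "watch"
    return "deteriorating"
```

Run daily against every active advance, this turns the remittance stream from a collections trigger into the earliest-available health indicator the sub-section above describes.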

[06]
Insight 4

The Relationship Paradox

Highest relevance: CRE brokerage · Private money origination · Relationship-driven mid-market.

Not every corner of alternative finance is an AI opportunity in the same way. I want to be honest about that because I think a lot of people in this space won’t say it.

Ayson S. was direct:

I don’t automate anything. The commercial real estate sector is very, very difficult to automate. Every deal is different. No deals are the same. No tenants are the same. No owner is the same.

Ayson S. · Partner, boutique CRE fund

Joshua P. said it differently:

I honestly think prospecting can’t really be automated.

Joshua P. · VP, boutique CRE fund

He noted that direct mail still works for older property owners who don’t engage digitally and that the relationship itself is what produces the deal.

Here’s what I want to be clear about. These are not people who missed the AI moment. Ayson knows exactly what he would automate — he just hasn’t found anything that does it well at a price that makes sense for a two-person shop. Joshua studied data science and worked at Deloitte and Booz Allen. He’s making a deliberate choice about where AI fits in his workflow. Calling either of them technology-averse would be factually wrong and strategically useless.

The relationship is the product. The owner trusts the originator because they’ve been calling for years and know the building, the tenants, and the market the way only repetition can teach. That trust doesn’t transfer to a machine. Trying to automate it would damage the only asset the broker has.

Ayson’s actual framing of the problem was sharper than anything I could have written:

The relationship is the close. The research is the tax. Stop making me pay the tax by hand.

Ayson S. · Partner, boutique CRE fund

That sentence is the entire AI opportunity for this segment. Finding owners, detecting refinance windows, flagging partnership changes, building pre-call research packets — those are exactly the kinds of data-matching tasks AI handles well. A properly configured system pulling from public records, county assessor data, and CoStar could surface the same information at a fraction of the cost and time. The phone call still happens. The relationship still gets built. The research gets compressed from days of analyst work into a list available the same morning.

So what does that actually look like in practice? Pre-call research compression — owner history, recent transactions, refinance windows — pulled and ready before the originator picks up the phone. Refinance window detection based on loan maturity, rate environment, and property performance data. CRM data entry automation after calls, capturing what was said rather than requiring the originator to log it manually after an eight-hour day of calls. Contact discovery from public records, assessor data, and market sources. None of that touches the relationship. All of it removes the tax Ayson’s talking about.
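Refinance-window detection, for example, reduces to a simple date filter over whatever ownership and loan-maturity feed the shop already has. The record fields and the 12-month horizon here are hypothetical stand-ins, not any specific data provider's schema:

```python
from datetime import date

def refi_window(records: list, today: date, months_ahead: int = 12) -> list:
    """Flag owners whose loan maturity falls within the next N months.
    Record fields ('owner', 'maturity') are hypothetical stand-ins for
    whatever the assessor / public-records feed actually provides."""
    horizon = today.toordinal() + months_ahead * 30
    return [r for r in records
            if today.toordinal() <= r["maturity"].toordinal() <= horizon]

owners = [
    {"owner": "Smith LLC", "maturity": date(2026, 9, 1)},
    {"owner": "Jones Trust", "maturity": date(2029, 1, 15)},
]
hits = refi_window(owners, today=date(2026, 3, 1))
# Smith LLC matures within ~12 months; Jones Trust does not.
```

The output is a call list, not a call. The originator still picks up the phone, which is exactly the point: automate the tax, leave the close alone.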

The firms most resistant to AI in alternative finance are often the ones with the most to gain from having the research-and-administrative layer automated. The work they need handled is well-defined, doesn’t require their judgment, and compounds fastest when it gets handed off. That’s not a knock on how they operate. That’s the opportunity.

[07]
Insight 5

AI as Policy Enforcement Layer: Policy as Infrastructure

Highest relevance: CFOs · COOs · Credit professionals · Risk-side operators. Across all segments; the highest-resonance use case among readers most likely to control AI budgets.

This insight was buried in an early draft of this research. The field review surfaced it as the single most resonant finding among CFOs, COOs, and credit professionals — the exact audience controlling AI deployment decisions. It belongs in the primary findings.

William M. said it with the precision that only 25 years of credit discipline produces:

I do not want AI making credit decisions. I want AI making it impossible for an underwriter to submit a file with a missing covenant package.

William M. · Chartered Financial Analyst · 25 years in private credit

That sentence reframes the entire AI deployment question for the alternative lending C-suite.

The highest-ROI, lowest-risk AI application in alternative lending is not decision automation. It is making it structurally impossible for humans to deviate from documented credit policy without a logged, auditable rationale.

This is what Ryan C.’s sales team was fighting against. It is what the regulatory pincer in Insight 2 is moving toward whether operators build it or not. And it is the use case that most directly addresses the source of friction Jarred B. named in the opening of this paper — deals dying not from math, but from the organizational behavior around the math.

What policy enforcement AI actually does

  • Flags incomplete files before they reach the underwriter’s desk — eliminating the back-and-forth that consumes the first 48 hours of every deal
  • Makes it structurally impossible to advance a file without completing required documentation — removing the sales team’s ability to push partial submissions through
  • Logs every policy deviation with a named owner and a documented rationale — creating the audit trail regulators and investors will increasingly require
  • Surfaces pattern data on how frequently specific policy requirements are being overridden and by whom — giving the COO visibility that currently does not exist
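The four behaviors above collapse into one small piece of machinery: a completeness gate in front of the underwriter plus an append-only log. The required-document set and field names are illustrative, not a compliance standard:

```python
REQUIRED_DOCS = {"bank_statements", "tax_returns", "covenant_package"}  # illustrative

audit_log = []  # in practice: an append-only store an auditor can query

def advance_file(file_id: str, docs: set,
                 override_owner: str = None, rationale: str = None) -> bool:
    """A file advances only when complete -- or when an override carries a
    named owner AND a documented rationale. Every outcome is logged."""
    missing = REQUIRED_DOCS - docs
    if not missing:
        audit_log.append({"file": file_id, "action": "advanced", "missing": []})
        return True
    if override_owner and rationale:
        audit_log.append({"file": file_id, "action": "override",
                          "owner": override_owner, "rationale": rationale,
                          "missing": sorted(missing)})
        return True
    audit_log.append({"file": file_id, "action": "blocked",
                      "missing": sorted(missing)})
    return False

def override_rate_by_owner() -> dict:
    """The pattern data the COO currently can't see."""
    counts = {}
    for entry in audit_log:
        if entry["action"] == "override":
            counts[entry["owner"]] = counts.get(entry["owner"], 0) + 1
    return counts
```

The same log that blocks incomplete files is the one that answers the COO's pattern question: who is overriding what, and how often.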

The override architecture

Every policy enforcement deployment must answer three questions before going live: who is authorized to override a policy flag; what documentation is required for the override to be valid; and where does that documentation live in a system an auditor or regulator can access.

An exception without a documented rationale and a named owner is not a judgment call. It is a liability. The policy enforcement layer does not eliminate exceptions — alternative lending will always have legitimate ones. It makes every exception visible, traceable, and owned.

This is the use case that pays for the rest of the deployment.

[08]
Synthesis

First Principles

What the Research Suggests: Three Strategic Principles

Highest relevance: Operators across every segment. The synthesis of what separates breakthrough shops from the ones still stuck.

A note before this section: these are research-derived conclusions from the 75 interviews and the broader literature review. Where the analysis reflects my interpretation rather than direct field findings, I’ve said so explicitly.

Of the executives in this research, most sit in what I’d call the Experimenter quadrant. They’ve attempted automation, hit the data quality wall, and stalled. That’s not a failure of ambition. That’s a sequencing problem. These three principles describe what separates the shops that broke through from the ones that are still stuck.

Fig E. AI Maturity vs. Operational Impact (n=75). Scatter plot of 75 lending operators classified by Trust in AI (horizontal axis: low to high) and Depth of Use (vertical axis: narrow to integrated), falling into four quadrants: Skeptics (lower-left), Experimenters (lower-right), Optimizers (upper-left), Builders (upper-right). Roles plotted: fund CFO, underwriter, credit analyst, CTO, financial modeler, actuary, ops/COO. Source: StarterStack.ai Field Research (2026)

Principle 1 — Data Above All: The Infrastructure Argument

Every AI deployment that worked across these 75 interviews had one thing in common before anything else happened. The data foundation existed before the AI was deployed. Every deployment that failed at scale failed on this exact point. Not the model. Not the vendor. The data.

60% of AI pilots fail entirely due to poor data quality. McKinsey’s work on the next era of private credit found that AI can enhance underwriting decisions and portfolio monitoring — but only if the data foundation is in place. The industry’s current adoption rate of back-office automation sits around 30%, which means the infrastructure gap isn’t theoretical. It’s the majority operating condition in this sector right now.

Here’s the piece that almost never comes up in these conversations — and it should. Data security. For any regulated institution or institutional-LP-backed fund, data security is a procurement gate, not a feature. You need to be asking vendors the same questions you’d ask a borrower before you extend credit. Where does borrower data live during ingestion? Who has access, and under what access control framework? What are the retention policies for PII and financial documents? What’s the breach notification protocol and the liability structure if something goes wrong?

One institutional operator in the panel review said it in a way I keep coming back to:

The security posture of your AI vendor is part of your credit risk — treat it that way.

Institutional operatorPanel review participant

Due diligence on the vendor’s security architecture should be treated as seriously as due diligence on the vendor’s model performance. That’s not overcautious. That’s what your LPs are going to require eventually anyway.

Principle 2 — Judgment Is the Ultimate Data Set

The industry is focused on capturing discrete numbers. DSCR. LTV. FICO. And those matter. But the data that actually makes AI valuable in lending isn’t the what. It’s the why.

Why did the underwriter accept that specific owner add-back? Why did they waive that covenant? Today, that context lives exclusively in the underwriter’s head. When a deal closes, it becomes a victim of the Cliff Handoff — thrown over the wall to servicing as a static spreadsheet, completely deaf to the deal’s story.

Matthew K. named the principle that defines this better than I could:

The underwriter’s judgment is the product. Everything else is infrastructure.

Matthew K.CTO, Private Credit OperatorSix years at Deloitte; six years building financial technology

That’s the frame. AI infrastructure in lending doesn’t try to replace that judgment. It captures the judgment and the context behind each decision — as structured, queryable, auditable data — so that context travels with the loan through its full life. That’s the shift. Not automation. Continuity.
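One way to make “judgment as data” concrete is a decision record that travels with the loan instead of staying in the underwriter’s head. The field names below are hypothetical, not drawn from any interviewee’s system, and Python is used purely for illustration:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One underwriting judgment, captured as structured, auditable data."""
    loan_id: str
    decision: str   # e.g. "accept_owner_addback", "waive_covenant"
    rationale: str  # the "why" that normally lives only in the underwriter's head
    decided_by: str # named owner of the judgment
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Structured means queryable; serializable means it can travel with the loan.
record = DecisionRecord(
    loan_id="LN-1042",
    decision="accept_owner_addback",
    rationale="One-time legal settlement; verified against bank statements.",
    decided_by="underwriter.jsmith",
)
print(asdict(record))
```

The sketch’s entire point is the `rationale` field: it is the “why” that the Cliff Handoff normally discards when the deal becomes a static spreadsheet.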

Regulators are enforcing the explainability requirement now. The CFPB has stated there are no exceptions to federal consumer financial protection laws for new technologies. The explainability requirement isn’t a compliance cost. It’s a portfolio management tool. A system that can explain every credit decision is a system that can identify pattern deviations before they become losses. You get both.

Principle 3 — You Must Control the Stack (What That Actually Means)

I want to be precise about this one because I think it gets misread.

Controlling the stack means owning your data model and your decision logic. It does not mean building the infrastructure yourself. Those are two different things, and conflating them is how shops end up either over-building or completely exposed.

For a $50M to $200M AUM shop, building and maintaining in-house models isn’t realistic. The data science headcount, the MLOps infrastructure, the labeled training data — it’s not there and it doesn’t need to be. The real risk with point solutions is vendor lock-in on your most sensitive data and your most proprietary risk logic. The way to avoid that is making sure that whatever you deploy, you own the training data, the model outputs, and the audit trail — and can move them if the vendor relationship ends. That’s the test.

Matthew K. has lived both sides of this as a CTO. He put it precisely:

The underwriting judgment cannot change — the infrastructure around it absolutely can. But infrastructure means plumbing first — clean data contracts between your LMS, your CRM, and your document layer — before you build anything on top.

Matthew K.CTO, Private Credit OperatorSix years at Deloitte; six years building financial technology

Plumbing first. I’m still trying to get more shops to internalize that sequence, because the instinct is always to jump to the AI layer before the pipes are clean.

Controlling the stack also doesn’t mean replacing your stack. It means owning the data contracts between the systems you already run. The integration question isn’t “does this replace Encompass” or “does this replace Salesforce.” It’s: does this create a clean, auditable data handoff between the systems you already operate? That’s it. That’s the whole question.

Every AI project that died at the integration layer — and multiple CTOs and COOs named this as where projects actually fail — died because the data contract between the new AI layer and the existing LMS or CRM was never defined. The handoff was assumed. When it broke, there was no owner and no recovery path. Not a technology failure. A planning failure.
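A “data contract” can be something this plain: an explicitly defined handoff payload, validated before it crosses systems, with a named owner when it breaks. The fields, systems, and owner address below are hypothetical, a minimal sketch rather than anyone’s production schema:

```python
# Hypothetical contract for the AI-layer -> LMS handoff.
# Every field is named and typed, and breakage has a named owner, before go-live.
CONTRACT = {
    "loan_id": str,
    "dscr": float,
    "ltv": float,
    "doc_checklist_complete": bool,
    "exceptions": list,  # each exception carries its logged rationale
}
CONTRACT_OWNER = "ops-data@lender.example"  # hypothetical owner of this handoff

def validate_handoff(payload: dict) -> list:
    """Return a list of contract violations; an empty list means a clean handoff."""
    errors = []
    for name, expected_type in CONTRACT.items():
        if name not in payload:
            errors.append(f"missing field: {name} (owner: {CONTRACT_OWNER})")
        elif not isinstance(payload[name], expected_type):
            errors.append(f"wrong type for {name}: expected {expected_type.__name__}")
    return errors

# A handoff that would have broken silently now fails loudly, with an owner.
payload = {"loan_id": "LN-1042", "dscr": 1.28, "ltv": 0.72, "exceptions": []}
print(validate_handoff(payload))  # reports the missing doc_checklist_complete field
```

The design choice is the point: defining the contract up front turns an assumed handoff into one with a recovery path, which is exactly the failure mode the CTOs and COOs described.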

[09]

The Bottom Line

Start with the data. Align the people. Then deploy the technology. In that order.

I’ve said that sequence in a lot of rooms. The part that always needs more explanation is the middle one.

“Align the people” kept showing up in earlier versions of this research without enough operational content behind it. Based on the panel review and the field findings, here’s what it actually requires.

Map the comp structure conflict before you design the automation. The system has to make the right behavior easier than the wrong behavior — not just possible. If sales is compensated on volume and speed and the automation introduces friction into incomplete files, sales will find a way around it. Every time. The automation design has to work with the comp structure, not against it. You can’t culture your way around a misaligned incentive. I’ve never seen it work.

Build the exception documentation workflow before the exception happens. Every override needs a logged rationale and a named owner before the system goes live. Designing this after the first disputed exception is too late — and there will be a disputed exception. An exception that isn’t documented isn’t a judgment call. It’s a liability.

Pilot with the risk team first, not the sales team. Build trust in the output before the political fight starts. The risk team will stress-test the system honestly. They’ll find the edge cases. They’ll build the feedback loop that makes the output trustworthy. When the sales team eventually asks whether the system can be trusted, the answer needs to come from operators who’ve already used it under real conditions — not from a vendor demo. That distinction matters more than people think.
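In system terms, “before the exception happens” means the override path itself refuses an undocumented exception. A minimal sketch with hypothetical names; the shape of the check matters more than the specifics:

```python
class UndocumentedOverrideError(Exception):
    """Raised when an override arrives without a rationale and a named owner."""

def log_override(loan_id: str, rule: str, rationale: str, owner: str) -> dict:
    """Record a policy override; an undocumented exception never enters the book."""
    if not rationale.strip():
        raise UndocumentedOverrideError(f"{loan_id}: override of {rule} has no rationale")
    if not owner.strip():
        raise UndocumentedOverrideError(f"{loan_id}: override of {rule} has no named owner")
    return {"loan_id": loan_id, "rule": rule, "rationale": rationale, "owner": owner}

# The right behavior is not just easier than the wrong one; it is the only path.
entry = log_override(
    loan_id="LN-2210",
    rule="dscr_floor_1_20",
    rationale="Seasonal dip; trailing-12 DSCR clears the floor.",
    owner="credit.alee",
)
```

This is the enforcement version of the principle above: the workflow, not a policy memo, is what guarantees every override has a logged rationale and a named owner.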

Afnan A. spent three years trying to align origination and portfolio management. Here’s what he said about it:

Aligning the people is not a step you complete and move past. It’s a condition you maintain continuously — and it degrades the moment the system starts producing outputs that threaten someone’s autonomy or comp.

Afnan A.

That’s the one I want you to sit with.

The lenders who’ve already started this work aren’t moving faster because of better algorithms. They’re moving faster because they got their data and their people aligned first. That gap compounds quietly. And by the time you feel it, it’s already significant.

[10]

Open Questions

What We Don’t Know Yet

I want to be honest about where the gaps are because I think intellectual integrity matters more than making this research sound complete.

The borrower experience of AI-assisted underwriting is something this research doesn’t address at all. Every interview was conducted from the lender and operator perspective. The borrower’s, buyer’s, and seller’s side — how borrowers actually respond to AI-driven documentation requests, whether faster underwriting actually improves closing reliability for sellers, how borrowers behave differently when they know an algorithm is reviewing their file — is a critical gap. I don’t have that answer. That’s the next study that needs to happen.

The default rate differential between AI-underwritten and human-underwritten loans doesn’t have a clean answer yet. The industry has extensive data on traditional default models. Almost no longitudinal studies exist comparing default rates of AI-underwritten loans versus human-only underwritten loans across revenue-based financing, hard money, and private credit portfolios. This is the single most commercially valuable dataset that doesn’t exist yet. Until it does, ROI claims for AI underwriting are directional at best. Including ours.

Cross-sector knowledge transfer is the third gap that’s been sitting in front of this industry the whole time. Insurance is five to ten years ahead of alternative lending in back-office AI experimentation — as Dave H.’s perspective in these interviews makes clear. Hard money lending could learn from insurance’s actuarial modeling approaches. Revenue-based financing could learn from factoring’s real-time cash flow monitoring. These sub-sectors operate in almost complete intellectual isolation from each other, which means the industry as a whole is paying tuition on problems other sectors have already solved. That’s inefficient. It’s also avoidable.

[11]
Methodology + Disclosure

About the Research

This research was conducted through qualitative interviews with 75 alternative finance professionals spanning CRE brokerage, private credit, hard money lending, revenue-based financing, asset-based lending, revenue factoring, insurance, and financial technology. All findings were externally validated against industry reports from McKinsey, Morgan Stanley, S&P Global, Deloitte, EY, Citizens Bank, Freddie Mac, and regulatory filings from the CFPB.

Interview data was collected between Q2 2025 and Q1 2026. Interviewees are referenced by first name and last initial throughout this paper. Identifying professional details are used only where interviewees provided explicit consent to attribution.

This research was conducted and funded by StarterStack.ai, an AI-native infrastructure provider for alternative lenders. Readers should weigh that context when evaluating the strategic recommendations in the “What the Research Suggests” section. The field research findings — the 80/20 Ceiling, the Sales-Underwriting War, the Cliff Handoff, the Relationship Paradox, and the Policy Enforcement Layer — are grounded in primary interviews and validated against third-party research. The strategic principles represent the author’s interpretation of that data.

Deals die from friction, not math.

Jarred B.

The friction is the problem.

The friction is solvable.