Your Borrower Didn't Sign Up for Vibe-Coded Lending

There is a moment that every lender I know would rather not think about.

A borrower hands over the file. Tax returns. Personal financial statement. Bank statements. Rent roll. Operating statements. A guarantor's Social Security number on page three of the application. Entity docs. Appraisals. Sometimes the mortgage on their house. Sometimes the college tuition schedule for their kids. In commercial real estate, we joke that a borrower gives us more intimate information in a loan file than they've ever given a doctor. Then they sign the signature page, slide it across the table, and trust that what happens next is safe.

That trust is the quiet load-bearing beam of this entire industry.

If it cracks, everything above it falls.

What Lovable Just Showed Us

This week, Lovable, one of the fastest-growing AI coding platforms in the world, became a case study in what happens when trust is not the first pillar of the business.

Reporting from Business Insider and The Next Web laid out what went wrong. Lovable's API left source code and database credentials exposed for 48 days after the company closed the bug report without fixing it. A separate study of over 1,600 Lovable-built applications found that roughly 170 of them had vulnerabilities allowing personal information to be accessed by anyone. Approximately 70% of Lovable apps had row-level security disabled entirely. Broader research cited in the same coverage estimated that up to 62% of AI-generated code contains vulnerabilities.

Jake Moore, global cybersecurity advisor at ESET, put it plainly in the coverage: "Vibe coding continues to accelerate bad defaults."

In a lending context, a "bad default" is not a settings toggle somewhere in an admin panel. It is your borrower's tax return sitting in a database with row-level security that was never turned on. It is a personal financial statement retrievable by anyone who guesses the right URL. It is a Social Security number one misconfigured API call away from the open internet.
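
For the technically inclined, "row-level security disabled" is worth seeing concretely. In Postgres, to take the common case, row-level security does not exist for a table until someone explicitly enables it and writes a policy. What follows is a minimal sketch, not any vendor's actual schema: a hypothetical loan_documents table keyed by institution, with every identifier and connection string a placeholder.

```python
# Minimal sketch: enabling Postgres row-level security on a
# hypothetical loan_documents table. All identifiers are illustrative.
import psycopg  # assumes the psycopg 3 driver

STATEMENTS = [
    # Off by default: without this, every row is visible to any role
    # that can query the table at all.
    "ALTER TABLE loan_documents ENABLE ROW LEVEL SECURITY",
    # Apply the policy even to the table's owner.
    "ALTER TABLE loan_documents FORCE ROW LEVEL SECURITY",
    # A session may only read rows belonging to its own institution.
    """CREATE POLICY institution_isolation ON loan_documents
           USING (institution_id = current_setting('app.institution_id')::uuid)""",
]

with psycopg.connect("postgresql://localhost/lending") as conn:
    for ddl in STATEMENTS:
        conn.execute(ddl)
```

The point is not the syntax. The point is that each of those statements is a decision someone has to make on purpose, and roughly 70% of the audited apps reportedly shipped without the first one.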

The Pattern Nobody Is Naming Out Loud

Here is what I am seeing in almost every conversation with lending teams right now.

Analysts uploading borrower tax returns into ChatGPT to "just get a quick summary." Loan officers feeding a borrower's personal financial statement into a generic AI tool to draft a credit memo faster. Credit teams building custom GPTs on top of frontier model APIs and pointing them at confidential deal folders. Boutique lenders standing up a shiny internal chatbot on top of an LLM because the vendor's demo last Tuesday looked fast.

None of these moves are malicious. Every one of them is well-intentioned. The people doing them are smart, competitive, and trying to keep up with an industry that is moving faster than their current tech stack can support.

That is exactly why this is so dangerous.

Because the bright shiny object is delivering immediate results, almost nobody is asking what happens on the other side of the API call. Where does that data actually live once it leaves the loan file? Is it stored? Cached? Logged? Fed back into a training loop? Accessible to the vendor's support team? Isolated from the data of another institution running the same tool down the street? And if that vendor, or the model under the hood, were to suffer a Lovable-style incident, how exactly would your borrower ever know, and what would you say to them when they asked?

Most lenders I talk to genuinely do not know the answer to any of those questions.

The Borrower's Perspective Is the Only One That Actually Matters

Step back into the borrower's chair for a second. I have been a borrower. I have signed a personal guarantee. I have handed over SSNs, tax returns, and spousal financials to lenders I barely knew, because that was the cost of getting a deal done.

When I signed that package, I was not thinking about the lender's AI stack. I was trusting that the institution on the other side of the table had built the vault that would hold my information. That they had locks on the doors. That they had the discipline to know the difference between tools that were built for finance and tools that were built for screenshots on Twitter.

The borrower's trust is not conditional on the lender's technology roadmap. It is an implicit promise. The moment a borrower finds out their 1040, their entity structure, and their Social Security number were silently fed through a general-purpose AI tool that was never architected for regulated financial data, that trust does not erode. It detonates.

And when it does, they are not going to blame OpenAI. They are not going to blame Anthropic. They are not going to blame the vibe-coded wrapper that sat between them. They are going to blame the name on the term sheet.

Your name. Your institution. Your bank.

Why "We're Using the Enterprise Version" Is Not the Answer

Every time I raise this in a lender meeting, someone says the same thing. "Don't worry, we're on the enterprise plan."

I want to be careful here, because the frontier labs behind Claude, Gemini, and GPT are extraordinary companies run by serious people. I use Claude ten hours a day. I am not writing this to trash any of them. And to be precise: yes, Claude Enterprise is SOC 2 Type II certified. So is Gemini Enterprise. So is ChatGPT Enterprise. That is table stakes for any credible enterprise SaaS in 2026.

What the enterprise tier does not change is the fact that the design center for a general-purpose AI platform is fundamentally different from the design center for a regulated lending workflow. And a SOC 2 report is only as useful as its scope.

Anthropic's SOC 2 covers Anthropic. OpenAI's covers OpenAI. Google's covers Google. None of them are scoped to "how a community bank's borrower NPI is extracted, routed, stored, logged, and retained inside a CRE lending workflow." When your vendor risk committee hands that SOC 2 to a bank examiner, the examiner is not evaluating the AI lab's controls against your OCC 2023-17 framework. They are evaluating the vendor chain as it applies to your borrowers, specifically. If the tool in that chain was built for general-purpose use, the scope of its report does not cover the question the examiner is actually asking.

SOC 2 is one layer. The full stack a regulated institution needs is wider. It includes GLBA-aligned handling of nonpublic personal information. Infrastructure-level data isolation by institution, not just a policy that says "we don't train on your data." Immutable audit logs tied to specific underwriting decisions that an examiner can trace end-to-end. Integration into your existing Third-Party Risk Management framework under OCC Bulletin 2023-17. Citation-level provenance on every extraction so a credit officer can defend any output against its source document. Fair lending controls that keep decisioning governed by your institution's policy, not the model's emergent behavior.
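
Two of those requirements, the audit log and the provenance, are concrete enough to sketch in code. The snippet below is illustrative only, my own hypothetical structure rather than any vendor's implementation: every extraction entry cites its source document and page, and every entry chains the hash of the one before it, so a record edited after the fact breaks the chain and announces itself.

```python
# Illustrative sketch of a hash-chained, append-only audit entry with
# page-level citation provenance. Field names are hypothetical.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ExtractionAuditEntry:
    institution_id: str  # whose borrower data was touched
    document_id: str     # e.g. the guarantor's 2023 Form 1040
    page: int            # page-level citation for the extracted value
    field: str           # e.g. "adjusted_gross_income"
    value: str           # what the model extracted
    actor: str           # the user or service that triggered it
    timestamp: str       # UTC, set at append time
    prev_hash: str       # hash of the previous entry: the chain

    def entry_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_entry(log: list[ExtractionAuditEntry], **fields) -> ExtractionAuditEntry:
    # Chain each new entry to its predecessor; editing any earlier
    # record invalidates every hash that follows it.
    prev = log[-1].entry_hash() if log else "genesis"
    entry = ExtractionAuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev,
        **fields,
    )
    log.append(entry)
    return entry
```

An examiner, or your own compliance team, can verify that log by recomputing the chain end to end. Nobody has to take the vendor's word for it.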

These things are not bolt-ons. They are foundational architectural decisions that have to be made on day one, when the company is being built, by founders who decided that the trust of regulated financial institutions was the number one pillar of the business. You cannot ship a platform for twenty-four months as a general-purpose tool and then retrofit it into a financial-institution-grade system. The architecture has to be intentional from the first line of code.

The companies that vibe-coded their way to a beautiful UI in ninety days are, almost by definition, the ones that did not make those decisions on day one.

The Compounding Cost of Doing This Casually

Here is the part that really should keep CROs and compliance officers awake.

A data exposure in a lending institution is not a single-point event. It compounds, in ways a breach at a consumer app does not.

Every borrower in the exposed file has a personal reputation in a small business community. Every CRE borrower is part of a sponsor network that talks constantly, about lenders, about deals, about who to work with and who to avoid. A single exposure of a single sponsor's personal financial statement travels faster through a CRE network than any email you could possibly send to contain it. Depositors read banking breach headlines and move money. Regulators show up with questions that are not optional. Examiners rewrite their next vendor risk review around the incident. Your existing borrowers reconsider whether to bring you the next deal. Your pipeline, which took years of relationship-building to assemble, reprices overnight.

That is the kind of event that ends careers and shuts down institutions. Not because any single person did anything egregiously wrong, but because the organization chose to casually adopt technology that was never built to carry the weight it was being asked to carry.

And unlike most operational risks, this one only gets worse with time. Every quarter, more lenders pipe more borrower data through more AI tools. Every quarter, more of those tools ship more features faster, often on top of third-party components that their own engineers did not write. Every quarter, the attack surface widens. The Lovable story is not a one-off. It is a leading indicator.

What Trust-First Actually Looks Like in Practice

I am not writing this to tell you to stop adopting AI. If you have spent five minutes on this site, you know that is not a position I could possibly hold. AI is the biggest leverage shift to hit commercial lending in my twenty years in this industry, and the lenders who adopt it intentionally are going to dominate the ones who do not.

What I am writing to tell you is that there is a difference between adopting AI and vibe-adopting AI. And the difference shows up in a handful of questions that any serious lender should be able to get cleanly answered by a vendor on the first call.

Is the platform SOC 2 Type II certified, with an audit report available under NDA? Is every institution's data architecturally isolated from every other institution's data, at the infrastructure level, not just the application permissions level? Is your deal data used to train models that will be served back to someone else? Is every document extraction tied to a page-level citation in the source document, or is the AI "sounding confident" without evidence? Can vendor employees browse your production data? Is every action inside the platform captured in an immutable audit log that a bank examiner will recognize? Does the platform comply with GLBA? Does it fit inside the OCC's Third-Party Risk Management framework without you doing custom remediation work?

If a vendor cannot answer those questions fluently, in writing, in a way your vendor risk committee can forward to your regulators, you are not looking at an AI platform for lending. You are looking at a general-purpose tool with a lending use case bolted on top. Lovable-by-another-name.

That distinction is going to define who wins and who loses over the next five years in this space. Not the sophistication of the model. Not the speed of the demo. Not the slickness of the UI. The trust architecture underneath.

The Bottom Line

Your borrower is not asking to see your vendor stack. They are not reading Business Insider. They are not going to know whether the AI that summarized their loan file was built for financial services or whether it was built for bedroom developers shipping side projects. They are trusting you to know the difference.

The lenders who take that trust seriously are going to inherit the next decade of CRE lending. The ones who chase the bright shiny object without asking what is under the hood are going to be the cautionary tales their peers reference in their own board decks three years from now.

When we built LenderBox, we made a decision that trust, security, and institutional-grade compliance were not features. They were the foundation. SOC 2 Type II certified. Fully siloed data architecture. Zero cross-institution training. Every extraction tied to a page-level citation. Immutable audit logs. GLBA-aligned. Built from day one for regulated financial institutions, not retrofitted from a general-purpose chatbot.

That was not a marketing decision. That was a borrower decision. Because the borrower who hands over the file is the only stakeholder who ultimately matters. Everything else, the product, the pricing, the pipeline, only works if the trust holds.

Lovable just reminded the whole industry what happens when it does not.

Do not be the next case study in that article.

Vijay Mehra is the CEO and Founder of LenderBox, the AI-powered intelligence platform for commercial real estate lending. With twenty years in CRE as both a technology founder and principal investor, including a prior PE exit with Rethink, a CRE deal management platform, he writes about the intersection of artificial intelligence, commercial lending, and the trust architecture that separates institutional-grade platforms from bright shiny objects.