
Building Software That Survives the AI Wave

The SaaSpocalypse is real for some software, overblown for others. Here's what AI can't replicate and how to build for it.

S5 Labs Team · February 5, 2026

The iShares Expanded Tech-Software Sector ETF dropped 19% in a single month, falling nearly 30% from its September highs. ServiceNow lost $115 billion in market value since early January despite posting strong earnings. Adobe, Microsoft, Salesforce, SAP, and Oracle collectively shed over $730 billion in value. Analysts have a name for it now: the SaaSpocalypse.

The catalyst was specific. Anthropic released agentic plugins for Claude Cowork. OpenAI launched Frontier. Both announced HIPAA-compliant healthcare tools that compete directly with Veeva and Salesforce’s life sciences division. AI foundation model companies stopped building tools that complement enterprise software and started building tools that replace it. (We covered the full market impact in our analysis of the $1 trillion software selloff.)

But here is the thing about market panics: they tend to be directionally correct and wrong in magnitude. Some software categories genuinely face existential pressure. Others are being sold off by association. The practical question for engineering teams is not “will AI kill us?” but “what are we building that AI can’t?”

Separating Signal from Noise

Both of the following statements are true simultaneously: AI is structurally replacing certain categories of software, and the market is overreacting to what that means for the industry at large.

The structural argument is real. When an AI agent can generate a legal document, look up financial data, or run a templated workflow — tasks that entire SaaS companies were built around — the value proposition of those companies compresses. Thomson Reuters fell 15%. LegalZoom fell 15%. RELX and FactSet saw double-digit declines. These companies got hit because their core products overlap meaningfully with what AI can now do.

The overreaction argument is also real. Ninety-five percent of generative AI pilots in enterprise have reportedly failed to deliver measurable ROI. Switching costs for deeply embedded enterprise software remain enormous. Satya Nadella warned that AI agents could “dismantle business applications by consolidating business logic into the AI layer, making traditional SaaS backends replaceable” — but Nadella runs a company with $200 billion in enterprise contracts. He knows those contracts do not unwind in a quarter.

Forrester analyst Charles Betz offered the useful counterpoint: “There are about 20,000 legal jurisdictions worldwide and complying with applicable regulation is a major reason why people trust vendors like SAP.” That number — 20,000 — is worth remembering. It represents a kind of complexity that does not yield to better language models.

The signal is clear for engineering teams: some software is genuinely vulnerable, and some software is far more defensible than the market currently prices. The job is figuring out which category yours falls into.

The Feature vs. Product Test

Jason Slagle, CEO of CNWR, framed the distinction well: “underwhelming SaaS applications that are features not products” face real risk, while established platforms will endure.

This is a useful framework. Ask yourself honestly: is what you have built a product or a feature?

A feature is a thin layer of functionality around a commodity operation. It takes an input, applies a straightforward transformation, and produces an output. If you could describe your product’s core value proposition in a single API call — generate this document, score this lead, format this report — you are building a feature. Features get absorbed. They become plugins, integrations, or single prompts in an AI agent’s workflow.
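The single-API-call test can be made concrete. Here is a hypothetical sketch of what a “feature” looks like reduced to code; every name in it is illustrative, not from any real product:

```python
# Hypothetical sketch: if your product's core value compresses to one call
# like this, it is a feature, not a product.

def generate_report(template: str, variables: dict) -> str:
    """Fill a template with variables -- the entire 'product' in one call."""
    return template.format(**variables)

draft = generate_report(
    "Q{quarter} revenue for {account} was ${revenue}.",
    {"quarter": 3, "account": "Acme Corp", "revenue": "1.2M"},
)
print(draft)  # one input, one transformation, one output
```

If the whole value proposition fits in a function like this, an AI agent can absorb it as a single prompt.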

A product is a platform with accumulated data, entrenched workflows, deep integrations, and institutional knowledge baked into its configuration. Products are not defined by what they do in a single interaction. They are defined by the years of context they hold and the operational complexity they manage.

The uncomfortable middle ground is where most software lives. Your product started as a feature, accumulated data and integrations over time, and now sits somewhere on the spectrum. The question is whether the valuable part — the part that is hard to replace — is the core of your product or an incidental byproduct of its history.

Be ruthless about this assessment. The market just was.

What AI Can Replicate Tomorrow

If you want to know whether your product is vulnerable, look at whether its core value falls into any of these categories.

Boilerplate document generation. Contracts, reports, proposals, filings — anything that starts from a template, fills in variables, and produces a formatted output. AI does this well today and will do it better next quarter.

Basic data lookups and reporting. If your product’s primary value is providing access to data that exists elsewhere and presenting it in a structured format, AI agents can replicate that. The interface layer between users and data is exactly what language models excel at.

Template-driven workflows. Multi-step processes that follow fixed logic — if X, then Y, then Z — are straightforward to encode as agent behavior. Workflow automation that requires no judgment is workflow automation that an AI handles natively.

Simple CRUD interfaces. Create, read, update, delete operations on structured data, with basic validation and permission controls. These are foundational capabilities of any application framework and trivial for AI to generate or operate.

First-draft content creation. Marketing copy, product descriptions, initial designs, social media posts — anything where the value is getting from blank page to rough draft.

If one or more of these describes your product’s core value, the clock is ticking. Not because AI will replace you tomorrow, but because your competitive moat just got shallower, and it will keep eroding.
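The template-driven workflow category above is worth seeing in code, because it shows how little logic is actually there. A minimal sketch, with made-up step names and routing rules:

```python
# A fixed "if X, then Y, then Z" workflow -- the kind of logic an AI agent
# can encode as easily as any SaaS product can. Step names are illustrative.

def route_ticket(ticket: dict) -> list[str]:
    steps = []
    if ticket.get("priority") == "high":           # if X...
        steps.append("page_on_call")
    steps.append("create_tracking_record")         # ...then Y...
    if ticket.get("customer_tier") == "enterprise":
        steps.append("notify_account_manager")     # ...then Z
    return steps

print(route_ticket({"priority": "high", "customer_tier": "enterprise"}))
# ['page_on_call', 'create_tracking_record', 'notify_account_manager']
```

There is no judgment in this function, only branching. That is exactly the property that makes it replicable.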

What AI Cannot Replicate (Yet)

The categories that remain defensible share a common trait: they depend on something beyond processing capability.

Proprietary datasets accumulated over years. AI needs data to work with. If your company is the source of the data — not a layer on top of someone else’s data, but the entity that generates, curates, and validates it — you have an asset AI cannot replicate by being smarter. Bloomberg’s terminal business is not just a data display. It is the data itself.

Deep regulatory compliance. Betz’s 20,000 jurisdictions point is not an exaggeration. Compliance is not a knowledge problem that AI solves by knowing more. It is an operational problem that requires validated processes, audit trails, regulatory relationships, and the institutional credibility that comes from years of maintaining them. Regulated enterprises will not replace their compliance infrastructure with an AI agent regardless of how capable it is, because “our AI said it was compliant” is not a defense that regulators accept.

Complex multi-system integrations. Enterprise software that serves as the connective tissue between a dozen other systems — ERP, CRM, HRIS, financial platforms, custom internal tools — is not valuable because of what it does. It is valuable because of what it connects. Ripping it out means rearchitecting every integration that touches it.

Network effects. Platforms whose value increases with the number of users are defended by their user base, not their code. AI cannot replicate a professional network, a marketplace with thousands of buyers and sellers, or a collaboration platform that holds an organization’s institutional memory.

Institutional trust and audit trails. In enterprise sales, the question is not just “can your product do the job?” It is “will your product still be doing the job in five years, and can we prove what it did to auditors?” Trust is earned over years of uptime, incident response, and demonstrated reliability. It is not a feature that an AI startup can ship.

Real-time transactional systems requiring millisecond reliability. Payment processing, trading systems, industrial control — systems where latency, accuracy, and deterministic behavior are existential. These are not problems where “approximately right, most of the time” is acceptable.

Building for Defensibility

If the analysis above describes your exposure honestly, the next step is making engineering decisions that strengthen what AI cannot easily replicate.

Invest in your data moat

The most durable competitive advantage in the AI era is proprietary data. Every interaction with your product should generate data that makes the product more valuable and harder to replace. This is not just about storing user data. It is about structuring, enriching, and connecting it in ways that create compounding value. The more proprietary data you accumulate, the wider the gap between what you offer and what an AI agent could replicate starting from scratch.

Build deep integrations

Become the system of record, not a layer on top of one. The deeper your product integrates into a customer’s operational infrastructure — their data pipelines, their workflows, their compliance processes — the higher the switching cost. This is not about creating lock-in for its own sake. It is about creating genuine operational value that comes from deep integration, and recognizing that this depth is itself a moat. Our piece on API design principles that scale covers the technical patterns that make integrations durable enough to function as real competitive advantages.

Embrace AI as a feature, not a competitor

The companies best positioned to survive AI disruption are the ones integrating AI into their products rather than waiting for AI to come for them. If AI can generate a first draft, let your product incorporate that capability and add the value that comes after — review, compliance checking, integration with existing data, institutional context. The worst strategic position is a product that does what AI does but slower and at higher cost. The best strategic position is a product that uses AI to do its job better.
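The pattern can be sketched as a pipeline: the model produces the draft, and your product owns everything after it. This is a hedged illustration, not a prescribed architecture; `draft_with_ai` is a stand-in for whatever model API you actually call, and the compliance rules are invented:

```python
# "AI as a feature": let a model generate the first draft, then add the
# value that comes after -- review, compliance checks, institutional context.

def draft_with_ai(prompt: str) -> str:
    # placeholder for a real model call
    return f"DRAFT: proposal for {prompt}"

def compliance_check(text: str, banned_terms: list[str]) -> list[str]:
    """The post-draft layer: flag terms the draft must not contain."""
    return [t for t in banned_terms if t.lower() in text.lower()]

def produce_document(prompt: str, banned_terms: list[str]) -> dict:
    draft = draft_with_ai(prompt)
    issues = compliance_check(draft, banned_terms)
    return {"draft": draft, "issues": issues, "approved": not issues}

result = produce_document("guaranteed returns fund", ["guaranteed"])
print(result["approved"])  # False: the review layer caught the banned term
```

The moat here is not the draft. It is the review layer, which encodes rules and context the model does not have.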

Focus on compliance and trust

Regulated industries move slowly for good reasons. If your product serves them, lean into that. Build audit trails, earn certifications, maintain regulatory relationships, and invest in the operational discipline that makes enterprises trust you with critical processes. This is not exciting work. It is the kind of boring, compounding advantage that is nearly impossible to replicate quickly.
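One concrete form this discipline takes is a tamper-evident audit trail, where each entry is chained to the previous one by hash so any alteration of history breaks verification. A toy in-memory sketch, assuming SHA-256 chaining; a production design would add persistence, signing, and access controls:

```python
# Tamper-evident audit trail: each entry hashes its event plus the previous
# entry's hash, so editing any past entry invalidates the chain.
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"actor": "svc-export", "action": "read", "record": 42})
append_entry(log, {"actor": "admin", "action": "delete", "record": 42})
print(verify(log))               # True
log[0]["event"]["record"] = 99   # tamper with history
print(verify(log))               # False
```

“Our AI said it was compliant” is not a defense; a verifiable record of what actually happened is.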

The AI Engineering Opportunity

The flip side of the SaaSpocalypse is an explosion in demand for engineers who can build the systems that are doing the disrupting. Software engineering median salaries declined 9% year over year, but AI engineering roles are growing faster than companies can fill them.

This is not a contradiction. It is a rotation. The skills that commanded premiums five years ago — standing up CRUD applications, building standard web interfaces, writing boilerplate business logic — are precisely the skills that AI compresses. The skills in demand now are different: designing agent-based architectures, building reliable AI pipelines, managing model deployment and monitoring, creating the infrastructure that makes AI systems production-ready.

For engineering teams, the message is not that software is dying. It is that the valuable part of software engineering is shifting. Teams that can build AI-native applications, design systems that integrate AI capabilities reliably, and manage the operational complexity of the new stack are the ones that will command premiums in the years ahead.

The skillset is not disappearing. It is transforming. And the teams that recognize this early — investing in AI engineering capabilities rather than hoping the wave passes — will be the ones building the software that survives.

The Bottom Line

The SaaSpocalypse is real for software that wraps thin functionality around commodity operations. It is overblown for software that owns proprietary data, manages complex compliance, provides deep integrations, or creates network effects.

The companies that come through this transition will not be the ones that panicked, and they will not be the ones that ignored it. They will be the ones that assessed their exposure honestly, strengthened what AI cannot replicate, and integrated AI into what they do rather than pretending it would not come for them.

The question every engineering team should be asking right now is simple and uncomfortable: what are we building that AI can’t?
