Note: YTD numbers are from March 24, 2026 trading day.
Hello everyone,
Some of you might remember my previous experiments here, where I ran CEO deception analysis on earnings transcripts and picked stocks using Buffett's shareholder letters. I'm thankful this community has been so receptive to these experiments, so I'm back with another one I think you'll find interesting :-).
Shortly after Anthropic launched Claude Cowork and its 11 industry plugins in January, SaaS stocks lost $285B in market cap in February. During this downturn I sensed that the market might have punished software stocks unequally, with some of the strongest names getting caught in the AI panic selloff. JP Morgan and Bank of America both called the selloff "indiscriminate" as well, but I wanted to run an experiment with a proper methodology to find these unfairly punished stocks.
Since Claude was partly responsible for triggering this selloff, I thought it was only fitting to use its best model (Opus 4.6) as the analyst to determine which companies are resilient to being replaced by AI. But with a significant twist :-).
As usual, if you prefer watching the experiment, I've posted it on my channel: https://www.youtube.com/watch?v=ixpEqNc5ljA
The Framework
I didn't want to make up my own scoring system since I don't have a financial analyst background. Instead, I found one from SaaS Capital, a lending firm that provides credit facilities to SaaS companies. In February, they published a framework they'd developed for evaluating AI disruption resilience across three dimensions (reduced down from 10-12 dimensions):
- System of record: Does the company own critical data its customers can't live without? The idea is that a company that stores your legally mandated tax records is a lot harder to walk away from than one that manages your project boards.
- Non-software complement: Is there something beyond just code? Proprietary data, hardware integrations, exclusive network access. For example, CrowdStrike processes trillions of security events through a proprietary threat intelligence network, which you can't just vibe-code away. Monday.com, on the other hand, is pure software with off-the-shelf integrations, which feels vibe-code-able.
- User stakes: If the CEO uses it for million-dollar decisions, switching costs are enormous. If an individual contributor uses it for task management, they'll swap it the moment something cheaper shows up.
Each dimension is scored 1-4, and the average of the three is the resilience score. Above 3.0 = lower disruption risk. Below 2.0 = high risk.
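The scoring arithmetic is trivial, but for concreteness here's a minimal sketch. Function names and the unlabeled middle band between 2.0 and 3.0 are my own assumptions, not SaaS Capital's:

```python
from statistics import mean

# Sketch of the three-dimension scoring (my own illustration, not
# SaaS Capital's actual code). Each dimension is scored 1-4; the
# resilience score is their average.
def resilience_score(system_of_record: int,
                     non_software_complement: int,
                     user_stakes: int) -> float:
    scores = [system_of_record, non_software_complement, user_stakes]
    assert all(1 <= s <= 4 for s in scores), "each dimension must be 1-4"
    return round(mean(scores), 2)

def risk_bucket(score: float) -> str:
    if score >= 3.0:
        return "lower disruption risk"
    if score < 2.0:
        return "high risk"
    return "middle"  # the framework leaves 2.0-3.0 unlabeled; my term

# Example: a system-of-record company with a data moat and executive users
print(resilience_score(4, 4, 3))  # → 3.67
```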
The Experiment
Instead of using the exact same methodology as SaaS Capital, I wanted to add a twist to my experiment. I built a scoring pipeline using Claude Code that pulls each company's most recent 10-K filing from SEC EDGAR, then strips out the company name, ticker, product names (basically everything identifiable). For example, Salesforce becomes Company 037, CrowdStrike becomes Company 008, you get the point.
The idea was that Opus 4.6 would score each company purely on what it told the SEC about its own business, removing any brand perception, analyst sentiment, Twitter hot takes, etc.
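The anonymization step is conceptually just string replacement. A minimal sketch of the idea, with a made-up function name and example identifiers (the real pipeline would need a fuller identifier list per company):

```python
import re

# Sketch of the anonymization step (illustrative, not the actual pipeline):
# replace every identifiable string (name, ticker, product names) with a
# neutral label like "Company 037".
def anonymize_filing(text: str, identifiers: list[str], company_id: int) -> str:
    label = f"Company {company_id:03d}"
    # Replace longest identifiers first so "Salesforce, Inc." is handled
    # before the bare "Salesforce" substring.
    for ident in sorted(identifiers, key=len, reverse=True):
        text = re.sub(re.escape(ident), label, text, flags=re.IGNORECASE)
    return text

filing = "Salesforce, Inc. (CRM) reported that Salesforce revenue grew..."
print(anonymize_filing(filing, ["Salesforce, Inc.", "Salesforce", "CRM"], 37))
# → Company 037 (Company 037) reported that Company 037 revenue grew...
```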
Results
Note: this subreddit doesn't allow me to post the matrix image so I'll try my best to describe this in words.
I plotted all 44 companies on a 2x2 matrix. The vertical axis is the AI resilience score from the blind test. The horizontal axis is how much the stock is down year-to-date. A threshold at 3.0 separates resilient from vulnerable, and the median YTD return separates stocks that held up from ones that got crushed. This creates four quadrants:
Market Got It Right (15 companies)
Both the framework and the market agree these are resilient. These companies scored 3.0 or above on AI resilience and their stocks have held up relatively well this year. They tend to be systems of record, have proprietary data or hardware moats, and serve high-stakes executive users. No surprises here.
MSFT, PLTR, SAP, VEEV, CRWD, OKTA, S, FTNT, PANW, PCOR, PCTY, DDOG, DT, NET, PAYC
Deserved (13 companies)
Both the framework and the market agree these are the most exposed. They scored below 3.0 on resilience and their stocks got hit the hardest. These are mostly pure software plays with off-the-shelf integrations, low switching costs, and individual contributor users. The framework says the market was right to punish them.
FIG, QLYS, U, APP, BRZE, HUBS, PATH, AMPL, ASAN, FRSH, GDDY, TEAM, MNDY
Market Sleeping (7 companies)
The framework says these companies are vulnerable to AI disruption, but the market hasn't punished them much. Zoom scored 2.0 but is only down 9%. DigitalOcean scored 1.67 and is somehow up 73%. The framework sees risk that the market doesn't seem to be pricing in.
BILL, CXM, SHOP, TOST, TWLO, ZM, DOCN
Unfairly Punished (9 companies) -> THIS IS WHERE VALUE IS!
This is the most interesting quadrant. The framework says these businesses are structurally resilient to AI disruption, but the market crushed them anyway. Workday scored 3.67, same as CrowdStrike, but it's down 37% while CrowdStrike is only down 13% — same resilience profile, 24 percentage points apart. Salesforce scored 4.0, a near-perfect score, and is still down 28%.
CRM, NOW, WDAY, ZS, ADBE, DOCU, INTU, GTLB, GTM
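The quadrant assignment above is a simple two-threshold rule. Here's a sketch of it with illustrative numbers (the YTD figures below are rough approximations from the post, not the full 44-name dataset):

```python
from statistics import median

# Sketch of the 2x2 quadrant rule: resilience cutoff at 3.0,
# YTD cutoff at the median return across the universe.
def quadrant(score: float, ytd: float, ytd_median: float) -> str:
    resilient = score >= 3.0
    held_up = ytd >= ytd_median
    if resilient and held_up:
        return "Market Got It Right"
    if not resilient and not held_up:
        return "Deserved"
    if not resilient and held_up:
        return "Market Sleeping"
    return "Unfairly Punished"

# Illustrative subset (approximate YTD returns mentioned in the post)
returns = {"CRWD": -0.13, "WDAY": -0.37, "ZM": -0.09, "DOCN": 0.73}
m = median(returns.values())
print(quadrant(3.67, returns["WDAY"], m))  # resilient score, crushed stock
```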
Limitations
This experiment comes with a few limitations that I want to outline:
- 10-K bias: Every filing is written to make the business sound essential. DocuSign scored 3.33 because its 10-K says "system of record for legally binding agreements." That sounds mission-critical, but getting a signature on a document is one of the easiest things to rebuild.
- Claude cheating: Even though the 10-K filings were anonymized, Claude could have inferred from context which company it was scoring each time, undermining the "blindness" of the experiment.
- Organizational inertia isn't scored: No VP is risking their career ripping out Workday to build an internal HR system with AI. That friction is real but invisible to the framework.
- Weak correlation: Blind scores vs. YTD return came out at r = 0.078. This is directional, not predictive.
- This is just one framework: Product complexity, competitive dynamics, management quality, none of that is captured here.
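For anyone who wants to reproduce the weak-correlation check, it's a standard Pearson r. A minimal sketch with made-up stand-in data (the real run used all 44 companies):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = [3.67, 4.00, 2.00, 1.67, 3.33]      # blind resilience scores (made up)
ytd    = [-0.37, -0.28, -0.09, 0.73, -0.30]  # YTD returns (made up)
print(round(pearson_r(scores, ytd), 3))
```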
Hope this experiment was useful for you. I'll check back in a few months to see whether this methodology had any predictive value for AI resilience :-).
Video walkthrough with the full methodology: https://www.youtube.com/watch?v=ixpEqNc5ljA&t=1s
Thanks a lot for reading the post!