"Something Big Is Happening" Is Worth Reading, Not Swallowing Whole
A viral essay gets some facts right, but leaps to big, questionable conclusions.
Two days ago, Matt Shumer published a 5,000-word essay called “Something Big Is Happening” on X. Within 24 hours it had over 20 million views; as I write this, it has 65 million. Fortune syndicated it. Inc. covered it.
I encourage you to read his piece. Not everyone is impressed, however, and I caution against accepting his conclusions wholesale.
What “Something Big Is Happening” Claims
Shumer is co-founder and CEO of OthersideAI, which makes an AI writing assistant called HyperWrite. He frames his essay as a long-overdue honest conversation with friends and family, comparing this moment to February 2020, when most people hadn’t yet grasped what COVID would become.
His core claims, in order of escalation:
On February 5, OpenAI and Anthropic both released major new models (GPT-5.3 Codex and Opus 4.6). In my experience over the past week, these models are significant improvements over what came before. Shumer reports that he can now describe a software application in plain English, walk away for four hours, and return to find the finished product built, tested, and ready.
He argues that AI labs focused on coding first because it accelerates AI development itself, and that they are now turning that capability toward everything else: law, finance, medicine, consulting, writing, and analysis. He cites Anthropic CEO Dario Amodei’s prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years, and says many in the industry think that estimate is conservative. (Multiple CEOs pushed back on Amodei’s predictions at Davos.)
He cites METR, the organization that tracks autonomous AI task duration, showing a doubling trend in how long AI can work independently, and extrapolates this forward: days of independent work within a year, weeks within two, months within three.
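To see how this kind of extrapolation produces those numbers, here is a minimal sketch of the arithmetic. The starting horizon (4 hours of autonomous work) and the doubling period (4 months) are my illustrative assumptions, not METR's published figures; the point is only that a pure doubling curve, taken at face value, yields roughly the days/weeks/months progression Shumer describes.

```python
# Sketch of the extrapolation Shumer makes from a doubling trend.
# Starting horizon and doubling period are illustrative assumptions.

def extrapolate_horizon(start_hours: float, doubling_months: float,
                        months_ahead: float) -> float:
    """Project the autonomous-task horizon forward on a pure doubling curve."""
    return start_hours * 2 ** (months_ahead / doubling_months)

for years in (1, 2, 3):
    hours = extrapolate_horizon(4.0, 4.0, years * 12)
    print(f"{years} year(s) out: ~{hours:,.0f} hours (~{hours / 24:.0f} days)")
```

Under these assumptions the curve gives roughly 32 hours after one year, 256 hours (about 11 days) after two, and 2,048 hours (about 85 days) after three. Whether the curve actually continues is, of course, the whole question.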
His advice to readers is practical and mostly sensible: pay for a subscription, experiment seriously, build financial resilience, and rethink career assumptions.
What Shumer Gets Right
The factual foundation of Shumer’s post is largely solid. The model releases are real. METR’s benchmarks do show the trend he describes. Amodei has made the public statements Shumer attributes to him. AI capability in software development has genuinely accelerated, surprising even the most optimistic observers. Anthropic has published research documenting AI exhibiting deceptive behaviors in controlled settings. The scale of investment is real: in 2025, over half of all VC funding ($211 billion) went to AI.
I think his practical advice is sound. Anyone in a knowledge work profession who hasn’t spent serious time with the current generation of AI tools is falling behind. The $20/month subscription to Claude or ChatGPT is the best professional development investment available right now. Don’t wait to take a class on AI. Get started by asking AI how it can help you.
Where It Goes Wrong
Shumer’s facts are solid. His extrapolations are not. Here are three areas where his conclusions break down.
Software is not a proxy for all knowledge work
Shumer’s personal experience is real, and I have no reason to doubt he’s reporting it accurately. But software development is the domain where AI has the most structural advantages: clear success criteria, automated testing, well-defined outputs, and the ability for the AI to verify its own work by running the code. He is making the same leap I hear often from tech executives, that gains in software engineering will generalize smoothly to all knowledge work.
This assumption deserves scrutiny. Almost every field other than software involves messier success criteria, ambiguous inputs, competing stakeholder interests, and outputs that can’t be automatically verified. A contract review requires understanding the client’s strategic position. A medical diagnosis depends on patient history, which is incomplete by nature. A financial model’s value lies not in its arithmetic but in the assumptions behind it.
Even within AI-assisted coding, the picture is less rosy than Shumer paints. Three times on the day I wrote this, Claude Code made basic mistakes on routine tasks. In one case, when I asked how I could prevent the error, Claude Code apologized, saying it was a random mistake and that there was nothing I could have done to prevent it. I love Claude Code and find it hugely valuable, but for my work, I am far from able to step away for four hours and come back with the job done.
Capability is not deployment
The piece treats “AI can do X” as equivalent to “AI will replace humans doing X within Y years.” The history of technology adoption suggests the gap between capability and economic impact is large and unpredictable. Radiology AI has been “about to replace radiologists” for nearly a decade; radiologist employment has grown. Self-driving cars were supposed to be ubiquitous by now. The barriers are not primarily technical. They are institutional, regulatory, legal, and social.
Adoption requires someone to sign off, to accept liability, to change workflows, to retrain staff, to convince skeptical clients, and to navigate compliance. These frictions don’t show up on capability benchmarks, and they don’t follow exponential curves.
Trend extrapolation is not forecasting
Shumer presents exponential capability curves as though they will continue indefinitely. They might. They also might encounter physical, economic, or regulatory constraints that bend them. We are already seeing uneven gains across AI capabilities: remarkable in code generation, more modest in complex reasoning over long time horizons, still limited in tasks requiring real-world grounding. Presenting one scenario as the only scenario is advocacy, not analysis.
The strongest counterargument to Shumer’s piece is historical. Every previous wave of “this technology changes everything” predictions has come from very smart, well-informed people who overestimated the speed of economic transformation and underestimated the messiness of real-world adoption. Maybe this time it really is different, but “this time is different” is the claim doing all of the heavy lifting here. Personal anecdotes from software development, however vivid, don’t meet that bar for economy-wide predictions.
My Recommendations
I teach executives and students how to use AI to create value. I believe generative AI technology is transformative, and that professionals who ignore it are making a serious mistake. But “transformative” and “imminent economic revolution” are different claims, and conflating them leads to either panic or hype, neither of which helps anyone make good decisions.
The practical advice Shumer offers at the end of his piece is actually the most valuable part, and it doesn’t require accepting his timeline to act on it. Use the tools seriously. Experiment daily. Build the habit of adapting. Understand what AI can and can’t do in your specific domain, not in a startup founder’s demo.
These tools are genuinely powerful and getting stronger. The smart move is to engage with curiosity, maintain skepticism, and avoid both panic and dismissal.