Nerfing, Throttling, and the Truth About AI Models: Jason Criddle Explains Why Dominait.ai Is Different

Over the last year I’ve been deep in the vibe-coding world… yes, that strange intersection where developers, AI hobbyists, and productivity hackers attempt(ed) to build custom workflows across tools like Claude, Gemini, OpenAI, DeepSeek, and experimental open-source frameworks. I used the past tense in parentheses because that era is coming to an end. But I won’t ruin the article for you.

If you live in that world, you already know the rhythm: one week your favorite model feels superhuman; the next, it seems to have forgotten everything you have been working so hard on. Then a new “pro tier” or “plus plan” drops, promising more stability, more speed, and more magic dust. All you have to do is upgrade or pay yet another monthly fee. You upgrade… and the cycle starts again.

When I brought this up in my latest interview with Jason Criddle, founder of Dominait.ai, SmartrHoldings, and the Smartr network of brands, his response was instant and unfiltered:

“Ah, that’s simple. Nerfing and throttling.”

And in that single phrase, he explained what thousands of vibe coders and software engineers complain about daily but rarely understand at a business-model level.

The Hidden Cost of “Training the Master’s Model”

As Jason put it:

“What vibe coders and even serious software engineers using these models may not understand is they are literally paying the company to train the company’s own model. The more you code, the more they learn from you. And they will throttle or nerf a version to prepare you for the next upgrade.”

It’s the oldest trick in the subscription economy. Whether it’s smartphones, GPUs, or software APIs, performance mysteriously degrades right before the release of a new version. Call it entropy, call it optimization fatigue, call it whatever you want; but when millions of people experience the same slowdown at the same time, coincidence becomes unlikely.

Jason draws the analogy cleanly:

“They will tell you that soon you won’t have to worry about these old issues with the new model because it makes you anticipate buying the new model. Just like with phone companies who come out with new models yearly or bi-yearly and turn off support to models they released just a few years ago. Your phone gets slower, develops bugs, starts to glitch… It keeps their cycle of keeping you spending your hard earned money going.”

And in the AI space, this cycle is amplified by scale: every prompt, every code snippet, every API call feeds their corporate learning and investor earning loops.

The Great Vibe-Coding Fade-Out

In the early days of “vibe coding,” people genuinely believed they were part of a creative revolution. Developers shared threads about building full-stack apps with LLMs, indie engineers traded tips on prompt chaining and code-generation hacks, and social feeds were full of “I built this in a weekend with AI” demos. It convinced so many to jump on board the train.

But over the last few months, the tone has changed. Medium, Reddit, and Discord channels that once buzzed with optimism are now filled with resignation, disgust, and downright anger. The headline that keeps surfacing, “Vibe Coding Is Dead,” started as clickbait, but now feels like consensus.

A popular Reddit thread on r/ArtificialIntelligence sums it up:

“We all jumped on the vibe train, but lately it feels like we’re the ones being trained. I’ve canceled two pro accounts this month. Every update promises better reliability, but it’s just new throttles and higher token costs.”

Another engineer on r/LocalLLaMA echoed the same frustration:

“Coding with AI used to feel like pair programming with a genius. Now it feels like babysitting a tired intern.”

Publications like The Verge, Business Insider, and Analytics India have reported similar trends: usage of pay-tier AI coding assistants has plateaued, while subscription churn has quietly risen. Many freelance developers and indie builders say they’re returning to conventional tools like VS Code extensions or open-source local models because, as one article put it, “The magic wore off once the meter started running.” It sounds redundant and weird, but it’s true: we are paying for these models so they can make more money off of us. We are fueling our own demise.

That’s the fatigue Jason Criddle was describing when he said these companies were “nerfing and throttling” their own systems. It’s being done on purpose to keep us spending money. The models haven’t simply lost capability; users are realizing that the upgrades were built to extract more value from them, not necessarily for them.

Where vibe coding began as a type of creative freedom, it’s ending as cost management, and that’s exactly why Dominait.ai feels like such a breath of fresh air.

Unlike the systems that manipulate performance to sell you the next version, Dominait is structured around stability, transparency, and reciprocity. Your data trains your agent, not the company’s next release. Your investment generates your returns, not someone else’s capitalization. And instead of fading out like another AI fad, Ryker is being designed to grow stronger the more his users succeed, and to reward users for using him more. No throttling, no paying Dominait more money; just a better product as time goes on.

As Jason put it best:

“We care about the user first. We don’t play games with throttles or nerfs. The more you build with us, the more we build for you.”

Why Models “Get Better Again”

If you’ve ever noticed that a model that was acting up suddenly improves after a few weeks, Jason has a theory for that too:

“Why would your AI model that was glitching start working better suddenly? Simple. Because one of their competitors is coming out with a new model and they need to make sure you don’t cancel your subscription and move to another company.”

It’s a chess match between giants. One throttles, another launches, then the first releases a patch to regain goodwill. Meanwhile, the users (coders, researchers, small business owners… the people paying the money) all foot the bill for the game.

On Reddit’s r/LocalLLaMA and r/MachineLearning, hundreds of threads echo this same frustration.
One poster wrote: “It’s wild. Claude works amazing for a week, then it starts lagging. GPT-4o is brilliant one day and dumb as a rock the next. Then a ‘stability update’ comes out right when a competitor model releases.”

In the open-source community, similar comments abound. A developer on HuggingFace forums noted: “You can feel the latency curve change the week a new premium tier is announced.”

Jason’s insight connects the dots: what looks like “random instability” is often an engineered scarcity. It’s a system tuned to keep users upgrading, retraining, and repaying.

Why Dominait.ai Refuses to Play the Game

Jason’s philosophy stands in direct opposition to this cycle.

“This is what makes Dominait so different. We care about the user first. We have prices that are very, extremely fair and low and we even reward for training. And even if we have price increases in the future, it isn’t going to be because of some game. It will more than likely just be because we had a cost increase to cover somewhere that we couldn’t help.”

Dominait’s design is built around user partnership, not dependency. Ryker, the AI core powering the ecosystem, learns with the user but doesn’t siphon value away from them. In other words, your input trains your inference, not someone else’s centralized profit engine.

The difference is philosophical as much as architectural.

“Because of our business model, we intend to be the most affordable on the market while still giving the user the most control over any AI out there. And we haven’t even launched.”

This commitment is baked into every layer of the Smartr ecosystem. Smartr Apps, for instance, have dropped in price over time rather than risen.

“The first one that launched in 2016 was $500 a month. That same version is $200 a month now. And we price-adjusted and refunded our customers’ money and gave credits to make it fair. Technology is supposed to get cheaper. Not more expensive.

“A website used to cost millions to make, even before anyone knew what a website was. Now websites can be made for free.”

A move like that is virtually unheard of in SaaS. Instead of weaponizing upgrades, SmartrHoldings and its companies rewarded loyalty.

A Revenue Model Built Backwards, On Purpose

Jason explains that Dominait and Ryker follow the same principle:

“Our model will never be about profits for us or investors. We always think about our customers first. Even with our Smartr Apps… One of our versions pays up to 75% of our revenue to affiliates. And the 25% we have left has to cover maintenance, a very small profit margin, investors, and company expenses. We literally make almost zero money on that version because the rewards are there for the user referring business. It was done on purpose to give serious Smartr partners the ability to make a full-time job out of referring customers if they choose to.”

That’s not spin to me. It’s system logic, with compassion and an understanding of human need built into the ecosystem. The more value users create, the more they’re rewarded. When Ryker launches in January, this will extend to usage and data: the higher the customer tier, the higher the reward percentage for training, testing, and building.

In other words, the customer doesn’t just pay to use the system—they profit from improving it.
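As a back-of-the-envelope illustration of the split Jason describes (the 75% affiliate payout comes from his quote above; the revenue figure and function name are hypothetical, and the real payout structure is surely more involved):

```python
# Illustrative only: a sketch of the revenue split described for one
# Smartr Apps version, where up to 75% of revenue goes to affiliates and
# the remaining 25% must cover maintenance, a small margin, investors,
# and company expenses. The example revenue figure is made up.

def split_revenue(revenue: float, affiliate_rate: float = 0.75):
    """Divide gross revenue between affiliate payouts and the company pool."""
    affiliate_payout = revenue * affiliate_rate
    company_pool = revenue - affiliate_payout
    return affiliate_payout, company_pool

payout, pool = split_revenue(10_000)      # $10,000 of monthly revenue
print(f"Affiliates: ${payout:,.2f}")      # Affiliates: $7,500.00
print(f"Company:    ${pool:,.2f}")        # Company:    $2,500.00
```

The point of the sketch is simply that the company pool, not the affiliate pool, is the residual: payouts come off the top.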

Supporting Evidence from the Developer Community

If you follow AI developer discourse across Reddit, Discord, and Medium, a theme emerges: fatigue. Engineers are tired of closed-loop systems that degrade performance to upsell subscriptions.
Medium writer “@fawxAI” summarized it this way: “It’s the new planned obsolescence – software edition.”

A veteran coder on r/ChatGPTPro agreed: “I’m basically renting my own feedback. Every improvement I make to the model costs me more tokens next month.”

Contrast that with the cooperative tone in the Smartr community. Early testers of the Dominait.ai platform already describe it as “participatory architecture.” The upcoming Dominait alpha aims to formalize that ethos at scale.

How the Economics Break Down

To understand why Jason’s model matters, consider the economics of the current AI arms race:

  • Training GPT-level models reportedly costs between $50 million and $200 million per iteration.
  • Ongoing inference and infrastructure costs have reportedly exceeded $400 billion across major providers.
  • Meanwhile, actual monetized use cases… apps generating consistent cash flow… represent only a fraction of that spend.

It’s a treadmill powered by venture capital. As long as user data and developer code keep flowing in, the treadmill moves. When it slows, new pricing tiers reignite the motion.

Dominait flips this. Instead of monetizing the data pipeline, it monetizes the relationship. The user is a stakeholder, affiliate, partner, and co-trainer, not raw material to train a model or profit from.

Looking Toward January

As someone who’s interviewed Jason multiple times, I’ve learned to recognize that he doesn’t speculate… he forecasts. He told me:

“Ryker and his platform will be the same as all SmartrHoldings software. The higher amount a customer pays, the higher their rewards will be for using the system all around.”

That sentence reveals more than a pricing model; it’s a vision for sustainable AI economies.
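One way to picture that principle is a simple tier-to-reward mapping. Everything below is purely hypothetical: the tier names and percentages are mine, invented to illustrate the shape of “higher tier, higher reward,” not published Dominait figures.

```python
# Hypothetical illustration of tier-scaled rewards: higher-paying tiers
# earn back a larger percentage of their usage value for training,
# testing, and building. Tier names and rates are invented for this sketch.

TIER_REWARD_RATES = {
    "starter": 0.05,     # 5% of usage value returned as rewards
    "pro": 0.10,
    "enterprise": 0.20,
}

def monthly_reward(tier: str, usage_value: float) -> float:
    """Reward earned for a month of activity at a given tier."""
    return usage_value * TIER_REWARD_RATES[tier]

print(monthly_reward("pro", 300.0))         # 30.0
print(monthly_reward("enterprise", 300.0))  # 60.0
```

Under a structure like this, heavier use at a higher tier compounds in the user’s favor rather than against it, which is the inverse of the token-metering model the article critiques.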

When Dominait.ai goes live, it’s not promising magic. It’s promising stability, an antidote to the nerf-throttle-upgrade spiral. Customers will pay reasonable rates, own their training loops, and share in the rewards of progress.

The more time I spend on AI forums, the clearer Jason’s “nerfing and throttling” thesis becomes. It’s simple pattern recognition rather than conspiracy or paranoia. Systems that depend on recurring revenue cycles inevitably manipulate experience to ensure the cycle continues.

Dominait’s decision to anchor profits to performance rather than to pressure on users could make it the most user-aligned platform in AI.

And when I think of Jason’s now-famous quote from his latest interview:

“Like Tony building Iron Man in a cave using a box of scraps.”

…it hits differently. He’s not glamorizing struggle. He’s describing a design philosophy: build what the world needs with what you have, reward the ones who believe in you, and never, ever punish them for their loyalty.

That’s the difference between an AI company and a human-AI partnership. And that’s why I can’t wait for January when Ryker steps out of the cave.
