In the first three posts of this series, we looked at why you need a product strategy now more than ever, how AI changes the problem space, and how it reshapes the solution space. Today, we’ll shift the lens to the product itself.
Following Marty Cagan’s framework, any product – AI or not – faces four fundamental risks, and you need to address all of them if you want it to succeed:
- Value – Does it truly answer an unmet need?
- Usability – Will customers be able to use it in a way that delivers that value?
- Feasibility – Can we build it with the resources, technology, and time available?
- Business viability – Will it work for our business?
AI changes each of these in different ways – sometimes lowering the bar, sometimes raising it, and often shifting where the real danger lies. Let’s look at how all four risks evolve in the AI era and what you should focus on accordingly.

Value: Finding the Unsolved Problems
AI makes it easier to solve many problems, but it also solves a lot of them out of the box. Your value risk today is less about “Can we help customers?” and more about “Are we solving something they still need help with?”
Assume your customers use everything AI has to offer. They already use a long list of generally available tools. In this reality, which problems are still left unsolved? Which new problems arise as a result?
A product is only as good as the problem it solves. Make sure your problem is still relevant.
Usability: Expectations and Illusions
AI is not just changing what products can do – it is changing what customers expect, since they use AI all the time, regardless of your product.
A workflow that felt smooth last year can now feel dated because users are experiencing faster, more contextual, and more adaptive tools elsewhere. Think about search: after using Perplexity, I expect direct answers, not lists of links, and I expect to refine them in a natural conversation instead of starting over each time. I hardly use Google anymore. And that’s Google we are talking about!
When it comes to usability, your competition is not just the other products in your category or alternative solutions; it’s any AI product your customers use out there.
At the same time, AI can create an illusion of usability.
A natural language interface feels intuitive in a demo, but if the system does not truly understand the domain or the user’s intent, the shine fades quickly. When the promise is “it understands you,” the cost of being wrong is higher. An AI-like experience that produces irrelevant, incomplete, or wrong results erodes trust faster than a more traditional interface would. If you cannot deliver quality consistently, you may be better off holding back than offering a pseudo-AI experience that damages credibility.
Feasibility: The Comeback
In recent years, feasibility was less of a concern for most products. With enough resources, you knew you could build whatever was needed, and most challenges were “engineering problems”.
In that sense, AI has made things even more feasible than before. Coding assistants and copilots speed up the work in many ways.
But when your product is the AI – when you are trying to create an AI solution where none existed before – feasibility risk comes roaring back. In those cases, identifying a valuable problem is only the first step. The open question is whether you can get the AI to solve it well enough, consistently enough, for real customers in real conditions.
In these cases, it is no longer a pure engineering problem. It is research, and there is no certainty it will yield the results you are after.
I see this with startups I work with in AI-heavy domains. They know what they want to do, but they are not sure it is possible at all.
Even if the problem is clear and the need is strong, prepare for months of experimentation, data collection, and model tuning before you know if the solution is even possible at the quality bar your market demands.
Business Viability: Can You Afford It?
Business viability is an important risk to manage, and the usual suspects remain.
However, in the AI era, there are two additional things to consider.
The first is defensibility. If your product relies mainly on generic AI capabilities, ask whether that is a real moat or something anyone could copy. Without proprietary data, deep domain expertise, or fast-building stickiness, competitors can catch up before you scale.
The second is cost. Running, fine-tuning, and retraining models can get expensive, and these costs often rise with usage. You must be sure you can deliver at a price customers will pay while keeping healthy margins.
AI changes the shape of every product risk, sometimes making them easier to tackle and sometimes raising the bar. To succeed, you must reassess all four regularly, spot where the real risk has shifted, and adapt your strategy before the market forces you to.