Right now, almost every SaaS company is being pressured to add AI, and many teams have no clue how to do it.
But is it okay to build an OpenAI wrapper?
My answer is: yes.
But not in the way most people would do it.
First off, I don’t think anyone should use OpenAI anyway. It’s expensive, and ethically I would rather not build on it at all. They are openly involved in projects where their technology is being used to kill people in violation of international law.
But replacing OpenAI with any other LLM would not automatically make it a good idea.
A smart prompt is not enough. And a few AI features glued onto a product are not enough either.
It’s just like building a normal product. A collection of tactics isn’t enough.
You need an AI product strategy.
Because AI is not the strategy. It is only useful when it leverages something else you already have.
I’m not saying every company should train its own model. Quite the opposite.
I think we’ll see the same shift with LLMs that we saw in computing more broadly. At first, a lot of value came from highly specialized systems built for very specific tasks. They were efficient, but only for that narrow use case. General-purpose computing changed that. With simpler, more generic instruction sets, the specialization moved up into the software layer. The machine became reusable. The logic became flexible.
I think LLMs will play a similar role. For most use cases, people will rely on general-purpose models and move the specific logic into the layer around them: prompts, workflows, RAG pipelines, agents, or MCP functions.
It’s totally fine to use an existing LLM like Claude or Mistral, whether in the cloud or self-hosted.
So why are OpenAI wrappers weak?
To stay with the analogy: nobody should brag about using a CPU. It is not a unique selling point, nor should it be considered a valid product strategy.
It should not be about whether you’re using one or not. It should be about what you use it for.
So what are you using it for? A unique prompt? Probably not enough. It can be easily reverse-engineered.
But what if you give an off-the-shelf LLM access to unique data you have collected over the years, and use a prompt to present it in a smart way? That’s something nobody can easily copy.
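To make that concrete, here is a minimal sketch of the idea: retrieve the most relevant pieces of your proprietary data and fold them into the prompt you send to an off-the-shelf model. Everything here is illustrative, not a real library: the dataset, the naive keyword-overlap retrieval (a real system would use embeddings), and the function names are all assumptions.

```python
# Hypothetical proprietary data, collected over years. This is the part
# a competitor cannot copy, even if they copy the prompt.
RECORDS = [
    {"topic": "churn", "insight": "Customers who skip onboarding churn 3x faster."},
    {"topic": "pricing", "insight": "Annual plans retain better than monthly plans."},
    {"topic": "support", "insight": "Tickets answered within 1h double renewal odds."},
]

def retrieve(question: str, records: list[dict], k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    words = set(question.lower().replace("?", "").split())

    def score(rec: dict) -> int:
        text = (rec["topic"] + " " + rec["insight"]).lower()
        return sum(1 for w in words if w in text)

    return sorted(records, key=score, reverse=True)[:k]

def build_prompt(question: str, records: list[dict]) -> str:
    """Wrap the user's question in context only you have."""
    context = "\n".join(f"- {r['insight']}" for r in retrieve(question, records))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("Why do customers churn?", RECORDS)
print(prompt)
```

The resulting string would then be sent to whichever model you prefer, cloud-hosted or self-hosted. The LLM is a commodity here; the retrieval layer over your own data is the product.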
Or imagine using an off-the-shelf LLM to create more engagement inside an online or offline community you have built over the years. Anyone could probably copy that prompt. But without the community, it would be worthless…
That is where AI starts to become useful: when it amplifies value you have already created.
And if you haven’t built that value yet, it might be smart to make data capture part of your strategy.
So yes, use an off-the-shelf LLM. But use it smartly:
- Build POCs to test how users respond to AI enhancements
- Improve existing flows
- Multiply the value you have already built
But don’t just add a simple prompt and tick AI off your list.