OpenAI’s real IP play: Why structural dependency, not your prompts, is the target
OpenAI is shifting its focus from monetising everyday ChatGPT prompts to building structural dependency through enterprise partnerships and “value sharing” on major commercial breakthroughs, says Paul Armstrong
UK businesses are asking the wrong questions about OpenAI and intellectual property, largely because of what was said last week at the World Economic Forum in Davos. OpenAI’s CFO, Sarah Friar, spoke about future “value sharing” models tied to intellectual property, particularly in scientific and commercial breakthroughs, and set tongues ablaze. Phrasing like that travelled fast and landed badly. People heard “OpenAI taking a cut of your ideas” and filled in the gaps themselves. Here’s what’s really going on.
Your thrilling prompt isn’t what OpenAI wants; it wants structural dependency
OpenAI is not planning to skim value from everyday usage, and employees using ChatGPT to draft emails, explore ideas, or pressure-test thinking are not surrendering ownership by default. Or at least, not yet(!). Work remains yours for now. Pundits’ panic has been fuelled by frothy headlines while a bigger shift has been hiding in plain sight for some time. OpenAI needs to make a lot of money fast.
Despite making a ton of money, OpenAI is under intense financial strain. Running large-scale models at global demand burns capital at a rate £20-a-month subscriptions alone can’t cover. Infrastructure costs remain high and do not fall just because adoption grows. Around 95 per cent of users still pay nothing. Investors, partners, competitors, and regulators all see the same problem, and so does Sam Altman.
Recent editions of What Did OpenAI Do This Week? have circled the same theme repeatedly: OpenAI’s money and where it comes from. Ads. New tiers. Enterprise deals. Interoperability plays. Media partnerships. Custom models. Licensing conversations. Controversial new friends. None of this is accidental. Big voices across the industry are rightly asking how OpenAI plans to sustain itself without collapsing trust or losing control of its legal exposure. Answers remain partial, although the direction of travel is becoming clearer. Certainty over any of it, however, remains elusive.
Uncertainty sits at the centre of OpenAI partly by design, partly because of the competitive set it is playing in. No modern company has scaled a technology this powerful, this fast, with legal, economic and governance frameworks still being written in real time. Anyone offering neat answers is selling comfort, not insight. The tension this causes, and that Altman and co actively seem to court, explains both the fascination and the frustration many feel when it comes to predicting where OpenAI will go next.
OpenAI and the IP pressure cooker
Generative AI lives on unsettled legal ground. Training data practices remain contested across jurisdictions. Courts have not delivered definitive rulings on transformation, infringement, or permissible use at scale. Appeals will take years. Settlements tend to muddy the picture rather than resolve much. Businesses expecting a clean resolution in the near term are setting themselves up for disappointment. Nor is meaningful regulation coming from Trump and co, given the enormous power and impact these companies have, where the administration sits with regard to re-election, and China.
One thing usually happens when legal turbulence looms for big platforms – entrenchment. OpenAI’s acceleration of enterprise partnerships, deep integrations and long-term commercial alignment fits that logic neatly. Once a system sits inside research pipelines, customer service operations, development workflows, or decision-making processes, removal becomes painful regardless of legal outcomes. Look at Google, Amazon, Apple and beyond: it’s way cheaper to pay fines for everything from child privacy breaches to dark pattern manipulation than to let legal risk drive their decisions.
Of course, while your IP may not be at risk now, never say never. Terms of service change all the time, especially when a company is going public, which OpenAI is clearly getting ready for as an option, not the option. Any IPO would sharpen pressure around revenue predictability, margin expansion, and clearer monetisation narratives. OpenAI doesn’t want all that right now. Expecting permanence from a private company racing toward scale misunderstands how big money behaves under a microscope.
OpenAI could clamp down far harder on copyright exposure if it chose to. Filtering and blocking copyrighted material is technically possible and already applied selectively in high-risk areas. Relaxation elsewhere reflects trade-offs, not incompetence. Accusations that OpenAI models have drawn on material from sources such as Elon Musk’s Grokipedia underline how porous training ecosystems have become. Outputs may appear clean while inputs remain contested. Liability is the big question, and no fast answers are coming from any of the big players.
What businesses should actually be thinking about
The argument isn’t that these companies are too big to take to court. Far from it, but there’s a lot to do before any ruling changes anything. Shift thinking from prompt and content ownership towards dependency. Many organisations adopted ChatGPT opportunistically, letting usage spread because it felt productive and low risk. Policies followed later, if at all. None of which is particularly terrible until the supplier experiments with new revenue models under pressure.
ChatGPT now sits inside research, summarisation, internal analysis, customer responses and decision framing at countless firms across most industries. In many cases it occupies space once filled by junior judgement, without anyone explicitly deciding that should happen. Few leadership teams have mapped where human thinking has been replaced rather than supported, or how easily those workflows could be unwound if pricing, licensing or data terms moved against them.
Reading terms properly is table stakes; thinking beyond them now is a smart move. OpenAI mentioned at Davos the potential (key word) to take a licensed share of a drug’s sales if its tools were used in developing that drug. If that doesn’t impress on leaders the need to question what staff are putting into the black box, and how much they trust OpenAI, then what will is perhaps a matter for the board.
Organisations need clarity on whether they treat these systems as tools, advisors, or infrastructure, because each category carries different expectations around trust, bias, and responsibility. Clear boundaries create some resilience, but assuming new operating models are coming is a smart move, and for big tech a future point of differentiation. By the time any clarity arrives, platforms already embedded into corporate operations will be difficult to dislodge, even if fault is established.
OpenAI keeps emphasising alignment rather than extraction for good reason. Enterprise partnerships allow value sharing to be explicit, negotiated, and contractual. Pharmaceutical research, scientific discovery, industrial modelling, and large-scale R&D are where this model makes sense. OpenAI makes money when partners succeed because its systems are integral to that success. Google already does the same thing with Isomorphic Labs. So OpenAI isn’t, for now at least, coming after a slice of your latest Canva LinkedIn carousel brilliance.
The mistake would be to read this moment as reassurance rather than warning. OpenAI is not taking your ideas today because it does not need to. Position is still being built, workflows are still being absorbed, and decision making is still being quietly reshaped. Once dependency is high enough, monetisation options widen. Switching costs rise by design, until staying put feels cheaper than leaving. The question for you isn’t whether OpenAI will ever come for value, but whether you’ll recognise the shift when it happens. By the time the change feels obvious, the choices will already have been narrowed. To adapt an old Banksy quote: your business is an acceptable level of opportunity today, and if it isn’t, you’ll soon be told.