I'm Building An Engine That Powers IP Creation - And It Could Protect You From This Earthquake-Inducing Fault In All Generative AI
- astjws
- Jan 5
Updated: Jan 14
An engine that powers Intellectual Property… there’s an AI earthquake coming… what on earth are you talking about, Jack? Fair question. I’ve spent the last three years developing new skills and thinking about this, and I’m finally confident enough to say preparation time is over; the work starts now. This article will be unlike anything else you read about TV this month.

A few years ago, inspired by work with The World Ethical Data Foundation, I began looking for ways to fix systemic problems in creative industries. TV production companies live and die by IP, yet are being squeezed by streamer-led consolidation. So I targeted one of the most archaic, inefficient processes in the business: unscripted TV development.
IP development hasn’t meaningfully evolved. Every production company follows the same basic model: hire bright people, generate ideas, pitch, win commissions. Each step is slow, expensive and hard to scale, and the results are inconsistent, as the cartoon rather charmingly illustrates. This era is coming to an end.
The age of knowledge technologies is here, and it’s allowed me to create The IP Engine, a super-human TV development assistant. Powered by billions of audience data points and driven by a workflow of connected AI agents, it helps creative people develop big ideas based on insights competitors don't have, proves demand for new ideas using audience data, and uses AI in the right way to turbocharge the development process - the aim being to unlock growth for TV production companies in any market, at less cost. But it doesn’t work without what the tech community describes as a “human in the loop”.
But I’m going to argue that we creatives are still far, far more important than clinging onto our place in “the loop” suggests. The truth is that current Generative AI models are fundamentally flawed, and the companies building them don’t want to focus on fixing the issue. So the world we creatives work in is not about to accelerate away from us as quickly as we might imagine. That’s not to say there won’t be seismic impacts, but rest assured that, as always, the creative person only has to learn new tricks. Get it right and we can unlock what AI can do now, safe in the knowledge that we’re not going to be designed out of the loop any time soon.
Why do I know this? At ground level we creatives can already sense the issue. As AI pioneer and former Chief AI Scientist at Meta, Yann LeCun put it: “Current systems are really not that smart. They’re trained on public data, so basically, they can't invent new things.”
If you’ve ever used a tool like ChatGPT on a creative business problem, you’ll know that what looks like invention from AI is actually just averaging: an uninspiring remix of what already exists. By design, it sorts vast amounts of text scraped from the internet, finds what’s statistically common, sands down the bumps, quietly bins the outliers, then magically produces a detailed yet bland and often inaccurate response. The “tyranny of averages” that I first wrote about in 2015 is being scaled beyond recognition.
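For the technically curious, here is a deliberately tiny sketch of the averaging effect. This is not how real LLMs are built (they use neural networks over billions of parameters, not word counts), and the corpus and word choices are entirely made up for illustration, but the core dynamic is the same: when generation always follows the statistically most common path, the outlier idea never survives.

```python
from collections import Counter, defaultdict

# A made-up corpus of four tiny "pitches". One contains a surprising idea.
corpus = [
    "the detective solves the case",
    "the detective solves the mystery",
    "the detective solves the case",
    "the detective becomes the suspect",  # the one genuinely fresh twist
]

# Count which word follows each pair of words (a crude trigram model).
follows = defaultdict(Counter)
for line in corpus:
    w = line.split()
    for a, b, c in zip(w, w[1:], w[2:]):
        follows[(a, b)][c] += 1

# Greedy generation: always take the statistically most common next word.
context, output = ("the", "detective"), ["the", "detective"]
for _ in range(3):
    nxt = follows[context].most_common(1)[0][0]
    output.append(nxt)
    context = (context[1], nxt)

print(" ".join(output))  # -> "the detective solves the case"
```

The model dutifully reproduces the most common pitch and quietly bins the twist, because "becomes the suspect" appears only once. Averaging at internet scale works the same way, just with far more sophistication.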
Dig deeper and it gets worse. This is the earthquake point: today’s AI is built on maths that is unfit for purpose. The Euclidean algebra that makes all Generative AI foundation models work is excellent at measuring similarity and optimising patterns, but this first step in the AI revolution is beset by decay, hallucination and signposts down blind alleys. These aren’t bugs that will be patched in the next release; they are structural limitations, hiding in plain sight, baked in.
Why? Because reality is full of things AI can’t yet fathom. Things we naturally understand, like contradiction, surprise, cause, effect and consequence, are part of a complex array of non-linear factors that fuel good ideas. Good ideas appear when something breaks, jumps, or is explained in a fundamentally new way. Euclidean maths can’t do that.
So we end up with systems that sound magically fluent but lack grounding in reality, because they guess results without understanding the real world. Google even uses the term “grounding” for the recurring need for people to transform LLMs from statistical guessers into reliable reasoning machines by shovelling carefully curated content into the hungry, yet soon-to-be-phased-out, engine.

If you’re wondering: I only prompted ChatGPT to show the impact of all this, not to make it look like a disaster movie. That was its choice.
The big AI players are selling us new homes with serious subsidence issues. Their pitch: move in for free, then rent the house, but you’ll have to help us renovate it while you’re living in it, because we built it on shifting sands - and do it now, while we work out how to build your dream AGI home. The truth is they don’t yet know how, when, or even if AGI can be built, but they do know that what they’re selling now has serious structural issues. They keep it quiet because they’re anchored to a return on eye-watering investment.
However, there are relatively hushed corners where Google, OpenAI, Microsoft and the others are working on the real issue. Projects like DeepMind’s Genie 3 and the pioneering World Labs, which announced itself in late 2024 with $230m in funding, are exploring how to build new foundation models using non-Euclidean geometry. But people like Sam Altman and Mark Zuckerberg don’t like to draw attention to this gaping mathematical void in their current plans. They tend to avoid discussing it publicly with serious thinkers like Yann LeCun, or David Deutsch, the father of quantum computing, a man with vast insight into the fabric of reality and author of a highly recommended book on the subject.
Yann quit Meta recently over this issue, calling LLMs a dead end and arguing that true “world models” (impossible under Euclidean maths) are required. David, meanwhile, politely skewered Sam face to face last September, on stage at an awards ceremony. He pointed out that while ChatGPT can talk better than he ever imagined, it cannot innovate, much less explain new ideas.
So yes - it’s a good-news message! Here we are in January 2026: creative people with systems that can’t be creative, because they can’t understand the world they describe. We, creative reader, you and I, remain essential. We are still the secret sauce, and will be for some time. Our imagination, judgement and ability to think outside the box remain our superpower. By comparison, current AI systems are blabber boxes incapable of rendering reality. That’s why AI workflow tools will have a big ’26, from Google’s Antigravity to the AI workflow orchestrator n8n. Current AI is wondrous, but it needs us to fix it and make it useful.
This idea is central to The IP Engine. It works because it pairs the best of AI with the best of human creativity (that’s you). It works because it’s the missing link between trying to use AI to be more creative and succeeding. It works because it reduces the risk of exposure to the fundamental problems I’ve described. Problems that may yet puncture the AI bubble, because we gloriously surprising, relatively chaotic and curiously creative humans cannot, and will not, be designed out. At least not yet. Now is the time to think about your creative process, work out how to use the new tools in the right way, then do it.
I’d love your thoughts!
Please comment on LinkedIn or drop me a line for more on The IP Engine.
Here’s some further reading:
Jose Crespo explains that everyone is using the wrong algebra in AI: https://hackernoon.com/everyones-using-the-wrong-algebra-in-ai
David Deutsch gives Sam Altman that back handed compliment: https://www.businessinsider.com/sam-altman-openai-david-deutsch-turing-test-for-agi-2025-9?utm_source=chatgpt.com
David’s websites for his books The Fabric Of Reality and The Beginning Of Infinity: https://www.daviddeutsch.org.uk/books/the-fabric-of-reality/ / https://www.thebeginningofinfinity.com/
BBC article on why Yann LeCun left Meta: https://www.bbc.co.uk/news/articles/cdx4x47w8p1o
Dr. Gary Marcus on why LeCun is likely to be right, but less likely to succeed: https://garymarcus.substack.com/p/breaking-marcus-weighs-in-mostly