Context, not compute, will define the next generation of intelligence


For years, AI progress has been measured by scale: larger models, bigger datasets, longer context windows. Each new breakthrough promises that if we simply feed systems more data, we’ll get sharper insights.

Yet, at inference time at least, that assumption is running into trouble. As models absorb longer prompts, they often become less reliable: with more to choose from, they are more likely to focus on the wrong thing.

Researchers call this context rot: as an AI system processes more information, irrelevant details clutter its working memory. The result can be less accurate responses, higher costs, and a gradual erosion of trust.

A recent experiment by Microsoft to create an AI-led “Magentic Marketplace” demonstrated how AI can fail here. The lab’s managing director Ece Kamar explained, “We are seeing that the current models are actually getting really overwhelmed by having too many options.”

How context rot creeps in

Most enterprise data resides in documents—PDFs, reports, and internal files that are chopped into chunks for retrieval-augmented generation (RAG). When a user asks a question, the system retrieves the passages that appear semantically similar and sends them to the large language model (LLM) as context.

The catch is that similarity isn’t the same as relevance. A fragment can look like a match but miss key definitions or exceptions. Without additional context, a fragment may just be noise.

The AI ends up juggling too much information, without understanding which parts truly matter and which just create more noise in the system.
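The gap between similarity and relevance can be seen in a minimal sketch. The chunk texts and the bag-of-words scoring below are illustrative assumptions standing in for a real embedding model, but the failure mode is the same: the chunk that echoes the query's wording outranks the one holding the controlling exception.

```python
from collections import Counter
from math import sqrt
import re

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    va = Counter(re.findall(r"[a-z]+", a.lower()))
    vb = Counter(re.findall(r"[a-z]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Two contract chunks: the first echoes the query's wording,
# the second holds the exception that actually changes the answer.
chunks = [
    "Termination fees apply on early termination of the agreement.",
    "Section 2: 'Termination' excludes exits taken during the cure period.",
]
query = "What fees apply on early termination?"

# Rank chunks by similarity to the query, highest first.
ranked = sorted(chunks, key=lambda c: cosine(query, c), reverse=True)
```

Nothing in the score tells the system that the second chunk qualifies the first; that signal has to come from structure, not lexical proximity.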


The fix isn’t to cram in more text; it’s to find text that’s more relevant to the business question at hand. That means equipping AI with a knowledge layer that reflects how the world really works: a network of entities and relationships, not disconnected data points.

Thinking in connections, not documents

Humans don’t reason in documents, but in relationships. A knowledge graph captures those connections explicitly: people, places, products, and the links between them.

When data is stored and searched as a graph, retrieval shifts from “closest approximate match” to “best supported answer.” A legal assistant, for example, might ask about a contract clause.

A keyword or vector search could return one clause that looks relevant; a graph-based system understands that the clause belongs to a larger definition and retrieves all the related sections. The answer is more complete and contextualized, and it avoids the problem of stitching information together across disconnected chunks.

The end result is that the model needs far fewer tokens to generate a relevant answer.
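Graph-guided retrieval for the contract example can be sketched with a plain adjacency map. The node names and edge structure are invented for illustration, and a real system would use a graph database, but the traversal idea is the same: start from the matched clause and follow explicit relationships to pull in the sections that define or qualify it.

```python
from collections import deque

# Directed edges: each node points to the nodes that define or qualify it.
# Names are hypothetical, standing in for entities in a knowledge graph.
graph = {
    "clause_7_termination": ["def_termination", "exception_cure_period"],
    "def_termination": ["section_2_definitions"],
    "exception_cure_period": [],
    "section_2_definitions": [],
}

def expand_context(start: str, max_hops: int = 2) -> list[str]:
    """Breadth-first walk from the matched node, up to max_hops away."""
    seen, queue, out = {start}, deque([(start, 0)]), [start]
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                out.append(nbr)
                queue.append((nbr, hops + 1))
    return out

# Vector search alone might surface only "clause_7_termination"; the
# traversal also brings in its definition and the cure-period exception.
context = expand_context("clause_7_termination")
```

Because the traversal returns only connected, relevant nodes rather than every lexically similar chunk, the prompt sent to the model stays small and on-topic.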

Why graphs build trust

Transparency is another major advantage of graphs. Vector embeddings, the numerical representations AI uses to relate similar pieces of text, are powerful for machines but effectively unreadable to humans.

A graph by contrast is easy to see and understand. It records the exact chain of facts the system used to reach a conclusion, along with the sources and permissions involved. It can be visualized in a way that makes sense to humans.

That traceability is invaluable in regulated environments. It’s much easier to justify a decision when you can show the path through the data that led to it, rather than point to a cluster of opaque numbers. Built-in governance and explainability make graph-based AI enterprise-ready and trustworthy.

Don’t wait for GPT-6

Some leaders ask why they should worry about context when future models will be smarter. It’s true that large language models are improving quickly. But no matter how capable they become, they will never be trained on your private enterprise data.

A foundation model also works a bit like a search engine with extraordinary reasoning capabilities but no index of your company’s information. It can generate answers, but without being fed the right context, it can’t know which parts of your knowledge are authoritative, up to date, or most relevant to the question.

Even when LLMs reach double-digit versions, they’ll still need a structured, secure way to access what’s unique to a business.

That’s why the bottleneck for AI adoption is shifting from compute power to data organization. The key question is no longer “Which model should I use?” It’s “How well is my knowledge organized?”

Making graphs easier to use

Graph databases once had a reputation for being hard to learn. That was true a decade ago, when teams had to invent their own schemas from scratch. Two changes have made them far more accessible.

First, the Graph Query Language (GQL) is now an international ISO standard. It’s the first new data language to be standardized since SQL decades ago. GQL gives engineers a shared declarative language for working with graph data, one that complements SQL rather than competing with it.
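To give a flavor of the declarative style, a pattern match over the earlier contract example might be sketched as follows. The labels, relationship type, and property names here are invented for illustration, and exact syntax can vary across implementations of the standard.

```gql
MATCH (c:Clause {id: 'clause_7'})-[:DEFINED_BY]->(d:Definition)
RETURN c.text, d.text
```

Like SQL, the query states what to retrieve rather than how to traverse the data; unlike SQL, the relationship itself is a first-class part of the pattern rather than a join reconstructed at query time.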

Standardization leads to improved interoperability, clearer documentation, and a well-defined skill set for hiring purposes.

Second, thanks to AI, modern graph platforms now automate work that previously required specialized expertise. Assisted modelling, domain templates, and hybrid search that blends vector and graph queries are now AI-powered and accelerated by agents.

It’s a step change in making the technology easier to use and deploy. Teams spend less time hand-crafting data structures and more time asking real business questions.

The knowledge-layer advantage

Smart organizations are realizing that the strongest AI results come from pairing powerful models with well-organized, connected and contextualized knowledge. The model is the reasoning engine; the graph is the scaffolding that holds the right facts in place.

When retrieval is guided by connections, it produces higher quality context, and better results. LLMs can spend less effort filling in gaps, and more on delivering accurate, explainable reasoning. Responses improve, latency drops, and costs fall. Most importantly, users start to trust the answers.

We’re moving from an era defined by raw compute to one defined by organised context. Longer prompts and bigger models will continue to matter, but structure, clarity, and connectedness will matter more.

If you want AI that’s consistent, fast, and trustworthy, the path forward isn’t “bigger.” It’s better organized.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

