
Designing a GraphQL supergraph your AI agents can safely use
By Seb Potter


Most organisations have a first proof of concept with AI behind them by now. With the right use case, value appears quickly. The harder question is what happens next. How do you move from a promising demo to something that works reliably across your whole organisation?
Building a chatbot, content assistant or co-pilot is relatively straightforward. There are plenty of open source LLMs available, and with the Model Context Protocol (MCP) you can give an AI model access to the systems it needs as input. A working demo can be up and running in a matter of days. The real difficulty begins when you try to take that demo into production.
Technology is rarely the bottleneck. The obstacles sit around it: IT security, availability, privacy and compliance, governance, and the daily reality of how people actually work. That is exactly where projects stall. Once you need real scale, the quick experiments slow down and it becomes much less clear whether the value you saw in the demo will hold. So where do you start if you want to build an AI project that lasts?
For customers, the promise of AI is convenience. A chatbot that answers questions around the clock without waiting for a person to become available. The deeper question is whether customers receive the same quality of service from an automated system as they do from a human being. Only when the answer is genuinely yes will they experience and appreciate that convenience.
In many B2B organisations, customer relationships are built on access to specific people. Inside sales teams carry institutional knowledge that no system captures: product expertise, awareness of a customer's ongoing projects, a feel for when to bend the rules. Customers come back not because the process is efficient but because the person on the other end of the phone understands their situation. That knowledge is the real asset. If an AI has access to the same information and knows how to apply it, it can be just as effective. The question is how you get from one to the other.
Before you can decide what to automate, you need to understand the information needs behind a customer interaction. A good inside sales conversation follows a consistent pattern. Someone checks the order history, reviews the customer profile for ongoing projects, verifies stock levels and delivery times, and only then recommends a product that fits the budget and planning. That sequence is not random. It reflects a logic the business has refined over years.
An AI system needs the same playbook. It has to know which information to gather, in what order, where that information lives, and how to combine different sources into a useful answer. We find it helpful to map this out as a data journey: a visualisation of how customers collect information on their way to a decision. Much like the more familiar customer journey model, it reveals where things break down. When a person gives the wrong advice, they deal with the fallout themselves. If an automated system is going to carry that same responsibility, you need to define exactly which checks it must run and how much latitude it has.
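One way to make that playbook explicit is to encode it as an ordered list of data-gathering steps, each tied to a named source. The sketch below is purely illustrative: the step names, sources, and the stub lookup are placeholders, not real systems.

```python
# A hypothetical inside-sales playbook expressed as ordered data-gathering steps.
# Step names and sources are illustrative placeholders, not real integrations.
PLAYBOOK = [
    ("order_history", "commerce platform"),
    ("customer_profile", "CRM"),
    ("stock_and_delivery", "PIM / inventory"),
]

def run_playbook(customer_id, fetch):
    """Gather each piece of information in the order the business defined,
    then hand the combined context to whatever produces the recommendation."""
    context = {}
    for step, source in PLAYBOOK:
        context[step] = fetch(step, source, customer_id)  # one lookup per step
    return context

# Stub fetch for illustration; a real implementation would call each system's API.
demo = run_playbook("C-1001", lambda step, source, cid: f"{step} for {cid} from {source}")
```

The point of the structure is that the order and the sources are declared, reviewable data rather than something buried in prompt text.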
This is what we mean by data readiness: knowing, for every use case, what information is needed, where it sits, and how it flows into the AI system.
A large part of the value AI brings sits in automation. But AI is not always the best answer to an automation problem. To check whether a pair of shoes is available in size 46, a database query will do. To calculate a shopping cart total, there are plenty of tools and functions that simply work better. Where AI becomes genuinely useful is in semantic and fuzzy queries: which winter jacket suits casual streetwear? Something like product X, but more sustainable?
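That distinction can be made concrete in code: route exact, structured questions to a plain lookup and reserve the model for fuzzy ones. A minimal sketch, where the inventory table and the dispatch rule are simplified assumptions:

```python
# Illustrative routing between a deterministic lookup and a semantic query.
# The inventory table and the dispatch heuristic are placeholder assumptions.
INVENTORY = {("shoe-123", 46): 5}  # (sku, size) -> units in stock

def in_stock(sku, size):
    """Exact availability check: a plain dictionary/database lookup, no AI needed."""
    return INVENTORY.get((sku, size), 0) > 0

def answer(question, sku=None, size=None):
    """Crude dispatcher: structured queries go to the lookup,
    fuzzy ones to a stand-in for an embedding / LLM query."""
    if sku is not None and size is not None:
        return "in stock" if in_stock(sku, size) else "out of stock"
    return "semantic search"
```

A real dispatcher would be more nuanced, but the principle holds: deterministic questions deserve deterministic answers.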
Once you understand where AI adds value and where it does not, you can identify the use cases worth pursuing. Today, the strongest cases tend to sit in the research phase, where customers are gathering and combining specific product information before making a choice.
Start with use cases where most of the data you need is already available in a small number of systems and is relatively complete. The more data-ready the domain, the shorter the path from proof of concept to daily use.
We are working on exactly this kind of problem with a client right now. Buyers and technical installers send orders in by fax, email and Excel spreadsheet. The inside sales team has to read each one, interpret it, match items to the product catalogue, attach the order to the right buyer account and enter it into the commerce platform. It is slow, repetitive, and it ties up people who could be spending their time on work that actually requires judgement. The AI feature we are building reads those incoming orders, converts them into structured orders in the platform, links them to the correct account, and suggests alternatives where a requested item cannot be matched exactly. Because we had already done the data readiness work and built a GraphQL layer that gave us clean, unified access to products, accounts and inventory, we were able to prototype and deliver rapidly. The hard part was not the AI. It was having the right data in the right shape before the AI ever entered the picture.
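The matching step in that feature can be sketched at a very high level: try an exact catalogue match first, and fall back to suggesting near matches when none exists. Everything here, the toy catalogue and the similarity rule, is a simplified placeholder for what the real feature does against the graph.

```python
import difflib

# Toy product catalogue; real data would come from the commerce platform via the graph.
CATALOGUE = {"M8 hex bolt": "SKU-001", "M8 hex nut": "SKU-002", "M10 hex bolt": "SKU-003"}

def match_order_line(requested):
    """Exact match where possible; otherwise propose the closest catalogue items
    so a person (or the customer) can confirm the substitution."""
    if requested in CATALOGUE:
        return {"sku": CATALOGUE[requested], "alternatives": []}
    close = difflib.get_close_matches(requested, CATALOGUE, n=2, cutoff=0.5)
    return {"sku": None, "alternatives": [CATALOGUE[name] for name in close]}
```

The design choice worth noting is that an inexact match never silently becomes an order line; it becomes a suggestion for a human to confirm.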
An AI coding tool can write code that accidentally destroys data. A virtual assistant can make incorrect offers because it relied on stale pricing or stock information. The risks that matter most are not dramatic incidents. They are small mistakes that repeat at machine speed. The difference between a human error and an AI error is scale: the same wrong assumption can reach hundreds of customers in minutes. That is why you should never give AI open access to all your systems, only to the data it needs for a specific use case.
In practice, access and ownership are not always well organised. Teams follow different rules and standards, documentation is patchy, and it is often individual experts who keep things running despite the gaps. If you want to add AI to that environment, someone first needs to be accountable for data quality and access rules at every important data source. Without that ownership, AI will simply reproduce and amplify the inconsistencies already present in the underlying data.
To set this up well, you need an architecture that treats AI as a separate component rather than something woven through your entire system.
If your platform is a monolith, every AI feature becomes an integration project. You are negotiating with the internals of a system that was never designed to be consumed this way. If your platform is composable, built from independent components that communicate through APIs, AI is just another component you plug in. You can introduce an AI capability without touching your checkout or product database, and when you want to change providers you replace one component rather than migrating the entire stack. This is the architecture we build for our clients, and it is the reason the order automation feature described above went from concept to working prototype in weeks rather than months.
The challenge is that AI needs access to information from several systems at once. Products, customer history and inventory typically live in a CMS, PIM, CRM and commerce platform, each with its own API and data model. Without a normalisation layer between them, every AI feature has to solve that integration problem from scratch. A GraphQL supergraph gives you that layer. It pulls data from every source, assembles it into one consistent, typed model, and presents AI with a single interface. You control exactly which information is accessible per use case. You can see what the AI is consuming. And because all traffic flows through the graph, you gain observability over which queries prove difficult to answer and where data quality or coverage needs to improve.
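What "one consistent, typed model" looks like in practice: a single query that spans data which would otherwise need three separate integrations, plus a per-use-case field allowlist enforced in the layer you control. The field names and the allowlist mechanism below are illustrative sketches, not a real supergraph gateway.

```python
import re

# A single supergraph query spanning CRM (account) and PIM/inventory (products).
# Every field name here is illustrative; a real schema is defined by your graph.
ORDER_ASSISTANT_QUERY = """
query OrderContext($accountId: ID!) {
  account(id: $accountId) { name ongoingProjects { title } }
  products(filter: { inStock: true }) { sku name deliveryDays }
}
"""

# Per-use-case access control: each AI feature gets an explicit field allowlist,
# so the assistant can see delivery times but not, say, margins or contracts.
ALLOWED_FIELDS = {
    "order_assistant": {"account", "name", "ongoingProjects", "title",
                        "products", "filter", "inStock", "sku", "deliveryDays"},
}

GRAPHQL_KEYWORDS = {"query", "ID", "true", "false", "null"}

def fields_permitted(use_case, query, operation="OrderContext", variables=("accountId", "id")):
    """Naive textual check that every identifier in the query is allowlisted.
    A real gateway would walk the parsed GraphQL document instead."""
    words = set(re.findall(r"[A-Za-z]\w*", query))
    words -= GRAPHQL_KEYWORDS | {operation} | set(variables)
    return words <= ALLOWED_FIELDS[use_case]
```

Because every request passes through a check like this, "which information is accessible per use case" stops being a policy document and becomes enforced configuration.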
You cannot trust AI to decide for itself which information it should or should not use. But you can put it to work effectively by presenting that information through a layer you control. Organisations that invest in this foundation will find that AI projects stop being fragile experiments and start becoming a real part of how the business operates. Data readiness is not a nice-to-have. It is the prerequisite.

About the author
Seb Potter, Strategist
Seb has more than 30 years of experience helping clients turn business needs into programmes of technical and organisational transformation.