If you run an operational business in 2026, you have almost certainly had this conversation: "We need to be doing something with AI." Maybe it came from the board. Maybe from a competitor's LinkedIn post. Maybe from a staff member who used ChatGPT to draft a document and thought, quite reasonably, that there must be more to this.
There is. But the conversation usually starts in the wrong place.
"How can we use AI?" is the question everyone asks. The better question is: what do we actually need to solve? Because AI is genuinely useful. But only when it is pointed at the right problem, built on reliable foundations, and deployed with enough discipline to be trusted. Most of the disappointment with AI in business comes from skipping one or more of those steps.
What People Mean When They Say "AI"
The term gets used loosely enough that it is worth being specific.
When most people say AI in a business context, they are talking about one of three things.
Large language models (LLMs) are what powers ChatGPT, Claude, Grok, and similar tools. You give them text, they generate text back. They are exceptional at summarising, drafting, translating, and working through unstructured information. They are not databases. They do not store your data, retrieve facts reliably, or guarantee consistency. They generate a response every time, and that response can vary.
Agentic AI is the next step. An AI agent is an LLM that has been given a specific role, a set of instructions, access to your company's knowledge, and the ability to take actions or make recommendations within defined boundaries. Instead of a general-purpose chatbot, you get a purpose-built assistant that knows your business, your processes, and your constraints. Think of it as the difference between asking a stranger for directions and asking someone who works in the building.
For example, a customer service agent built on your internal knowledge base can answer questions from customers or staff using your actual documentation, your actual pricing, your actual procedures. Built properly, it is far less likely to make things up, because it is drawing from sources you control rather than generating from memory. A compliance agent can review documents against your regulatory requirements and flag gaps for your team to act on. An onboarding agent can walk new staff through your systems and policies in plain language on day one.
These are not hypothetical. Platforms like OpenClaw make it possible to deploy agents like these for individual staff members or teams, each with their own role, their own knowledge base, and their own guardrails. We have done this. It works. But it works because the agents are built deliberately, tested properly, and supervised by people who understand what they are actually doing.
Automations and integrations are the third category, and this is where the confusion usually starts. A webhook that fires when a new work order arrives and creates a job in your scheduling system is not AI. A workflow that routes an invoice to the right approver based on the amount and the cost centre is not AI. These are automations. They are deterministic. They run the same way every time. They are reliable, auditable, and predictable.
AI is none of those things by default. It is probabilistic. It generates responses. That is what makes it powerful for some tasks and completely unsuitable for others.
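To make the contrast concrete, here is a minimal sketch of the invoice-routing automation described above. The thresholds and approver names are made up for illustration; the point is the property, not the particulars. Given the same invoice, this returns the same approver every time, which is exactly what makes it an automation rather than AI.

```python
def route_invoice(amount: float, cost_centre: str) -> str:
    """Return the approver for an invoice based on fixed, auditable rules.
    Thresholds and role names here are hypothetical examples."""
    if cost_centre == "CAPEX":
        return "finance-director"      # capital spend always escalates
    if amount <= 1_000:
        return "team-lead"             # small invoices stay local
    if amount <= 10_000:
        return "department-manager"
    return "finance-director"          # everything else escalates

# The same input always produces the same output -- deterministic,
# predictable, and easy to audit.
print(route_invoice(850.00, "OPEX"))     # team-lead
print(route_invoice(4_200.00, "OPEX"))   # department-manager
print(route_invoice(500.00, "CAPEX"))    # finance-director
```

A language model asked to route the same invoice might answer correctly most of the time, but "most of the time" is not a property you can audit.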
Where AI Sits in the Stack
Here is the mistake we see most often: businesses trying to use AI instead of building proper systems.
AI does not create structure. It works with structure you give it. If your data is scattered across spreadsheets, shared drives, email threads, and the heads of three people who have been there the longest, AI will not fix that. It will give you faster, more confident-sounding answers derived from unreliable sources. That is worse than not having AI at all, because now you are making decisions based on outputs you think you can trust but cannot.
When someone feeds a spreadsheet into ChatGPT and asks it to analyse the data, here is what actually happens. The model writes a script, runs it, and gives you a summary. Ask the same question tomorrow and it writes a different script. It might interpret columns differently. Handle edge cases differently. You get a fresh, non-repeatable analysis every time.
That is fine for exploration. It is not a data pipeline. In an operational business, that analysis should be a defined integration: a process that runs the same way every time, producing consistent, auditable results. AI should sit at the end of that pipeline, summarising and interpreting data that has already been collected, validated, and delivered reliably. Not rebuilding the analysis from scratch on every run.
The order matters. Systems first. Data flowing through real integrations. Then AI on top to augment, summarise, and provide insight from data you can actually trust.
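The shape of "AI at the end of the pipeline" can be sketched in a few lines. Everything here is illustrative: the validation rules are hypothetical, and `summarise` is a stub standing in for whatever model or API you actually use. The point is the division of labour: collection and validation are fixed code that runs identically on every run, and only the final interpretation step involves a model.

```python
def validate(rows: list[dict]) -> list[dict]:
    """Deterministic validation: same input, same output, every run.
    The rules here are hypothetical examples."""
    clean = []
    for row in rows:
        if row.get("qty") is None or row["qty"] < 0:
            continue                     # reject malformed rows consistently
        clean.append({"sku": row["sku"], "qty": int(row["qty"])})
    return clean

def summarise(clean_rows: list[dict]) -> str:
    """Stub for the one probabilistic step: in practice, an LLM
    interpreting data that has already been collected and validated."""
    total = sum(r["qty"] for r in clean_rows)
    return f"{len(clean_rows)} valid rows, {total} units total."

raw = [{"sku": "A1", "qty": 3}, {"sku": "B2", "qty": -1}, {"sku": "C3", "qty": 5}]
print(summarise(validate(raw)))   # 2 valid rows, 8 units total.
```

Feed the model the output of `validate`, not the raw spreadsheet, and the part of the process that needs to be repeatable stays repeatable.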
Where AI Works Well Right Now
When the foundation is there, AI is genuinely transformative. Not as a buzzword. In practice.
Customer service and internal knowledge. An AI agent built on your company's actual documentation can answer questions from customers or staff instantly, accurately, and consistently. This is not a replacement for your team. It is a tool that handles the repetitive questions so your people can focus on the ones that require judgement. For businesses handling hundreds or thousands of enquiries a month, the impact on response time and staff capacity is immediate.
Operational reporting. If your systems already capture the data (your QMS, your ERP, your project management tools), AI can summarise it faster and more usefully than anyone manually pulling reports. NCR trends from Jira. Production metrics from your ERP. Service patterns from your CRM. The value is not in the data collection. That should already be automated. The value is in the interpretation.
AI-assisted development. We have co-built applications from scratch using AI-assisted coding. Not AI writing code unsupervised. A human architect making every design decision, with AI accelerating the implementation. The result is faster delivery without sacrificing quality or control.
Agentic assistants for staff. Individual AI agents configured for specific roles within a business. A compliance agent that reviews documentation against regulatory requirements and flags gaps. An onboarding agent that walks new staff through systems in plain language. A research agent that synthesises information across multiple systems to answer questions that currently require someone to open six tabs. Each with access to the right knowledge, the right guardrails, and clear boundaries. The value is giving your people tools that make them faster and more accurate at the work that actually needs them.
Where AI Does Not Work (and What to Do Instead)
Replacing a properly designed workflow. If you need a process that runs the same way every time, with an audit trail, you need an automation. Not a language model.
Making decisions from unstructured data. If your data is not in a system, AI cannot make it reliable. The answer is not a smarter model. The answer is getting the data into a proper system first.
Anything where auditability matters more than speed. Quality management, regulatory compliance, financial reporting. These need deterministic systems that produce consistent, defensible outputs. AI can sit alongside them. It should not replace them.
Skipping the systems work. The integration between your ERP and your scheduling system. The webhook that updates your CRM when a job is completed. The automation that routes approvals. This is not glamorous. Nor is it something you can hand off to a language model. This is infrastructure. It needs to be built properly.
"How Can We Use AI?" vs "What Do We Need to Solve?"
The businesses getting real value from AI right now are not the ones that started with the technology. They are the ones that started with the problem. They identified what was slow, what was manual, what was costing them time or accuracy. Then they looked at what tools could help. Sometimes AI was the answer. Sometimes it was an automation. Sometimes it was a system they should have implemented years ago.
The businesses struggling are the ones that bought the tool first and are now looking for problems to point it at. That is not a strategy. It is a solution looking for a question.
How We Use AI
We use AI every day at Big Finish. It is one of the most important tools in our kit. We use it to draft, research, analyse, develop, and build. It makes us faster and it makes our work better. We are not cautious about AI. We are intentional about it.
The simple filter we apply for our clients is the same one we apply to ourselves: if a task can be written as a flowchart with defined rules, it should be an automation. If it requires reading the room and deciding what to do, that is where AI earns its place. One is infrastructure. The other is intelligence. Both matter. Knowing which is which is the whole game.
We start with what you need to solve, not what is trending. If AI is the right answer, we build it properly. If it is not, we tell you. That is what we do.
Big Finish works with manufacturing and operational businesses to design and implement systems that work as a whole. If you are thinking about where AI fits in your operations and want a straight conversation about what is worth doing, book a discovery call.