Artificial intelligence has become a standard feature within media and technology companies’ tech stacks. Generative AI is already assisting with everything from archive searches to marketing copy. But the next wave of investment is shifting to a more powerful class of tools: agentic AI.

Unlike earlier systems, agentic AI can execute complex, sequential tasks with contextual reasoning. Analysts at EY estimate that nearly half of technology executives are now prioritising investment in this space. The applications range from travel bookings to customer service and healthcare – but perhaps the most visible is live broadcasting.

For the past nine months, UK broadcasters have been testing how AI-powered assistants could reshape the nerve centre of live television: the production control room.

The initiative – AI Assistance Agents in Live Production, part of the IBC Accelerator Media Innovation Programme – aims to prove that agentic systems can play a practical, day-to-day role in one of the most pressurised environments in media.

Rewiring the control room


The project explores how intelligent assistants could manage show running orders, spot errors, retrieve video sources and editorial clips on demand, and respond to voice commands from directors in real time.

The goal is not simply efficiency, but to reduce cognitive overload, free up human talent for higher-value creative tasks, and establish a framework for how AI can integrate securely into mission-critical broadcast workflows.

For proponents, the bigger prize is positioning agentic AI not just as a tool but as the next generation of user interface – allowing operators to act faster, with greater accuracy and confidence.

Jon Roberts, CTO of ITN, describes the effort as “multiple proof-of-concept agents assisting operators across the live production stack as an intelligent new UI”, adding that the more radical ambition is to explore “how task-based, natural language, agent-to-agent interactions could redefine how we think about system integration.”

Jon Roberts, CTO, ITN

The focus for these broadcasters is on practical deployment. The project’s structure is designed to ensure the technology augments human decision-making rather than automating editorial choices.

Building an intelligent assistant director


The technical framework is being built as open and vendor-agnostic, supported by Google’s Gemini integrations. At the core sits an “orchestrator agent” – effectively an AI assistant director – which coordinates a suite of specialist agents for specific tasks such as running order management, operator voice control, video verification, reformatting, content discovery and error flagging.

The aim is an end-to-end system in which agents work both for and with each other, enabling natural language commands to trigger complex chains of action.
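
To make that pattern concrete, the sketch below shows one way an orchestrator agent might route a natural-language command to a specialist agent and surface a confidence score to the operator. It is an illustrative outline only: the class names, the keyword-based intent classifier standing in for an LLM call such as Gemini, and the three example specialists are assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of an orchestrator ("assistant director") agent routing
# natural-language commands to specialist agents. Names and structure are
# illustrative, not drawn from the IBC Accelerator project's codebase.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AgentResult:
    agent: str
    summary: str
    confidence: float  # surfaced to the operator rather than acted on silently


def running_order_agent(command: str) -> AgentResult:
    # e.g. move, add or drop items in the show's running order
    return AgentResult("running_order", f"Updated running order: {command}", 0.92)


def content_discovery_agent(command: str) -> AgentResult:
    # e.g. search the archive for editorial clips matching the request
    return AgentResult("content_discovery", f"Found candidate clips for: {command}", 0.81)


def error_flagging_agent(command: str) -> AgentResult:
    # e.g. verify that a video source is live and correctly formatted
    return AgentResult("error_flagging", f"Checked sources for: {command}", 0.88)


SPECIALISTS: Dict[str, Callable[[str], AgentResult]] = {
    "running_order": running_order_agent,
    "content_discovery": content_discovery_agent,
    "error_flagging": error_flagging_agent,
}


def classify_intent(command: str) -> str:
    """Stand-in for an LLM call (e.g. Gemini) that maps a spoken or typed
    command to a specialist agent; here a simple keyword heuristic."""
    text = command.lower()
    if "clip" in text or "find" in text:
        return "content_discovery"
    if "check" in text or "verify" in text:
        return "error_flagging"
    return "running_order"


def orchestrate(command: str) -> AgentResult:
    """The orchestrator: pick a specialist, delegate the task, report back."""
    agent_name = classify_intent(command)
    result = SPECIALISTS[agent_name](command)
    print(f"[{result.agent}] {result.summary} (confidence {result.confidence:.2f})")
    return result


orchestrate("find the latest clip of the prime minister's statement")
```

In a fuller system the orchestrator would also chain agents together – for example, passing a discovered clip to a verification agent before offering it to the director – which is the agent-to-agent interaction the project describes.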

For Channel 4, the potential to relieve pressure on newsroom staff is significant. Paul Lindsay, its lead project manager, notes that freeing news gallery teams to focus squarely on editorial output is a big win, “especially as we grapple with a global news environment that has seen so many dramatic developments in the last few years. Going to air with AI that works for us is a big thing.”

The collaboration has forced participants to confront early-stage challenges, from workflow design to the environmental footprint of large-scale AI. Lindsay, for instance, stresses that the industry cannot ignore sustainability: “We know there are concerns about the environmental impact of AI, and we’re not deaf to that — it’s something the news and media industry is going to need to work on.”

From concept to capability


The project also demonstrates how quickly agentic AI has shifted from concept to operational pilot. Media groups are already testing agents that generate highlight reels from live events, provide real-time subtitling, and flag potential misinformation before broadcast.

Lindsay argues that agentic AI “is no longer just a concept. It’s likely to become a core part of how media organisations operate… a capability that must support creativity, safeguard editorial integrity, and help us respond to the pace and complexity of modern media.”

In practice, executives expect adoption to follow the familiar arc of AI deployment: initially perceived as revolutionary, then quickly normalised into everyday operations.

Roberts predicts that “over the next 12 months, we will increasingly be using agents regularly, largely without thinking about them,” with the biggest barrier less about technical builds and more about “general education and empowerment.”

Guardrails and governance


According to Morag McIntosh, solution lead for Live Production Control at the BBC, trust is the non-negotiable currency.

Morag McIntosh, BBC

The BBC has emphasised that any deployment must come with transparent audit trails, visible confidence scores, and instant human override. “The tech is ready, and we’re clear it’s assistive, with humans firmly in charge,” says McIntosh.

“You still need audit trails of every agent action… and the ability to instantly override,” she adds.
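
The guardrails McIntosh describes map onto a simple engineering pattern: log every proposed agent action with its confidence score, and keep a human approval gate in front of anything that reaches air. The sketch below is a hypothetical illustration of that pattern, not the BBC's system; the class names, fields and threshold are assumptions.

```python
# Illustrative guardrail pattern: every agent action is logged with a
# confidence score, and nothing executes without the option of human override.
# Names are hypothetical, not drawn from any broadcaster's actual system.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class AgentAction:
    agent: str
    description: str
    confidence: float
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    approved: bool = False  # stays False until a human (or policy) approves


class AuditLog:
    def __init__(self, auto_approve_threshold: float = 0.95):
        self.actions: List[AgentAction] = []
        self.auto_approve_threshold = auto_approve_threshold

    def propose(self, action: AgentAction) -> AgentAction:
        """Record the proposal; low-confidence actions wait for an operator."""
        if action.confidence >= self.auto_approve_threshold:
            action.approved = True
        self.actions.append(action)
        return action

    def override(self, index: int) -> None:
        """Instant human override: block or reverse a logged action."""
        self.actions[index].approved = False


log = AuditLog()
log.propose(AgentAction("running_order", "Swap items 4 and 5", confidence=0.97))
log.propose(AgentAction("content_discovery", "Queue archive clip", confidence=0.72))
log.override(0)  # the director pulls the change back before it reaches air
```

Framed this way, “assistive, with humans firmly in charge” becomes something the system can demonstrate: the log records what was proposed, with what confidence, and the override path is always available.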

Roberts echoes this point: “As trusted producers of news and factual content, it will be particularly important for us to continue to stand on bedrock principles of transparency and strong journalism as we continue to engage with these new technologies.”

Channel 4 is also keen to show that deployment aligns with its wider ethical stance. “You want to be able to trust your processes in a newsroom and news gallery one hundred percent,” Lindsay says. “Channel 4 has strong principles about using AI responsibly.”

A showcase of applied AI


The initiative is supported by a network of technology partners including CUEZ, Amira Labs, Highfield-AI, Monks, Cuepilot, Shure, EVS, Moments Lab and Google Cloud.

The first public demonstration will take place at the annual broadcast conference IBC2025 in Amsterdam, where the consortium will showcase proof-of-concept agents in action.

The Accelerator Media Innovation Programme has long been a test bed for emerging technologies, but the agentic AI pilots mark a step change. They signal an attempt to build a new layer of technical infrastructure around live production.

For business executives outside the media sector, the project offers a case study in how agentic AI might be applied in high-pressure, real-time environments. The lessons – from orchestrating multiple agents through a supervisory AI to embedding governance from the outset – will resonate well beyond television studios.

The road ahead


Agentic AI is moving rapidly from laboratory experiments to practical deployment. For broadcasters, it promises to ease operational bottlenecks, safeguard editorial standards, and expand creative capacity.

For the technology industry, it offers a high-profile proving ground in a complex real-time environment. The next 12 months are likely to see agents embedded more deeply into workflows, often unnoticed. Tech leaders across sectors will need to assess how quickly their teams and processes can adapt to harness it – without losing sight of the trust and principles that make the output valuable in the first place.
