Generative AI Was The Warm‑up, Agentic AI Changes The Game

Most agencies are now experimenting with generative AI for summarisation, policy drafting, and customer-facing Q&A.

These use cases are powerful efficiency drivers, but they are fundamentally about producing content in response to prompts.

Agentic AI is different. Agentic systems “plan, decide, and act autonomously,” orchestrating complex workflows with minimal human intervention. Instead of answering a question about a citizen’s case, an agent might:

  • Retrieve relevant policy and case history
  • Draft a recommendation
  • Trigger updates across multiple systems
  • Notify the right human for oversight

In other words, we move from prompts to plans. That shift introduces a new “complexity gap” between today’s genAI pilots and tomorrow’s agentic services. It also brings new failure modes: task orchestration risks, goal misalignment, error compounding, fragile integrations, and much tougher testing and governance challenges.

Beyond technology choices, this shift is forcing public sector business and technology leaders to face governance and legitimacy questions. The more AI participates in government decisions, the higher the expectations for explainability, fairness, and accountability from ministers, auditors, and the public.

From Platforms And Knowledge Stores To Orchestration And Control Planes

In my keynote I’ll walk through three architecture shifts agencies must make to move from “clever” chatbots to manageable, governable AI agents in production.

1. Data platforms become orchestration layers

Today’s data platforms were built largely for analytics and reporting. In an agentic world, they must evolve into orchestration layers that support:

  • Real‑time context sharing so multiple agents operate on the same live, semantically consistent reality at decision time.
  • Event pipelines and process intelligence so decisions are triggered by the right signals rather than batch uploads.
  • Policy‑aware access controls for both human and non‑human identities, enforced consistently across agents and tools.

If you treat the data layer as “just storage” you’ll end up with agents improvising on stale, fragmented facts — the opposite of what regulators, auditors, and citizens expect.
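The third bullet above — policy-aware access controls enforced consistently for human and non-human identities — can be sketched as follows. This is a minimal illustration, not a reference implementation; the identity record, policy table, and dataset names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical identity record covering both human and non-human (agent) callers.
@dataclass(frozen=True)
class Identity:
    subject_id: str
    kind: str                # "human" or "agent"
    clearances: frozenset    # clearances granted to this identity

# Hypothetical policy table: dataset -> clearance required to read it.
READ_POLICY = {
    "case_history": "casework",
    "payment_records": "finance",
}

def may_read(identity: Identity, dataset: str) -> bool:
    """Apply the same policy check to humans and agents: no agent bypass."""
    required = READ_POLICY.get(dataset)
    if required is None:
        return False  # default deny for datasets with no declared policy
    return required in identity.clearances

caseworker = Identity("u-123", "human", frozenset({"casework"}))
triage_agent = Identity("agent-triage-01", "agent", frozenset({"casework"}))

assert may_read(caseworker, "case_history")
assert may_read(triage_agent, "case_history")
assert not may_read(triage_agent, "payment_records")  # no finance clearance
```

The key design choice is that the agent identity flows through exactly the same policy check as the human one, so a single control can be audited for both.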

2. Knowledge architecture becomes the control plane

GenAI has already put pressure on knowledge management. Traditional waterfall knowledge practices can’t keep pace with real‑time, conversational access to organisational knowledge.

Agentic AI takes this further. Your knowledge layer effectively becomes the control plane for agents. That means:

  • Using knowledge graphs and semantic models to ground agents in structured, domain‑specific facts and relationships, reducing hallucinations and improving explainability.
  • Designing retrieval and reasoning patterns (including RAG variations) that respect context, user attributes, and policy constraints — not just “best effort” search.

Without a well‑designed knowledge control plane, each agent becomes a bespoke science experiment, and you lose the ability to assure behaviour at the portfolio level.
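To make the "control plane" idea concrete, here is a toy sketch of policy-constrained retrieval over a knowledge graph. The triple store, caller names, and programme labels are invented for illustration; the point is that retrieval is gated by policy, not best-effort search.

```python
# Minimal triple store standing in for a knowledge graph.
TRIPLES = [
    ("PolicyA", "applies_to", "ChildcareSubsidy"),
    ("PolicyA", "requires", "ResidencyCheck"),
    ("PolicyB", "applies_to", "AgedCare"),
]

# Hypothetical policy constraint: programme areas each caller may query.
ALLOWED_PROGRAMMES = {
    "agent-childcare": {"ChildcareSubsidy"},
}

def grounded_facts(caller: str, programme: str):
    """Return graph facts only if the caller is authorised for the programme."""
    if programme not in ALLOWED_PROGRAMMES.get(caller, set()):
        return []  # policy says no: return nothing rather than guessing
    # Find policies that apply to the programme, then all facts about them.
    policies = {s for s, p, o in TRIPLES if p == "applies_to" and o == programme}
    return [(s, p, o) for s, p, o in TRIPLES if s in policies]
```

An agent grounded this way can cite the specific triples behind an answer, which is what makes its behaviour explainable at the portfolio level.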

3. Governance becomes continuous, not stage‑gate

Most AI governance to date has focused on model selection, testing, and approvals at defined checkpoints. That made sense for discrete models and batch decisions, but it’s not enough when AI is embedded continuously in business operations.

Agentic AI requires continuous governance that:

  • Monitors agent behaviour and outcomes in production
  • Applies consistent policies across heterogeneous stacks and vendors
  • Supports rapid, auditable rollback when behaviour drifts
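The monitoring-and-rollback loop described above can be sketched in a few lines. This is a simplified illustration under an assumed drift rule (error rate over a sliding window); real deployments would use richer behavioural signals.

```python
from collections import deque

class AgentMonitor:
    """Watch production outcomes; flag rollback when behaviour drifts.

    Hypothetical drift rule: recommend rollback when the error rate over
    the last `window` decisions exceeds `threshold`.
    """

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold
        self.audit_log = []  # auditable trail of every observation

    def record(self, decision_id: str, ok: bool) -> str:
        self.outcomes.append(ok)
        self.audit_log.append((decision_id, ok))
        error_rate = self.outcomes.count(False) / len(self.outcomes)
        return "rollback" if error_rate > self.threshold else "continue"

monitor = AgentMonitor(window=10, threshold=0.2)
assert monitor.record("d1", True) == "continue"
assert monitor.record("d2", False) == "rollback"  # 1 failure in 2 = 50% > 20%
```

Because the monitor keeps its own audit log, the rollback decision itself is reviewable after the fact.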

This is why Forrester’s research on the agent control plane matters: governance must sit outside both the build plane and orchestration plane to provide independent visibility and policy enforcement as agents proliferate.

Policy Governance Evolves From People And PDFs To Metadata, Contracts, And Decision Logs

During the Canberra Summit, I’ll argue that governance needs new artifacts to become real and repeatable in an agentic context. We’ll unpack three in particular:

  1. Policy‑linked metadata. Agencies are already rich in policy, but poor in machine‑readable policy context. Linking policies to data, workflows, and agent capabilities via metadata is essential if you want agents to interpret obligations consistently and support external audits.
  2. Knowledge contracts. Think of these as explicit agreements about what each knowledge source means, how fresh it must be, and how agents may use it. They’re the counterpart to APIs and SLAs in traditional integration — but aimed at facts, interpretations, and decisions.
  3. Audit‑ready decision logs. In developing our framework on agentic guardrails (AEGIS – Forrester’s security framework for agentic AI), we highlight emergent behaviour, cascading failures, and obscured provenance as key risks. For government, the only sustainable answer is end‑to‑end decision logging that records agent intent, context, actions, and human oversight — in a form that can withstand parliamentary committee hearings and FOI requests.
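A knowledge contract (the second artifact above) can be expressed as a small, typed record. The fields and names below are illustrative assumptions, not a standard; the point is that meaning, freshness, and permitted uses become explicit and machine-checkable.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class KnowledgeContract:
    """Hypothetical 'knowledge contract' for a single knowledge source."""
    source: str
    meaning: str                 # agreed interpretation of the source
    max_staleness: timedelta     # freshness a consuming agent may rely on
    permitted_uses: frozenset    # e.g. {"ground_answer", "draft_recommendation"}

    def permits(self, use: str, age: timedelta) -> bool:
        """Check that a use is allowed and the data is fresh enough."""
        return use in self.permitted_uses and age <= self.max_staleness

contract = KnowledgeContract(
    source="eligibility_policy_manual",
    meaning="current eligibility rules as published",
    max_staleness=timedelta(hours=24),
    permitted_uses=frozenset({"ground_answer"}),
)

assert contract.permits("ground_answer", timedelta(hours=1))
assert not contract.permits("ground_answer", timedelta(days=2))   # too stale
assert not contract.permits("trigger_payment", timedelta(hours=1))  # not permitted
```

Like an SLA, the contract gives both the knowledge owner and the agent team something concrete to test against.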

Together, these artifacts turn “responsible AI” from a slide into an operating discipline.
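An audit-ready decision log entry of the kind described in the third artifact might look like the sketch below. Field names and the tamper-evidence scheme are assumptions for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(intent: str, context: dict, actions: list, overseer: str) -> dict:
    """Build a decision record capturing intent, context, actions, and oversight.

    The record is serialised to canonical JSON and hashed so that later
    tampering is detectable during audit or FOI review.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "intent": intent,
        "context": context,
        "actions": actions,
        "human_overseer": overseer,
    }
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

entry = log_decision(
    intent="triage childcare subsidy claim",
    context={"case_id": "C-1", "policy": "PolicyA"},
    actions=["retrieved case history", "drafted recommendation"],
    overseer="officer-7",
)
```

In practice such records would be written to append-only storage, but even this shape makes "who intended what, with what context, overseen by whom" answerable line by line.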

Where Do Agents Arrive First? Frontline, Back Office, Or Cross‑Agency?

The question I get most often from public sector leaders is not “What is agentic AI?” but “Where do we start?”

In my keynote, I’ll share a simple way to think about where agents belong first, informed by our broader research on agentic use cases and service management.

  • Frontline services. Look for high‑volume, rules‑heavy interactions where human staff are overwhelmed and where AI can operate under tight guardrails: eligibility triage, standard enquiries, appointment scheduling, and status updates are all candidates, especially where raw data input is low or context is provided at the point of interaction.
  • Back‑office operations. Service management is already seeing an agentic race, with AI systems diagnosing issues, applying fixes, and orchestrating end‑to‑end workflows. For the APS, think about claims processing, case reconciliation, or HR operations where delays and rework erode trust.
  • Cross‑agency collaboration. This is the most attractive and the riskiest zone. Before you deploy cross‑jurisdictional agents, you’ll need shared governance, shared data semantics, and a control plane that spans portfolios, not just programs. The summit’s focus on interoperability and future‑fit architecture is exactly the right backdrop for this discussion.

The lesson from our broader AI research is clear: start where value is provable and risk is bounded and let that success fund the next wave of investment.

Join The Conversation At The Aus Gov Data Summit In Canberra

Between tightening sovereignty requirements, rising public expectations for AI regulation, and the maturation of agentic AI, 2026 is a pivot year for government data and AI leaders.

At the Aus Gov Data Summit, we’ll dig deeper into:

  • The three architecture shifts that make agentic AI safer, more controllable, and more valuable in government
  • The new governance artifacts that turn policy into practice
  • How to prioritise agent use cases across frontline services, back‑office operations, and cross‑agency collaboration

If your agency is ready to move from pilots to provable, accountable AI‑enabled services, I hope you’ll join me at High Noon in Stream A along with your hardest questions!

Sam Higgins
VP, Principal Analyst, Forrester

Sam is a technology advisory professional who works with senior business and technology leaders to navigate continual, technology-driven transformation. At Forrester, he provides research-led guidance to financial services institutions, public sector agencies, and asset-intensive organisations across Australia and the Asia–Pacific region, with a focus on business–IT alignment, cloud and platform optimisation, and technology-enabled value creation.
