Future-proofing public services with responsible AI

With two-thirds (64%) of agencies already exploring or piloting generative AI, the focus is shifting from “if” to “how.”

Australia’s public sector is under growing pressure to modernise systems, deliver better citizen services, and strengthen cyber resilience, all while managing tight budgets, legacy infrastructure, and skills shortages.

Yet many leaders are still unsure where and how to begin. Fragmented data across multiple agencies, complex compliance and privacy obligations, hidden costs, limited in-house AI expertise, and the challenge of integrating new technologies with legacy systems all make it difficult to scale AI initiatives. Add concerns around public trust, governance readiness, and the sheer number of potential use cases, and it's clear why moving from pilot projects to meaningful, government-wide AI outcomes can feel overwhelming.

Having a framework for responsible AI can help government agencies achieve accurate, reliable AI outcomes. At the heart of this framework lies grounding large language models (LLMs) in relevant context. This practice, known as context engineering, uses a capable search and relevance platform to draw on structured and unstructured proprietary data from across the organisation, feeding LLMs accurate, up-to-date information.
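To make the idea concrete, the pattern can be sketched in a few lines: retrieve the most relevant internal documents for a question, then assemble them into the prompt so the model answers from the organisation's own data rather than from memory. This is a minimal illustration only; the function names (`search_index`, `build_grounded_prompt`), the toy keyword scoring, and the sample documents are hypothetical stand-ins, not any specific product's API.

```python
# Minimal sketch of context engineering: ground an LLM prompt in
# documents retrieved from the organisation's own data.

def search_index(query: str, documents: list[dict], top_k: int = 2) -> list[dict]:
    """Toy relevance scoring: rank documents by query-term overlap.
    A real platform would use full-text and semantic search instead."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(doc["text"].lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query: str, documents: list[dict]) -> str:
    """Assemble retrieved passages into context the LLM must answer from."""
    hits = search_index(query, documents)
    context = "\n".join(f"- [{d['source']}] {d['text']}" for d in hits)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Illustrative agency documents (structured or unstructured sources).
policies = [
    {"source": "policy-2024.pdf", "text": "Grant applications close 30 June."},
    {"source": "faq.html", "text": "Payments are processed within 10 business days."},
    {"source": "intranet", "text": "Office hours are 9 to 5."},
]

prompt = build_grounded_prompt("When do grant applications close?", policies)
```

Because the prompt cites only the passages the search step returned, the answer is traceable to named sources and stays current as the underlying documents change.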

Security and observability are equally critical components of this framework. With sensitive and regulated data at stake, government agencies must invest in a unified platform that integrates search, AI, security, and observability into a scalable solution, delivering end-to-end security through a zero-trust architecture, role-based access controls, encryption, and real-time threat detection. Observability provides visibility into AI interactions and data usage, enabling traceability, accountability, and continuous improvement. This helps agencies build secure, accurate, and transparent generative AI (GenAI) experiences using preferred models and providers while maintaining full data control and sovereignty.
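Two of the controls mentioned above, role-based access to retrieved data and an auditable trail of AI interactions, can be sketched together: filter documents by the user's clearance before they ever reach the model, and log which documents fed each interaction. The role names, classifications, and in-memory structures here are illustrative assumptions, not a particular platform's implementation.

```python
# Hedged sketch: role-based access control plus an audit trail
# recording which documents were supplied to the model and when.
import json
from datetime import datetime, timezone

DOCUMENTS = [
    {"id": "doc-1", "classification": "public", "text": "Service hours."},
    {"id": "doc-2", "classification": "protected", "text": "Case notes."},
]

# Which document classifications each role may see (illustrative).
ROLE_CLEARANCE = {
    "citizen": {"public"},
    "case_officer": {"public", "protected"},
}

AUDIT_LOG: list[str] = []

def retrieve_for_user(role: str, query: str) -> list[dict]:
    """Return only documents the role is cleared for, and log the access."""
    allowed = ROLE_CLEARANCE.get(role, set())
    hits = [d for d in DOCUMENTS if d["classification"] in allowed]
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "query": query,
        "doc_ids": [d["id"] for d in hits],  # traceability: what fed the model
    }))
    return hits

citizen_hits = retrieve_for_user("citizen", "opening hours")
officer_hits = retrieve_for_user("case_officer", "case 42")
```

The log entries are what makes the interaction traceable after the fact: for any AI-generated answer, an agency can reconstruct who asked what, and exactly which data informed the response.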

Read also: Rethinking ROI in government: Delivering public value through incremental, outcome-driven projects

Across the public sector, these capabilities are already enabling new GenAI experiences. For example, agencies are using unified solutions that combine search and observability to power policy research assistants that summarise complex documents, citizen service chatbots grounded in verified information, and case triage tools that help staff quickly review large volumes of data. The same capabilities extend to observability itself, providing instant explanations for system anomalies or performance issues and turning raw telemetry into actionable insights.

Similarly, these solutions can transform how security analysts work. They summarise incidents, correlate related alerts, and generate clear, compliant reports, accelerating response times while ensuring accuracy and transparency. GenAI also helps detect emerging patterns and unusual activity across logs and networks, giving agencies a proactive defence against threats.

The Cost of Doing Nothing and the Impact of Action

In Australia, the average cost of a data breach surged to A$4.26 million in 2024, up 27% since 2020. Meanwhile, the upside of action is equally compelling: responsible adoption of generative AI could contribute between A$45 billion and A$115 billion annually to Australia’s economy by 2030. For government agencies, investing in a mature GenAI foundation comprising search, observability, and security isn’t just about avoiding losses; it’s about unlocking value, improving services, and driving efficiencies that benefit citizens while reducing risk.

Equally important is working with vendors that share a framework for responsible AI. Doing so allows agencies to move confidently from pilot projects to whole-of-government capabilities. By grounding GenAI in truth, security, and transparency, Elastic helps government deliver not just smarter services, but a safer, more trusted digital future for all Australians.

Anna Mascarello
Regional Vice President, Public Sector and Education ANZ at Elastic

Anna Mascarello is the Regional Vice President, Public Sector & Education ANZ at Elastic and a recognised data pioneer with more than 20 years of professional experience.

Her career has been shaped by a commitment to data-driven innovation and to fostering inclusive cultures where individuals from all backgrounds can thrive in the technology sector.

In her role at Elastic, Anna leads strategic initiatives and key business functions to support government and education organisations across Australia, helping them harness the power of data to deliver more secure, effective and user-centric digital experiences.

Previously, she held senior roles at Adobe and Novell, where she championed data literacy to enable organisations to uncover insights faster, monitor infrastructure for cyber risks, and protect highly targeted systems and data.

Anna is a strong advocate for women in technology and believes the true value of technology lies not only in its capabilities, but in the meaningful impact it can have on people and organisational culture.
