
10 Takeaways from the Gold Coast: Building Workforce Capability with AI in Aged Care


From the Building Workforce Capability with AI event, Gold Coast, September 29


The aged care sector stands at a critical juncture with AI adoption. While 58% of workers are already using AI tools at work, according to KPMG and University of Melbourne research cited at our Building Workforce Capability with AI event on the Gold Coast last week, organisational adoption remains cautious. Here are the top insights from industry leaders on navigating this transformation.

1. AI Adoption is a Workforce Problem, Not a Technology Problem

George Gouzounis delivered a crucial insight that challenges how organisations approach AI implementation: "AI adoption is not a technology problem. It's a workforce development problem."

He shared a revealing anecdote about a robotics developer in Singapore whose trial with robotic cleaners in nursing homes failed - not because of technical issues, but because workers "kept forgetting to charge the robots overnight" and were the first to point out spots the robots had supposedly missed (spots that hadn't actually been missed).

This resistance stems from two fundamental concerns unique to aged care: values alignment and fear of replacement. As Gouzounis noted, the people who stay in aged care "are here because of the values and because, you know, let's be honest, they're not here for the big money. They're here because they believe in it." Organisations purchasing AI that sits unused have missed the critical element of creating human connection and helping teams understand what the technology means for them personally.

2. Start with Low-Risk, High-Inefficiency Tasks

Will Egan provided a pragmatic framework for AI implementation that reduces fear and maximises early wins. "There's so much inefficiency and digital paperwork in our healthcare system at large," he noted, advocating for breaking down jobs into component tasks and identifying "which parts of your job are low value enough or repetitive enough" for AI to handle.

Rather than jumping into high-consequence clinical applications, Egan emphasised starting with administrative burdens that consume healthcare workers' time without adding value. He noted these foundational AI models are "exceptionally good out of the box, and most of the time they're more performant than humans" for routine tasks.

The key is maintaining human oversight - Egan stressed that as long as the use case isn't high-consequence and there's a human in the loop making the final decisions, organisations can confidently proceed with AI implementation for these routine tasks.

3. Small Providers Have a Unique Advantage

Simon Miller challenged the assumption that larger organisations have an advantage in AI adoption. "You don't have the overhead of just the bureaucracy that large organisations have," he explained. "In my organisation, getting something done requires legal and digital tech and vendors and procurement, multiple stakeholders and finance and boards."

In contrast, small providers can "try stuff" with agility. "You've got the opportunity to use some great off the shelf tools that you don't need to customise. The moment it goes into a shop like mine, you've gotta customise and you've gotta fit it into a really complex stack."

George Gouzounis reinforced this collaborative opportunity: "Find providers from a different state... meet with other providers who are also looking at achieving the same thing, and go to the developer together with, you know, an increased budget."

4. Data Security Requires Airport-Level Thinking

Camille Rico presented a comprehensive framework for AI security by comparing it to the familiar experience of airport security. Just as airports have multiple checkpoints to ensure safety, AI systems need layered security to protect resident data.

Layer 1: The Passport Check (Identity Verification)
Just as you must show your passport to prove who you are at an airport, Regis's AI system requires users to prove their identity through single sign-on and multi-factor authentication. Only authorised staff can access the system.

Layer 2: Security Gates (Keeping Data Internal)
Like airport security gates that control who enters secure areas, AI systems must be built within the company's own digital infrastructure. Rico explained that data "stays within the company and it's not being used to train someone else's model" - everything happens behind private endpoints within Regis's own systems.

Layer 3: Domestic Travel Only (Data Sovereignty)
Rico compared this to only flying domestically: "All the processes, all the AI microservices that's running in the background are all within Australian borders." This ensures resident data remains protected under Australian jurisdiction and privacy laws.
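To make the three layers concrete, here is a minimal Python sketch of how an "airport-style" gatekeeper could check identity, endpoint and region before a prompt ever reaches a model. Every name and value in it (the role names, endpoint URL and region list) is an illustrative assumption, not a description of Regis's actual system.

```python
# Hypothetical sketch of "airport-style" gating before a prompt reaches an AI service.
# All names and values are illustrative; they do not describe Regis's real implementation.
from dataclasses import dataclass

PRIVATE_ENDPOINT = "https://ai.internal.example.org/v1/chat"  # assumed private endpoint
ALLOWED_REGIONS = {"australia-east", "australia-southeast"}   # assumed AU-only hosting


@dataclass
class UserSession:
    user_id: str
    sso_verified: bool   # Layer 1: single sign-on succeeded
    mfa_verified: bool   # Layer 1: multi-factor challenge passed
    roles: set


def route_request(session: UserSession, endpoint: str, region: str, prompt: str) -> str:
    # Layer 1: the "passport check" - only authenticated, authorised staff get through.
    if not (session.sso_verified and session.mfa_verified):
        raise PermissionError("Identity not verified (SSO + MFA required)")
    if "care_staff" not in session.roles:
        raise PermissionError("User is not authorised to query resident data")

    # Layer 2: the "security gate" - traffic may only go to the private endpoint,
    # so resident data never leaves the organisation's own infrastructure.
    if endpoint != PRIVATE_ENDPOINT:
        raise ValueError("Refusing to send data to an external endpoint")

    # Layer 3: "domestic travel only" - the service must be hosted within Australia.
    if region not in ALLOWED_REGIONS:
        raise ValueError("Refusing to process data outside Australian regions")

    # Only now would the prompt be forwarded to the internal AI microservice.
    return f"FORWARDED to {endpoint} ({region}): {prompt[:40]}..."
```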

Rico emphasised that "technology is just one side of the equation" - equally important is "having a strong governance and policies in place." This includes training staff on procedures for handling AI hallucinations, always emphasising to "use your human judgement. Use your clinical judgement because that is something that we would not want AI to replace."

5. Focus on the Middle Ground, Not Futuristic Robots

George Gouzounis warned against getting distracted by extremes in AI discussions. On one end, organisations get bogged down asking "Is Gemini better than Claude? Is it safe to put people's information online? We need policies, we need frameworks." On the other end, media coverage focuses on "some sort of a futuristic robot that dances or does something or makes coffee."

"Between these futuristic possibilities and these very real problems, we are missing the middle ground, which is where the real decisions need to be made," he argued. This middle ground is "where we are looking at this technology and what it can do for our operations, what it can do for our rostering, what it can do for the things that, for the problems that we are facing every day in order to increase productivity."

6. Co-Design with Frontline Workers is Non-Negotiable

Brad Chesham brought a nurse's perspective to the importance of frontline involvement. "When electronic records came into nursing, it was a shit show. You open up the electronic record and there's 16,000 tabs. Someone's in cardiac arrest and you need to order something, you've gotta open eight tabs to find something."

The solution? "Co-designed, co-developed inclusion workforce context every day of the week, because particularly in aged care, it's nursing driven, nursing led on the floor in the trenches, and those nurses need to be empowered and comfortable to drive the tech."

Will Egan reinforced this message, encouraging providers to compel software vendors to "really co-design with you... drag them kicking and screaming" if necessary, and "hold the contract over their head" to ensure accountability.

7. Build Your Own Custom GPT for Compliance

Simon Miller offered one of the evening's most actionable suggestions: "Build yourself a little Aged Care, new Aged Care Act, custom GPT that you can ask questions to and you can test your policies against. It's going to save you hours, it's going to do all the boring stuff for you. And there's almost no risk in that."

The process is straightforward - load a custom GPT with the Aged Care Act, Aged Care Rules, and Strengthened Standards. Staff can then query it to understand how policies align with regulations or interpret specific requirements. This represents the practical "middle ground" Gouzounis advocated for - achievable AI implementation solving real problems without massive investment.
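For teams that want to go one step beyond the ChatGPT custom GPT interface, a rough API-based equivalent takes only a few lines: load the legislation text and answer questions against it. The sketch below uses the OpenAI Python SDK; the file names, model choice and prompt wording are illustrative assumptions, not a prescribed setup.

```python
# Rough API-based equivalent of a "new Aged Care Act" custom GPT.
# Assumes local plain-text copies of the legislation; file names are illustrative.
from pathlib import Path

from openai import OpenAI  # pip install openai

SOURCE_FILES = ["aged_care_act_2024.txt", "aged_care_rules.txt", "strengthened_standards.txt"]

# Concatenate the reference documents into one context block.
# For very long documents you would chunk and retrieve rather than paste everything.
reference_text = "\n\n".join(Path(f).read_text() for f in SOURCE_FILES)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_compliance_question(question: str) -> str:
    """Answer a policy question using only the loaded legislation as context."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a compliance assistant for an aged care provider. "
                    "Answer only from the reference documents below and point to the "
                    "relevant section. If the answer is not in the documents, say so.\n\n"
                    + reference_text
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_compliance_question(
        "Does our open disclosure policy meet the strengthened standards?"
    ))
```

As the panel stressed throughout, the output is a starting point for review, not a compliance ruling: a human still checks every answer before it informs a policy decision.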

8. Workforce Capability Requires New Training Approaches

Will Egan revealed a challenging truth about current training: according to Ausmed's Chief Nursing Officer, "there's no evidence that says [training] actually works." Current approaches have become "this kind of compliance tick-box exercise."

The future involves AI-powered adaptive learning that can assess competency in real-time. For example, "the learner can sit with an auditor who's asking them about SIRS, have a role play simulation at scale. In five minutes, we can tell you if they answered the questions according to your policy and procedure."

Egan noted that "three providers that we know of in Australia" are already building these systems themselves, with some "giving our modules to LLMs and asking them to make them" into personalised learning experiences.
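As a rough illustration of how such an assessment could be scored, the sketch below asks a model to judge a learner's role-play answer about SIRS against an excerpt of the provider's own policy. The policy wording, rubric and model choice are assumptions made for illustration; the panel did not describe a specific implementation.

```python
# Illustrative sketch: grading a learner's SIRS role-play answer against local policy.
# The policy excerpt, rubric and model choice are assumptions, not any provider's real system.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY_EXCERPT = (
    "Priority 1 incidents under SIRS must be reported to the Commission within 24 hours. "
    "Priority 2 incidents must be reported within 30 days. All incidents require immediate "
    "action to ensure the safety of the resident."
)


def grade_answer(question: str, learner_answer: str) -> str:
    """Return a short pass/fail judgement with feedback, based only on the policy excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assessing an aged care worker's answer in a role-play audit. "
                    "Judge it strictly against this policy excerpt and reply with "
                    "PASS or FAIL plus one sentence of feedback.\n\n" + POLICY_EXCERPT
                ),
            },
            {
                "role": "user",
                "content": f"Question: {question}\nLearner's answer: {learner_answer}",
            },
        ],
    )
    return response.choices[0].message.content


print(grade_answer(
    "A resident alleges unlawful sexual contact. What do you do and when must it be reported?",
    "I would make the resident safe, tell my manager, and report it to the Commission within 24 hours.",
))
```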

9. AI Success = Time for Human Connection

Camille Rico shared evidence from Regis's AI assistant pilot, which surfaces "key clinical events that happened to residents in the past 24 hours." The pilot has already reduced the associated manual processes by about 50%.

But the true value isn't just efficiency. "Every day our care managers read hundreds of pages of progress notes only to understand what happens to residents overnight," Rico explained. By having AI handle these "mundane, boring, repetitive tasks," staff can "focus on what truly matters... providing direct resident care. This care, it's something that a human can only provide and no machine, no AI can ever replace."

Simon Miller reinforced this: "If I want to find the nurses, I go to where the bank of computers are at the back of the nurses station, because that's where they're sitting doing data entry. And most people didn't become nurses to do data entry."

10. The Interface is Changing - Voice Will Dominate

Brad Chesham predicted a fundamental shift in how we interact with AI: "You're not going to be sitting down writing prompts. That's not going to happen. You'll be talking to it. That's the UI, and that'll happen in 18 to 24 months."

This isn't speculation - Google has "a new pair of glasses coming out" early next year, and "if you go to the Oakley store now you can buy a pair of Oakleys that have computation." These wearables allow hands-free interaction with AI agents.

Chesham contextualised this change: "Who in the last 24 hours has spent more than one hour away from their mobile computer?" When no hands went up, he noted, "We've already done it. We've already put computation into our lives in one generation."

The Bottom Line

The message from all panelists was clear: AI in aged care isn't about replacing human touch - it's about amplifying it. Success requires treating AI adoption as a workforce development challenge, starting with practical applications, and maintaining rigorous governance while empowering staff to innovate.

As George Gouzounis noted, referencing Google DeepMind CEO Demis Hassabis: "In the very near future... we are going to need to keep learning, and learning how to learn is a skill of the future."


Join us for upcoming Building Workforce Capability with AI events in Melbourne, Sydney, Adelaide and Perth to explore these strategies for your organisation. Dates released soon.