Last week, Caitlin Merritt and I had the incredible opportunity to attend SXSW Sydney, and wow - what a ride. The energy, the ideas, the conversations... it all left us with a lot to think about. While AI was undeniably the star of the show (surprise, surprise), what struck me most was how every talk circled back to the same core theme: the human element. As Fenella Kernebone (Head of Conference) said in her opening speech - this time last year we were talking about what AI can and can’t do… this year, we seem to be talking about what AI should and shouldn’t do.
Here are my top takeaways from some of the most thought-provoking sessions:
If there was one talk that set the tone for the entire conference, it was Mo Gawdat's keynote. The former Google X executive didn't hold back when discussing AI's potential impact on humanity. His message was clear and urgent: AI's ethical use is crucial for humanity's future.
What resonated with me most was his emphasis on responsibility. We're not just building tools anymore - we're shaping the future of human existence. Gawdat reminded us that the decisions we make today about how we develop and deploy AI will echo for generations. It's not about whether AI will be powerful - it already is - but whether we'll be wise enough to use it responsibly.
My takeaway: This isn't just a tech problem - it's a human problem. It’s also not just a problem - it’s an opportunity. And it requires all of us to think deeply about the world we want to create.
In a conference dominated by serious AI discussions, this session was a refreshing reminder of something we often forget in our productivity-obsessed world: play is vital for innovation and adaptability.
The panel of speakers made a compelling case that play isn't just for kids - it's essential for navigating future uncertainties and enhancing human potential in the digital age. When we play, we experiment without fear of failure. We explore possibilities. We build resilience.
In an age where AI can handle so many of our routine tasks, maybe our uniquely human ability to play, imagine, and create is exactly what will keep us relevant and fulfilled.
My takeaway: Don't just accept the status quo blindly - play with it and see what happens. Creativity and resilience aren't luxuries; they're necessities. Favourite quote - “play greases the potential to better futures.”
This panel tackled the question everyone's been asking: Will AI replace creative workers? The answer was reassuring and challenging at the same time: AI amplifies creativity; human imagination remains essential.
The speakers emphasised that AI is a tool, not a replacement. It can handle technical execution, iterate rapidly, and process vast amounts of data - but it can't originate truly novel ideas or understand the deeply human contexts that drive meaningful creative work. Creativity is an expression of self - for the foreseeable future, AI has no self, so at its core it lacks the vital ingredient required to be a truly creative force.
The real competitive advantage in the future? People who can harness AI to amplify their creativity while maintaining their unique human perspective.
My takeaway: Learn the tools, but never lose sight of what makes you human. Your imagination and your unique perspective on the world are, and will always be, your superpower.
Perplexity's session was fascinating because it reframed what we expect from AI tools. Instead of building yet another general-purpose AI, they're focusing on serving curiosity with accurate, concise answers.
What struck me was their commitment to accuracy and citation, fighting back against the hallucination problem that plagues many LLMs. Their vision of AI as a curiosity engine, proactively helping users explore and discover, felt fresh and genuinely useful.
My takeaway: The best AI tools won't try to do everything. They'll excel at specific things that genuinely improve how we think and learn.
This was one of the more technical sessions, but the implications were massive. The future of AI isn't just chatbots - it's autonomous agents that can take action on our behalf. The key insight? Modular design enhances AI agent interoperability and safety.
Instead of monolithic systems, we're moving toward ecosystems of specialised agents that can work together, hand off tasks, and operate with appropriate safeguards. Think of it like microservices for AI - each agent does one thing well, and they collaborate as needed.
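To make the "microservices for AI" analogy concrete, here's a minimal sketch of what that modular pattern might look like in code. This is purely illustrative - the agent names, the `Task` shape, and the router are all hypothetical, not any real framework from the session - but it shows the core idea: each agent has one narrow job, a router hands tasks off to the right specialist, and anything outside a registered agent's remit is refused rather than guessed at (the safeguard part).

```python
# Hypothetical sketch of modular, specialised agents composed by a router.
# Not a real framework - just the shape of the idea.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Task:
    kind: str      # e.g. "summarise", "schedule"
    payload: str


class SummariserAgent:
    """Does one thing well: shorten text."""
    def handle(self, task: Task) -> str:
        words = task.payload.split()
        return " ".join(words[:10]) + ("…" if len(words) > 10 else "")


class SchedulerAgent:
    """Does one thing well: acknowledge a scheduling request."""
    def handle(self, task: Task) -> str:
        return f"Scheduled: {task.payload}"


class Router:
    """Hands each task to the right specialist, with a simple safeguard."""
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[Task], str]] = {}

    def register(self, kind: str, agent) -> None:
        self._agents[kind] = agent.handle

    def dispatch(self, task: Task) -> str:
        if task.kind not in self._agents:
            # Safeguard: refuse anything no agent is authorised to handle.
            return f"Refused: no agent registered for '{task.kind}'"
        return self._agents[task.kind](task)


router = Router()
router.register("summarise", SummariserAgent())
router.register("schedule", SchedulerAgent())

print(router.dispatch(Task("schedule", "coffee catch-up, Tuesday 10am")))
print(router.dispatch(Task("delete_database", "everything")))  # refused by design
```

The design choice worth noticing: because each agent's scope is explicit, adding a new capability means registering a new specialist, and the system's "can't do" list is just as deliberate as its "can do" list.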
My takeaway: We're moving from AI assistants to AI coworkers. If you’re not already doing this, you need to start thinking about structured data at rest and how it flows between your systems (contact us if you aren’t sure what this means, we can help get your organisation AI-ready).
This session was equal parts inspiring and unsettling. The core message: future bionic limbs may exceed natural human abilities, and that raises real ethical concerns.
We saw demonstrations of prosthetics that don't just restore function - they enhance it. Athletes are already performing at elite levels with bionic limbs. But this raises profound questions: When do we cross from restoration to enhancement? What happens when "disabled" athletes outperform "able-bodied" ones? How do we think about fairness, accessibility, and what it means to be physically human?
My takeaway: Technology is moving faster than our ethical frameworks. We need to have these conversations now, not after the fact.
If you weren't worried about deepfakes before, this session would change that. The sophistication of AI-generated fake content is advancing at a terrifying pace, and critical thinking skills are essential to combat deepfake misinformation.
What concerned me most was the impact on young people, who are growing up in a world where seeing is no longer believing. The speakers emphasised the urgent need for education initiatives and collaboration across tech, education, and government sectors.
My takeaway: Media literacy isn't optional anymore. We all need to become better at questioning what we see and teaching others to do the same.
Walking away from SXSW Sydney, I'm feeling both energised and sobered. The technology being developed is extraordinary - truly science fiction becoming reality. But what stuck with me most wasn't the tech itself; it was the repeated insistence from nearly every speaker that we need to keep humans at the centre of this transformation.
AI will be as good or as bad as we make it. The tools are powerful, but they're still just tools. How we choose to build them, deploy them, and regulate them will determine whether we're heading toward utopia, dystopia, or something in between.
The future is being written right now, and after SXSW, I'm convinced that staying curious, staying critical, and staying human are our best strategies for navigating what comes next.