One of the most interesting aspects of the AI panel we held at Hotwire Australia this October was the extent of our agreement.
Despite the panelists' different roles, backgrounds, and perspectives, there was a striking consensus. The audience's views also aligned closely with early results from new Hotwire global research being developed with the House of Beautiful Business.
The conversation was part of our global effort to understand the impact of this technology on communications and marketing, from using it to create content and reports to building agents that can do most of our jobs and make decisions.
Bernice Muncaster (Senior Director, Marketing and Communications, APJ-EMEA at DXC Technology), Scott King (Principal Strategist, Growth and Innovation APAC at Adobe), and Andrew Birmingham (Tech Editor at Mi3) joined me on the panel, moderated by our Australian MD, Melissa Cullen. Laura Macdonald, Hotwire's Chief Growth Officer, revealed some fascinating early findings from our AI research.
Here is a breakdown of the most relevant points we discussed:
Do We Still Need the Pilot?
The panelists — and the market — see AI today as a tool or assistant, able to speed up work and help humans perform better. But increasingly, people are starting to see it as a “colleague,” which raises serious questions about decision-making power.
However, for most of us (me included), autonomous AI agents that can make choices still raise red flags: not necessarily ideological ones, but practical and technical ones. So the answer was yes, we still need the pilot.
Agents in Action: Promise and Pitfalls
Bernice Muncaster, from DXC Technology, described a lead management agent her team uses. It prioritizes incoming leads, provides context, and connects them directly to the right salespeople. However, it doesn’t make decisions beyond the initial selection. The team doesn’t trust it to do that.
At Hotwire, we’re developing AI agents for a wide range of tasks. Recently, we created a team of three agents specialized in reporting — one processes data, another formats the content, and a third checks the work.
It's impressive: for some reports, it has cut production time by 50%. But the agents still make simple mistakes, from capturing the wrong number to drawing conclusions that sound right but aren't. The panel's conclusion was that humans need to verify the AI's work at each step.
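To make the pattern concrete, here is a minimal sketch of that kind of three-agent reporting pipeline with a human checkpoint after every step. The `call_model` stub, the agent prompts, and the data shapes are all illustrative assumptions, not Hotwire's actual implementation:

```python
# Illustrative sketch of a three-agent reporting pipeline with a human
# checkpoint after each step. call_model() is a stand-in for any LLM
# client; prompts and data shapes are hypothetical.

def call_model(prompt: str) -> str:
    """Stub for an LLM call; swap in a real model client here."""
    raise NotImplementedError("wire up your model client")

def process_data(raw_rows: list[dict]) -> str:
    """Agent 1: turn raw metrics into a plain-language summary."""
    return call_model(f"Summarise these campaign metrics:\n{raw_rows}")

def format_report(summary: str) -> str:
    """Agent 2: shape the summary into the client report template."""
    return call_model(f"Format this as a client report:\n{summary}")

def check_report(report: str, raw_rows: list[dict]) -> str:
    """Agent 3: verify every figure in the report against the source data."""
    return call_model(
        f"Check each figure in this report against the data and flag "
        f"mismatches.\nReport:\n{report}\nData:\n{raw_rows}"
    )

def human_approves(stage: str, output: str) -> bool:
    """The panel's conclusion in code: a person signs off on every step."""
    print(f"--- {stage} ---\n{output}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def run_pipeline(raw_rows: list[dict]) -> str | None:
    summary = process_data(raw_rows)
    if not human_approves("data summary", summary):
        return None
    report = format_report(summary)
    if not human_approves("formatted report", report):
        return None
    audit = check_report(report, raw_rows)
    if not human_approves("automated check", audit):
        return None
    return report
```

The design point is the `human_approves` gate: the agents propose, but nothing moves to the next step until a person signs off.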
Autopilot Isn’t Enough
Scott King, from Adobe, compared AI to autopilot on a plane. The system can take off, cruise, and land, but everyone still wants a captain in the cockpit.
Andrew Birmingham, from Mi3, who has been running experiments with AI-generated journalism through his project Unprompted, pointed to another problem with AI agents. He cited a Stanford study which found that when AI agents were optimized for sales or engagement outcomes, dishonest or inaccurate content increased dramatically.
There is also the issue of bias, and of explainability: today, it is very difficult to trace the process that led an AI to a conclusion or a call. To complicate matters, the panelists pointed to open ethical and legal questions, since it's not clear how to attribute accountability in a workplace where AIs are making decisions.
Who’s Reading Your Content?
Another key theme of the panel was the shift in how people search for and find information online.
ChatGPT is already among the top 10 most-visited websites in the world. Perplexity handles over 100 million queries per week, growing 25% month-on-month. According to Andreessen Horowitz, 60% of US consumers recently used an AI chatbot to research or decide on a product.
Unlike traditional search, which helps users find information, AI search is an answer engine, and often a recommendation engine. Most people are satisfied by the AI's response and never click the links behind it.
That's already changing how the web works, giving rise to Generative Engine Optimization (GEO): making sure content is accessible to, and understood by, the systems shaping AI answers. Some companies are even producing two pages for each piece of content, one for humans and another for AIs.
Getting the Right Visibility
Bernice shared an example that should concern every communicator. DXC discovered ChatGPT was giving inaccurate or incomplete information about the company. After an audit, they found the AI was referencing outdated pages that were supposed to be inactive. The fix involved ensuring AI systems could read the correct data and actively disseminating updated content.
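The panel didn't go into the mechanics of DXC's fix, but one common lever is controlling which pages AI crawlers can read. As a hypothetical illustration (the crawler rules and URLs below are invented, not DXC's configuration), Python's standard library can audit whether a given bot is allowed to fetch a page under a robots.txt policy:

```python
# Illustrative audit: would an AI crawler be allowed to read a page?
# Uses Python's stdlib robots.txt parser; the rules and URLs are
# hypothetical. GPTBot and PerplexityBot are the real user agents of
# OpenAI's and Perplexity's crawlers.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /legacy/

User-agent: PerplexityBot
Disallow: /legacy/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

pages = [
    "https://www.example.com/legacy/old-services.html",  # retired page
    "https://www.example.com/services/current.html",     # current page
]
for bot in ("GPTBot", "PerplexityBot"):
    for url in pages:
        allowed = parser.can_fetch(bot, url)
        print(f"{bot:15} {'ALLOW' if allowed else 'BLOCK'}  {url}")
```

Blocking crawlers from retired pages is only half the job; the other half, as Bernice noted, is actively publishing up-to-date content for those same systems to find.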
At Hotwire, we’ve been developing tools like Hotwire Spark to track where AI models pull information from and how brands appear in responses. We’ve found surprising patterns. Sometimes an obscure publication carries more influence in AI models than a major outlet. The overlap between Google search results and AI answers can be as low as 5%.
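As a toy illustration of that overlap figure (the sample domains are invented, and this is not how Hotwire Spark actually works), you can compare the domains a search results page surfaces with the domains an AI answer cites:

```python
# Hypothetical overlap measurement: what share of the domains an AI
# answer cites also appear in the search results? Sample data invented.

def domain_overlap(search_results: list[str], ai_citations: list[str]) -> float:
    """Share of AI-cited domains that also appear in the search results."""
    cited = set(ai_citations)
    if not cited:
        return 0.0
    return len(set(search_results) & cited) / len(cited)

google_top10 = ["majoroutlet.com", "brand.com", "wikipedia.org",
                "news-site.com", "review-site.com"]
ai_sources = ["niche-blog.net", "wikipedia.org", "forum-thread.org",
              "obscure-trade-mag.com"]

print(f"Overlap: {domain_overlap(google_top10, ai_sources):.0%}")  # 25%
```

With this sample data, only one of the four AI-cited domains appears in the search results, a 25% overlap; the real-world figures we've seen can be far lower.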
So, Where Does This Leave Us?
In short:
- AI is already transforming how we work, but it’s mostly assisting us, not making decisions.
- Human oversight remains essential — machines still make silly mistakes and struggle in key moments.
- Agency and autonomy are the next frontiers, raising practical, technical, ethical, and governance questions we can’t ignore.
- One certainty: search is being redefined — brands must rethink how they’re discovered and understood in the AI era.
AI is reshaping the landscape and infrastructure of communication and marketing. As professionals, we have the opportunity — and the responsibility — to help define what these machines are for and how they will be used.
By Edson Porto, AI Champion at Hotwire Australia