Introducing the Hotwire Frontier Tech Confidence Tracker
Hotwire’s new Frontier Tech Confidence Tracker is a timely pulse check on how business leaders and the public feel about emerging technologies. Based on insights from over 8,000 members of the public and 730 business leaders across five European markets, the report explores perceptions of 15 frontier technologies – from AI and quantum computing to biotech and robotics. Its aim? To understand the emotional and practical readiness of people to embrace a future that’s being shaped by technologies we’re still coming to grips with.
And what it reveals is a widening gap. While business leaders are surging forward with a high degree of optimism (index score: 77), the public remains hesitant (index score: 48). The result is a growing disconnect between those implementing the technologies and those expected to live with or work with them.
The consent gap: What happens when technology moves faster than understanding?
The most pressing tension the report surfaces is what we might call the “consent gap.” This isn’t about the now-familiar data privacy prompts or cookie disclaimers. It’s something deeper and more structural. AI is bleeding into everyday consumer interactions – invisible, frictionless, and often unacknowledged. From content recommendations to insurance pricing, recruitment filtering to customer service automation, frontier technologies are making decisions that shape our lives without us ever knowing they were involved.
Does the public even know when AI is being used? According to Ofcom’s 2024 Media Literacy Report, 44% of UK adults say they don’t feel confident identifying when AI is being used online. And our own Parliament agrees the issue is serious enough to warrant urgent scrutiny – as evidenced by the new media literacy inquiry launched by the Culture, Media and Sport Committee, which will examine how digital literacy efforts are keeping pace with evolving technologies like AI.
But why should we care? And what makes AI different?
The age of AI and the illusion of informed participation
In their book The Age of AI: And Our Human Future, Henry Kissinger, Eric Schmidt, and MIT computer scientist Daniel Huttenlocher warn against the unchecked acceleration of technological change by a self-selecting elite. They argue that we are entering an era in which the nature of knowledge, decision-making, and even what we consider to be “truth” is being reshaped by systems we barely understand.
Through that lens, Hotwire’s findings gain sharper context. The belief that technological progress naturally earns public trust is outdated. We’ve reached a point where systems are no longer being adopted through persuasion or shared understanding – they’re simply appearing in our lives, with little transparency and less debate. Think Meta AI integrating overnight into everyone’s WhatsApp and Instagram apps.
Accountability without explanation: Why AI is not a calculator
There’s a familiar analogy in the tech world: we don’t need to explain when someone uses a calculator – so why should we explain when someone uses AI?
But that analogy collapses under the lightest of scrutiny. Calculators don’t decide whether you get a mortgage, who gets shortlisted for a job, or whether a piece of news is misinformation. AI does. These aren’t computational functions; they are judgements. When machines are involved in life-changing decisions, people need to understand how and why those decisions are made. Without explainability, there is no accountability. And without accountability, there can be no trust.
The technocrats are losing the room
Another fault line exposed by Hotwire’s report is who we trust to guide us through this era. Business leaders see tech entrepreneurs as credible. The public disagrees. Even our more forward-thinking, freedom-loving friends across the pond feel the same: the Pew Research Center found that only a minority of Americans trust companies to use AI responsibly. Trust is shifting away from the mavericks and back toward scientists, engineers, and researchers.
And here’s the truth: the people leading the charge – Zuckerberg, Musk, Altman, Bezos – are not like the rest of us. This isn’t an insult. It’s an observation. They are high-functioning overachievers, often neurodivergent, wired to solve problems at scale and think abstractly. These traits are superpowers in fields like tech and science, where edge-case cognition creates breakthroughs. But they also mean the people reimagining society often perceive it differently from the people living in it. That’s not inherently bad. But it isn’t neutral either. This matters because in any functioning democracy, the future can’t be built by a cognitive elite alone. Progress demands participation. If only the most technically or commercially literate voices are shaping how society evolves, then we’re not innovating – we’re excluding.
Reimagining tech communications: From disclosure to dialogue
If progress no longer equals trust, then tech communication must shift from explanation to inclusion. As the UK Government’s recent Public Attitudes to Data and AI Tracker Survey shows, the public doesn’t just need visibility – it needs a voice. People want to understand the role AI plays in their lives, how it impacts them, and how their data is being used in the process.
Communications can’t just be about ethics statements, AI usage disclaimers, and flashy demos. Brands need to build communities where people feel invited into the conversation – places where questioning the system is not a PR risk but a strategic necessity.
Here are a few ways brands can start communicating better with their customers:
- Create opt-in explainability: Offer simple, comprehensible explanations for how AI features work – especially when they impact decisions or outcomes.
- Introduce third-party validators: Partner with academics or regulatory bodies to audit AI usage, and publish the findings transparently.
- Elevate credible messengers: Scientists, not salespeople. Engineers, not evangelists.
- Host real conversations: Use forums, content series, or listening sessions to actively invite user perspectives on how AI is deployed.
In the absence of these steps, people will assume decisions are being made about them, not with them. That’s not a communications issue anymore; it’s one of legitimacy.
Building a future that includes everyone
We live in a moment of extraordinary promise but also of rising fear. AI might transform everything from diagnosis to education, logistics to creativity. But if that transformation is silent, opaque, and exclusive, it will spark backlash, not belief.
The good news? Business still has a choice. Credibility and transparency are the entry points to trust. And trust is the real engine of adoption.
So, let’s ask better questions:
- Do people know when they’re interacting with AI?
- Do they understand why and how AI is influencing decision-making processes?
- And does the public feel included in the future that is being built around them?
If the answer to those questions is “no,” then it doesn’t matter how advanced the system is. It won’t be progress. It will just be power, misunderstood.