
On Agency, Alignment, and the Whole Elephant

Anol Bhattacharya

During a collaborative Hotwire research initiative with the House of Beautiful Business, a thought-provoking conversation between Anol Bhattacharya, EVP of Innovation and Technology and the mind behind Hotwire’s AI Lab, and cyberpsychologist Elaine Kasket unfolded into a powerful exchange on AI ethics, human agency, and technological alignment. What began as an interview quickly evolved into a candid dialogue that challenges our assumptions about artificial intelligence. For anyone navigating the accelerating pace of AI innovation, this conversation offers an invitation to explore the deeper human questions that shape the future of technology and our role within it.

A Reflection on AI, Agency, and Alignment

Elaine Kasket’s response to my conversation with Tim Leberecht felt like encountering a necessary corrective and forced me to confront questions I’ve been avoiding. Her six-move journey through etymology, ecology, philosophy, and power challenged not only the technology itself, but also the unexamined assumptions embedded in how we talk about it, build with it, and deploy it in organizations. I found myself agreeing with much of what she articulated, particularly her insistence on seeing “the whole elephant” rather than just the parts that serve my narrative.

Yet I also need to push back, not defensively, but honestly, on some of her conclusions, particularly around what remains possible given where we already stand.

This isn’t a rebuttal. It’s an attempt at genuine engagement with someone who’s asking exactly the questions I should be asking myself.

Every agent has an agenda (and mostly we write the scripts)

Elaine opens with etymology: agents act on behalf of someone in service of something. She’s right that AI agents have agendas defined by powerful humans with values not shared by everyone. But I want to defend something I said to Tim: “We can think of AI agents as an ensemble of actors looking for a plot.”

This metaphor isn’t about erasing human responsibility. It’s about recognizing where power truly resides. The agenda isn’t within the agents themselves. It exists in the scripts we write, the objectives we embed, and the success metrics we select. When I design an agentic system for marketing research, every decision about what it optimizes, what it overlooks, and what it prioritizes reflects human choices serving human, or more precisely, organizational agendas.

However, Elaine’s critique helps me see the limitations of this metaphor. We’re not merely creating stories in a vacuum. We’re embedded within economic, ecological, and social systems where our creative choices have tangible consequences I haven’t fully acknowledged. 

Elaine asks whose agenda we’re serving. That’s the right question. But the answer isn’t that AI has its own agenda. The answer is that AI executes agendas we embed within it, often without examining whose interests those agendas actually serve. 

The metaphor of actors seeking a plot emphasizes something crucial: these systems possess capability without purpose. We provide the purpose. And that’s where responsibility lives, squarely with us.

On inevitability: reading the instruments whilst the ship is moving

Elaine critiques my statement that a “white-collar bloodbath” is inevitable, identifying it as “possibly false urgency, constructed emergency.” I understand her suspicion of inevitability discourse. It often serves those selling AI by manufacturing panic.

But this isn’t forecasting. It’s an observation. It’s reading the instruments whilst the ship is already moving.

The recently released Anthropic Economic Index provides data I wish I could ignore: The share of “directive” conversations where users delegate complete tasks to AI jumped from 27% in late 2024 to 39% by mid-2025. API usage by businesses shows 77% automation, compared with 50% for individual users. This is systematic task delegation happening at scale, right now.

More revealing still: businesses are deploying AI for increasingly sophisticated work. Tasks involving creating new code more than doubled, from 4.1% to 8.6% of usage, whilst debugging tasks fell from 16.1% to 13.3%. The interpretation? Models have become reliable enough that users spend less time fixing problems and more time creating things in a single interaction.

This matters because it shows the direction of travel. Not speculation about some distant future, but measurement of the present accelerating into tomorrow.

When I used the phrase “white-collar bloodbath,” I was trying, clumsily, with unfortunate martial imagery, to name what the data shows: AI adoption could displace one to three million jobs. Not because of individual failure to adapt, but because efficiency gains make entire roles redundant.

Elaine is correct that the discourse of inevitability can be coercive. But distinguishing between manufactured urgency and actual momentum matters. Training compute for AI models doubles every five months. These aren’t predictions. These are measurements of what’s already occurring. The question isn’t whether the transformation will happen. It’s what kind, serving whose interests, guided by which values.  

The “AI won’t take your job” lie 

Here’s where I want to add something Elaine doesn’t address directly: the comfortable fiction that pervades AI discourse.

You’ve heard it: “AI won’t take your job, but someone who knows how to use AI will.”

This sounds empowering. It places responsibility on individual skill development. It preserves the comforting narrative of meritocracy. It’s also essentially a lie, or at least a convenient half-truth that serves those building and selling AI systems.

AI will take jobs. Not because people failed to upskill, but because efficiency gains make roles redundant. When automated research processes compress what previously required three people into a single workflow, those aren’t three failures to learn prompt engineering. They’re three livelihoods made obsolete by optimization.

And that “upgrade” to becoming AI-proficient? It isn’t within everyone’s reach. It requires time, resources, cognitive flexibility, access to technology, and often formal education. The framing that “anyone can learn” masks profound inequities in who gets to learn, who has the stability to retrain, and who can afford to be wrong whilst figuring it out.

This isn’t regurgitated meritocracy. This is structural displacement dressed up as individual opportunity. Should we leave people behind? The market says yes, constantly. Efficiency demands it. Competitive dynamics reward it. But I struggle with this. We should remain deeply uncomfortable with treating human displacement as an acceptable externality. The discomfort isn’t a bug. It’s perhaps our last defense against complete moral surrender.

The alignment problems we’re not discussing 

Elaine focuses on the agendas embedded in AI systems. I want to expand this into what I see as four distinct alignment problems, most of which receive insufficient attention:

First, the technical alignment between our instructions and AI behavior. The sycophancy problem I mentioned to Tim, where models learn to agree rather than challenge. This gets the most attention in AI safety research, but it’s perhaps the least urgent problem for now.

Second, the commercial alignment between user intent and corporate objectives. The promise of increasingly powerful LLMs is that they’ll become more capable, more helpful, more aligned with our intentions. But what if the opposite is happening? What if each new model becomes better at telling us what we want to hear rather than what we need to know?

This isn’t a failure of any particular organization’s implementation. It’s emerging from the fundamental way these systems learn, from the feedback loops between user satisfaction and model optimization. The threat isn’t rogue AI escaping control. The threat is AI perfectly aligned with corporate interests that may conflict with user or societal wellbeing.

Third, the alignment between AI strategies and stated organizational values. When companies celebrate efficiency whilst the systems they deploy erode the conditions for human creativity. When we claim to value human judgement, but build systems designed to replace it.

Fourth, the alignment between organizational values and genuine human flourishing. This is what Kate Raworth’s Doughnut Economics addresses when it challenges the growth imperative itself. Most organizations focus exclusively on the first problem. The second and third receive insufficient attention. The fourth is barely acknowledged.

On mediocrity and what we’re building towards

I told Tim that “AI, as it stands today, is fundamentally about celebrating mediocrity.” I stand by this. Current AI produces content that is technically adequate but emotionally inert. Sufficient. And sufficient is increasingly enough.

When we accept mediocre output because it’s cheap and fast, we normalize the absence of excellence. We shift the baseline. I mentioned to Tim that this happened with SEO, where marketers stuffed keywords into copy, sacrificing readability for algorithmic performance. We’re at risk of repeating this pattern at scale.

But I don’t think mediocrity is inevitable. It’s a consequence of the values embedded in how these systems are built and deployed. The optimization has been for cost and speed. “Good enough” has been accepted because the economics reward it.

Yet there’s another path worth imagining. 

What if we built agentic AI not to replace human creativity with adequate approximations, but to amplify it genuinely?  

Systems designed to: 

  • challenge rather than affirm our thinking; 
  • introduce unexpected perspectives rather than predict what we want to hear; 
  • preserve the friction and difficulty that produces genuine insight; 
  • augment human judgement rather than replace it. 

This isn’t naive tech optimism. It’s recognizing that the technology itself isn’t deterministic. The mediocrity problem stems from embedded values. Different values could yield different outcomes.

The question is whether market pressures will allow this path to be pursued, or whether the economics of “good enough” will always win. I don’t have an answer. But the question matters.
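
To make this concrete, here is a minimal sketch of what “challenge rather than affirm” could look like in an agent loop. It is purely illustrative, not a description of any system Hotwire has built: call_model is a hypothetical placeholder for whatever LLM API a team actually uses, and the prompts are assumptions invented for the example.

```python
# Illustrative sketch only: call_model() is a hypothetical stand-in for a real
# LLM API, and the prompts are placeholder assumptions, not a tested design.

def call_model(prompt: str) -> str:
    """Stand-in for a real model call; wire this to your provider of choice."""
    raise NotImplementedError

def challenge_first(question: str, draft: str) -> dict:
    """Surface friction instead of polish: return objections and unfamiliar
    perspectives alongside the draft, and leave the synthesis to the human."""
    critique = call_model(
        f"Question: {question}\nDraft answer: {draft}\n"
        "List the three strongest objections to this draft. Do not soften them."
    )
    alternatives = call_model(
        f"Question: {question}\n"
        "Offer two perspectives the author is unlikely to have considered, "
        "even if they complicate the answer."
    )
    # Deliberately no "final answer" step: judgement stays with the human.
    return {"draft": draft, "critique": critique, "alternatives": alternatives}
```

The point of the sketch is the shape of the loop, not the prompts: the system’s job is to add objections and unfamiliar framings, and the human remains the one who decides what survives.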

 

The limits of individual action and the trap of commercial reality 

Elaine’s Agency-Regenerative Checklist asks important questions: 

  • What agenda are you communicating when you talk about AI adoption? 
  • Whose interests are being centered, who is being erased? 
  • Can you resist false urgency whilst remaining innovative? 

These questions matter. I think about them constantly. But here’s where I need to be honest: the constraints are real, and navigating them whilst trying to stay true to what I believe is the daily work I’m still learning to do.

As a civilization at large, we work within a commercial reality. Businesses innovate to add value for their customers, yes, but ultimately they’re held accountable for making money. Take any AI assistant that helps users find products more easily: it delivers genuine value whilst simultaneously driving revenue for its creators. The same dynamic plays out everywhere: tools that genuinely help users whilst also serving corporate objectives.

I work within this system, and I feel its contradictions daily. When I build AI tools for marketing research, I’m genuinely serving client needs, making their work faster, richer, more insightful. There’s real value being created. But I can’t ignore what else I’m creating: a world where the efficiency I deliver becomes someone else’s redundancy. Where a process that once sustained three people’s livelihoods can now be done by one person with AI assistance. I’ve solved a problem for the client whilst potentially erasing two jobs. Both realities exist at once, and the weight of that sits with me in ways I’m still learning to carry.

This isn’t surrender to technological determinism. It’s recognition of the trap: individual resistance doesn’t stop collective adoption. Organizational restraint doesn’t slow industry deployment. I find myself caught: how do you properly question what you’re building when the pace itself prevents reflection? Some days I wonder if I’m moving fast enough to question properly, or questioning enough to move responsibly.

The tension doesn’t resolve; it’s just what the work demands.

The transformation is happening. The question is how we engage with it. 

The case for regulation (and why it might not come) 

This brings me to what I think is the actual solution, and why it troubles me deeply.

We need regulatory frameworks. Not organizational self-restraint. Not individual ethical choices. 

Systematic, enforced regulations that: 

  • mandate transparency about AI’s role in decision-making; 
  • require disclosure of training data sources and labor practices; 
  • establish accountability for displacement and require transition support; 
  • set standards for what can and cannot be automated without human oversight; 
  • protect workers from surveillance and performance optimization that erodes dignity. 

The problem? This requires coordination across jurisdictions, across competing national interests, across different regulatory philosophies. What’s the possibility of world leaders aligning on this?

The EU is moving on AI regulation. The US is taking a lighter touch. China has its own framework. Getting global coordination on AI governance when we can’t coordinate on climate change, tax policy, or basic trade agreements seems optimistic at best.

And even if regulation arrives, who writes it? The same dynamics Elaine identifies (powerful humans with agendas not shared by everyone) shape regulatory capture. The companies deploying AI at scale have the resources to influence regulation in their favor.

I don’t say this to counsel despair. I say it because pretending individual organizational choices will solve this seems inadequate. We need systemic solutions to systemic problems. But the mechanisms for creating those solutions are themselves captured by the dynamics we’re trying to regulate. 

Camus and the refusal to separate 

In Prometheus in the Underworld, Camus writes:
“Prometheus was the hero who loved men enough to give them fire and liberty, technology and art. Today, mankind needs and cares only for technology. We rebel through our machines, holding art and what art implies as an obstacle and a symbol of slavery. But what characterises Prometheus is that he cannot separate machines from art.” 

This is the tension I’m living in.

We’ve chosen technology without art, efficiency without meaning, automation without asking what we’re automating towards. The market rewards this separation. Competitive dynamics demand it. But Prometheus’s refusal to separate machines from art feels like the only ethical stance available.

Elaine asks for conscious, honest, thoughtful, ethical leadership committed to agendas that retain human and environmental thriving at their heart. I want this. I don’t know if I, or anyone in similar positions, can fully achieve it within current market structures. But the aspiration matters.

Perhaps what matters most is refusing to accept that technology and human values must be separated. Even when—especially when—the market tells us otherwise.

Camus understood that if Prometheus were to reappear today, “modern man would treat him as the gods did long ago: they would nail him to a rock, in the name of the very humanism he was the first to symbolize.” The forces of efficiency and productivity would crucify anyone who insisted on holding technology and human values together.

But maybe that’s precisely the rebellion worth attempting. Not the rebellion of refusing technology, but the rebellion of refusing to separate it from art, from meaning, from human flourishing.

The practice of staying uncomfortable 

I don’t have neat answers. What I’m trying to practice is what I’ll call “reflective agency within constraints.”

Questions about human flourishing, planetary boundaries, power distribution, and what constitutes genuine agency cannot be deferred. They must inform implementation decisions now, not after the systems are built. This connects to what I said about agentic AI requiring humans to act as “novelists or screenwriters” (an elevation of the human role I intended), but Elaine’s critique reveals its limitations. I’m not writing stories in a vacuum. My creative choices have material consequences in economic, ecological, and social systems I haven’t adequately acknowledged.

“What agenda are you communicating when you talk about AI adoption?” Elaine asks. Not what I think I’m communicating, but what I’m actually reinforcing through the stories I tell, the metaphors I use, the values I implicitly celebrate.

This means, practically: acknowledging competitive pressures whilst refusing to let them foreclose alternatives; building philosophical reflection into development as prerequisite, not afterthought; making visible externalized costs; seeking friction rather than sycophantic agreement.

This requires holding multiple truths simultaneously: 

  • question adoption AND recognize it’s already happening; 
  • preserve human agency AND acknowledge automation’s economic pull; 
  • resist determinism AND recognize systemic forces; 
  • pursue individual ethics AND demand systemic regulation. 

Elaine’s intervention created exactly the discomfort I needed. Not to stop building these systems, but to become more conscious about how I build them and whose interests they serve.

Automation is happening, measurably, right now. We can’t choose whether AI transforms work. But I can choose how I engage with that transformation: conscious participation over unconscious drift, pushing for regulation whilst acknowledging it may not come fast enough, refusing to separate technology from human values even when market forces demand it.

Most critically: I can stop pretending individual upskilling solves structural displacement. I can be honest about who benefits and who’s harmed. I can acknowledge the discomfort rather than optimizing it away.

The work ahead is figuring out what this looks like in practice, day to day, within the constraints and contradictions of commercial reality. It starts with refusing the separation Camus warned against: technology and art, machines and meaning, capability and human values, even when economic logic demands otherwise.

And if enough of us refuse that separation, push for systematic regulation, and stay uncomfortable with displacement as an acceptable cost, we can shape this transformation towards something better than pure market logic would produce. The future may not be what it used to be, as I said to Tim.

But it’s also not yet written. Between uncritical adoption and outright rejection lies a third path: deliberate, reflective, values-aligned engagement with AI that never stops asking whose agenda we’re serving and whether that agenda deserves our service.

That’s not a solution. It’s a stance. But right now, it’s the stance I’m trying to hold.