Pharma’s AI Problem Isn’t Technology. It’s Decision Quality

Pharma is moving quickly on AI. Tools are being deployed, pilots are underway, and investment is accelerating. But despite the activity, many organisations are struggling to convert that momentum into meaningful, scalable impact.

Over the past 12–18 months, AI has moved from curiosity to priority across pharma. Budgets have been allocated, use cases have been identified, and partners and platforms have been brought in. On the surface, it looks like progress, and in many ways it is. But beneath that, a different pattern is starting to emerge: more tools, more pilots, more activity, but not necessarily better outcomes.

The assumption that doesn’t hold

There’s a widely held assumption underpinning much of the current momentum: that introducing better technology will lead to better decisions, better execution, and ultimately better outcomes. In practice, that assumption doesn’t hold. AI doesn’t improve decision-making by default; it amplifies whatever decision-making capability already exists. If that capability is strong, AI can accelerate progress. If it isn’t, AI tends to scale inconsistency, fragmentation, and noise, and this is where many organisations are starting to feel the strain.

Where investment starts to drift

Most AI efforts don’t fail outright; they drift. A use case is identified (often quickly), a pilot is launched, and the initial results look promising. But then momentum slows, scaling proves harder than expected, and adoption is uneven.

The use case remains isolated rather than becoming embedded, so another use case is identified and another pilot begins. Over time, this creates a portfolio of activity, but not a system of value.

You can see this clearly in marketing teams experimenting with generative AI. One brand team uses AI to draft HCP emails, another uses it to generate rep call notes, and a third uses it for content ideation. Each initiative works in isolation, and each shows efficiency gains.

But none are connected to a broader decision about:

  • how content should be created

  • how channels should be orchestrated

  • how engagement should actually improve

So the organisation becomes more efficient at producing outputs, without becoming more effective at driving outcomes.

Why pilots don’t scale

When pilots fail to scale, the default explanation tends to focus on:

  • technical limitations

  • data readiness

  • change management

Those factors matter, but they’re rarely the root cause. More often, the issue is that the original decision to pursue the use case wasn’t grounded in how the organisation actually operates. Take a common example in medical affairs:

An AI tool is introduced to summarise scientific literature and generate draft medical responses.

In a controlled pilot, it works well:

  • faster turnaround

  • consistent summaries

  • reduced manual effort

But when rolled out more broadly, adoption stalls. Why?

Because:

  • medical reviewers don’t fully trust the outputs

  • there’s no clear guidance on what “good” looks like

  • accountability for final content still sits with individuals

  • the process hasn’t been redesigned to incorporate the tool

So the tool sits alongside the existing process, rather than replacing or reshaping it. It works, but it doesn’t integrate, and over time things revert.

The illusion of progress

One of the more subtle risks with AI is that it creates a convincing sense of progress. Dashboards improve, outputs look more sophisticated, and the time to produce assets decreases.

For example: A marketing team uses AI to generate multiple variants of email content for different HCP segments. Open rates improve and content production is faster. But the underlying questions remain unchanged:

  • Are we targeting the right HCPs?

  • Are we solving a meaningful need?

  • Are we influencing behaviour—or just increasing activity?

Without clarity on those decisions, AI simply accelerates the existing model. The organisation becomes more efficient at doing things, but not necessarily better at doing the right things.

Where decision quality becomes the constraint

As AI capability increases, decision quality becomes more important, not less, because AI expands what is possible:

  • more content

  • more personalisation

  • more data

  • more options

But it doesn’t tell you:

  • what matters most

  • where to focus

  • what to ignore

Without clear decision frameworks, organisations default to:

  • generating more content across more channels

  • launching additional campaigns “just in case”

  • personalising without a clear view of impact

You see this in omnichannel execution: AI enables more touchpoints, more variation, more responsiveness. But without a clear strategy, teams end up:

  • increasing frequency without improving relevance

  • adding channels without removing others

  • measuring activity rather than outcomes

The result is not precision. It’s noise at scale.

The connection to the Human Adoption Gap

This is where the link to the Human Adoption Gap becomes clear. When decisions are made in isolation from how the organisation actually works, the output is often:

  • solutions that are technically sound, but operationally impractical

  • tools that depend on behaviours that haven’t been deliberately designed

For example: an AI-driven “next best action” model is introduced for field teams. In theory, it should guide reps towards the most relevant engagement with each HCP. But in practice:

  • reps are still measured on call volume

  • time pressures favour familiar approaches

  • the rationale behind recommendations isn’t always transparent

  • managers aren’t reinforcing its use

So the tool requires a shift towards:

  • more selective engagement

  • trust in data-driven recommendations

  • different performance conversations

But none of those behaviours have been actively designed or incentivised, so the tool is used selectively or ignored. Not because it isn’t valuable, but because the organisation isn’t set up to use it.

What better looks like

Improving AI outcomes doesn’t start with more tools or more pilots. It starts with improving how decisions are made. In practice, that means being more deliberate about:

Where to focus

For example: prioritising a small number of use cases that directly impact HCP engagement or patient outcomes, rather than spreading effort across multiple low-impact experiments.

What success looks like

Not just efficiency gains, but:

  • changes in behaviour

  • improvements in decision quality

  • measurable impact on engagement or outcomes

What needs to change for this to work

If an AI tool changes how content is created or how reps engage, then:

  • workflows need to be redesigned

  • incentives need to be aligned

  • expectations need to be reset

What not to do

Actively stopping or deprioritising use cases that don’t align with strategic outcomes, even if they are technically interesting or easy to implement. This is often where organisations struggle most. AI makes it easy to start things, and much harder to stop them.

In practice, this means being willing to:

  • stop pilots that show technical promise but don’t translate into real workflow change

  • deprioritise use cases that improve efficiency but don’t impact HCP engagement or patient outcomes

  • avoid duplicating similar initiatives across brands or functions without a clear plan to scale

  • say no to “quick wins” that create local success but increase system-wide complexity

For example: A brand team uses AI to generate social content more efficiently. It works well, reduces agency cost, and is easy to implement. But if social isn’t a meaningful driver of HCP engagement in that therapy area, the impact is marginal.

At the same time, more complex but higher-value use cases (e.g. improving field team decision support or integrating insights across channels) are delayed because they are harder to execute.

So the organisation progresses on what is easy, not on what matters.

Being deliberate about what not to pursue is what prevents AI efforts from becoming fragmented and is often the difference between activity and impact.

A shift in perspective

AI is often framed as a technology transformation. In reality, it is a decision-making test. It exposes how clearly an organisation can prioritise, how consistently it can align around those priorities, and how effectively it can translate them into action.

The organisations that succeed won’t be those with the most tools.

They will be those that can make better decisions about how those tools are used—and where they shouldn’t be used at all.

Closing thought

There is no shortage of AI activity in pharma, but activity is not the same as progress. As capability continues to expand, the differentiator will not be access to technology. It will be the quality of the decisions that shape how that technology is applied.

If this resonates

If you’re investing in AI but seeing fragmented progress or limited scale, the issue is often not the capability itself. It’s how decisions are being made around it.

This is the work Human Arc focuses on: helping organisations make clearer, more deliberate decisions about where AI creates value and where it doesn’t.
