AI in healthcare could put patients at risk

The rapid rollout of AI tools across healthcare and pharmaceuticals is creating significant risks for patients and intellectual property, warns nexos.ai.

A recent Cybernews study of S&P 500 companies identified 149 potential security flaws across 44 major healthcare and pharmaceutical organisations using AI. These included 28 cases of insecure AI outputs, 24 data-leak vulnerabilities, and 19 direct threats to patient safety, where algorithmic errors could spread across entire hospital systems.

Unlike in other industries, where an AI failure may mean lost revenue, in healthcare a mistake can directly endanger lives. Biased datasets risk reinforcing health inequalities, while “black box” models leave clinicians unable to verify outputs.

Intellectual property is also at stake: with AI-driven drug discovery deals worth up to $2.9 billion, a single breach could wipe out a decade of research.

“The biggest AI threat in healthcare isn’t a dramatic cyberattack, but hidden failures that spread quickly,” said Žilvinas Girėnas, head of product at nexos.ai. “Without strong accountability, organisations put patients and valuable data at risk.”

To ensure safe adoption, Girėnas urges leaders to establish an AI governance framework built on three essentials (the second and third are sketched in code after the list):

  • Approved tools only – clinicians should use vetted AI models for high-stakes tasks like diagnostics.
  • Automatic data protection – systems must strip sensitive information from AI queries by default.
  • Traceable AI use – every interaction should be logged with user and timestamp for full accountability.
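As a rough illustration of the second and third essentials, the Python sketch below shows one way a system might strip common identifiers from a query before it reaches a model and log each interaction with user and timestamp. All names here (redact, log_interaction, call_approved_model, ai_audit.log, the regex patterns) are hypothetical and not drawn from nexos.ai or any specific product; a real deployment would rely on a vetted PII/PHI detection service rather than hand-rolled patterns.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Illustrative patterns for common identifiers. A real deployment would use a
# vetted PII/PHI detection service, not hand-rolled regular expressions.
PATTERNS = {
    "mrn": re.compile(r"\bMRN[-\s]?\d{6,10}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Audit trail: one JSON line per interaction, appended to a local file.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")


def redact(prompt: str) -> str:
    """Strip sensitive identifiers from a query before it reaches the model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt


def call_approved_model(prompt: str) -> str:
    """Stand-in for a call to a vetted, organisation-approved model."""
    return f"(model response to: {prompt})"


def log_interaction(user: str, prompt: str, response: str) -> None:
    """Record who asked what, and when, for full accountability."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,                  # already redacted at this point
        "response_chars": len(response),   # log size, not clinical content
    }))


def ask_model(user: str, raw_prompt: str) -> str:
    safe_prompt = redact(raw_prompt)       # data protection by default
    response = call_approved_model(safe_prompt)
    log_interaction(user, safe_prompt, response)
    return response


if __name__ == "__main__":
    print(ask_model("dr.smith", "Summarise MRN 12345678; contact jane@example.com"))
```

Run as a script, this prints the stub model's reply to the redacted prompt and appends a JSON audit line to ai_audit.log, so every query is both sanitised by default and traceable to a user and a moment in time.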

Girėnas stresses that responsible AI governance is not about slowing innovation, but about enabling safe, scalable use across healthcare and life sciences.
