Generative AI in precision oncology faces a trust and safety challenge
As cancer medicine drowns in data, a new review asks whether generative AI can help oncologists keep up, and finds both striking promise and serious risks that make human oversight non-negotiable.
A narrative review published in the Journal of Hematology & Oncology examined how generative AI tools could support oncologists in precision cancer care.
In this increasingly data-intensive field, clinicians must cross-reference genomic sequencing results, patient records, imaging findings, and a rapidly expanding body of biomedical literature to inform their decisions.
Researchers found promising results for AI-assisted clinical trial matching and diagnostic report drafting, but also highlighted significant risks that make unsupervised deployment dangerous.
On the positive side, the AI tool TrialGPT demonstrated 87.3% agreement with expert assessments when matching patients to clinical trials, while reducing processing time by an average of 42.6%.
Meanwhile, the vision-language model Flamingo-CXR matched or exceeded the performance of board-certified radiologists in 94% of chest X-ray cases without clinically relevant findings.
Researchers cautioned, however, that clinically significant errors appeared in 24.8% of evaluated imaging reports, whether AI- or human-generated, underscoring the need for combined oversight.
The review’s authors advocate for “human-in-the-loop” workflows, in which human experts review all AI outputs before clinical implementation. They also recommend Retrieval-Augmented Generation (RAG) techniques, which ground AI responses in current medical guidelines rather than relying solely on the model’s base training data.
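To make the RAG recommendation concrete, the sketch below shows the basic pattern: retrieve the most relevant guideline excerpts for a clinical question, then build a prompt that instructs the model to answer only from those excerpts. This is an illustrative toy, not the review’s system: the keyword-overlap retriever, the guideline snippets, and the prompt wording are all hypothetical stand-ins (a real deployment would use a vetted, versioned guideline corpus and a proper retriever).

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) prompt pipeline.
# All guideline snippets and function names here are hypothetical examples,
# not part of any real clinical system.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, guidelines: list[str]) -> str:
    """Prepend retrieved guideline text so the model must answer from it."""
    context = "\n".join(f"- {g}" for g in retrieve(query, guidelines))
    return (
        "Answer using ONLY the guideline excerpts below; "
        "reply 'insufficient evidence' if they do not cover the question.\n"
        f"Guidelines:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical guideline snippets standing in for a curated corpus.
guidelines = [
    "EGFR-mutant NSCLC: first-line osimertinib is recommended.",
    "HER2-positive breast cancer: trastuzumab-based regimens.",
    "Follow-up imaging intervals for stage I melanoma.",
]

prompt = build_prompt("first-line therapy for EGFR-mutant NSCLC", guidelines)
print(prompt)
```

The key design point is that the model is constrained to cite retrieved, up-to-date sources instead of its frozen training knowledge, which is exactly the failure mode (outdated or hallucinated guidance) the review’s authors want to mitigate.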
The key conclusion is that AI should function as an assistant to oncologists, not as an autonomous decision maker.
