Accidental injustice: Healthcare AI legal responsibility must be prospectively planned prior to its adoption

Blog Article

This article contributes to the ongoing debate about legal liability and responsibility for patient harm in scenarios where artificial intelligence (AI) is used in healthcare. We note that, due to the structure of negligence liability in England and Wales, clinicians are likely to be held solely negligent for patient harms arising from software defects, even though AI algorithms will share the decision-making space with clinicians. Drawing on previous research, we argue that the traditional model of negligence liability for clinical malpractice cannot be relied upon to deliver justice for clinicians and patients. There is a pressing need for law reform to consider the use of risk pooling, alongside detailed professional guidance for the use of AI in healthcare settings.
