Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors that could reasonably be understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies or biases within such systems, and (2) the impact on trust in patient-provider relationships when automated AI systems are introduced for "fact-checking", particularly in cases where the data or models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., "AI self-trust"). Finally, recommendations were presented for mitigating the risk that AI verification methods will harm patients or undermine the purpose of the healthcare system.

Original publication

DOI

10.1038/s44401-024-00001-4

Type

Journal

npj Health Systems

Publication Date

01/2024

Volume

1

Addresses

Center for Interventional Oncology, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA.

Keywords

Bridge2AI Voice Consortium