AI In Healthcare: What Are The Major Risks?
#Artificial Intelligence (#AI) will be a valuable tool in #healthcare, but it is not yet secure enough for current use. There are way too many risks.
#AI is being merged into healthcare platforms, and providers are using this largely unregulated tool to help with their work. That is resourceful, but it is not safe, especially since most of the models were not trained for healthcare use. These uses may seem harmless, but they can be unethical.
I am acquainted with the current climate; providers are often overwhelmed, overworked, and understaffed. When you don't get enough sleep or time, your brain doesn't work as well as it should. We have all experienced that before.
Unfortunately, this is particularly hazardous for those responsible for saving lives. So how do providers get the help they need? They try the most enticing and all-encompassing tool available now: AI. Providers require help and time, and AI seems to be the perfect solution. It is not, at least not yet.
Several serious issues should have been addressed before AI came anywhere near healthcare. It is too late for that now, so I'm going to lay out some of the current obstacles in the hope of spreading awareness, and hopefully that awareness leads to caution and solutions.
Concerns
1) Cyberattacks
Several types of cyber attack can be carried out against a Large Language Model (#LLM): Prompt Injection, Adversarial Attacks, and Excessive Agency.
Prompt Injection
Prompt Injection is a cyber attack against Large Language Models (LLMs) in which the input is altered, or malicious instructions are slipped into it, before it reaches the model.
In healthcare, this is very damaging.
If a patient asks an AI model about an issue they are encountering and the question is altered, the results will not align with the input. Therefore, if symptoms are modified, the diagnosis will be incorrect. This also applies to doctors using AI models for diagnostic assistance. If they type in the patient’s symptoms and the input is distorted or changed, the result will be wrong and could lead to a misdiagnosis.
These input attacks are often undetectable, especially in AI image analysis. If the AI reads an X-ray and the input has been tampered with, the image is altered by adding or shifting pixels at a scale so small that humans can't see the change, but the model can, and it misdiagnoses the patient. The result is an incorrect diagnosis, a false positive or negative, and patients either taking treatments they don't need, which could be harmful, or not getting the treatment they do need.
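To make the scale concrete, here is a minimal, hypothetical sketch in Python (using only NumPy) of the kind of imperceptible pixel-level change described above; the image, perturbation budget, and random values are invented purely for illustration.

```python
import numpy as np

# Hypothetical grayscale X-ray: pixel values in [0, 1]. A real attack would
# start from an actual image; this random array only stands in for one.
rng = np.random.default_rng(0)
xray = rng.random((512, 512))

# Attacker-chosen perturbation capped at 1/255 per pixel, below what a viewer
# can distinguish on a typical display. A real attack would pick the sign of
# each change from the model's gradient (e.g., FGSM) rather than at random.
epsilon = 1.0 / 255.0
perturbation = epsilon * np.sign(rng.standard_normal((512, 512)))
tampered = np.clip(xray + perturbation, 0.0, 1.0)

# To a human reader the two images look identical...
print("max pixel difference:", np.abs(tampered - xray).max())

# ...but a model that keys on fine-grained pixel statistics can be pushed
# across a decision boundary by exactly this kind of change.
```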
(The most practical solution would be a mechanism that compares the input at the moment it is typed with the input that actually reaches the model. If there is a difference, the system should alert the operator and block or correct the request.)
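One way to implement that comparison is to fingerprint the prompt at the point of capture and verify the fingerprint again just before it reaches the model. The sketch below is a hypothetical illustration of that idea, not a production design; the key handling and the `call` flow are assumptions.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key storage

def fingerprint(prompt: str) -> str:
    """Compute a keyed hash of the prompt at the moment it is captured."""
    return hmac.new(SECRET_KEY, prompt.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_before_inference(prompt: str, original_fingerprint: str) -> bool:
    """Re-hash the prompt as it arrives at the model and compare the two."""
    return hmac.compare_digest(fingerprint(prompt), original_fingerprint)

# Capture: the UI records the fingerprint alongside the user's text.
captured = "55-year-old patient, persistent cough, 3 weeks"
tag = fingerprint(captured)

# Inference: if anything modified the prompt in transit, the check fails
# and the request can be blocked and flagged for review.
arrived = captured + " and mild chest pain"   # simulated tampering
if not verify_before_inference(arrived, tag):
    print("Input altered in transit -- blocking request and alerting.")
```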
Adversarial Attacks
Adversarial Attacks target an AI model's training data; these attacks can distort data, manipulate tokens, and expose private information.
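To illustrate how tampering with training data can shift a model's behavior, here is a toy, hypothetical sketch: a nearest-centroid classifier over two made-up symptom features, where relabeling a slice of the "urgent" training cases moves the learned decision boundary. The features, labels, and poisoning fraction are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: two made-up symptom features, label 1 = "needs referral".
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
urgent = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
X = np.vstack([benign, urgent])
y = np.array([0] * 100 + [1] * 100)

def centroid_predict(X_train, y_train, x):
    """Nearest-centroid classifier: assign x to the closer class mean."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

patient = np.array([1.2, 1.2])          # a case the clean data places closer to "urgent"
print("clean model:", centroid_predict(X, y, patient))

# Poisoning: an attacker relabels 50 urgent training cases as benign,
# dragging the "benign" centroid toward the urgent cluster.
y_poisoned = y.copy()
y_poisoned[100:150] = 0
print("poisoned model:", centroid_predict(X, y_poisoned, patient))
```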
Excessive Agency
Excessive Agency targets the Application Programming Interface (API).
Targeting the API enables the intruder to access the model's regulatory safeguards and modify or remove them. This could include eliminating guidelines that prevent the model from suggesting self-harm as a response to any prompt.
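One mitigation is to keep safety rules outside the reach of both the model and the API caller, enforced server-side on every request. The sketch below is a simplified, hypothetical illustration of that idea; `call_model`, the blocked-pattern list, and the refusal messages are placeholders, and a real system would use a dedicated safety classifier rather than string matching.

```python
# Hypothetical server-side wrapper: safety rules live in code the API
# caller cannot modify, and every request and response passes through them.

BLOCKED_PATTERNS = ("self-harm", "how to overdose")   # illustrative list only

def violates_policy(text: str) -> bool:
    """Minimal content filter; a real system would use a dedicated classifier."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def guarded_completion(user_prompt: str, call_model) -> str:
    """Run the model, then apply non-removable, server-side safety checks.

    `call_model` is a placeholder for whatever LLM client the platform uses.
    """
    if violates_policy(user_prompt):
        return "This request can't be processed. Please contact a clinician."
    response = call_model(user_prompt)
    if violates_policy(response):
        return "The generated response was withheld by the safety policy."
    return response

# Example with a stub model standing in for the real LLM client:
print(guarded_completion("ways to self-harm", call_model=lambda p: "..."))

# Because the checks run on the server, removing them requires changing and
# redeploying this code -- not just sending a cleverly crafted API request.
```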
Many of these LLM cyber attacks are currently undetectable, and until a method is developed to detect them consistently and accurately, we should refrain from using AI in processes where manipulation could have detrimental consequences.
2) Security
AI is not yet ready to comply with the healthcare regulations in place for patient security. Suppose a provider puts patient information into the AI chat box and searches for possible diagnoses; that is a security breach, and inputting personal information into AI is likely a HIPAA violation. Even if identifying details such as name and age are removed, that does not stop an LLM from matching medical records or history and re-identifying the patient.
An example of this is training LLMs for use in healthcare, which requires processing millions of patient profiles to reach even rudimentary functionality. Retraining and fine-tuning on patients' medical data is a security risk in itself; even if you remove names and other identifying information, there is still a significant chance of re-identification, especially if the medical records include images such as X-rays.
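A simple, hypothetical sketch of why stripping names is not enough: if a "de-identified" record still carries quasi-identifiers (an age band, a partial ZIP code, a rare diagnosis), it can often be matched back to a single individual in another dataset. Every record below is invented for illustration.

```python
# Hypothetical example of re-identification through quasi-identifiers.

deidentified_record = {"age_band": "60-65", "zip3": "021", "diagnosis": "rare_condition_x"}

public_roster = [
    {"name": "Patient A", "age_band": "30-35", "zip3": "021", "diagnosis": "rare_condition_x"},
    {"name": "Patient B", "age_band": "60-65", "zip3": "021", "diagnosis": "rare_condition_x"},
    {"name": "Patient C", "age_band": "60-65", "zip3": "104", "diagnosis": "common_condition"},
]

def matches(record, candidate):
    """A candidate matches if every quasi-identifier in the record agrees."""
    return all(candidate.get(key) == value for key, value in record.items())

candidates = [person for person in public_roster if matches(deidentified_record, person)]
if len(candidates) == 1:
    # The "anonymous" record points to exactly one person.
    print("Unique match found:", candidates[0]["name"])
```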
3) Mass Errors
If a specific model, such as GPT, is used regularly and it learns one medical fact or statistic incorrectly, that one mistake could affect or misdiagnose thousands of people.
This is also relevant when an AI model is trained on data involving race, gender, or age. If each category isn't evenly represented, the model will be biased against the underrepresented group.
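A toy, hypothetical illustration of how uneven representation alone produces an accuracy gap: a single decision threshold fitted to pooled data ends up tracking the overrepresented group, so accuracy drops for the underrepresented one. The biomarker values, group sizes, and thresholds are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_group(n, healthy_mean, sick_mean):
    """Simulate a biomarker whose typical values differ between groups."""
    healthy = rng.normal(healthy_mean, 1.0, n)
    sick = rng.normal(sick_mean, 1.0, n)
    values = np.concatenate([healthy, sick])
    labels = np.array([0] * n + [1] * n)
    return values, labels

# Group A is heavily overrepresented; group B presents with lower biomarker values.
xa, ya = make_group(1000, healthy_mean=0.0, sick_mean=3.0)
xb, yb = make_group(50, healthy_mean=-1.0, sick_mean=2.0)

# "Training": pick the single threshold that best separates the pooled data.
x = np.concatenate([xa, xb])
y = np.concatenate([ya, yb])
thresholds = np.linspace(x.min(), x.max(), 200)
accuracies = [((x > t).astype(int) == y).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accuracies))]

# The learned threshold tracks the majority group, so the minority group suffers.
print("accuracy on group A:", ((xa > best_t).astype(int) == ya).mean())
print("accuracy on group B:", ((xb > best_t).astype(int) == yb).mean())
```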
Some #AI models have already been integrated into existing healthcare frameworks, and providers are encouraged to use them; however, this approach may create compatibility issues in the future. If one hospital collaborates with another and their AI systems aren't compatible, transferring records and medical information could distort that information and corrupt diagnoses, test results, or past medical history.
4) Regulation
AI is challenging to regulate because AI models are continually expanding and learning, but the FDA is working on establishing ideas and guidelines to help regulations evolve with the technology.
Below is the docket number for the most recent draft guidance on regulating these models and on judging them accurately for FDA approval.
Docket Number: FDA-2024-D-4488
5) Process Transparency
It's not difficult for an LLM to generate the correct answer, but it is almost impossible to determine what its actual process was. If you ask the AI how it arrived at the answer, it generates a response that outlines a process pathway after the output has already been created; it takes the answer and reverse-engineers the process. This is not helpful. To determine whether an LLM has been compromised or requires better training, we must be able to examine the model's actual process. This is crucial for auditing and regulatory purposes.
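To see why the "explanation" is a second, after-the-fact generation rather than a record of the original computation, consider this hypothetical sketch. `call_llm` is a placeholder for whatever model client a platform uses, and the log shows what can actually be audited today: inputs, outputs, and metadata, not the internal reasoning that produced the answer.

```python
import datetime
import hashlib
import json

def audited_call(prompt: str, call_llm) -> str:
    """Call the model and log what is actually observable: input, output, time.

    `call_llm` is a placeholder for whatever LLM client a platform uses.
    """
    answer = call_llm(prompt)                        # first generation: the answer
    explanation = call_llm(                          # second generation: a new,
        f"Explain how you arrived at: {answer}"      # after-the-fact narrative,
    )                                                # not a trace of the first call
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "answer": answer,
        "post_hoc_explanation": explanation,
    }
    # The audit log can prove what went in and what came out, but nothing in
    # `record` captures the internal process that produced `answer`.
    print(json.dumps(record, indent=2))
    return answer
```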
There are numerous obstacles we must overcome and be aware of to establish a safe foundation for AI in healthcare. Unfortunately, many people have overlooked these issues and have delved too deeply into the unregulated chaos.
We are making significant progress, but the most crucial issue is the cyber attacks and how they can compromise an LLM's functionality, as the models themselves aren't transparent enough.
If we can understand how an LLM thinks through all of its processes, then many of these issues will be resolved.