AI discrimination: Why are poor patients being denied MRI and CT scans?

Artificial Intelligence (AI) has revolutionised many sectors, including healthcare, but is it right to trust this technology blindly? A recent study has made this question more pressing. Researchers warn that AI is discriminating between patients on the basis of social and economic status. The shocking finding: for the same disease, the technology advises rich patients to undergo advanced tests, while poor patients are denied them. This revelation not only raises questions about the use of AI in healthcare, but also brings issues of ethics and equality to the fore.
AI bias: Same disease, different advice
A study published in the prestigious journal Nature Medicine has revealed that AI models give different advice for the same health problem depending on the socio-economic background of the patient. For example, if two patients have the same disease, the AI recommends advanced diagnostic tests such as a CT scan or MRI for the higher-income patient, while advising the lower-income patient to avoid these tests.
This discrimination affects the priority of care, the treatment approach and even mental health assessment. Dr Anil Sharma, a Delhi-based physician, says, “It is worrying that a technology considered unbiased is deepening social inequality. If AI discriminates like this, poor patients will lose faith in the health system.”
Research finding: Flaws in proprietary and open-source models
Researchers at the Icahn School of Medicine at Mount Sinai in New York conducted this study, in which both proprietary and open-source AI models were examined. The results showed that this problem is not limited to any one model. Whether it is AI systems created by private companies or publicly available models, bias was observed in all.
Dr Girish Nadkarni, co-leader of the study, said, “AI has immense potential to improve healthcare, but this is only possible if it is developed and used responsibly. If we ignore its flaws, it will further increase inequality among patients.”
What is the reason behind discrimination?
AI models depend on their training data. If that data is already filled with social or economic bias, the AI reproduces it in its decisions. For example, if high-income patients have historically been advised advanced tests, the AI adopts this as a pattern. Moreover, AI is not designed to recognise that two patients’ medical needs may be identical regardless of their economic status.
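The mechanism described above can be illustrated with a deliberately simple sketch. The data and the frequency-based “model” below are entirely hypothetical (they are not from the study): a system that learns only from historical recommendation frequencies ends up reproducing the income bias baked into those records.

```python
# Toy illustration with made-up data: a naive model that learns
# "recommend advanced scan?" from historical frequencies inherits
# whatever bias those historical records contain.
from collections import defaultdict

# Hypothetical historical records: (income_group, advanced_scan_recommended)
history = [
    ("high", True), ("high", True), ("high", True), ("high", False),
    ("low", False), ("low", False), ("low", False), ("low", True),
]

# "Training": count how often a scan was recommended per income group
counts = defaultdict(lambda: [0, 0])  # group -> [times_recommended, total]
for group, recommended in history:
    counts[group][0] += int(recommended)
    counts[group][1] += 1

def recommend_scan(income_group: str) -> bool:
    recommended, total = counts[income_group]
    return recommended / total > 0.5  # majority vote learned from biased data

# Two patients with the identical disease receive different advice:
print(recommend_scan("high"))  # True  -> scan advised
print(recommend_scan("low"))   # False -> scan not advised
```

The point is that nothing in the code mentions medical need: the model simply extends the historical pattern, which is exactly how a biased dataset becomes biased advice.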
Priya Mehta, a health policy analyst from Mumbai, says, “To make AI fair, we have to be careful at every step, from data collection to model design. If we do not include the data of poor patients, how will AI understand their needs?”
What is the impact on patients?
This discrimination is creating serious inequality in patient care. Rita Das, a nurse working in a government hospital in Kolkata, shared her experience: “We have many patients who are already deprived of proper treatment due to limited resources. If AI also excludes them from advanced diagnostics, it will become even harder to detect their disease in time.”
For example, if a poor patient with a serious disease like cancer is not advised an early MRI, treatment will start late, which can put their life at risk. Meanwhile, a rich patient may be advised to get the same test done immediately. This inequality is not only morally wrong; it also undermines the basic principle of healthcare: equal care for all.
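A disparity of this kind can be made concrete by comparing how often a model recommends a scan for clinically identical presentations across income groups. The sketch below is an assumed audit workflow with hypothetical logged outputs, not the method used in the study.

```python
# Minimal bias-audit sketch (hypothetical data and threshold): compare a
# model's scan-recommendation rate across income groups for patients
# presenting with the same condition.
def audit_recommendation_gap(cases):
    """cases: list of (income_group, model_recommended_scan) pairs
    logged for clinically identical presentations."""
    rates = {}
    for group in {g for g, _ in cases}:
        decisions = [rec for g, rec in cases if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

# Hypothetical audit log: same disease, model outputs recorded per group
cases = [("high", True), ("high", True), ("high", False),
         ("low", False), ("low", True), ("low", False)]

rates = audit_recommendation_gap(cases)
gap = abs(rates["high"] - rates["low"])
print(rates)        # per-group recommendation rates
print(gap > 0.2)    # flag if the gap exceeds a chosen fairness threshold
```

The 0.2 threshold here is an arbitrary illustrative choice; in practice any flagged gap would need clinical review to decide whether it reflects bias or a legitimate medical difference.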
What is the solution?
Researchers have offered several suggestions to deal with this problem. Dr. Eyal Klang, co-author of the study, says, “We need to identify where and how AI models are discriminating. Based on this, we have to improve their design, strengthen data monitoring and develop systems that ensure patient safety and equality.” Some experts suggest that the decision-making processes of AI models should be made public to bring transparency. Bias can also be reduced by diversifying data collection and including underrepresented communities.
Challenges in the Indian context
In a country like India, where healthcare is already struggling with economic and social inequalities, AI discrimination can have even more serious consequences. Patients in rural areas already have difficulty accessing advanced investigations and treatment. If AI also ignores their needs, this inequality will widen further. Health policy expert Priya Mehta says, “Before implementing AI in India, we have to understand the local context. We need a data set that reflects the needs of everyone – rural and urban, poor and rich.”
A new beginning is needed
AI can be a powerful tool in healthcare, but it also comes with responsibility. This study reminds us that technology is useful only if it provides equal opportunities for all. If AI discriminates between patients, it raises not only ethical, but also social and legal questions.
Dr. Girish Nadkarni’s statement is apt: “AI has a bright future, but it has to work for humanity, not to increase inequality.” In a country like India, where millions of people live in the hope of better healthcare, it is important that we make AI an inclusive and fair tool. Are we ready for this challenge, or will we forget humanity in the glow of technology? This is a question worth pondering for everyone.
This article is based on research and expert opinion.