Introduction
India, boasting a population of over 1.4 billion, stands poised on the cusp of a paradigm-shifting digital health revolution. Leveraging artificial intelligence (AI) and robotics, this revolution endeavours to surmount the trifecta of challenges plaguing the healthcare sector: accessibility, accuracy, and affordability. AI and robotics have transformed the modern healthcare ecosystem, from mechanised diagnostics to robotic-assisted surgeries, promising enhanced efficiency, reduced human error, and improved patient outcomes.
Nevertheless, as India rapidly adopts these cutting-edge innovations, it is imperative to scrutinise the challenges they pose to fundamental rights, privacy, and the cyberspace framework. While these technologies offer futuristic solutions to systemic challenges, they also raise profound concerns regarding patient autonomy, informed consent, data protection, and cybersecurity. These issues are not merely administrative conundrums but also intricate legal, ethical, and societal dilemmas.
Doctors vs. Devices: Ethical Considerations in Using AI and Robots in Healthcare
We must reframe the narrative: it’s not “doctors vs. devices,” but “doctors with devices.” The ethical path forward lies in augmentation, not automation. AI is an assistant, not a substitute.
In the age of rapid technological advancement, the integration of Artificial Intelligence (AI) and Robotics into healthcare has sparked both awe and anxiety. While machines can now analyse massive data sets, diagnose conditions, and even perform intricate surgical procedures, a looming concern persists: will these technologies replace human doctors? The simple and ethically grounded answer, echoed by experts such as IMA chief Dr. R.V. Asokan, must be “NO”. AI and robotics are not meant to supplant doctors but to empower them. These tools are designed to enhance human capabilities, not render them obsolete. Medicine is not just a science; it is an art steeped in empathy, judgment, and human connection. No matter how advanced machines become, they cannot replicate the compassionate touch, nuanced decision-making, and ethical reasoning of a human doctor.

Modern AI excels at pattern recognition, making it invaluable in fields such as radiology, dermatology, and pathology. Algorithms can scan thousands of X-rays or MRI images in seconds, spotting anomalies with impressive accuracy. But what they cannot do is interpret those findings in a holistic, patient-centred context. Only a doctor can integrate a patient’s history, emotional state, family circumstances, and cultural background into a meaningful diagnosis.
Take, for instance, an AI program detecting a cancerous lesion on a scan. The machine can highlight the issue, but it is the doctor who must convey the diagnosis to the patient with sensitivity, suggest personalised treatment options, and address emotional reactions. The algorithm does not sit with the patient through tears and confusion. It does not hold their hand before surgery. The doctor does, as Atul Goel, Director General of Health Services, has observed.
‘Robotic Surgery’ is often hailed as the future of operating rooms. Machines like the Da Vinci Surgical System offer incredible precision and control. Yet, even here, the surgeon is the maestro. The robot may hold the scalpel, but it is the surgeon’s mind, experience, and intuition that guide its movements. In India, hospitals like AIIMS, Medanta, and Fortis have adopted robotic surgery, but always under the careful supervision of trained surgeons. The robot becomes an extension of the surgeon’s skill, not a replacement. The ethical safeguard here is clear: machines are tools, not autonomous actors.

One of the key ethical duties we owe to the future is to prepare the next generation of doctors to work alongside intelligent machines. Medical education must evolve, not to compete with AI, but to integrate it wisely. Doctors must be trained to question, interpret, and override AI when necessary.
“AI will not replace doctors, but doctors who know AI will replace those who don’t.” – Dr. D. Nageshwar Reddy
Autonomy vs. Supervision: Ethical Use of Robots in Treatment
In this ever-advancing world of medicine, Artificial Intelligence (AI) and Robotics stand as the epitome of modern precision, speed, and possibility. Far from threatening the human touch, these innovations are revolutionising healthcare by empowering doctors to reach new heights in diagnosis, surgery, and patient care. AI is not a replacement for physicians; it is their most powerful ally. Imagine a rural clinic in India where a single doctor caters to hundreds. With AI-powered diagnostic tools, that doctor can access accurate test interpretations, predictive analytics, and even real-time treatment suggestions, bridging the urban-rural healthcare divide.
Robotic-assisted surgeries offer precision beyond the human hand, minimising error, reducing recovery time, and delivering life-changing results. The Da Vinci Surgical System, now adopted by many top Indian hospitals, has demonstrated how robotics can make even the most complex procedures less invasive and more effective.

Legally, this progress is supported by emerging frameworks. The Digital Personal Data Protection Act (DPDP), 2023 emphasises consent, data security, and accountability in processing sensitive health data. Meanwhile, Sections 43A and 72A of the IT Act, 2000 hold service providers liable for negligence and unauthorised disclosures, ensuring that privacy is not sacrificed in pursuit of progress. Even the Telemedicine Practice Guidelines (2020) encourage the ethical use of AI-assisted consultations, particularly in remote regions.
Cybersecurity Threats
As artificial intelligence (AI) and Robotics make their way into the heart of healthcare systems, India faces a growing and deeply concerning problem: the vulnerability of patient data and systems to cyber threats. While these technologies promise speed, efficiency, and accuracy, they also open the floodgates to serious privacy violations, data breaches, and accountability gaps that the current Indian legal system is struggling to address effectively.

At the core of this issue lies the vast volume of sensitive health data being collected, analysed, and stored by AI systems; this data is highly susceptible to misuse. Hospitals and health-tech firms often rely on cloud servers, third-party processors, and foreign software, many of which lack the adequate safeguards mandated by law.
The Information Technology Act, 2000, particularly Sections 43A and 72A, attempts to regulate the protection of personal data and penalise unauthorised disclosures. However, these provisions are outdated and insufficient for the complex algorithms and interconnected networks AI demands. Though the Digital Personal Data Protection Act, 2023 seeks to modernise data governance, enforcement remains a challenge. Consent mechanisms, a cornerstone of privacy protection, become obscure when patients are unaware of how AI uses their health data. Who is accountable if a diagnostic AI system misdiagnoses a patient due to a software bug? The law remains unclear.
Moreover, real-world incidents, such as the ransomware attack on AIIMS Delhi in 2022, reveal the weak cybersecurity infrastructure in Indian public health systems. If premier institutions are this vulnerable, rural and semi-urban healthcare systems are practically defenceless.
Now, coming to the solution, India must establish a dedicated, binding regulatory framework that explicitly governs the use of Artificial Intelligence (AI) and Robotics in the healthcare domain, focusing on cybersecurity, patient data protection, algorithmic transparency, and medico-legal liability. The statute must mandate certification of healthcare AI systems: all AI and robotic tools must be certified by a statutory regulatory authority such as the proposed National Digital Health Regulatory Authority (NDHRA). Certification should assess cybersecurity protocols, data encryption, breach resistance, and interoperability standards. There should also be cross-border data flow restrictions: no health data collected by AI tools should be allowed to leave Indian servers unless the country of storage has data protection laws on par with India’s (modelled on Section 17 of the DPDP Act). Finally, the framework must require algorithmic accountability and explainability: all AI systems must be auditable.
Healthcare AI algorithms must generate explainable decisions (e.g., explain how a cancer diagnosis was reached), aligning with the ethical right to be informed. These provisions would find support under Article 21 of the Indian Constitution (Right to Life and Privacy), as reaffirmed in the Justice K.S. Puttaswamy judgment (2017).
AI and Robotics in Indian Healthcare: A Threat in the Absence of Legal Protection
As India rapidly embraces the wave of Artificial Intelligence (AI) and Robotics in healthcare, a critical void threatens to undermine this progress: the absence of a dedicated legal framework. In a country with a vast and diverse population, where access to healthcare is already a complex struggle for many, introducing AI without sufficient legal guardrails is like flying a high-tech plane with no radar and no pilot. It is thrilling, but dangerous.
While the Digital Personal Data Protection Act, 2023 and the Information Technology Act, 2000 provide some semblance of protection, they are skeletal in nature when it comes to the intricacies of AI-enabled diagnosis, robotic surgeries, and algorithmic decision-making. These laws do not address critical questions: Who is liable when an AI makes an incorrect diagnosis? Can a machine be held accountable in a court of law? Who owns the data fed into or generated by these machines?
The consequences of this legal ambiguity are more than hypothetical; they are deeply human. Imagine a patient receiving an automated cancer diagnosis from an AI-powered system without the empathetic presence of a doctor. What if that diagnosis was wrong? The mental trauma, the loss of time and money, and perhaps life itself, cannot be reversed with a software patch. And who takes the blame? Today, the answer is: no one clearly does. Moreover, our healthcare systems are already vulnerable, as evidenced by the ransomware attack on AIIMS Delhi in 2022. With AI systems constantly processing patient data, any breach can expose sensitive health records, violating privacy and dignity in ways that current statutes cannot adequately remedy.
Bias and discrimination are further dangers. AI models trained on foreign or skewed data can misinterpret symptoms in Indian patients, leading to flawed recommendations and reinforcing systemic healthcare inequalities. Yet, there is no Indian regulatory body tasked with auditing or certifying these algorithms for safety, fairness, or cultural sensitivity. In the global push toward smart medicine, India risks prioritising innovation over human rights. Until a robust legal and ethical framework is instituted—one that protects patients, regulates developers, and mandates transparency—AI and robotics may do more harm than good. In medicine, trust is sacred. And without laws to protect that trust, we’re letting machines operate in a moral vacuum—where lives hang in the balance.
INFERENCE
India, with its gargantuan population and strained healthcare infrastructure, stands at the brink of a transformative era led by Artificial Intelligence (AI) and Robotics. These technologies promise to revolutionise the medical field by enhancing diagnosis, enabling precision in surgery, and expanding healthcare access—especially in rural regions. However, while AI and robotics offer immense potential in optimising medical outcomes, their integration into healthcare systems raises profound legal, ethical, and privacy concerns. At the heart of the issue lies India’s lack of a dedicated, comprehensive legal framework governing the use of AI and robotics in medicine. Currently, protections under the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 are insufficient to address the complexities of AI-enabled decision-making, algorithmic accountability, and cybersecurity vulnerabilities. Incidents like the 2022 ransomware attack on AIIMS Delhi expose how fragile healthcare systems are to cyber threats. Furthermore, questions around liability, consent, data ownership, and bias in AI algorithms remain unresolved, posing serious threats to patient autonomy and dignity.
This essay emphasises that AI and robotics must complement doctors—not replace them.
Medicine requires empathy, judgment, and human connection, qualities that machines cannot replicate. Robots may enhance surgical precision, and AI may assist in diagnostics, but the doctor’s role remains irreplaceable in interpreting, guiding, and supporting patients. To protect both innovation and human rights, India urgently needs a statutory framework specifically tailored for AI in healthcare, including mandatory certification, algorithmic explainability, cross-border data safeguards, and clear medico-legal liability.