Technocrats Want to Build Machines More Robust Than Physicians' Practices

Updated: Jun 17

Artificial Intelligence-driven Instruments May Work as Doctors, but the Liability Still Falls on Physicians

Photo by h heyerlein on Unsplash

Originally published by Illumination Curated on Medium

A common belief among technocrats places great faith in the capacities of Artificial Intelligence (AI). Technocrats hope that robots will ultimately diagnose diseases and offer treatment options even more flawlessly than humans can, and for even more sophisticated ailments. They are relentlessly convinced that machines will learn, work up differential diagnoses, and make the best treatment decisions for each particular patient without direct physician participation.

But it is impulsive, even extreme, to take on such a presumption.

Still, giving it the benefit of the doubt, let us suppose such an assumption is conceivable: a scenario where the doctor-patient relationship is, in fact, a machine-patient association or a corporate-patient union. Even then, achieving such a goal would require a transition period in which physicians must periodically step in, and then only as bystanders, not as compassionate professionals who can relate to the patient.

With healthcare rushing toward automated medicine, sidelining human intervention would be detrimental to the medical community's influence and to patient safety. Failing to preserve it will spawn a void, potentially drawing in factors likely to sway the standard of medical care. It will also potentially impair clinical judgment and expose physicians to adverse legal implications.

The machine is never without flaws, and it may produce recommendations without the ability to communicate the underlying differential diagnosis behind the selected treatment choices. If a machine-learning model was trained on irrelevant clinical scenarios, with unreliable methods, or on fuzzy data sets, the physician could still be held liable. If a physician fails to adhere to the standard of care and injury results from that deviation, liability follows. With the application of AI, one can foresee many potential avenues open for legal remedy.

Because of this multifaceted climate, the deviations from the standard of care applicable to artificial intelligence do not stop there. The continuous shifts in social expectations, science, technology, and the sociopolitical landscape around medical practice, along with the ever-changing socioeconomic healthcare scene, demand that algorithms be updated and validated in parallel with those changes. Yet, given the current medical community's skepticism toward and disengagement from the technology domain, such a task is next to impossible.

The repercussions leave physicians' liability at the mercy of the tech industry and of algorithms designed by non-physicians. Therefore, until we reach the point when the public is ready to place its faith in automation to stay healthy without human empathy, the potential for legal implications will remain high and unpredictable.

#AI #ArtificialIntelligence
