Is Machine Learning Artificial Intelligence Or The Real Thing?
In the March 15, 2018 issue of The New England Journal of Medicine, an editorial describes the implications of machine learning for the practice of medicine.
On the surface, the use of big data to assist in medical decision-making doesn’t seem to have a downside. But everything has a downside.
The article by Char, Shah and Magnus makes several critical points about the ethical challenges of using artificial intelligence, machine learning, and algorithms in patient-centered care. Bias has already been documented when similar tools were used to aid judges in sentencing, so this is not a theoretical issue.
First, a learning system inherits any bias in the data used to build it. For example, an algorithm trained solely on data collected from white patients could be a problem if it is used to guide clinical decisions in the treatment of non-white patients.
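The mechanics of this kind of sampling bias can be shown with a toy sketch. The scenario below is hypothetical: a made-up lab value whose baseline differs between two patient groups. A decision threshold learned only from group A then misclassifies group B at a higher rate, even though the algorithm itself is "working as designed."

```python
import random

random.seed(0)

def simulate(mean_healthy, mean_sick, n=2000):
    """Generate (lab_value, is_sick) pairs for one patient group."""
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        mu = mean_sick if sick else mean_healthy
        data.append((random.gauss(mu, 1.0), sick))
    return data

# Hypothetical populations: the same disease shifts the lab value by +2,
# but group B's healthy baseline sits one unit higher than group A's.
group_a = simulate(mean_healthy=0.0, mean_sick=2.0)
group_b = simulate(mean_healthy=1.0, mean_sick=3.0)

# "Train" only on group A: threshold at the midpoint of the class means.
sick_vals = [x for x, s in group_a if s]
well_vals = [x for x, s in group_a if not s]
threshold = (sum(sick_vals) / len(sick_vals) +
             sum(well_vals) / len(well_vals)) / 2

def error_rate(data, threshold):
    """Fraction of patients the threshold rule gets wrong."""
    wrong = sum(1 for x, s in data if (x > threshold) != s)
    return wrong / len(data)

print(f"error on group A: {error_rate(group_a, threshold):.1%}")
print(f"error on group B: {error_rate(group_b, threshold):.1%}")
```

Running this, the error rate on group B comes out noticeably higher than on group A, because half of group B's healthy patients sit above a cutoff that was calibrated for a different population. Nothing in the code is malicious; the harm comes entirely from what the training data left out.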
Second, judgments about the quality of life of patients with developmental problems or mental disorders can be colored by the biases of the people entering the data, and a machine drawing conclusions from that data could reach conclusions that prove fatal.
The algorithms themselves can also be designed unethically, much as Volkswagen designed its software to manipulate the results of emissions tests.
And don’t forget an algorithm built by a health care delivery system to maximize profit rather than to optimize health. It would be very hard for a clinician using such software to distinguish advice that is good for the patient from advice that is better for the CFO. Heck, what’s one more MRI?
The article puts it this way: “the collective medical mind is becoming the combination of published literature and the data captured in health care systems, as opposed to individual clinical experience. Although this shift presents exciting opportunities to learn from aggregate data, the electronic collective memory may take on an authority that was perhaps never intended.”
This is strong stuff. Are the machines going to overrule clinical judgment when it comes to ordering tests and making diagnoses? Who’s really reading that CT scan anyway? Is the computer looking out for the patient in the same fiduciary manner as we doctors are supposed to?
Our ability to use machines may be outstripping the advisability of doing so. Is the key relationship the one between the doctor and the patient, or the one between the patient and the health care system and its information systems?
Unfortunately, the machines have no ethics of their own, so their programs had better have them, and it is up to us to make sure they do.
This article is well worth seeking out and reading. Machines are just machines, but these modern machines can learn, and they may start doing so autonomously. We physicians have to remain vigilant that we are, above all else, patient advocates and fiduciaries; if the machine says one thing and your clinical judgment says another, careful weighing of all options is critical.
In the end, the ethics of the machines are only as good as our ethics when we program them.