The GP Will See You Now

Or will they?

With healthcare apps disrupting traditional treatment methods, AI has come to the forefront of digital health as a potential answer to our doctor shortages. Apps like Babylon have created AI doctors that can take a history and determine whether someone needs to see a human doctor or not.

This is exactly what this first assignment will address. Could a doctor really be replaced by a computer?

Here are my thoughts:

What are three advantages of having access to a remote GP?

  1. One great advantage of access to remote doctors is that it allows minor and major problems to be filtered before the patient reaches the practice or hospital. This will free up time for human GPs and A&E departments, bring down waiting times for appointments, and allow better care to be provided in a less time-pressured environment.
  2. Remote GPs mean that healthcare can be provided to more people, such as those living in rural areas without a nearby GP or, for example, in developing countries where doctors may be scarce. It also means that care can still be provided when patients are away from home.
  3. A benefit of apps like Babylon is that they can have more GPs available than a traditional practice, and they can be available 24/7, which results in shorter waiting times for appointments, better patient satisfaction and improved wellbeing for doctors.

What are three disadvantages?

  1. A major issue with the UK system is that these AI apps come from private companies, yet as a society we rely on the National Health Service. GPs are paid a fee for each of their patients, which means that if private companies start poaching patients from the NHS, GPs will lose out on funding, and this will affect those least likely to use these apps, such as the elderly. In other words, the demographics that need the help the most will be the ones that suffer the most.
  2. If a patient ended up needing immediate care, they would be better off at their local medical centre, where medical staff are on hand to deal with any situation. Having a remote doctor may therefore delay the response in some of these situations, although I imagine they are rare, and the emergency services remain an option.
  3. Finally, a remote GP will always be limited by the technology used to access it. For example, if someone damages their phone and it is unusable for a week, they will technically be without a doctor for that week, and a poor internet connection may mean limited access to healthcare. It may be unlikely that they will need a doctor during that time, but it is possible, and it could lead to delayed patient care.

Do you think that a computer programme/algorithm will ever replace a human doctor?

In my opinion, if AI and machine learning remain in their current state, no. However, technology is advancing very quickly, and I think it is entirely possible that one day, however far in the future, a computer could replace a first-opinion human doctor. I am more hesitant to say that it could replace a doctor at every step of the diagnostic process for a complicated disease, and I believe that in these situations it would be more appropriate to use computers alongside human doctors to aid diagnosis. Let me explain my reasoning.

Let’s use diagnostic imaging as an example. In its current state, AI is great, better than doctors in fact, at diagnosing single diseases, like pneumonia on a radiograph. On the other hand, as soon as you introduce a complex disease process involving comorbidities or uncommon presentations, the AI begins to fall apart: it typically cannot connect multiple pieces of information without being told how they fit together, unless you have enough examples to train it on. I think this is one major step for AI to overcome, and it is not a simple problem, even if such cases are rare. Here, it would be beneficial to use AI alongside doctors, perhaps to diagnose the obvious pathologies and flag anything more complicated for a doctor to review. That’s not to say this doesn’t present its own issues. For example, doctors are not immune to bias: if the AI suggests a certain disease, a doctor may dismiss the suggestion through distrust of the AI, or agree with it against their own clinical judgement. Neither situation is something you want to introduce into the diagnostic process.

Everyone makes mistakes (even doctors and GPs); this happens every day. However, society is dreading the day when an app or a robot causes harm to a human life. As with the debate over driverless cars, there is a lot of controversy about harm caused by a robot or computer programme error versus harm caused by human error. Why do you think this is the case? Do you think that this is rational?

You could argue that this is irrational. As the question states, if doctors make mistakes, why should we expect anything different from a computer programme?

The Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

– Taken from “Runaround”, by Isaac Asimov.

Now obviously, Asimov’s “Three Laws of Robotics” are a work of fiction, but I think it is examples like this, and the robotic uprisings of popular culture, that worry people about the potential harms of robotics. The laws do raise a good point, though: as the creators of AI, we should also define the laws and regulations that govern it. Currently the specifics are a bit vague with regard to healthcare, beyond the fact that non-medical professionals cannot give medical advice or a diagnosis. This seems like a good precaution, but it is one we will have to change if an AI is ever to replace a doctor, and we will have to be prepared for the time a computer programme harms a patient.

Enter the Engineering and Physical Sciences Research Council (EPSRC). They have come up with five Principles of Robotics. The first principle:

Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

– Principles of Robotics, EPSRC.

This is where AI doctors are currently. They aren’t designed to hurt people, but what happens in the event that they do hurt someone?

Three of the other principles have something to say about this:

Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.

Robots are products. They should be designed using processes which assure their safety and security.

The person with legal responsibility for a robot should be attributed.

– Principles of Robotics, EPSRC.

With that in mind, I think there is a fairly rational answer to this question, and that is the issue of accountability. Under the laws and medical regulations in their current state, a doctor will be held accountable for gross negligence or for a mistake that puts a patient at risk of harm.

This brings up the problem with AI doctors: who is to be held accountable? You could argue it is the company that made the programme, the government for trusting the programme with patients’ lives, or the healthcare system or provider using the AI. And how would you punish an AI that potentially has as many patients as hundreds of human doctors combined?

Lastly, there is another point I want to raise, which the EPSRC covers in its principle on assurance of safety and security. At the moment, AI doctors have little or no independent scientific evidence for their safety. We are relying on the companies that make them to prove they work, which in itself is not always a reliable source. Along with that, we have to trust these companies with our personal data.

I think that for people to really trust these apps, they should undergo clinical trials, just like a new drug would. These should be run by an independent body, comparing the apps’ decision-making to that of human GPs. Only then, if proved equal or superior to human GPs, do I think they will earn the trust of the general populace and the scientific community.
