Who will set our Moral Compass?

Navigating our way through AI technology
By Bill Jarvis

What is a Moral Compass? It is the instinctive sense of right and wrong that guides a person's decisions, based on morals or virtues. Most of us, as we grow into adulthood, develop this personal understanding of what is 'right' and 'wrong': a 'moral compass' that helps steer us through the complexities of life. Over the next few years, however, as Artificial Intelligence (AI) takes over more and more of our daily decision-making, it will increasingly be making the very decisions we once relied on our moral compass to help us with.


For example, let’s assume we are being driven through heavy downtown traffic in our autonomous vehicle, enjoying the freedom from traffic worries and vehicle-handling responsibilities. Suddenly, our vehicle sensors pick up a warning signal. A few meters ahead, a child is about to dash out from between parked cars directly into the path of our vehicle! What to do?


  •  Stop suddenly, and cause a major pile-up?
  •  Swerve left onto a crowded sidewalk?
  •  Swerve right into heavy traffic?
  •  Continue on and hit the child, likely fatally?
  •  Something else?


Regardless of the outcome, had we been personally in charge of the situation, we would have made the best decision we could within the framework of our Moral Compass. In this situation, though, we have set aside our Moral Compass and handed the decision over to our AI-based autonomous vehicle. In my opinion, this raises three serious issues:


  •  What moral responses will be programmed into our AI world?
  •  Who will be responsible for identifying our choices in the first place?
  •  Who will be responsible for ensuring that they are implemented properly?


On the surface, these issues are unnerving: who will be entrusted with setting technology's moral compass? Viewed in the context of healthcare, they become more unnerving still. According to Gartner, a world-renowned IT research and consulting firm, the enterprise AI market will be worth $6.14 billion by 2022.


Of this total, a significant portion will be committed to a wide variety of healthcare opportunities. Current commercial offerings such as Apple's Siri, Amazon's Alexa, and Google Home will become companions to people in long-term care or retirement residences who are unable to have meaningful conversations with other people. Many doctors' diagnostic procedures, along with the development and maintenance of patient care plans, will also be significantly augmented, if not replaced, by AI tools. The professional quality of these procedures, their security and privacy, and the manner in which they are communicated to the patient and the patient's family will be of paramount importance.


Much of what we hear about AI and its implementation deals with its undoubtedly remarkable ability to replicate, if not improve upon, many human activities. What we must not lose sight of is the essential need for our AI applications to be imbued with the equivalent of a Moral Compass to guide their decision-making.


Bill Jarvis
Bill Jarvis is a Resident Innovation Ambassador at Revera. In 2017, the Ontario Long Term Care Association awarded Bill a lifetime achievement award for his advocacy on behalf of fellow long-term care residents. A former captain in the Royal Canadian Air Force, he held leadership roles at companies including Gartner Canada and Labatt Breweries, and co-founded his own IT management firm before retiring in 2000.