
Ethics & AI: The Importance of Context

By Jim Selman with Shae Hadden

There are three requirements for making ethical choices: a background, a context, and an observer. What if we can’t tell whether our context is being created by human beings or by AI?


Photo: Isaac Quesada on Unsplash


In my first article in this series on ethics and artificial intelligence, I wondered how you decide what is right and wrong in a particular situation when you can’t determine whether the reality you’re observing is being created by human beings or by AI.

Making an ethical choice requires three things:

  1. A background of historical practices and values that establish and anchor the standards against which you assess whether an action is good or bad, right or wrong
  2. A context, not predefined, in which to make that assessment, and
  3. An observer who understands that the assessment/choice they have to make is being made in a context that they have created and for which they are responsible.

Traditionally, human beings generated the context in which they perceived their future and, consequently, their behaviors and actions. Context was a function of human experience and values. We were ‘used’ by the possible futures context gave us. And, for better or worse, this has produced the world we have today.

Let’s say you have generated a vision and mission for yourself or your organization related to carbon capture. This aligns with the shift in social values toward protecting the environment. That vision/mission becomes the context for everyone’s actions, behaviors and choices in your company. This context is not predefined: it has been created by you. Whatever happens and however it happens, it will occur for you within this context (that is, you will perceive and interpret it through the lens of the vision/mission for which you have chosen to be responsible). You can evaluate the “morality” of each decision you have to make along the way as you make it, because you have created a context and adopted a set of historical standards against which to make such an assessment.

Here’s where it gets sticky.

When you interact with AI, you are being presented with a narrative for whatever subject or topic you’re talking about. This narrative, like all narratives, becomes the background or the context for how a particular reality/situation occurs for you (how you perceive it). This is how the human story has evolved since time began. Human beings live in the context of some narrative which effectively organizes how they see and relate to their world, as well as whatever meaning is relevant to them. Now we are talking about the potential of some other source (AI) creating or shaping the narrative. Either way, we are effectively being “used” by the narrative or the context.

The choice is ours. What context, what future will use us: one we generated ourselves or one that was generated for us by an artificial intelligence?

In 2015, Pope Francis released an Encyclical declaring the earth’s physical environment, including its climate, as a domain of ‘common good’. This shifted the narrative around climate change and other forms of environmental degradation from being strictly technical, functional or political to being fundamentally ethical in nature.

Given this human-generated context, while some short-term decisions related to launching AI might be justified based on immediate economic or social costs and benefits, long-term decisions that maintain the status quo of damaging practices to the ‘common good’ or our ecosystem cannot. They will always be unethical.

My experience has been that most business leaders are good people. They truly want to make ethical decisions and they want their employees to do so as well. Most of the time, this is what happens. However, in industries where products and/or business processes raise ethical concerns (such as energy, pharmaceuticals, chemicals, and defense), many leaders either rationalize that the societal benefit of what they offer outweighs any other consideration or become resigned that alternative scenarios are totally impractical or don’t even exist.

Perhaps we may never be able to tell the difference between an AI-generated context and a human-generated context. That may not matter.

I believe there will be one overriding ethical basis for our choices in the years to come: being personally responsible in every moment and in every circumstance for the future.

We human beings are learning to live in what I call a “real-time world”. We are not waiting for or relying on predictions. That is, we are making the best choices we can based on whatever information and intuition we have available in each moment. Inevitably, we will make a lot of mistakes: that is how, as individuals, groups, nations, and as a species, we learn to survive and thrive in new contexts.

As we learn in real time, I suggest three things:

  1. We remain true to our vision of the future.
  2. We care about and prioritize our relationships with other human beings.
  3. Most importantly, we develop practices for using negative feedback to rapidly pivot or transform our conversations to align with the future we are committed to creating.

Success and accomplishment in this will be a function of how quickly we can ‘own’ our bad choices and make new ones that are more aligned with and effective in forwarding whatever vision of the future we choose.

I believe that, over time, most, if not all, conversations and processes will become about learning, collaborating, and creating a future that is open and large enough to include both human and artificial intelligences. Eventually it won’t matter which intelligence is more dominant in a particular situation. The human artist and the AI tool will have become so integrated that they work together like partners in a dance. The only difference between this tool and others we have developed and mastered so far is that GenAI will be intelligent and, possibly, free of human limitations.


© 2024 Jim Selman