
Where Did Sentience Begin?

By Jim Selman with Shae Hadden

The debate about whether artificial intelligence can—or will—become sentient continues. Meanwhile, here’s how to achieve some degree of serenity by taking charge of how you relate to AI.

Photo by Steven Diaz on Unsplash


I recently watched a conversation between meditation teacher Jack Kornfield and Sam Altman, CEO of OpenAI, about consciousness and Generative AI (GenAI). GenAI replicates how humans learn and operate: in language via neural networks. Kornfield and Altman make it clear AI isn’t sentient—yet. (There’s an ongoing debate about whether this will ever be possible.) However, Yuval Harari asserts that AI doesn’t need sentience to profoundly impact our reality and dictate our narrative.

Kornfield and Altman believe practicing mindfulness and meditation will help us maintain equilibrium as our new AI world emerges. We naturally anthropomorphize AI. We talk about ‘it’ as having, or potentially possessing, qualities we associate with humans, such as intentionality, self-awareness, or moral capability. Projecting human attributes onto AI applications like ChatGPT-4 is probably misguided. Yet the film “Her” shows a human falling in love with an Alexa-like AI. While this is conceivable with current technology, the notion of a machine reciprocating that love is not.

What particularly intrigued me in the Kornfield-Altman discussion was the question, “Where does consciousness originate?”

The source of consciousness itself remains unknown. The most plausible theory I’ve heard suggests it emerged spontaneously with the emergence of language. The origin of language is another enigma. I don’t think it really matters which came first, consciousness or language. They co-exist now.

More important, we lack clarity about what consciousness is. We’re unsure whether it’s an individual characteristic or a product of relationships. I believe moments of exceptional clarity or transcendence, when we escape our normal ego-centric consciousness, highlight the difference between being conscious and not conscious. This suggests to me that consciousness is a relational phenomenon.

Now the emergence of AI has me pondering: what if consciousness preceded humans?

A distinction, as I understand it, is a linguistic phenomenon: a declaration creating a context, or opening, in which to discern differences. When you suddenly grasp a distinction, it’s like an ‘aha’ moment, enabling you to perceive something previously outside your awareness. Was that ‘something’ there all along, waiting to be distinguished? Or did it not exist until you created the distinction?

In the Book of Genesis, God said, “Let there be Light.” That statement created an opening to perceive light and dark.

Similarly, the distinction ‘color’ isn’t a color itself. But without it, there cannot be red, blue, or yellow.

You must create the possibility of something before it can exist.

Let’s start with undifferentiated everything/nothing. The first step in creating consciousness would be to declare, “Let there be consciousness.” This declaration would give us the possibility of consciousness and create the opening for everything related to it, from theories and states, to history and assessment standards, even self-referentiality, its most fundamental aspect. Consciousness can be considered the distinction that allows us to be present to, and aware of, our own thoughts, feelings, and experience. It is the context for the “being” part of “human being”.

After reflecting on Kornfield and Altman’s conversation, I see practicing mindfulness as a way of owning and taking charge of how we relate to AI and whatever it is doing. Mindfulness is possible because we have consciousness. These practices quiet all the automatic, self-referential thinking that occurs in the human mind. They give us the freedom to “just be”, allowing us to be present with whatever we encounter and centered in a comprehensive experience of the ‘whole.’ Mindfulness in its various forms is a discipline for disengaging from the content of whatever is occurring, disconnecting from our thoughts, and generating a state of openness.

At some point, AI may take over many of the content-oriented functions in our daily lives—and in some ways, it already has. But, arguably, AI can never “be with” life or have experience of what is occurring. While mindfulness is not the only strategy for dealing with and relating to AI, it is perhaps the most accessible and practical way for individuals to achieve some degree of serenity in the face of the technological avalanche that has been unleashed by this new “intelligence”.

I don’t think “artificial intelligence” means “artificial Being”. As a consequence, AI will never be sentient, at least not in the same way that human Beings are sentient. AI may evolve some other form of awareness in the future. For now, whatever that new form of awareness may be remains an unthinkable and unimaginable mystery, since we can only think about it in the context of our own level of consciousness.


© 2024 Jim Selman