Doomsday: A Dead End?
By Jim Selman with Shae Hadden
AI has hacked our reality. This alien intelligence is evolving at a remarkable pace. What can leaders do to avoid succumbing to doomsday scenarios?
I just watched a clip from the 1983 film “WarGames” with Matthew Broderick, which reviewers heralded as a sci-fi classic predating “Back to the Future”. The premise: a teenage computer-game enthusiast accidentally hacks into a mainframe controlling all the world’s nukes and initiates a ‘game’ that could annihilate everyone on the planet. Starting a nuclear war is just one ‘end-of-mankind’ scenario we can imagine and connect to the exponential emergence of GenAI. In the process of trying to find some rational way to understand and relate to what’s happening, I’ve stumbled on a long list of similar doomsday scenarios.
Digital intelligence evolves at a remarkable pace. Every day brings more potential futures—both positive and negative. From what I understand so far, most experts and leaders agree on only a few things:
- AI has “hacked” our reality. AI systems will soon be able to autonomously generate questions and build new models and algorithms, independent of any human intervention. Even today, AI performs tasks and produces responses that the scientists who built the systems don’t understand. This alien intelligence, as Yuval Harari describes it in his talk “AI and the Future of Humanity”, doesn’t need human consciousness to replace humans; it just needs to craft the narratives that all of us live by. AI has crossed the line in appropriating and generating language. Having mastered language, it now has the ability to generate and manipulate the narratives that constitute our reality, and, therefore, the ability to manipulate human beings.
- There is an urgent need for regulation and governance structures to prevent the worst-case scenarios. It’s not at all clear how we would do this. What should it look like? How could it be enforced? Do we even have time to put a regulatory framework in place? (Most experts think there is still time.) I am not so sure. Consider humanity’s recent experiences with attempting to make and enforce rules during the pandemic. Or the adversarial conversations we still have about the unprecedented scale of disasters we know are coming with climate change. I believe we can safely assume that, even in the face of an existential threat, any regulations we come up with will likely be too little, too late.
- Our brains are not wired to deal with this. There is a long list, getting longer every day, of both negative and positive possible futures. No one knows where artificial intelligence may be going or how fast it might evolve. We have many concerns about exponential change, existential threats, and the current capacity of GenAI to accomplish things we’ve never thought possible. But we don’t even know what questions to ask.
In the face of this kind of overwhelming change, Buckminster Fuller was fond of asking, “What can the little individual do?” These are the steps I’ve taken.
- Stop, breathe, and acknowledge reality. The success of ChatGPT, with over 100 million users within three months of its release, demonstrates we’ve entered a new age, the age of AI. (Facebook took 4.5 years to reach that number.) This is a lot to take in.
- Think for myself. I’ve confronted and fully accepted the fact that there is so much I just don’t know. I’ve acknowledged what I do know now (or at least believe): I am in a new world, a new reality, and I cannot count on anything I’ve known or believed in the past to be relevant or real in the present. All I have to work with is my conscious awareness of what I can observe and my relationship with other human beings in this new world.
- Choose the interpretation or narrative of the world that will use you. If I choose the doomsday narrative, I will undoubtedly live in some state of anxiety or fear of the future. In all likelihood, I’ll become a hand-wringer on the sidelines of life, moaning that somehow the world isn’t the way it was or should be. At the end of the day, I’ll be a victim who has to settle for whatever they can get as the future emerges.
If, on the other hand, I can trust myself and cultivate my innate capacity for existential confidence, I will be fully equipped to deal responsibly with whatever the future throws at me. I can acknowledge we are now inhabiting a planet alongside an alien intelligence. I might begin to discover and invent how to live with this “other” species. I might start to distinguish how to merge these two different types of intelligence in a way that optimizes the strengths of both. I might think of AI in a manner similar to how I once thought about my children, whom I ‘created’ but who, at some point, could no longer be controlled. What I will not do is indulge in conversations about the ‘awful’ things that could happen as a consequence of this technology.
If GenAI is a genuine threat to humanity, then our time will come. Like the dinosaurs, we will join the other extinct species that could not adapt to their changing world.
However, for the time being, there is no point and no value in that story. We are here. We are still capable of choice. We still can imagine a world that can work for everyone. And now, perhaps with GenAI, we will be able to finally realize that vision, the only one that can make sense for all species.
© 2024 Jim Selman