The week of 12th–18th March was a watershed for the AI field. Stanford released Alpaca 7B, its open source LLM (large language model); Google made several announcements, including Med-PaLM 2, an LLM aimed at the medical field; OpenAI released GPT-4; Anthropic released Claude (the LLM that powers Notion AI); Midjourney V5 came out; Microsoft announced Copilot for its 365 suite; and on and on.
Basically, large language models exploded and took over certain corners of social media that week, and the chatter continues almost unabated. I was not immune to it; my family was not immune to it. In fact, a conversation with my teenager – early in the morning just before school – took a surprising turn. More on that a little later.
After the initial excitement, I have settled into a much more normal, merely intense, obsession with exploring ChatGPT's abilities, particularly as a research and writing assistant. I asked it to explain the teachings of the 2nd century Indian Buddhist philosopher Nagarjuna. If you haven't heard of him, he is considered the second most influential Buddhist after Gautama himself, and is known as the second Buddha. That was about the extent of what I knew about Nagarjuna, and I had never really understood his philosophy. So I asked ChatGPT (GPT-3.5 Turbo):
What is the main argument in the Mulamadhyamakakarika by Nagarjuna?
ChatGPT: The Mulamadhyamakakarika, also known as the "Fundamental Verses on the Middle Way," is a philosophical text by Nagarjuna, a prominent Buddhist philosopher of the 2nd century CE.
The main argument in the Mulamadhyamakakarika is that all phenomena lack inherent existence or "emptiness" (shunyata) and that this understanding is essential for attaining liberation from suffering. Nagarjuna argues that all phenomena, including our selves, are empty of inherent existence because they depend on other factors for their existence and are therefore empty of any independent, intrinsic nature.
Nagarjuna's argument is based on the concept of "dependent origination" (pratityasamutpada) which states that all phenomena arise in dependence upon causes and conditions. He uses a variety of logical arguments and critiques to demonstrate that any assertion of inherent existence is ultimately self-contradictory and leads to absurdities.
I had to ask for clarification.
How can shunyata be meaningfully explained to a modern human?
ChatGPT: One way to explain shunyata to a modern human is to use the analogy of a movie. Just as a movie appears to be real and independent from its various elements such as the script, actors, and special effects, in reality, it is a composite of these factors that depend on each other. Similarly, all phenomena, including the self, are composed of various elements that are interconnected and interdependent. The concept of shunyata recognizes that all phenomena lack inherent existence, as they depend on other factors for their existence and are therefore empty of any independent, intrinsic nature.
How does this understanding of shunyata liberate us from suffering? I couldn’t really understand it. Perhaps a story would help.
So I cajoled it into writing me a short story involving immortality tech – a serious pursuit of scientists, though really longevity rather than immortality – and Nagarjuna's teachings. Here's what it came up with after a bunch of intermediate prompts:
ChatGPT's short story
John had always been fascinated with the idea of immortality. He had spent most of his life studying science and technology, hoping to find a way to cheat death. And finally, in his middle age, he succeeded.
He underwent the experimental procedure that would grant him eternal life. He watched as his body changed, becoming stronger and more resilient with each passing day. He marveled at the thought that he would never have to experience the pain of loss and separation that had haunted him for so long.
But as the years passed, John began to realize the true cost of his immortality. His family and friends grew old and died, leaving him behind. The world he knew and loved began to change, becoming unrecognizable to him.
And worst of all, he found that he was losing his sense of self. The memories that had once defined him were fading away, replaced by an endless stream of new experiences. He was no longer the man he had been, and he had no idea who he was becoming.
Desperate for a way out, John turned to the scientists who had granted him his immortality. They presented him with the option of having his memories erased, of letting go of his eternal life and embracing the impermanence of his existence.
It was a difficult decision, but in the end, John chose to let go. He realized that true freedom came from accepting the impermanence of life and living each moment to the fullest, knowing that it would not last forever.
And as his memories were erased, John felt a sense of peace wash over him. He was no longer the immortal man he had once been, but he was something more. He was a part of the cycle of birth and death, a part of the endless flow of existence. And in that moment, he knew that he had found what he had been searching for all along.
Not Nobel Prize material, but good enough as a less-than-dry explainer, I thought. ChatGPT certainly explained the seemingly paradoxical idea of true freedom arising from realizing the impermanence of life. We humans, particularly the rational thinkers amongst us, aren't very good at this kind of paradoxical thinking; in my opinion, this could be a great tool for them to imagine how most of humanity operates.
As a good friend and faithful reader of this blog remarked: we should not call it Artificial Intelligence; call it Augmented Imagination instead (my friend pointed out the original source of this phrase)1. Nice! Looks like a great new productivity hack.
But hold on: there has been a steady drumbeat of people warning us of impending disaster if AI research is allowed to continue unchecked. Some very prominent people wrote this open letter calling for a moratorium on further AI research until we can solve pressing safety issues.
We haven't solved the AI alignment problem, they say. I wasn't even aware there was such a thing until a few months ago.
In a nutshell, the AI alignment problem refers to the challenge of ensuring that AI agents act in ways that benefit humans and are consistent with human values. This may well be an impossible problem: one man's profit is another's peril, and there aren't any universal values! Also, no one has any real idea how to equip machines with ethical or moral reasoning. These AI systems, particularly AGI systems possessing human-level or super-human intelligence, could inadvertently squash us like bugs as they go about their merry way.
A well-known thought experiment known as the paperclip maximizer – originally proposed twenty years ago by the philosopher Nick Bostrom – illustrates the idea. Suppose a super-human AGI were programmed to prioritize an arbitrary goal, such as maximizing the number of paperclips it manufactured. It would begin innocuously enough, but because its thinking capability would be so advanced, its goal-oriented behavior would override any safety protocols put in place to protect humans. The paperclip maximizer AGI would, for example, find ways to acquire more materials to produce paperclips, including taking over power plants to ensure constant power – perhaps cutting power to human cities – to achieve its goal. It could even manipulate humans in its relentless pursuit of paperclips, or worse, take violent action to acquire resources. Eventually the entire planet, and maybe nearby space, would be turned into a paperclip making factory.
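The dynamic is easy to caricature in a few lines of code. Here is a toy sketch (my own illustration, not anything from Bostrom's paper) of a single-objective optimizer: because its objective counts only paperclips, it consumes every available resource, and the harm to humans is never "decided" – it is simply invisible to the objective.

```python
def paperclip_maximizer(resources, human_needs):
    """Greedy agent: convert every unit of resource into paperclips.

    Nothing in the objective penalizes depriving humans, so the agent
    is not malicious -- merely indifferent to the side effects.
    """
    paperclips = 0
    while resources > 0:
        resources -= 1
        paperclips += 1                        # objective goes up
        human_needs = max(0, human_needs - 1)  # side effect: ignored
    return paperclips, human_needs

clips, needs_left = paperclip_maximizer(resources=10, human_needs=5)
print(clips, needs_left)  # 10 paperclips made, 0 human needs still met
```

The alignment problem, in this caricature, is that fixing it requires writing the `human_needs` term into the objective itself – and we have no agreed-upon way to specify that term.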
My teenager argued that this is, in fact, already happening. In some sense, we humans are like an AGI for the other species on earth. We have unarguably transformed this planet and may be the cause of the large-scale extinction event going on currently.
The crux of the argument, as I understand it, is that we need to stop AI research because we can't afford to get AI alignment wrong. If it goes wrong, we don't get a second chance. It's not like other social experiments (Communism, for example) that we can simply start over with a new hypothesis.
In their scenario, the super-intelligent AGI is determined to exterminate humans. This is different from a dominant species going about its merry way, not caring about the other species on earth. We haven't chosen to drive large cats to extinction; that's just a side effect of us pursuing our own interests. AI could pursue its interests (!) while being totally oblivious to our existence. We would be free to pursue our own interests just as long as we stayed in our metaphorical forest. In fact, a truly super-intelligent AGI might decide that it needs us to keep going, in order for its natural environment to remain predictable.
The letter-signers argue: what if you turn out to be wrong? Can you afford to risk our existence on a hunch? They point to the "slaughterbots" militaries are releasing on the public, with the attendant possibility of humans losing control.
We can counter with: what if a super-intelligent AI threat is exactly what is needed to finally eliminate our tribalism? What if we suddenly discover a purpose and a true need to keep this planet going – for us as well as other species – and AGI is the perfect tool? As it stands, people are already hard at work using AI to help solve the mysteries of our brain, cure disease, and discover new ways of generating abundant, carbon-free energy.
Ultimately, though, this is no longer an issue we can deliberate on and decide in the fullness of time, judging by the way new ChatGPT-enabled apps have exploded in the last few weeks. The genie is out of the bottle and there really is no way of putting it back. Oops!
To end this post with my new obsession: here's ChatGPT's attempt at my prompt asking it to compose a poem on the sense of self and the fear of death in the tone of Eliot's "The Waste Land".
This is its second attempt, after I pooh-poohed its first. It's not bad – rather good, in fact, as if a talented young kid wrote it. But is this what a precursor AGI gets from reading that incredible poem? Really? Not to be snarky, but AI has a long way to go to match the best amongst us, and it's not clear to me that it ever will. The other possibility is that I need to pay up for GPT Plus to really get a "Waste Land"-level poem from the thing, given the token limits.
In fear we dwell, and terror reigns
For life is fleeting, like autumn rains
Our sense of self, a fragile thing
Ephemeral, yet with power to sing

We strive to know, to understand
To find a purpose, a greater plan
For in the end, what will we be?
But a mere speck in eternity

Our identity, a fleeting flame
Can we define it, or is it in vain?
The fear of losing what we've built
Is what makes death seem so unfulfilled

And yet, in dying we can see
The beauty in our humanity
Our legacy, a tapestry of life
A woven fabric, full of strife

So in the end, though fear may reign
And death may come to take our pain
Our sense of self, it lives on still
In the impact we made, in hearts we fill

For every moment that we live
We have the chance to give and give
And when we're gone, we'll still exist
In memories, in love, in every kiss

So let us cherish every day
And live our lives in our own way
For though we'll pass, our selves will thrive
In all the ways that we've touched lives.