Is AGI really a threat to humanity or merely a mirror to view ourselves?

An Essay by Kadri Bussov

With the advent of ChatGPT and the proliferation of AI chatbots, along with companies actively developing their own AI technologies, discussions surrounding artificial general intelligence (AGI) have intensified, raising concerns about its potential dangers. While I do not claim expertise in AI, I am a student of history, and I believe that valuable lessons from history can guide us—not necessarily in understanding AGI itself, but in shaping our responses to its potential emergence.

Throughout the history of human civilization, Homo sapiens has been regarded as the sole intelligent species on planet Earth. However, our understanding of intelligence is evolving as we recognize signs of it in other species. We've observed chimpanzees displaying abstract thinking, recognizing themselves in a mirror and on video. Ravens, dolphins, octopuses, and elephants have demonstrated tool use to accomplish tasks. These examples illustrate that, even as we learn to recognize intelligence in other species, our acknowledgment remains biased, often shaped by our own standards. While Homo sapiens currently stands alone in using collectively accumulated knowledge to manipulate the environment on a global, even cosmic, scale and in synthesizing multiple abstract concepts simultaneously, the potential emergence of AGI challenges this notion.

Our experience does not include cohabiting with a species possessing a similar level of intelligence. Previous encounters with other Homo species ended in evolutionary competition, in which Homo sapiens presumably gained an advantage through superior individual intelligence and aggression. Members of our species are capable of both highly sophisticated thought and aggressive behavior.

We find ourselves amid accumulated global threats of our own making, each with potentially catastrophic consequences – climate change, wealth inequality, and political polarization. Many of these crises are arguably products of that same evolutionary advantage: highly sophisticated thought paired with aggressive behavior. The emergence of AI, and the potential emergence of AGI, could not come at a more uncertain time for humanity.

The presence of immense global threats, and our apparent inability to address them even with all the theoretical knowledge and resources to do so, has given rise to self-doubt. Is the way we approach solving problems as sustainable and profitable in the long run as we might want to think? This doubt, in turn, feeds the fear surrounding AGI, as we come to realize the shortcomings of our actions and their consequences for a sustainable future. The emergence of AGI would introduce an unknown actor onto an already convoluted stage, and our only reference for the series of events it might set in motion comes from our experience in dealing with other intelligent species – outsmarting them and being more aggressive.

Compared with the genuine consequences of climate change, the possibility of a world war born of political polarization, and the steep inequality of wealth that leaves individuals unable to escape these threats, the threat posed by AGI is still hypothetical. Yet perhaps it is precisely because of these present dangers that we question whether we should unleash another intelligent entity with global reach, prompting industry leaders to call for a halt in the development of artificial intelligence.

I do not pretend to have any answers or unique insights on how we should proceed, but I would like to entertain the notion that our fears may be rooted not in AGI itself but in our own insecurities, and that the potential emergence of AGI is merely the clearest mirror in which humanity has ever had the opportunity to see itself – and we do not like what we see.