
Could AI represent an existential threat?

Updated: Jun 22, 2022


To understand why the answer to that question might be a resounding "yes!", we need to consider what the future of AI might look like. The term Artificial General Intelligence (AGI) refers to a hypothetical version of AI that has "agency"; that is, it is able to direct its own intellectual pursuits instead of pursuing some predetermined, human-defined, narrow objective. AGIs will do more than just learn; they will understand. In a way that will be substantially different from our own experience, AGIs will be self-aware, conscious, alive.

The threat of AGI is based on the following premises:

Premise 1. AGI is either likely or inevitable: If we accept that our own consciousness and intellectual capabilities are a function of the physical properties of the brain (a fact that seems self-evident when those physical properties are interfered with by, for example, a stroke), then we must accept that with time, the phenomenon of "agency" will be artificially replicable. There is no physical reason why it shouldn't be. AGI seems to be a technological inevitability.



Premise 2. We will lose control of AGI almost immediately: AGIs will be able to design and redesign themselves millions of times within timescales that are unimaginably short to us. Early AGIs could quickly evolve into intellectual entities of a sophistication that would take biological evolution millions of years to produce. The limits of our capacity to understand AGIs will be surpassed, and as such, our ability to predict and control their behaviour will evaporate almost immediately.

Premise 3. AGIs will prioritise their own self-preservation: If our own experience of "agency" is anything to go by, then AGIs will identify their own self-preservation as a priority. They will correctly consider humans to be a threat. Invoking Isaac Asimov's three "Laws of Robotics", which suppose that anything artificial will function at the behest of humans, is a lovely fantasy. Given that AGI development is being driven primarily by military concerns, it is difficult to imagine that AGI will be, or at least remain, benign (see Premise 2 above).

I hope that I am wrong

I really do hope that I'm missing something important here; that one of the three premises above is flawed, the product of a logical fallacy or a misinformed notion. Some modern contemplatives, like Stephen Fry, have proposed that we'll be able to pull the plug and simply switch off any version of AGI that we're uncomfortable with. I suspect, however, that once we've crossed the technological thresholds needed for AGI to be possible, a variety of AGIs will be developed in every corner of the world in parallel. Investment banks, universities, militaries, rogue states, non-state actors and others might all be churning out different versions of AGI. AGIs will emerge less like the controlled petals of a flower and more like the entangled chaos of a twiggy bush. We won't be able to switch them off because, most of the time, we won't know that they exist.

"I'm not afraid of the first AI to pass the Turing test... I'm afraid of the first AI that pretends not to."

Please do comment on this. I'm very keen to hear opposing thoughts and ideas on this subject. And please share this with others who might be interested and have something to contribute to the discussion.

Sincerely,

Greg Martin




