During The AI Journey To AGI There Is Concern That We Will Be Like Frogs In A Slowly Boiling Pot

In today’s column, I examine an intriguing premonition: humankind may fall victim to the proverbial boiling frog theory during our journey from conventional AI to the much-vaunted attainment of AGI (artificial general intelligence).

The gist is this: we will get closer and closer to AGI on a gradual, stepwise basis without realizing that we are heading toward our ultimate doom. The subtle incremental steps will fool us into failing to recognize that we are in serious trouble and ought to abandon the AGI pathway. We will get cooked, just like an oblivious frog sitting in a pot of water that is slowly brought to a boil.

Let’s talk about it.

### Heading Toward AGI and ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research ongoing to further advance AI. The general goal is either to reach artificial general intelligence (AGI) or, possibly, the even more far-reaching achievement of artificial superintelligence (ASI).

AGI refers to AI that is on par with human intellect and can seemingly match our intelligence. ASI surpasses human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans, outthinking us at every turn.

For more details on the differences between conventional AI, AGI, and ASI, see my analysis at the link here.

As of now, we have not yet attained AGI. In fact, it remains unknown whether we will reach AGI at all, or if it might be achievable decades or even centuries from now. The predicted timelines for AGI vary wildly and lack credible evidence or ironclad logic. ASI is even further beyond our current reach.

### Boiling Frog Theory Comes Clean

You might have heard of the boiling frog theory — a popular metaphor warning about how gradual changes can lead to disaster without clear awareness.

Interestingly, this metaphor originated in the late 1800s, when scientists were attempting to locate the seat of the soul. Since intrusive experiments on humans were off-limits, frogs became the next best subjects.

You may recall dissecting a frog in a middle school biology class. In that era, some experiments involved placing a frog into a pot of water on a stove. The water started at room temperature, and the heat was slowly increased.

Sometimes, the frog would jump out before the water reached the boiling point. Other times, it would remain and perish. Experimenters even removed the frog’s brain to test whether the soul resided there or elsewhere in the body.

These varied attempts aimed to uncover the soul’s location by observing frogs’ reactions to gradually heated water.

### Frogs Turn Into Lore

A wide range of frog-boiling experiments were conducted during that era, though their popularity waned by the 1890s. The results were inconclusive and often contradictory, hinging on variables such as:

- How fast was the water heated?
- What was the initial temperature?
- Could the frog escape, or was it trapped?
- Was the frog intact or operated on?
- How big was the frog, and how much water was in the pot?

Because of these variables, sometimes a frog would jump out; other times, it would not. These contradictions gradually cooled scientific interest in the experiments.

Despite the inconclusive evidence, society clings to the assumption that a frog in gradually heated water won’t recognize the danger and escape. The metaphor has become ingrained lore due to its powerful imagery and easily grasped lesson:

> **Beware falling into a process that is harmful but too subtle to notice until it’s too late.**

### AGI and the Boiling Frog

Now that we understand the boiling frog theory, let’s see how it applies to the pursuit of AGI.

Here’s the rundown: assume, for the sake of argument, that AGI will ultimately destroy humanity, yet we fail to foresee that fate.

The public debate will cloud the issue, leaving us unsure of the consequences of achieving AGI. Some will claim it will cure cancer and solve massive world problems, hailing it as a godsend. Others will warn that AGI could enslave or even kill humans.

In this scenario, humanity itself is like a frog in a pot. Each step toward AGI is a gradual increase in temperature. The problem is that we won’t perceive how dire our situation is. The noise from conflicting opinions will delay meaningful action.

I don’t mean to sound crude, but you, I, and the other 8 billion people on Earth—we are all frogs. The question is: do you feel the heat yet?

### Maybe We Need More Heat

Some argue that until the water reaches a certain temperature, the frog cannot be expected to sense the danger.

Likewise, the current level of AI development might be “too cool” to alarm us. A frog in room-temperature water has no reason to be alarmed. The pot is not its natural habitat, but it is comfortable enough.

Similarly, humanity might presently be in the early phase—where AI feels benign.

We may collectively only become concerned as AGI nears. As we begin to detect the earmarks of true AGI, our collective intuition—the Spidey-sense—might warn us. Then, humanity could hit the brakes on AGI development, dousing itself with cold water and escaping the boiling frog trap.

In that hopeful vision, humanity saves itself from utter destruction. Boom, drop the mic.

### Humans Are No Better Than Frogs

A common retort is: what if the nearness to AGI doesn’t trigger awareness of existential risk?

There is a solid chance that even when we are a nudge away from AGI, we still won’t grasp the full gravity of the outcome. We might linger in the heating pot, unable to escape before AGI comes into existence. We will boil, sadly and inevitably.

Furthermore, unlike frogs, humans tend to pride themselves on their intelligence—and can convince themselves of things a frog could never imagine.

For example, we might realize AGI is dangerous but believe that our regulatory controls will keep us safe. We trust AGI will be constrained and unable to harm us.

The crux is this: yes, we may acknowledge the water is heating, but that awareness won’t translate into effective action. Those safety controls likely won’t work.

For an in-depth analysis of why controlling AGI is so problematic, see my discussion at the link here.

While a frog might be oblivious to the rising temperature, humans will see it coming from a mile away—but still might end up in the pot, lulled into a false sense of security.

Unfortunately, this could lead us to the same fate as the undiscerning frog.

### Food for Thought

What do you think of our capabilities compared to the lowly frog?

William Greenough Thayer Shedd, a famous theologian from the late 1800s, once made a striking observation about frogs:

> “Frogs are smart; they eat what bugs them.”

Maybe we aren’t giving enough credit to the revered frog. Frogs might well survive AGI’s arrival—though humanity’s longevity could be in question.

Take some quiet time to mull over these AGI qualms. Just don’t let your mind boil over as you sort out what the future holds.