This is the first post in the series Impossibility Results in AI, based on work published in ACM Computing Surveys in 2023 (link here). The series aims to present the findings in an approachable manner, which may sometimes trade off some precision.

Introduction: A World of Transformative Technologies

In today’s rapidly evolving landscape, numerous technologies are emerging with the potential to revolutionize our lives. From quantum computing and blockchain to nanotechnology and gene editing, these advancements are still in various stages of development. Among them, artificial intelligence (AI) is the most mature and poised to drive significant economic growth and increase productivity.

Understanding AI: A Clever Computational System

Defining AI is a complex task due to its ambiguous nature and the broad range of interpretations it invites. However, we can describe AI as a computational system that cleverly adapts to shifting and challenging circumstances, despite being smaller and weaker than the world around it. By computational system, we mean a collection of interacting parts, such as computer programs and algorithms, that work together to solve problems or perform tasks. While AI cannot do everything or possess infinite knowledge, it exhibits non-trivial problem-solving capabilities similar to certain human skills. For instance, it can recognize objects in images, generate interesting text, transcribe speech, or play chess.

The Positive Potential of AI: Enhancing Our Lives

AI offers numerous advantages that can significantly improve our lives. Computer programs capable of performing tasks like those mentioned earlier can deliver greater performance thanks to their speed, reliability (never getting tired), and lower cost per execution. The collective and individual benefits of AI are substantial, with the potential to fulfill our wishes and drive positive change. Notable experts like Yann LeCun, Andrew Ng, and Marc Andreessen emphasize the high probability of AI’s positive impact.

The Cautionary Tale: Understanding Existential Risks

However, with every upside, there is a potential downside, even if it is smaller in scale. The concept of existential risk encompasses threats that could lead to humanity’s extinction or irreversible decline, endangering our existence as a species. These risks transcend conventional daily hazards and can cause irreparable harm on a global scale. While uncontrolled AI is one such risk, other examples include global pandemics, catastrophic climate change, nuclear war, and runaway nanotechnology. In this series, we will explore how AI poses unique challenges compared to these other risks and how to deal with them.

Unraveling the Attributes of AI: Compatibility, Capability, and Agency

To grasp the nuances of AI, we must consider several key attributes:

  1. Compatibility of Values: How well does AI align with our ethical and worldview values?
  2. Capability: What can AI do, and how proficiently can it perform? There are at least two dimensions to capability:
    1. Deep Capability (Performance): How well does AI excel in a specific area?
    2. Wide Capability (Generality): In how many different areas can AI perform at an acceptable level?
  3. Agency: Can AI autonomously create and pursue its own goals?

The Current State of AI: A Spectrum of Systems

Presently, there exist various types of AI systems, each with its own characteristics:

  1. Savants: These systems demonstrate exceptional performance in narrow domains but cannot really do anything else. Also, they lack agency and are typically designed to align with the values of their creators. Concerns such as bias, fairness, and privacy have surfaced with these systems. Nevertheless, they serve as tools to enhance human performance. An example of a savant is an AI system that recognizes cats and dogs in photos or a system that plays chess.
  2. Shallow Mid-Generalists: Systems like ChatGPT have impressed the public with their fluency across many verbally-expressible tasks. Although their performance may not be exceptional in every area, their medium generality enables them to handle multiple tasks to a satisfactory extent, showcasing impressive results in some cases.
  3. Quasi-Agentic, Shallow Mid-Generalists: We get these systems by combining systems like ChatGPT with tools such as Wolfram Alpha. This addition allows them to create subgoals and gives them a degree of agency. These systems are similar to shallow mid-generalists but possess additional capabilities for limited subgoal-setting and tool use.
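The taxonomy above can be summarized as a small sketch that maps attribute levels to the three categories. This is a toy illustration only: the numeric 0-to-1 scales, the thresholds, and all names below are my assumptions for clarity, not definitions from the original survey.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Toy model of the attributes discussed above (illustrative scales)."""
    name: str
    performance: float  # deep capability: skill in its best domain (0-1)
    generality: float   # wide capability: breadth of acceptable domains (0-1)
    agency: float       # ability to set and pursue its own (sub)goals (0-1)

def classify(s: AISystem) -> str:
    """Rough, assumed mapping from attribute levels to the categories in the text."""
    if s.agency > 0.0 and s.generality >= 0.5:
        return "quasi-agentic shallow mid-generalist"
    if s.generality >= 0.5:
        return "shallow mid-generalist"
    return "savant"

# Hypothetical example systems with illustrative attribute values.
chess_engine = AISystem("chess engine", performance=0.95, generality=0.05, agency=0.0)
chatbot = AISystem("chat assistant", performance=0.6, generality=0.6, agency=0.0)
tool_user = AISystem("chat assistant + tools", performance=0.6, generality=0.6, agency=0.2)

print(classify(chess_engine))  # savant
print(classify(chatbot))       # shallow mid-generalist
print(classify(tool_user))     # quasi-agentic shallow mid-generalist
```

The point of the sketch is that the categories differ along generality and agency, not raw performance: a chess engine can outperform every human in its domain and still be a savant.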

While these systems pose societal risks related to bias, fairness, privacy, and exploitation in the wrong hands, they do not present existential risks. However, AI’s increasing generality and performance may displace a growing number of jobs, warranting attention and action from society and governments. Technological advancements will also create new purposes, and the successful transition of individuals and societies into this new era will largely depend on utilizing technology effectively. Equally important is people’s active participation in governance and political processes to ensure a fair and balanced distribution of the benefits derived from AI advancements.

Reevaluating Perspectives: Scientists’ Shifting Stances

Prominent scientists like Geoffrey Hinton and Yoshua Bengio have recently shifted their focus toward AI safety. This change in perspective stems from the emergence of systems like ChatGPT, which exhibit remarkable generality across diverse tasks. Concerns arise when deep performance is rapidly combined with broad generality. Specifically, systems with high performance, high generality, and significant agency (which do not currently exist and may take a while to develop) can, under certain conditions, pose existential risks, as we will explore further.

Notable figures in the field hold differing views on the matter:

  • Optimistic and Focused on the Upsides: Yann LeCun, Andrew Ng, and Marc Andreessen tend to emphasize the positive aspects of AI, often downplaying the associated risks as small or manageable. They seem to implicitly assume a lack of agency in future AIs.
  • Balancing the Upside with the Downside: Geoffrey Hinton, Yoshua Bengio, and Roman Yampolskiy adopt a more cautious approach, acknowledging the high potential of AI while not ignoring the potential downsides. They are wary of developing agency in future AIs.
  • Optimistic with a Neutral Outlook: Jürgen Schmidhuber leans toward the positive aspects of AI, suggesting that powerful AI agents may not have an interest in harming humanity and might simply ignore us and go about their own business, much like in the 2013 movie “Her.”

Stay tuned for our upcoming post, where we will explore this topic further.

This is a copy of the post from my personal Substack, “Peregrine”.

Written on: June 08, 2023

Written by: Mario Brcic
