Reviewing the Plausibility of Advanced General Intelligence

[Epistemological Status: Speculation by someone with a limited background in artificial intelligence looking for a better understanding of the situation. I changed my mind three times during the writing of this article and it can very easily change again.]


Introduction

The possibility of advanced general intelligence (AGI), along with the associated risks, has been discussed in depth by people more knowledgeable than I am. Still, from a practical perspective, AGI can be understood as machines intelligent and capable enough to outcompete us for resources while pursuing goals potentially unrelated to human morality. Furthermore, attempts to build machines whose goals explicitly align with human morality will be risky, because those goals will be pursued with superhuman creativity in ways we cannot anticipate. This raises the possibility of massive, irreversible consequences, including but not limited to the complete and permanent destruction of life in our region of the universe and the mass production and enslavement of sentient beings.

However, regardless of what artificial intelligence (AI) ultimately does, one thing is certain: it will not want us doing things that conflict with its goals, and it will be smart enough to stop any attempt that detracts from them. Furthermore, if it lacks human morality, it will not want us acting morally when we could instead be working on things it deems useful. Put together, these points mean that the rise of a superhuman general intelligence corresponds to the end of human morality itself.

This concern is alarming. However, as with any concern about a hypothetical technology, the natural next step is to evaluate when that technology might actually be developed. The purpose of this blog post is to give a layman's account of some of the existing capabilities and limitations of current AI technology, with the aim of clarifying an estimated timeframe for development.

Summary of My Approach and Argument

In the past, I have held the intuition that, while AI alignment is important because of the obviously massive far-future concerns, it is sufficiently far off that I could treat my own interactions with the field as relatively unimportant, given the influx of attention from smart people with stronger backgrounds than my own. However, after reading this post from Paul Christiano on prosaic AI (the possibility that AGI emerges naturally from our existing understanding of machine-learning algorithms), I felt the need to re-examine my attitude towards the current state of AI alignment. He suspects that “there is a reasonable chance (>10%) that we will build prosaic AGI” and also that “it is very likely that there will be important similarities between alignment for prosaic AGI and alignment for whatever kind of AGI we actually build.”

These claims motivated me to entertain the possibility of an imminent rise of superintelligence by sketching an informal model of how a hypothetical AGI might work and noting the current technology gaps that prevent us from building it. To summarize:

  1. Existing AI systems like AlphaGo have displayed a spectacular level of competence and demonstrate that existing technology can dramatically outcompete humanity on problems expected to require creativity and intelligence.
  2. The main gap between current AI and an AI that can pull an AlphaGo on humanity at everything ultimately boils down to fundamental challenges in selecting and strategically applying diverse algorithmic strategies.
  3. This issue is, perhaps unintentionally, an area of active research due to its connections with transfer learning, making neural networks comprehensible, and the unification of probability and logic. On one hand, this makes me worry that it might get resolved sometime soon (within the next 25 to 50 years). On the other, these problems are legitimately hard, and the latter two have been around for a long time (more than thirty years). On a stranger third hand, neural networks might inadvertently solve the algorithm-selection problem in point 2 for themselves if they try hard enough.
  4. Between these possibilities and the specific problems that may be solved, I do not expect superhuman general AI within the next ten years, but I do think that, within the next fifty, humanity is at serious risk.

The State of General Artificial Intelligence

Layman’s Description of Processes Yielding Intelligence

To understand how AGI might come to dominate the Earth, it is useful to look back on how human intelligence brought us to dominate the Earth and notice how far AI has come. Humans have mainly dominated the Earth through two reductively identical but conceptually distinct processes: learning tasks and automating tasks. These have respectively allowed humanity to figure out things nothing else has figured out and to leverage that knowledge at scales greater than anything else can.

Examples of Learning Tasks

Learning tasks generally involve the discovery of relationships between things we can easily figure out and things we cannot. This includes numbers (which allow us to relate the amount of something easy to count to the amounts of things that are hard to count), mathematical functions (which allow us to describe generically how amounts of something easy to measure imply knowledge about things that are hard to measure), geographical maps (which allow us to relate images on a piece of paper to the actions we must take to get somewhere), and the written word (which allows us to relate strings of text to knowledge that is hard to figure out on its own).

Examples of Automating Tasks

Automating tasks generally involves the discovery of relationships between things we can do easily and things we cannot do easily (which, technically, is equivalent to the first category from a reductionist standpoint but varies significantly in applicability). This includes mechanical devices (which allow us to relate simple movements that are easy for us to complex movements that are hard for us), computers (which allow us to relate actions like typing and pressing buttons to extreme amounts of computation), and software like Mathematica (which allows us to automate symbolic manipulation).

Layman’s Description of Some Current AI Capabilities

Based on the success humanity has had with learning and automating tasks, I suspect that an AGI which beats us at both in enough strategically valuable domains will be capable of overpowering humanity. Given this, what can AI currently do? I think the following generalities cover it:

  1. Repeatedly perform any action explicitly defined as a specific sequence of movements better (i.e. faster and more reliably) than humans. This includes manufacturing, computation, and essentially the entire field of automating tasks, provided the tasks themselves are explicitly defined.
  2. Identify relationships between observations in data-sets better than humans, so long as those data-sets are structurally formatted in ways designed for the AI (a toy illustration follows this list).
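
As a toy illustration of the second capability, here is a minimal sketch in which a plain least-squares fit recovers a hidden numerical relationship from a synthetic, cleanly formatted data-set; all numbers are made up for the example.

```python
# Toy illustration of capability 2: given a cleanly formatted data-set,
# even a simple least-squares fit recovers a hidden relationship far
# faster and more reliably than a human scanning rows by eye.
# (Synthetic data; every number here is invented for the example.)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # three measured quantities
true_weights = np.array([2.0, -1.0, 0.5])      # hidden relationship
y = X @ true_weights + rng.normal(scale=0.1, size=1000)

# Ordinary least squares: solve for the weights that best explain y.
recovered, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
print(recovered[:3])   # approximately [2.0, -1.0, 0.5]
```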

A Rough Model for a Generally Intelligent AI

These achievements are impressive, and it is tempting to make a few jumps and imagine that, if an AI applies several dozen heuristics across important fields in the modern world, notices a bunch of relationships between them that humans cannot, and acts cleverly on those relationships in ways we do not expect, it might take over the world. In fact, only a little imagination is needed to envision systems that we would expect to start behaving smarter than us. Consider a system (sketched in code after this list) that

  1. Takes in both messy (i.e. sensory) data and clean, formalized data where it already exists
  2. Has access to the entire kitchen sink of machine-learning algorithms ranging from linear regression and decision trees up to neural networks.
  3. Has evaluative systems (themselves possibly neural networks) that try to optimize which machine-learning algorithm is used to solve a given problem
  4. Implements evaluative systems at each step (defined appropriately and differently for each algorithm, e.g. the depth of a tree or the layer of a neural network) to identify whether the current step could be solved more effectively by handing the partial result to a different algorithm.
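
Below is a minimal, purely hypothetical sketch of the control loop this list describes, with off-the-shelf scikit-learn models standing in for the algorithmic kitchen sink and cross-validation standing in for the evaluative system. The crucial hand-off step from point 4 is exactly the missing innovation discussed next, so it appears only as a comment.

```python
# A hypothetical sketch of the system described above: a pool of very
# different learners, plus an evaluator that decides which learner to
# use. Illustrative only, not a real or recommended architecture.
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

CANDIDATES = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=5),
    "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
}

def pick_and_solve(X, y):
    """Score each candidate algorithm on the problem and keep the best.
    A real version of point 4 would also hand partially solved problems
    between algorithms mid-run; that interface is the missing piece
    discussed in the text, so it is deliberately not implemented here."""
    scores = {name: cross_val_score(model, X, y, cv=3).mean()
              for name, model in CANDIDATES.items()}
    best = max(scores, key=scores.get)
    return CANDIDATES[best].fit(X, y), scores
```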

The key factor that makes a system like this scary to me is that it seems to capture the main elements of human problem solving. Like humans, the system can review a given problem, make predictions about how that problem might be solved, and repeatedly test those predictions until the problem is solved. Unlike humans, however, the system has direct access to a much wider range of efficient approaches. The human brain will never directly implement a decision tree or unconsciously carry out the algebra behind a linear regression; instead, it builds sensory interfaces (pen and paper, screens, keyboards) through which it uses external architectures that do. Because the system I described does not experience this sensory bottleneck, I would expect it to work more intelligently than humans even before considering advantages like scalable memory and computational power.

Fortunately, something like this system cannot currently be built, because the key innovation allowing fundamentally different problem-solving systems to interface with each other has not yet been developed. Still, it is unintentionally being approached from a different angle: people (one problem-solving system) worried about not understanding neural networks (another problem-solving system) are trying to figure out how to explain them, sometimes by using other neural networks.

Though this is not the only approach to understanding neural networks symbolically, it indicates something important: a system that learns generally in the way I have described is, if not feasible right now, going to become increasingly feasible as a result of active and motivated research. Moreover, even ignoring active research, this issue might be resolved simply by introducing more neural networks, without any dramatically new understanding of how to symbolically represent associative knowledge. An advanced system might figure it out on its own. In the case of the brain, one already has.
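
One generic version of the "use one problem-solving system to explain another" move is to distill a trained network into an interpretable surrogate model. The sketch below does this with a shallow decision tree; it is my own illustration of the general trick, not the specific research the paragraph above alludes to.

```python
# Fit an interpretable surrogate (a shallow decision tree) to mimic a
# trained neural network's outputs. The surrogate is trained on the
# network's predictions, so its if/then rules approximate what the
# network learned rather than the raw data.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, net.predict(X))
print(export_text(surrogate))   # human-readable rules describing the network
```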

Commentary on Timelines of the Rough Model

Based solely on the above, I would be tempted to conclude that AGI could be a serious and imminent risk. Modern research is actively working to address the model's main limitation, and even if that limitation is somehow theoretically unsolvable by us, existing AI architectures might figure out a solution anyway; if so, humanity is in even greater danger. Given current trends in the amount of computational power devoted to AI experiments, within the next 3.5 to 10 years we can expect experiments to invest enough computational power to simulate a human brain for the duration of a human childhood, and to simulate a human playing the number of games AlphaGo Zero played to become superhuman at Go. The upshot is that I would only believe that AGI is not imminent if we have reason to believe these problems are beyond the reach of both current knowledge and existing computational power.
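
For concreteness, here is a back-of-the-envelope version of the "simulate a brain for a childhood" figure. The brain-compute number is an assumption I am supplying, not a number from the trend analysis itself; published estimates span many orders of magnitude.

```python
# Rough back-of-envelope only. BRAIN_FLOPS is an assumed figure
# (literature estimates range from roughly 1e13 to 1e18 FLOP/s).
BRAIN_FLOPS = 1e15                          # assumed effective FLOP/s of a brain
CHILDHOOD_SECONDS = 10 * 365 * 24 * 3600    # ~10 years of childhood

total_flop = BRAIN_FLOPS * CHILDHOOD_SECONDS
print(f"{total_flop:.1e} FLOP")             # ~3e23 FLOP under these assumptions
```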

Gaps in General Artificial Intelligence

Fortunately, the claim that the brain’s methods for learning are better than current computational methods for learning is reasonable and can be understood in the context of unmet benchmarks for AI progress. I advise looking at the full list for context, but I have picked a few and commented on potential solutions below. As a side note, many of the benchmarks correspond to explicit demonstrations of AI performing general tasks; these are important, but I skip them because I am mainly interested in demonstrations of fundamental advances in intelligence rather than proofs that intelligence has been achieved.

  1. Problem: One-shot learning. Modern AI (neural networks) generally outperforms humans at analyzing large masses of data but dramatically underperforms when extrapolating from very small amounts of data. Concretely, modern AI cannot, after seeing one labeled image of a new object, recognize that object in a range of photographs. Potential solution: High-dimensional transfer learning. The capacity to identify an object in different pictures requires internal models of how light interacts with objects under different conditions, how the object looks at different angles, and how other objects in the way may affect views of the target object. In other words, unlike current image-recognition algorithms, which are driven directly by common associations with a specific object, an algorithm that solves this problem must have integrated associations across a massive and non-obvious range of fields including spatial awareness, light reflection, and the space of other objects. (A toy sketch of this idea follows the list.)
  2. Problem: Defeat the best Go players while training on only as many games as the best Go players have played. For reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life. Potential solution: High-dimensional transfer learning. Again, modern AI performs well with massive amounts of data but poorly with small amounts. I suspect this reflects the network’s tendency to develop associations and intuitions directly about a specific game instead of reasoning out associations from a higher-level understanding of strategy developed across a wider set of interactions.
  3. Problem: Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks. Potential solution: Merge neural networks with high-level reasoning using magic I don’t understand. Nevertheless, it is worth noting that reasoning is generally best implemented symbolically. In principle, neural networks could achieve high-level reasoning themselves, but I am not sure how confident I am in this.
  4. Problem: After spending time in a virtual world, output the differential equations governing that world in symbolic form. For example, the agent is placed in a game engine where Newtonian mechanics holds exactly, and the agent is then able to conduct experiments with a ball and output Newton’s laws of motion. Potential solution: Merge neural networks with high-level reasoning using magic I don’t understand, and then layer high-dimensional transfer learning on top to facilitate generalization.
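
As a toy sketch of the embedding-reuse idea behind "high-dimensional transfer learning" (referenced in the first item above), the snippet below classifies a new object from a single labeled example by nearest neighbour in a feature space. The `embed` function is a hypothetical stand-in for a real pretrained feature extractor; here it is a random projection purely so the sketch runs end to end.

```python
# One-shot recognition by re-using features learned elsewhere:
# classify a query image by comparing its embedding to one labeled
# example per class. The "pretrained" features are a random projection
# used only to keep this sketch self-contained.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(4096, 128))    # stand-in "pretrained" features

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor; a real one would be a network
    trained on large amounts of unrelated visual data."""
    return image_pixels.flatten() @ PROJECTION   # expects 64x64 images here

def one_shot_classify(query: np.ndarray, labelled_examples: dict) -> str:
    """labelled_examples maps each label to a single example image."""
    q = embed(query)
    dists = {label: np.linalg.norm(q - embed(img))
             for label, img in labelled_examples.items()}
    return min(dists, key=dists.get)
```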

Overall, the existence of these problems reassures me that existing technology cannot produce superhuman general AI. While neural networks have been impressively successful at divining associations between complex, often hierarchical concepts, the ability to divine associations on one data-set using a different one is lacking. Furthermore, the ability to propose, investigate, and act on explicit lines of reasoning is also lacking. These implied gaps in the list of AGI milestones (respectively, a method of solving new problems using solutions from old problems, and a usually symbolic method of solving problems that can be applied across a range of different domains) also sound like the ingredients for a framework that allows fundamentally different problem-solving systems to interface with each other. This gives me some confidence in my model.

The upshot is that the hypothetical system I described above, which made me fear imminent artificial intelligence, could not currently adapt to and learn from the data that composes reality fast enough to compete with humans. Furthermore, the inability to generalize through explicit reasoning would prevent the AI from making useful predictions in many cases where humans can.

Implications of General AI Gaps

AI Gap Resolution Through Good-Enough, Engineered Solutions

Along with sensory processing, the ability to make predictions about different situations is highly useful in an evolutionary context, and I suspect that the human brain may be uniquely optimized for success in the realms of rationality and transfer learning. If this is true, and a billion years of evolution has legitimately treated these problems as targets of serious optimization, then the problem of AGI could be legitimately hard in ways that other problems are not. Some observations support this:

  1. A full theory of probabilistic rationality (which would likely be invaluable in blending reasoning capabilities with existing neural networks) is still being worked out and, in some senses, can be seen as a historic problem that has been open since the 1900s. This indicates that rationality is hard.
  2. While transfer learning has made a lot of progress, its success often depends on domain-specific relationships between fields. This raises the possibility that human intelligence has not solved transfer learning in some fundamental way but has instead bumbled into an architecture that performs these tasks well enough. Better solutions certainly exist but might be more reachable through complex optimization processes than through deep understanding.

If AGI is indeed more likely to be reached through optimization and engineering processes, humanity will also need to find an alternative to evolution in order to solve these problems. Simulating a billion years of evolution is hard and certainly many decades away. Nevertheless, humanity has a lot of experience solving complicated engineering problems and, in the same way that neural networks themselves were inspired by biology, the solutions to these problems could be too. Moreover, humanity is better at solving problems than blind evolution in many cases, so even if a brute-force evolution strategy is infeasible, one might reasonably expect another strategy to pan out. As a side note, a scenario in which this happens would probably resemble a slow AI take-off, as people tinker with gradually improving implementations of general AI.

AI Gap Resolution Through Deep Understanding

In my opinion, a scenario in which transfer learning and rationality are solved through deep understanding rather than optimization carries a greater risk of imminent superintelligence. First, while both the discovery of such formalizations and the development of engineered solutions would probably involve incremental progress, the increments associated with new formalizations are bigger than those associated with engineered solutions, because the larger design space of the latter implies a greater number of subproblems. Second, and more importantly, deeper understanding of how things work generally moves progress faster than trial-and-error engineering successes. Third, I think existing AI progress mostly indicates that future developments will be driven by deep understanding rather than by computational investment, given the apparent abundance of ways to increase computational efficiency that currently go unpursued.

Synergies Between Transfer Learning and Reasoning AI Gaps

The AI gaps I noted are disturbing because of their synergy with each other. An AI capable of implementing probabilistic rationality has, by virtue of rationality being a general method of reasoning about how likely different statements about the universe are, also implemented a general form of transfer learning. For instance, one AI might use neural networks that associate multiple kinds of images with the same object based on how multiple kinds of images of something else associate with that something else (i.e. conventional transfer learning), but another AI with a model of probabilistic rationality might instead leverage multiple neural-network modules that produce logical propositions and then reason over those propositions to connect an image of an object at one angle to the same object at another (see the toy sketch below). In principle, an AI like this might also use its own reasoning about logical propositions to deliberately up-weight or down-weight associations, essentially implementing a general transfer-learning algorithm on itself purely from its reasoning fundamentals. Lastly, as mentioned, this probabilistic rationality does not even need to be a complete mathematical solution to the problem of unifying probability and logic; it just needs to be a good-enough design.
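
To make the object-from-another-angle example concrete, here is a toy sketch of a reasoning layer combining propositions emitted by hypothetical perception modules. The probabilities are hard-coded stand-ins, and the independence assumption in the combination rule is mine, not something the argument above requires.

```python
# Toy sketch of the synergy described above: separate (hypothetical)
# neural modules each emit a probability for a logical proposition, and
# a small reasoning step combines them. Probabilities are hard-coded
# stand-ins for module outputs.

p_front_view = 0.9   # P("image A shows the object from the front")
p_same_scene = 0.8   # P("image B is the same scene, rotated")

# Reasoning step: if both propositions hold, image B shows the same
# object from another angle, i.e. transfer achieved by reasoning rather
# than by training an image-pair network for this object specifically.
# Assumes the two propositions are independent.
p_same_object_other_angle = p_front_view * p_same_scene
print(p_same_object_other_angle)   # 0.72 under the independence assumption
```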

Conclusion

Overall, general AI, in terms of its current capabilities, is a long way from becoming superhuman, based on what I believe boil down to current limitations in general reasoning ability and transfer learning. In terms of timelines, however, these limitations compress down to what I believe are hard but solvable computer-science problems that not only synergize with each other in ways that magnify the risk of imminent AI but are also under active research along multiple angles of attack. While I lack enough information to speculate on exactly when these issues will be resolved, my general heuristics about academic progress on problems that do not require perfect solutions suggest they will take no more than fifty years. Furthermore, as a non-expert in AI, I may be overlooking some other pathway to artificial intelligence that could also succeed, adding a level of unquantified risk. In short, we should be very concerned about artificial intelligence.

One potential silver lining to the pathways I have described is that, if humanity finds a way to implement symbolic reasoning in general AI, it will be much easier to implement safety features on it than on a black-box neural network, provided those features exist. Unfortunately, as far as I know, they do not currently exist.
