Greg Robison

Beyond the Code

THE EMERGENCE OF INTELLIGENT PROPERTIES IN AI


Thus, although each effect is the resultant of its components, the product of its factors, we cannot always trace the steps of the process, so as to see in the product the mode of operation of each factor. In this latter case, I propose to call the effect an emergent. It arises out of the combined agencies, but in a form which does not display the agents in action. -- George Henry Lewes

INTRODUCTION

Large Language Models (LLMs) such as ChatGPT are transforming our work and personal lives, yet they are designed only to predict the most likely next words in a sentence based on training on vast text datasets including books, articles, code, and internet conversations. However, these systems seem to understand human language at a very subtle level and generate convincing interactions, stories, and arguments at a human level. How can “glorified autocomplete” achieve such complicated capabilities if it only learns word probabilities? Did it learn something else from training, perhaps abstract concepts, relational comparisons, or even reasoning abilities? Perhaps these abilities emerged during training.
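To see why “glorified autocomplete” is a fair, if reductive, description, here is a toy next-word predictor built from nothing but word-pair counts. This is a minimal sketch for intuition only: real LLMs use transformer networks over subword tokens, not word bigrams, and the tiny training text here is made up.

```python
# Toy "autocomplete": predict the next word purely from bigram counts.
# Illustrative only -- real LLMs learn far richer statistics than this.
import random
from collections import defaultdict, Counter

random.seed(0)
training_text = "the cat sat on the mat . the dog sat on the rug ."

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    if not followers:
        return "."
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one likely-next-word at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```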


“Emergent properties” are complex attributes or behaviors that appear from the interaction of simpler elements within a system and are not predictable from the individual parts alone. The idea is rooted in systems theory and biology but is now becoming relevant in the context of artificial intelligence. Emergent properties in complex systems, like natural ecosystems, human societies, bundles of neurons, or now computational models, represent a level of sophistication where something emerges that is more than the sum of the parts. They arise from the intricate interplay of the parts and their rules of interaction, leading to new behaviors or capabilities that cannot be directly explained by the properties of the individual parts. It is a highly abstract concept, so we’ll run through a few examples shortly.


Emergent properties are all around us (Dall-e3)

What do emergent properties mean for artificial intelligence? They are important for understanding current LLMs’ capabilities and future potential. We might see emergent properties in LLMs in levels of understanding, creativity, or problem-solving ability that go beyond the explicit instructions and architecture of these models. They challenge our notions of what machines can and cannot do and of their potential to truly understand, learn, and even innovate. On the other hand, they raise questions about predictability, control, and ethics in AI development. Understanding the emergent properties of AI is necessary to grasp what these models can really do and to guide the future of AI development.


UNDERSTANDING EMERGENT PROPERTIES

In complex systems, properties can emerge that the individual parts do not possess. Many systems are “additive”, with outcomes predictable from the parts of the system. Additive systems follow a linear, direct relationship among the elements of the system, like a traditional internal combustion engine where the total power of the engine is the sum of the power contributions from the individual cylinders (if the engine has six cylinders and each cylinder produces 50 hp, the total output of the engine will be 300 hp). Emergent properties, on the other hand, result from interactions and relationships between parts, often leading to unexpected outcomes or behaviors.


There is a higher level of complexity and organization where the whole becomes greater than, and fundamentally different from, the sum of its parts, representing a qualitative leap in abilities.

Let’s explore some examples.

Bird flocks demonstrate emergent properties through the collective behavior of individual birds following simple rules. Each bird in the flock adjusts its position and velocity based on the movements of its nearest neighbors, maintaining a certain distance and alignment with the others. No single bird leads the flock or has a big-picture view of the entire flock's shape or direction. Yet, from these local interactions, complex patterns emerge, such as the fluid-like motion of starling murmurations or the V-formation of migrating geese. The flock exhibits properties like cohesion, coordination, and responsiveness to threats that are not present in any individual bird but emerge from their interactions as a group. Here’s a cool simulation of murmuration where an overall pattern emerges from a few simple rules:


Boids converging into complex patterns despite simple rules
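In code, those few rules are startlingly small. Here is a minimal boids-style sketch of the three classic rules (separation, alignment, cohesion), assuming NumPy; the constants are arbitrary illustrative values, not parameters from any published simulation.

```python
# Minimal boids sketch: each bird follows three local rules -- separation,
# alignment, cohesion -- yet the flock as a whole moves coherently.
import numpy as np

N, STEPS = 50, 100
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))   # positions
vel = rng.uniform(-1, 1, (N, 2))    # velocities

def step(pos, vel, radius=10.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]      # steer toward neighbors' center
        alignment = vel[near].mean(axis=0) - vel[i]     # match neighbors' heading
        separation = (pos[i] - pos[near]).sum(axis=0)   # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.005 * separation
    # Cap speed so the flock stays coherent.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 2.0, new_vel / np.maximum(speed, 1e-9) * 2.0, new_vel)
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)
print("flock spread:", pos.std(axis=0))  # cohesion shows up as a shrinking spread
```

No line of this code describes a flock; flocking is what you see when all the local updates run together.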

Temperature is an emergent property that appears from the collective motion and interaction of particles in a system. At the microscopic level, particles in a substance move with varying velocities and collide with each other, transferring energy through these collisions. The average kinetic energy of these particles determines the temperature of the substance. However, temperature is not a property of any single particle - it emerges from the statistical distribution of kinetic energies across many particles. Temperature is a macroscopic property that describes the system as a whole and emerges from the microscopic behavior of its particles.


Temperature emerges from the activity of atoms (Dall-e3)
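A back-of-envelope sketch makes this concrete: sample many particle velocities, and the ensemble's average kinetic energy defines a temperature via the kinetic-theory relation <KE> = (3/2)·k_B·T, even though no individual particle has one. The argon mass and the 300 K target below are just illustrative choices.

```python
# Temperature as a statistic over many particles: no single particle "has"
# a temperature, but the ensemble's mean kinetic energy defines one via
# <KE> = (3/2) * k_B * T (monatomic ideal gas).
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 6.63e-26      # mass of one argon atom, kg

rng = np.random.default_rng(1)
# Maxwell-Boltzmann velocities near room temperature: each velocity
# component is Gaussian with variance k_B * T / m.
T_TRUE = 300.0
sigma = np.sqrt(K_B * T_TRUE / MASS)
velocities = rng.normal(0.0, sigma, size=(100_000, 3))  # m/s

kinetic = 0.5 * MASS * (velocities ** 2).sum(axis=1)  # per-particle KE, J
temperature = (2.0 / 3.0) * kinetic.mean() / K_B      # ensemble-level property
print(f"temperature of the ensemble: {temperature:.1f} K")  # ~300 K
```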

The most interesting example to me is consciousness, which seemingly emerges from the complex interactions within the brain (and we’re still unraveling what our consciousness even might be). The brain consists of billions of neurons, each following relatively simple rules of activation and communication. These neurons are organized into complex networks and subsystems that process information, store memories, and generate thoughts and perceptions. From the coordinated activity of these neurons, the subjective experience of consciousness emerges, including self-awareness, qualia (subjective sensory experiences, or, to quote Dennett, “the ways things seem to us”), and the unity of our various forms of sensory perception, like seamlessly marrying the sight of someone’s lips and the sound of their voice. Consciousness cannot be reduced to the properties of individual neurons - it arises from their collective interactions and the integration of information across different brain regions and structures. “I think, therefore I am” emerged from Descartes’ complex system of individual neurons working together.


Our consciousness emerges from the insane complexity of the human brain (Dall-e3)

EMERGENT PROPERTIES IN LARGE LANGUAGE MODELS

Now that we have a better handle on emergent properties, let’s look for evidence in today’s LLMs. We’d look for abilities or behaviors that are not explicitly programmed but arise from the complex interplay of the model’s architecture and training data during training and usage. These properties might reflect a level of understanding, reasoning, or creativity that surpasses simple pattern recognition or auto-completion based on direct inputs. The transformer neural network architecture of current LLMs solely finds patterns in the order of words in its training data. That’s it. But these models seem to develop a nuanced understanding of language subtleties (even across languages) and representations of abstract concepts, even though the programmers who created the models did not directly add these features. If these capabilities can emerge in artificial neural networks, models can better mimic human-like intelligence and are better prepared to power autonomous AI systems.


But the most interesting thing about it is that it captures the depth of poetry. So it somehow finds in that latent space meaning that’s beyond just the words and the translation. That I find is just phenomenal. -- Satya Nadella on Freakonomics Radio on why he wanted Microsoft to invest in OpenAI

Advanced reasoning, creativity, and the understanding of complex concepts are unexpected capabilities of LLMs that appear to mimic the depths of our human cognition. For example, LLMs can generate creative stories and poems, solve complex logical puzzles, or make up almost-funny jokes. These outputs suggest some high level of processing that goes beyond mere statistical analysis of language. Recent research has revealed some concrete examples of emergent behaviors in current LLMs.


CODEX'S PROGRAMMING CAPABILITIES

OpenAI’s Codex (which powered GitHub’s Copilot) is a GPT model trained on a number of programming languages and a huge amount of code that can create functional programs from natural language prompts like “Write a Python script that creates a calendar app and to-do list where I can easily add entries and see future entries.” The emergent property of code generation suggests the LLM has developed some understanding of programming concepts, syntax, and logic that allows it to create working code for novel problems. Codex outscores most students on CS1 and CS2 computer science exams. Coding is hard! I can’t do it well, but GPT-4 and Anthropic’s Claude 3 Opus are competent coders: I only need to describe what I want in a prompt and they create a fully usable script for me to run. If I were given a bunch of Python repositories, it would take me a long time to learn Python…


Claude 3 Opus writes Python code way better than I could
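For a sense of what such a prompt can yield, here is the kind of minimal to-do list script a model might produce. This is my own illustrative sketch, not actual Codex or Claude output; the todo.json filename is an arbitrary choice.

```python
# Sketch of the kind of to-do list script an LLM might generate from the
# natural-language prompt above. Illustrative, not actual model output.
import datetime
import json
from pathlib import Path

TODO_FILE = Path("todo.json")  # hypothetical storage location

def load_entries():
    return json.loads(TODO_FILE.read_text()) if TODO_FILE.exists() else []

def add_entry(text, date_str):
    """Add a to-do item with a YYYY-MM-DD due date."""
    entries = load_entries()
    entries.append({"text": text, "due": date_str})
    TODO_FILE.write_text(json.dumps(entries, indent=2))

def future_entries():
    """Return entries due today or later, soonest first."""
    today = datetime.date.today().isoformat()
    upcoming = [e for e in load_entries() if e["due"] >= today]
    return sorted(upcoming, key=lambda e: e["due"])

add_entry("Write blog post on emergence", "2030-12-01")
for entry in future_entries():
    print(f'{entry["due"]}: {entry["text"]}')
```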

DALL-E'S IMAGE GENERATION

Another type of neural network, the diffusion model (like OpenAI’s DALL-E 3), exhibits some emergent understanding of visual concepts, object relationships, and artistic styles that allows it to create coherent images that can even exhibit creativity. This understanding comes solely from training on millions of image-text pairs – it’s never taught that apples come in only a few colors like red, green, and yellow, but never blue. An emergent understanding of concepts like apples and kiwis from its training data allows us to create novel images like an apple-kiwi hybrid, something it never encountered in training. To come up with what a conceivable hybrid could look like, it must understand something about “appleness” and “kiwiness” to combine them, right? This “understanding” of semantic concepts is an emergent property.


Introducing the Kiwapple (Dall-e3)

LLMS IN MATHEMATICS

LLMs also show some promise in mathematics, a notoriously difficult subject for these models (stay tuned for this blog post!). When trained on large corpora of mathematical text and data, they can understand and generate mathematical content, including equations, proofs, and problem-solving steps. DeepMind’s FunSearch uses LLMs to discover new solutions to challenging mathematical problems: it has found new constructions for the cap-set problem and better bin-packing heuristics. Additionally, LeanDojo is using LLMs to prove theorems by enabling collaboration between mathematicians and specialized models. Being able to understand and contribute to mathematical proofs reflects some meta-understanding of mathematics that emerged from training data.
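For a flavor of the cap-set problem FunSearch tackled: a cap set is a set of vectors over F_3^n in which no three distinct vectors sum to zero mod 3 componentwise, i.e., no three points lie on a line. A minimal checker for that property, as a sketch of what any discovered construction must satisfy:

```python
# Checker for the cap-set property: a set of vectors over F_3^n is a cap set
# if no three distinct vectors sum to zero mod 3 (no three points on a line).
from itertools import combinations

def is_cap_set(vectors):
    """vectors: list of equal-length tuples with entries in {0, 1, 2}."""
    n = len(vectors[0])
    for a, b, c in combinations(vectors, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False  # a, b, c are collinear in F_3^n
    return True

# A maximum-size cap set in dimension 2 has 4 points:
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))           # True
# Adding (2, 2) creates the line (0,0), (1,1), (2,2):
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]))   # False
```

FunSearch's contribution was not a checker like this but the search itself: roughly, LLM-proposed programs were scored by how large a valid cap set they construct.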


CHESS REPRESENTATION

In my favorite example of emergent properties, an LLM trained on a dataset of chess games has shown an impressive ability to understand and analyze chess positions. Through this training dataset, it has developed an emergent understanding of the game that seems to go beyond simple pattern recognition or rule-based play. When presented with a mid-game chess board, the model can not only identify key pieces, but it can also assess the strategic implications of the position and suggest strong moves. The model even develops an internal representation of legal moves, which can be visualized.


Emergent representation of chess piece options

Even though it was never programmed with the rules of chess, seeing enough games allowed the model to abstract some understanding of the game, including winning strategies. Current LLMs can play chess above a novice level (as measured by Elo rating).
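To appreciate what the model had to infer, compare it with a rule engine. The sketch below uses the python-chess library (pip install chess), where the rules are hard-coded; the LLM gets no such engine, only move sequences from its training games, yet learns to approximate the same legal-move structure.

```python
# The legal-move structure a chess LLM must infer from game transcripts alone,
# here computed exactly by the hard-coded rules of the python-chess library.
import chess

board = chess.Board()   # standard starting position
board.push_san("e4")    # 1. e4
board.push_san("e5")    # 1... e5

legal = sorted(board.san(m) for m in board.legal_moves)
print(f"{len(legal)} legal moves for White:", legal[:8], "...")
```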


THE DEBATE OVER EMERGENT PROPERTIES IN AI

It’s clear that something emerges from the combination of neural network and training data, but it’s not entirely clear what that something is and how similar or different it is from our conceptual understanding. What is clear is that these models are more than the sum of their parts. They are now approaching, and sometimes exceeding, human performance at programming, reasoning, and even creative tasks. Perhaps new architectures and multimodal training data (why limit the inputs when we can add images, video, etc.?) might provide the basis for even higher-order emergent properties like real reasoning or even consciousness. But let’s not get ahead of ourselves…


Not everyone is convinced that these abilities are truly emergent rather than artifacts of training on such extensive datasets. The models do not have a human-like understanding of what an apple is, being limited to online images and textual descriptions. The argument is that they are simply simulating understanding via highly advanced pattern recognition but lack the real awareness or insight that characterizes intelligence. Trying to find the line between mimicry and true understanding often involves benchmarks that assess not only things like reasoning or logic but also the models’ underlying rationale. However, finding this line is difficult, and I often vacillate between finding these models deep and hollow. It’s certainly not how concepts and ideas emerge from our brain cells, but that doesn’t mean that nothing interesting emerges.


Sometimes I’m not sure if it’s just math or magic…

IMPLICATIONS OF EMERGENT PROPERTIES FOR AI DEVELOPMENT

As AI systems get more complex with even more training data, there will be behaviors that appear to transcend their programmed capabilities, and we need to test and monitor for these abilities. Beyond coding and writing Shakespeare-like sonnets, future emergent capabilities could include the ability to self-organize, adapt, or evolve in unanticipated ways. By using benchmarks like AGIEval, we can hopefully measure and track potential emergent abilities, and perhaps even recognize when a breakthrough toward artificial general intelligence (AGI) is happening.
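What might that monitoring look like in practice? One simple signal is a benchmark score that jumps discontinuously with model scale instead of improving smoothly. A sketch with made-up numbers and an arbitrary threshold:

```python
# Sketch of monitoring for emergent abilities: flag a benchmark whose score
# jumps sharply between model scales rather than improving smoothly.
# The scores and threshold are illustrative, not real AGIEval results.
scores_by_scale = {  # model size in billions of parameters -> accuracy
    0.1: 0.02, 1: 0.03, 10: 0.05, 100: 0.61, 1000: 0.78,
}

def find_jumps(scores, threshold=0.25):
    """Return consecutive scale pairs where accuracy jumps by > threshold."""
    scales = sorted(scores)
    return [
        (lo, hi)
        for lo, hi in zip(scales, scales[1:])
        if scores[hi] - scores[lo] > threshold
    ]

for lo, hi in find_jumps(scores_by_scale):
    print(f"possible emergent ability between {lo}B and {hi}B parameters")
```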


Who knows what properties may emerge from future AIs? (Dall-e3)

Emergent properties, by their very nature, can be unpredictable and may lead to unintended outcomes, posing safety and ethical risks. For example, a legal AI system might develop unexpected strategies for recommending sentences that inadvertently violate the ethical guidelines set by oversight groups. And if AI systems start to exhibit behaviors that resemble true understanding or consciousness, we’ll need to think through important and troubling questions about rights, responsibilities, and the moral status of AI. We’ll need to completely reevaluate our relationship with technology. In the meantime, we need the ability to respond to the dynamic nature of AI evolution with flexible regulatory frameworks that include monitoring, managing, and mitigating potentially harmful emergent behaviors. We also need transparent dialogue between AI experts, ethicists, policymakers, and the public to create an environment where we can harness emergent capabilities for the greater good.


Emergent properties could spur innovation and revolutionize many fields. In medicine, insightful AI could draw diagnostic insights from health data, body scans, and personal and family history to personalize treatments. Or AI could simulate complex biological systems like the brain for research purposes. In law, AI could analyze immense amounts of legal documents to recommend new ways to understand and apply the law. Creative industries are already benefiting from the emergent properties of image generators, and further innovations in design, music, literature, and art could push the boundaries of creativity.


CONCLUSION

Emergent properties can explain some of LLMs’ complex, and often surprising, outputs and point towards a future where AI’s capabilities extend far beyond our current expectations. Our discussion highlights the importance of recognizing and harnessing emergent cognition as a tool for innovation across industries. Despite the debate about whether AI truly understands or merely mimics higher-order functions, the practical evidence is becoming clearer that AI may soon replicate or even surpass some human cognitive abilities. While our emergent consciousness is unique to our brains, some other kind of consciousness may one day emerge within AI, so we need to continue exploring possible emergent abilities with a critical eye and ethical consideration. AI is transforming our world, and the better we humans can harness its capabilities, the better our world can be.




