Greg Robison

Modern-Day Alchemy and AI

EXPLORING THE MYSTERIES OF LARGE LANGUAGE MODELS (LLMs)


“Alchemy is the art that separates what is useful from what is not by transforming it into its ultimate matter and essence.” --Philippus Aureolus Paracelsus

My father was a chemist, specializing in environmental chemistry and helping companies analyze their soil and water runoff for contaminants. He got his master’s degree in chemistry from MIT, so he knew his technical stuff. But when I was a kid, he also used to tell me stories about the history of chemistry and its myths, like the four elements and the vital force. The one that really caught my attention was alchemy. It turns out alchemists cared about much more than making gold: they wanted to find the underlying laws of the universe, including the ability to heal diseases. My dad was fascinated by them too, not just because they were the chemists of their time, but because he recognized in them a spirit akin to his own curiosity and love of experimentation. I hadn’t thought about alchemy in a while, but the more I work with AI, the more I feel like an alchemist. I’ll tell you why.


The Alchemical Nature of Generative AI

Alchemy was the ancient practice of seeking out the secrets of the universe, pursuing knowledge and true understanding. Like our modern-day chemists, alchemists experimented with various substances, combining and manipulating matter in hopes of achieving transmutation (converting one element into another). The practice was shrouded in mystery, with alchemists often using cryptic symbolism and esoteric language in their work. Alchemy was ultimately discredited as pseudoscience, but the desire to explore and uncover hidden truths lives on in modern scientists, including chemists like my dad.


In many ways, working with today’s large language models (LLMs) like ChatGPT, Claude, and Meta’s Llama models feels a lot like alchemy. Just as alchemists experimented with different inputs and outputs in their attempts to transform matter, those of us working closely with LLMs engage in a similar process of experimentation and iteration. We feed various prompts and datasets into these models, tweaking parameters (there are so many parameters to tweak!) and fine-tuning approaches, all in the hopes of eliciting the desired output or uncovering new capabilities. This process can be both exhilarating and frustrating, as the outputs from these models are not always predictable or easily controlled. Sometimes a seemingly minor change to the input, like adding or removing a space after a colon, can lead to different results, much like how alchemists might have been surprised by the outcomes of their experiments, failures and all.
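To make that knob-twiddling concrete, here’s a minimal sketch of a temperature sweep using the Hugging Face transformers library. The model and prompt are just placeholders, and temperature and top_p are only two of the many parameters you can tweak:

```python
# A minimal sketch of parameter tweaking with the Hugging Face
# transformers library. "gpt2" is just a small, openly available
# stand-in; swap in whatever model you actually work with.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The philosopher's stone is"
inputs = tokenizer(prompt, return_tensors="pt")

# Same prompt, different temperatures: low values give near-deterministic
# completions, higher values give wilder (sometimes incoherent) ones.
for temperature in (0.2, 0.7, 1.2):
    output = model.generate(
        **inputs,
        do_sample=True,           # sample instead of greedy decoding
        temperature=temperature,  # sharpness of the token distribution
        top_p=0.9,                # nucleus sampling cutoff
        max_new_tokens=40,
    )
    print(f"T={temperature}:", tokenizer.decode(output[0], skip_special_tokens=True))
```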

Some alchemists sought the secrets of the universe in a lab

The quest to uncover hidden knowledge and capabilities within LLMs parallels the alchemist’s pursuit of the secrets of the universe. LLMs are known for their ability to generate human-like text and perform various language tasks, but they’re also black boxes, with the true extent of their knowledge and capabilities largely hidden (did you know that with the right prompts you can coax accurate answers about illicit topics out of safety-tuned models?). Just as alchemists were associated with mysterious and sometimes inexplicable results, working with LLMs produces outputs that are difficult to explain or interpret. How these models arrive at certain results, and what knowledge they are drawing on, remains largely a mystery to be unraveled.


Eliciting Emergent Properties

While LLMs can’t transmute elements and turn lead into gold, there is a kind of gold I’m often looking to uncover: signs of true emergent properties. In the world of LLMs, emergent properties are abilities or behaviors that arise from the complex interactions within the model’s neural networks, rather than being explicitly programmed or trained for. For example, although trained on lots of Python code, ChatGPT learned enough of the patterns of programming to help write my unique game, which is definitely not in its training data. I look for evidence of higher-level cognitive skills like inductive reasoning by posing various reasoning tasks and analyzing the “reasoning” process. Another emergent property I track is creativity: LLMs can generate original poetry, stories, and rhyming song lyrics. They can also engage in limited problem-solving, coming up with novel solutions to challenges that were not part of their training data.


The emergent property many AI researchers are looking for is true reasoning, which enables flexible problem-solving. LLMs have shown the capacity to break down complex questions, identify relevant information, and provide coherent, logical responses for some tasks. For others, they fail miserably (but isn’t that pretty human too?). It’s still impressive how well they can do on reasoning tasks they were never explicitly trained on, seeming to acquire the ability through exposure to their training data.


However, reliably eliciting emergent properties in LLMs is difficult; success can depend very specifically on the model, the topic of the interaction, and how the questions are asked. It’s especially hard when we don’t understand how these incredibly complex neural networks really work. Eliciting emergent properties therefore requires constant experimentation and iteration. Researchers often take a trial-and-error approach, testing different prompts, fine-tuning techniques, and model configurations to coax out the desired properties. This process is time-consuming and resource-intensive, but it is essential for unlocking the full potential of LLMs and pushing the boundaries of what is possible with language-based AI. We can only find out what these models are truly capable of by testing, testing, testing.
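Here’s a rough sketch of what that trial-and-error loop can look like in practice. The ask_model helper is hypothetical, a stand-in for whatever API or local model you use, and the prompt phrasings and logic puzzle are just examples:

```python
# A sketch of the trial-and-error loop: vary the phrasing and the
# sampling temperature independently, and log every combination so
# surprises are reproducible. `ask_model` is a hypothetical stand-in
# for whatever API or local model you actually use.
import itertools

def ask_model(prompt: str, temperature: float) -> str:
    """Placeholder: swap in a real call to OpenAI, Anthropic, a local Llama, etc."""
    return "(model reply goes here)"

phrasings = [
    "Solve step by step: {q}",
    "Think carefully, then answer: {q}",
    "{q}",  # the bare question, as a control
]
temperatures = [0.0, 0.7]
question = "If all blickets are fribbles and no fribbles are glorks, can a blicket be a glork?"

results = []
for template, temp in itertools.product(phrasings, temperatures):
    reply = ask_model(template.format(q=question), temperature=temp)
    results.append({"template": template, "temperature": temp, "reply": reply})

# Even a simple log like this makes it obvious when a tiny prompt
# change flips the answer.
for r in results:
    print(f'T={r["temperature"]} | {r["template"][:35]:35} -> {r["reply"]}')
```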


Exploring Latent Spaces

In the quest to find signs of true creativity in these models, we AI alchemists explore the potential of latent spaces. Latent spaces are the high-dimensional, abstract representations that a model learns during training. An LLM can clearly describe a dog without ever having met one: what does its “understanding” of dog-ness look like? These spaces encapsulate the patterns and relationships the model has extracted from the data it was trained on. Latent spaces are not directly observable or interpretable; they exist only as complex mathematical representations. They are the hidden dimensions that give rise to the model’s observable outputs and behaviors, but they also hold endless possibilities: there are concepts of dog and cat in latent space, and untold combinations of the two. Is that something like creativity worth exploring? I think so.
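One way to get a feel for these spaces is to play with embeddings directly. The sketch below uses the sentence-transformers library (the model name is just one small, common choice) to embed “dog” and “cat”, walk the straight line between their vectors, and check which concept each blend lands nearest:

```python
# A sketch of poking at latent space directly: embed a few animal
# concepts, then interpolate between "dog" and "cat" and see what
# lives along the way. The model name is one small, common choice.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["dog", "cat", "puppy", "kitten", "wolf", "lion", "hamster"]
vectors = dict(zip(words, model.encode(words)))  # one dense vector per word

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Walk the straight line from "dog" to "cat" and report which concept
# each blended vector sits nearest in the embedding space.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    blend = (1 - alpha) * vectors["dog"] + alpha * vectors["cat"]
    nearest = max(words, key=lambda w: cosine(blend, vectors[w]))
    print(f"alpha={alpha:.2f} -> nearest concept: {nearest}")
```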

The latent space between dogs and cats is a goldmine for creativity

Like the secrets of the universe that alchemists sought in the Philosopher’s Stone (no, not the Harry Potter one), latent spaces hold unexpected knowledge and abilities. Because the models learn through exposure to diverse and extensive text data, their latent spaces contain information and capabilities that go beyond what was explicitly presented in training. LLMs have shown the ability to draw on the information encoded in latent spaces to perform tasks and generate outputs that are surprisingly coherent, informative, and even insightful. For example, by training an LLM on Seinfeld episode scripts, we can create believable Seinfeld episodes on any topic (sorry, this technology is too powerful to release to the public; we don’t know what effects it would have on the fabric of our society). Although the original characters never discussed artificial intelligence, that information is encoded somewhere in the latent space of the model and can be summoned with the right prompt:


AI-generated Seinfeld-esque banter about AI via alchemy

Navigating and mapping these latent spaces is a significant challenge; their high-dimensional, abstract nature makes them difficult for humans to visualize and interpret directly. We have a hard enough time visualizing a 4-dimensional shape, let alone the thousands of dimensions in a model’s internal representations. Unlike the input and output spaces, which are translated into human-readable text, latent spaces are represented by complex vectors and matrices that are difficult to decipher. It’s not always obvious how different regions of the latent space correspond to specific knowledge or capabilities, or how they relate to each other. Some, like Anthropic, are finding better ways to peer into these latent spaces (and even perform a kind of “surgery” to make the model think it’s the Golden Gate Bridge). In many ways, exploring latent spaces is like embarking on an alchemical quest, seeking hidden treasures and capabilities through a combination of intuition, experimentation, and lots of persistence.
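For a crude do-it-yourself version of such a map, you can project embeddings down to two dimensions. The sketch below uses sentence-transformers and scikit-learn; this is ordinary dimensionality reduction on output embeddings, nothing like Anthropic’s feature-level interpretability technique, but it conveys the spirit:

```python
# A crude 2D "map" of latent space: embed a handful of related
# concepts, then project them down with PCA. The model name is one
# small, common choice; the concepts are just examples.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

model = SentenceTransformer("all-MiniLM-L6-v2")
concepts = [
    "Golden Gate Bridge", "San Francisco", "Alcatraz Island",
    "1906 earthquake", "Brooklyn Bridge", "suspension bridge", "fog",
]
embeddings = model.encode(concepts)  # a few hundred dimensions each

# Squash those dimensions down to the two that explain the most
# variance. Everything else is thrown away -- which is exactly the
# problem with trying to "see" a latent space.
coords = PCA(n_components=2).fit_transform(embeddings)
for concept, (x, y) in zip(concepts, coords):
    print(f"{concept:>20}: ({x:+.2f}, {y:+.2f})")
```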


Anthropic’s 2D representation of the latent space around the Golden Gate Bridge

Anthropic created a 2D representation of the latent space around the Golden Gate Bridge in Claude. You can see it encodes similarity to San Francisco, landmarks both local and global, and even the 1906 earthquake.


The Role of "Alchemist" in Working with LLMs

Just as the alchemists my dad admired were driven by a deep curiosity and desire to uncover the secrets of the natural world, those working with LLMs must possess a similar sense of curiosity and wonder. The alchemical nature of LLMs demands a willingness to experiment, try new approaches, iterate, and persevere in the face of uncertainty and setbacks. The alchemist must be comfortable with the inherent unpredictability of working with these models, embracing the potential for surprise and serendipity. They must be patient and persistent, recognizing that the process of eliciting emergent properties and exploring latent spaces is often a long and iterative one.


AI alchemists must also find a balance between systematic investigation, like standard and repeatable benchmarks, and intuitive exploration (“What would happen if we asked it to…?”). Today’s LLMs encourage a degree of playfulness and experimentation, but rigor and methodology still matter. The alchemist must be able to design and execute systematic experiments, carefully controlling variables and documenting results. They must be able to analyze and interpret the outputs of LLMs, seeking patterns and insights that can guide further exploration. This requires a combination of technical skills, such as programming and data analysis, and a deep understanding of the underlying principles and architectures of LLMs. The alchemist must be able to bridge the gap between the abstract and the concrete, translating the complex inner workings of these models into actionable insights and applications.

Working with today’s LLMs often feels like alchemy

The alchemists didn’t keep their experiments and findings to themselves; they formed networks and guilds to share their knowledge and experiences. Today’s LLM researchers and developers also benefit greatly from collaborating and learning from one another. The challenges and opportunities posed by these models are too huge and complex for any one individual or team to tackle alone. By pooling their knowledge, skills, and resources, researchers and users can accelerate the pace of discovery and innovation. From informal online discussions to formal research collaborations, the industry needs partnerships. The alchemist must be willing to engage with others, share their own insights and experiences, and learn from the perspectives and approaches of their peers. Only through collaboration can we achieve our alchemical quest to unlock the full potential of LLMs, leading to new breakthroughs and applications and always pushing the boundaries of what is possible.


The Future of Alchemical Exploration

As the field of AI grows, with more interest and investment, the prospects for AI alchemists are ripe for exploration and discovery. The sheer pace of LLM development and refinement is driving rapid progress in natural language processing and language-based AI, moving us from alchemy toward a true science like chemistry. Researchers and developers will keep pushing the boundaries of what is possible with these models, experimenting with new architectures, training techniques, and optimization strategies. From the development of larger and more complex models to more efficient and sophisticated architectures, such as state space models like Mamba, the AI landscape is constantly shifting. And as models become more powerful, they open new avenues for eliciting emergent properties and exploring latent spaces. The alchemical nature of working with LLMs is likely to become even more evident as the complexity and depth of these models continue to grow.

Alchemy will live on into the future

As these models gain the ability to capture and leverage ever more nuanced and complex patterns of language, they may give rise to entirely new forms of language-based interaction and problem-solving. For example, LLMs may become capable of engaging in more contextualized and empathetic communication, allowing for more natural and intuitive interfaces with humans. They may also be able to tackle more complex and open-ended tasks, such as creative problem-solving, scientific discovery, and decision support. As the latent spaces of LLMs become richer and more expressive, the potential for uncovering unexpected knowledge and capabilities grows. The alchemical quest to explore these hidden dimensions may lead to breakthroughs in fields as diverse as medicine, education, entertainment, and beyond. The future of LLMs is one of potential and endless possibility, limited only by the creativity and perseverance of the researchers and developers who seek to unlock their secrets.


The spirit of the alchemists must live on in research and exploration; it requires a sustained commitment to experimentation, analysis, and collaboration. Researchers and users must continue to push the boundaries of what is possible, exploring new techniques for eliciting emergent properties and navigating complex latent spaces. This will require not only technical expertise but also a deep understanding of the ethical, social, and philosophical implications of working with such powerful language models. And as these models become more integrated into our lives, it is crucial to ensure that their development is transparent, accountable, and socially responsible. Only through sustained research and exploration can we hope to fully realize the transformative potential of these models and harness their power for good.


Conclusion

Hopefully I’m continuing my father’s alchemical quest, just in a different domain: we are both driven by a deep curiosity and desire to uncover hidden knowledge in our world. Just as the alchemists of old sought to transform matter and unlock the secrets of the universe, those of us working with LLMs are engaged in a quest to elicit emergent properties and explore the latent spaces within these models. The process of experimentation and iteration, mixed with a little good fortune, is similar, as is the inherent unpredictability and mystery. The challenges of interpreting the outputs, navigating the inner workings, and reliably eliciting desired behaviors provide the sense of mystery and potential that makes working with these models so captivating. There is a buzz among researchers because of recent advancements in AI technology and a growing recognition of their potential to transform many domains. The alchemists are still with us, exploring, experimenting, and transforming the world to its essence after all.
