Greg Robison

Metaphors, Machines, and Meaning

REFRAMING OUR RELATIONSHIP WITH AI ASSISTANTS

No matter how long users worked with Clippy, he never learned their names or preferences. Indeed, Clippy made it clear he was not at all interested in getting to know them. If you think of Clippy as a person, of course he would evoke hatred and scorn. -- Clifford Nass

AI assistants have become a big part of our daily lives, from Siri and Alexa to customer service chatbots and writing aids like ChatGPT. They are changing the way we access information, complete tasks, and, most importantly, interact with technology. With Large Language Models (LLMs) becoming more commonplace (two-thirds of Americans currently use Gen AI tools), we rely on them more and more to enhance our productivity and creativity. However, it’s important to understand how we perceive and relate to these technologies through metaphors, which help us make sense of abstract concepts by comparing them to more familiar experiences. With AI assistants, the metaphors we use (whether consciously or not) shape our expectations, guide our interactions, and even influence the development of these technologies.


Understanding the implications of these metaphors will help us create more effective and transparent relationships with AI as it both gains capabilities and becomes a bigger part of our lives. Some people are disappointed with the capabilities of today’s AI (they may be in the Trough of Disillusionment), I think because their expectations are too high. My expectations are constantly evolving the more I work with the latest and “smartest” models, leading to what I think is a healthier relationship with AI as a collaboration tool that I use daily.


 


NOTE: We are continuing our experiment with an AI-generated podcast, created with Google’s NotebookLM, that summarizes this post. Listen here and let us know what you think:


 

METAPHORS FOR AI ASSISTANTS

If you’ve been using Microsoft Office for a few decades like I have, you undoubtedly remember Clippy, the animated paperclip that offered “help” and suggestions in Office applications. It was designed to be a friendly and approachable assistant that could guide users, especially beginners, through various tasks in Word and Excel. However, the promise was much bigger than Clippy could deliver. With its intrusive pop-ups and irrelevant suggestions, it was more annoying than helpful. The Clippy metaphor highlights the potential for AI assistants to be intrusive and disruptive rather than the helpful guides we all wanted and expected. Those expectations went unmet because of the limitations of Clippy’s pre-programmed responses and its lack of contextual understanding. It was Artificial, but it was not Intelligent, and so it went unused.

Clippy, you inspired a generation of AI assistants, for better or worse

A more recent metaphor is that of the obedient servant, embodied most notably by Alexa and Siri. These assistants are designed to respond to voice commands and carry out tasks on demand, like a loyal and efficient helper. The servant metaphor emphasizes the convenience and time-saving potential of AI assistants, as they can handle a wide range of tasks like setting reminders or controlling our lights. Like a servant’s, their capabilities haven’t developed much in the past decade: they still have trouble understanding us and have little concept of context. Our expectations have stagnated too. This metaphor also perpetuates power imbalances and reinforces gender stereotypes, as many virtual assistants are given female personas.


Moving up the competence ladder, the intern metaphor presents AI assistants as eager learners, ready to absorb information and assist to the best of their abilities. Like an intern, AI assistants try to make you happy and carry out your requests, but they often make mistakes and need their work checked by an expert. This metaphor highlights the potential for AI assistants to take care of low-level work without much supervision, while anything requiring real experience or reasoning skills needs someone else reviewing the output before it’s handed in. It sets the bar higher than Alexa, as an intern is much more flexible. Interns can also surprise you with how well they perform a difficult task, even if the result isn’t perfect.

Anne Hathaway in The Intern
Maybe not this intern…

Finally, the expert metaphor positions AI assistants as knowledgeable advisors, capable of providing accurate and insightful information across a wide range of topics. It also emphasizes the knowledge bases and computational power of the latest models, like GPT-4 or Anthropic’s Claude, which can process and analyze large amounts of data to answer questions, write, or code convincingly. The expert metaphor inspires trust and confidence in AI assistants, as users perceive them as more reliable sources of information and guidance. However, the metaphor may lead users to overestimate the capabilities of AI assistants and rely on them too heavily, leading to mistakes when there’s no oversight. Additionally, these models lack transparency: we can’t ask them why the output is what it is.


THE PSYCHOLOGICAL EFFECTS OF AI ASSISTANT METAPHORS

These metaphors impact our psychological responses and expectations when interacting with these technologies. When we view AI assistants as Clippy-like characters whose meager benefits don’t outweigh their intrusiveness, we are likely to end up frustrated and stop engaging. If we view AI assistants as obedient servants or eager interns, we may expect them to be always available and ready to help, leading to unrealistic expectations and disappointment when they don’t meet our exact needs. The expert metaphor can create high expectations for accuracy and insight, leading to over-reliance on AI and a reluctance to question its outputs.


The trust we place in AI assistants is also heavily influenced by metaphor. The servant and expert metaphors create a sense of trust and reliance, as we perceive these assistants as capable and knowledgeable. However, this trust may be misplaced if the AI’s capabilities are overstated or if its limitations are not clearly communicated. Anthropomorphic metaphors, such as intern or expert, can also lead us to attribute human-like qualities to AI assistants, such as empathy or moral reasoning, which they might not actually possess. This false sense of trust can lead to disappointment or even harm when AI assistants fail to meet these expectations.

Finally, the metaphors we use can limit our understanding of the true capabilities and limitations of AI assistants. By focusing on familiar, human-like comparisons, we may overlook the unique strengths and weaknesses of these technologies. The Clippy and intern metaphors do not fully capture the ability of today’s AI assistants to understand context and provide relevant suggestions, and the servant and expert metaphors don’t account for errors and potential biases in their outputs.


AI ASSISTANT AS COLLABORATION TOOL

It’s clear that none of these metaphors adequately captures the complexity, nuance, and potential of today’s AI assistants. I think we need a better metaphor: that of a collaborative tool. Collaborative tools are designed to work alongside human users, complementing their skills and knowledge to achieve a shared goal more efficiently and effectively. Like Adobe Photoshop or a Steinway piano, a collaborative AI assistant can amplify human capabilities and open up new possibilities for innovation. We should emphasize the partnership between humans and AI assistants, recognizing that the most successful relationships arise from complementary strengths - there are things we’re better at and things AI is better at. We can also set more realistic expectations by recognizing the individual strengths and weaknesses of different AI models.

Collaboration is the new Clippy (Dall-e3)

The collaboration metaphor emphasizes the importance of human agency and control in the use of these technologies. Rather than viewing AI assistants as autonomous entities that can replace human judgement and decision-making, the collaboration metaphor positions them as resources to be utilized and guided by human users. The metaphor also reflects the potential for AI assistants to enhance and extend human capabilities, rather than simply automating tasks or providing canned responses. By working in partnership with AI assistants, human users can unlock new insights, generate creative solutions, and achieve goals that were previously out of reach for us. Like any tool, the effectiveness depends on the skill and judgement of the user as well as the context and appropriateness of the tool. As AI capabilities increase, our role in collaboration may change, but it’s still a collaboration.


CONCLUSION

The metaphors we use to conceptualize AI assistants shape our interactions, expectations, and understanding of these increasingly common technologies. By examining the limitations of existing metaphors - Clippy, Alexa/Siri, interns, and experts - we can develop a more nuanced and accurate representation that reflects their current capabilities and potential. The collaboration metaphor emphasizes the partnership between human users and AI assistants while also highlighting the potential for these technologies to enhance and extend human intelligence and creativity. Just as these metaphors have evolved over the past couple of decades, we need to be open to further evolution of our relationship - it should always be focused on empowering humans and setting realistic expectations.
