THE HITS AND MISSES IN THE PIXEL 9 PRO
The Gemini assistant uses multimodal models, which means you can communicate with it through text, images or voice. You can use written prompts, upload images or even speak directly to Gemini on your Pixel — whatever you prefer. - Google
Everything seems to have AI shoved into it these days, whether we like it or not. I'm an AI fan, and even I find it excessive at times. But I needed a phone upgrade from my Google Pixel 7, so I opted for the recently released Pixel 9 Pro, which comes with a number of AI features. After several weeks of daily use as my primary mobile device, I've found the results to be a mixed bag of genuinely useful innovations and features that feel more like tech demos than practical tools. While some may criticize this shotgun approach to AI integration, Google deserves credit for experimenting broadly - though not all of its experiments are equally successful. The key AI capabilities include Google Gemini integration, intelligent screenshot processing, Circle to Search gesture controls, advanced photo editing tools, and the ability to run local AI models. Some of these innovations have fundamentally changed how I use my phone, while others feel like they're still searching for their purpose.
PODCAST
NOTE: We are continuing our experiment with an AI-generated podcast that summarizes this post by Google’s NotebookLM. Listen here and let us know what you think.
WHAT WORKS WELL
Among the Pixel 9 Pro's AI features, screenshot intelligence is the killer app for me. The phone's ability to automatically process and understand text within screenshots has changed how I handle digital information. Recently, when I took a screenshot of my boarding pass for a flight, the phone not only extracted the flight details but also offered to add the flight to my calendar and provided quick access to flight status information. Thanks to the Pixel's Tensor processor, a small model running on the phone handles the analysis instead of sending the data to the cloud - I very much appreciate the privacy protection this offers. This seamless integration between image capture and practical functionality feels like the future we were promised - technology that anticipates our needs and reduces friction in daily tasks.
The natural language photo search capability is another key feature that demonstrates the practical benefits of local AI processing. Instead of relying on manual tags or precise dates, I can now search my photo library using natural language queries like "sunset photos from our Costa Rican vacation" or "pictures of my dogs in the park." Again, all processing happens locally on the device, ensuring privacy while delivering accurate results. The speed and reliability of local processing means I spend less time organizing or searching for photos and more time enjoying them.
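To make the idea concrete, here is a toy sketch of how natural-language photo search of this kind generally works: photos and queries are mapped into a shared embedding space, and search becomes nearest-neighbor lookup by cosine similarity. The three-dimensional "embeddings," filenames, and dimension meanings below are invented for illustration; an actual phone would use a learned vision-language model with far higher-dimensional vectors, and this is not Google's implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings; dimensions loosely mean (sunset-ness, dog-ness, beach-ness).
# In reality these would be produced by an on-device image encoder.
photo_index = {
    "IMG_001.jpg": [0.9, 0.1, 0.8],  # sunset on a beach
    "IMG_002.jpg": [0.1, 0.9, 0.2],  # dog in the park
    "IMG_003.jpg": [0.2, 0.1, 0.1],  # office whiteboard
}

def search(query_embedding, index, top_k=1):
    """Rank photos by similarity to the query embedding, best first."""
    ranked = sorted(index, key=lambda name: cosine(query_embedding, index[name]),
                    reverse=True)
    return ranked[:top_k]

# A query like "sunset photos" would be encoded into a vector leaning
# on the sunset dimension; the beach photo wins.
print(search([1.0, 0.0, 0.5], photo_index))  # → ['IMG_001.jpg']
print(search([0.0, 1.0, 0.0], photo_index))  # → ['IMG_002.jpg']
```

Because both indexing and lookup are just vector math, the whole pipeline can run on-device, which is what makes the private, offline search experience described above possible.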
Circle to Search might sound like a gimmick, but in practice, it's become another frequently used feature of mine. Being able to circle any element on my screen - whether it's a TV show recommendation in a Reddit post, an unfamiliar landmark in a photo, or text in a message - and instantly get relevant search results feels incredibly intuitive. I've used it to identify a flower in my neighbor's yard, find biographies of researchers in videos, and translate text from images. The gesture-based interface eliminates the process of copying and pasting or taking screenshots to search later, making information access feel effortless and natural.
MIXED RESULTS
The Pixel 9 Pro's AI-powered photo editing tools sound pretty impressive but deliver inconsistent results that highlight both the potential and limitations of current AI technology. The Audio Magic Eraser, designed to remove unwanted sounds from videos, and the Magic Editor, which promises Photoshop-like manipulation capabilities, both fall into a category I'd call "good enough for a quick draft or a joke, but not for anything serious." As an amateur photographer proficient in professional editing tools like Photoshop, I find these features produce results that often look artificial or contain noticeable artifacts. While they might satisfy casual users looking for quick edits - like removing a photobomber or reducing background noise in a dark concert video - they lack the precision and control that more sophisticated tools offer. That said, having these capabilities in your pocket for quick edits is convenient, even if the results are hit-or-miss and polishing them to a high standard usually isn't worth the time and frustration.
ROOM FOR IMPROVEMENT
Google Gemini on the Pixel 9 Pro currently feels more like a showcase of potential than a fully realized AI assistant. Despite having a huge context window and the ability to access real-time information, Gemini's practical applications remain surprisingly limited. The AI assistant seems overly cautious in its responses, sometimes refusing to answer even basic historical questions (like details about Bill Clinton's campaigning to settle a bet) that other AI assistants handle without issue. This excessive content restriction creates a frustrating user experience that often sends me reaching for ChatGPT instead. While Google emphasizes Gemini's capabilities, the reality is that it frequently feels more constrained than helpful.
Privacy considerations add another layer of complexity to the Gemini experience. Although Google has implemented various privacy measures, there's something about using our company's ChatGPT Teams subscription that "feels" more private, even if that feeling isn't entirely rational. This perception matters because it influences how and when we engage with AI assistants. The challenge for Google isn't just about matching ChatGPT's capabilities; it's about creating an environment where users feel comfortable having natural, unrestricted conversations with their AI assistant while maintaining appropriate safety guardrails. Local, private models can only get you so far; they're just not yet smart enough to do everything Gemini can do online.
The automation features on the Pixel 9 Pro represent perhaps the biggest missed opportunity. While the potential for AI-driven task automation is enormous - from setting contextual reminders to managing complex workflows - it currently falls short. Basic tasks like setting reminders or creating calendar events based on context don't work as seamlessly as they should, and more complex automation scenarios seem entirely out of reach. The foundation is there, with the phone's impressive array of sensors and AI capabilities, but the software doesn't yet connect these elements in meaningful ways. There's unrealized potential here. Looking ahead, Google could significantly improve the user experience, perhaps by implementing more sophisticated task prediction and automation frameworks that learn from user behavior patterns.
LOCAL AI CAPABILITIES
One of my favorite, but niche, features of the Pixel 9 Pro is its ability to run a number of open-source, local AI models, thanks to its huge-for-a-mobile-device 12GB of RAM. Using apps like ChatterUI, I've successfully run smaller language models like Microsoft’s Phi 3.5, Meta’s Llama 3.2 and Qwen 2.5 (1-3B parameter models) directly on the device. The performance is surprisingly smooth, particularly for character-based interactions and casual chatting, proving that we don't always need massive cloud-based models for engaging AI experiences. This local processing capability not only ensures privacy but also demonstrates how far mobile hardware has come - we're now carrying phones capable of running AI models that would have required significant server infrastructure just a few years ago. While these smaller models can't match the capabilities of their larger cloud-based counterparts, they offer a glimpse into a future where more AI processing happens on our devices rather than in distant data centers.
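A rough back-of-envelope calculation shows why 1-3B parameter models fit comfortably in 12GB of RAM. The sketch below assumes simple weight-only quantization and ignores KV cache and runtime overhead; actual GGUF file sizes used by apps like ChatterUI vary by quantization scheme.

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone, in gigabytes.

    Assumes every weight is stored at the given bit width; real
    quantization formats mix precisions and add small overheads.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

# A 3B-parameter model at common precision levels:
for bits in (16, 8, 4):
    print(f"3B params @ {bits}-bit: ~{model_size_gb(3, bits):.1f} GB")
# 16-bit: ~6.0 GB, 8-bit: ~3.0 GB, 4-bit: ~1.5 GB
```

At 4-bit quantization, a 3B model needs only about 1.5 GB for its weights, leaving most of the 12GB for the OS, apps, and the model's working memory - which is why these models run smoothly while larger cloud-scale models remain out of reach on a phone.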
CONCLUSION
After a few weeks with the Pixel 9 Pro's AI features, it's clear that we're witnessing the early stages of a significant shift in how we interact with our phones. While some features like screenshot intelligence, photo search, and Circle to Search have become key parts of my daily routine, others like Gemini and the AI editing tools still feel like works in progress. The phone's ability to run local AI models is particularly promising, hinting at a future where powerful AI capabilities won't require constant cloud connectivity. For potential buyers, I'd recommend focusing on the practical benefits rather than the AI buzzwords - the Pixel 9 Pro excels at integrating AI into everyday tasks without drawing attention to the technology itself (it's also a great non-AI phone). As Google continues to refine these features and potentially relaxes some of Gemini's restrictions, the Pixel 9 Pro could evolve into an even more capable AI-powered device. For now, it offers a glimpse of the future of mobile AI - imperfect, but full of potential.