Greg Robison

Silicon Confidants

PRIVACY AND THE RISKS OF SHARING SECRETS WITH AI


Historically, privacy was almost implicit, because it was hard to find and gather information. But in the digital world, whether it's digital cameras or satellites or just what you click on, we need to have more explicit rules - not just for governments but for private companies. -- Bill Gates

AI-powered technologies are everywhere these days, especially those built on Large Language Models (LLMs) like ChatGPT. They power everything from virtual assistants on our smartphones to personalized playlists on streaming services. We can now use natural language to interact with our technology (remember what Siri promised but could never quite deliver?), and LLMs are finally bringing the promise of intelligent assistants to reality. However, as we embrace this new technology, we need to balance the benefits and convenience of AI against concerns about our data privacy. These models feed on data: they are trained on huge amounts of it, including some of our personal information. And as we interact with them, they continually collect more in the hopes of building better models and services. From the constant data collection by smart devices to the potential misuse of sensitive information shared with AI companions, we need clear policies in place to protect sensitive data. While AI models bring convenience and some impressive capabilities, AI services also pose risks to our personal privacy, challenging us to strike a balance between technological advancement and the protection of our fundamental right to privacy.


The Data Collection Dilemma

These days, data is the new gold and tech companies are modern-day prospectors. Whether it's social media data used to refine recommendation algorithms or AI companies running out of fresh human text to train on, a lot of innovation is at stake. But this unending appetite means companies are collecting data about their users in ways those users may not fully comprehend or ever consent to. From shopping habits to location data to voice commands, companies are harvesting information about our lives, preferences, and behaviors. This mass accumulation raises serious questions about privacy, consent, and the potential for misuse, exploitation, or leaks of personal information.

[Image: people working at computers in golden light] Data is the new gold

And it's all kinds of data. There's personal data, with basic identifiers like name, age, and contact information, as well as more sensitive details like financial records. There's behavioral data, covering our online activities: browsing history, search queries, and interaction patterns. And there's contextual data, information about our environment: location data, device information, even ambient sounds captured by always-listening devices. Together, these data types form a comprehensive digital profile of each user, offering companies insights into our habits, preferences, relationships, and even our emotional states.


And now, AI systems use this hoard of data for training and continual improvement. By analyzing patterns in user behavior and language usage, LLMs can generate more accurate and contextually appropriate responses. Personal data helps create tailored experiences, while behavioral and contextual data enable AI to predict user needs and preferences with higher accuracy. For example, knowing that I'm a 46-year-old father of 3 who lives in San Diego and loves to sail, my AI assistant can provide extremely relevant recommendations for things to do this weekend (none of this information is true, but you get the point). This process of constant learning and refinement is what makes AI systems so powerful and useful. It also means our data is being constantly analyzed and processed. The use of this data for AI raises important ethical questions about data ownership, the right to be forgotten, and the potential for AI systems to manipulate users based on collected data.
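
To make that concrete, here's a minimal sketch of how a service might fold a stored user profile into a model's prompt to personalize its answers. The profile fields, the prompt format, and the build_prompt helper are all hypothetical stand-ins rather than any vendor's actual pipeline; the point is simply that personalization works by injecting your collected data into the request.

```python
# Hypothetical sketch: how collected personal data can drive "personalized"
# AI responses. Every field and helper here is illustrative, not any real
# vendor's pipeline.

user_profile = {
    "age": 46,                      # personal data
    "location": "San Diego",        # contextual data
    "interests": ["sailing"],       # inferred from behavioral data
    "recent_searches": ["weekend events", "marina hours"],
}

def build_prompt(profile: dict, question: str) -> str:
    """Fold the stored profile into the model's context window."""
    profile_text = "; ".join(f"{key}: {value}" for key, value in profile.items())
    return (
        f"Known facts about this user: {profile_text}\n"
        f"User question: {question}\n"
        "Answer with recommendations tailored to this user."
    )

prompt = build_prompt(user_profile, "What should I do this weekend?")
print(prompt)  # everything in the profile is now part of the outgoing request
```

Once your profile rides along with every request, the quality of the recommendations and the depth of the privacy exposure grow together.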


Always-On AI: Documenting Our Lives

Recently we’ve seen AI-powered life-logging tools like the Humane AI Pin, the Rabbit R1, and even Microsoft’s OS-based Recall, all designed to automatically document our daily lives. These AI assistants can transcribe and summarize our conversations, promising a comprehensive digital record that helps us remember, reflect on, and optimize our lives. These devices can serve as external memory, helping us recall important moments, track personal growth, or identify patterns in behavior and health. They can also help us win arguments (I recommend both Ted Chiang’s short story The Truth of Fact, the Truth of Feeling and the Chappelle’s Show sketch Home Stenographer for takes on these potential effects). In a professional setting, automated notetaking and meeting summaries can boost productivity and ensure that no crucial information is lost. For those with cognitive impairments or memory issues, these technologies could be life-changing. This continuous stream of data could also contribute to scientific research, offering deep insights into human behavior and societal trends.

[Image: a man speaking to a woman stenographer] A home stenographer can record all our conversations for a full chat history! What if that record got into the wrong hands?

As you can start to see, the privacy implications of this constant surveillance are deep and troubling. When our every move, word, and interaction is recorded, we risk losing the ability to truly be alone or “off the record”. This constant observation can lead to self-censorship and behavioral changes as people become aware they are being watched, often without ever having consented to being recorded. There are also serious concerns about data security and potential misuse. If this intimate data falls into the wrong hands, it could be used for blackmail, identity theft, or other malicious purposes. And the line between consensual documentation and invasive surveillance of others becomes blurred when these devices are used in public places.


If you think I’m fearmongering, let’s talk through some real examples. Smart home devices, such as voice-activated assistants and security cameras, offer convenience and peace of mind but also create a detailed record of our domestic lives. There have been instances of these devices inadvertently recording private conversations or being hacked to spy on homeowners. Wearable fitness trackers collect valuable health data that can help users improve their well-being, but this same data could potentially be used by insurance companies to deny coverage or by employers to make hiring decisions. Perhaps the most invasive are “always-on” body cameras or smart glasses, which not only record the wearer’s experiences but also capture images or audio of everyone they encounter, often without explicit consent. These devices are convenient and do bring some real benefits, but we also have a fundamental right to privacy and to the security of our private information.


AI on Personal Devices: A Privacy Nightmare

AI assistants have shown up on our personal devices, with virtual helpers like Siri, Google Assistant, and Alexa integrated into our smartphones, tablets, computers, smart home devices, and wearables. These AI-powered tools promise to enhance our productivity, streamline our daily tasks, and provide instant access to information. From scheduling appointments and sending messages to controlling smart home devices and answering complex questions, these assistants aim to be digital companions that are always ready to help. However, their very ubiquity and the depth of their integration into our personal lives raise serious privacy concerns.


To be the best assistants they can be, these tools need access to our personal data: location data, contact lists, calendars, browsing history, even the content of our emails and messages (I’m looking at you, Google). If we let them, they also want access to our voice recordings, photos, and videos. That’s a lot of very personal information to hand to AI systems, giving them a comprehensive view of our lives. Do the benefits of a personalized assistant outweigh the concerns about privacy?


As the CTO of F’inn, I think constantly about data security for our company, and the potential for data breaches and misuse of our information by AI companies worries me in particular. If a malicious actor were to gain access to the servers storing this information, they could obtain detailed profiles of millions of users, including their personal habits, relationships, and other sensitive information. Even without external breaches, there are concerns about how this data might be used by the companies collecting it. Could it be sold to advertisers, shared with government agencies, or used to manipulate user behavior? Absolutely. The AI systems themselves could also be compromised through jailbreaks or other attacks, leading to unexpected access and usage.


We need to balance the convenience offered by these AI assistants with the privacy of our data. The personalized assistance and seamless integration these tools provide can enhance our experience and increase productivity. However, how this data is used behind the scenes is often a mystery to users. We need greater transparency about what data is being collected and how it’s being used. We need enhanced control over data sharing, including granular permissions and the ability to opt out of certain types of data collection. We also need stronger regulations around data protection and usage coupled with robust and tested security measures by companies housing our data. The security of our private information is paramount. From there we should be able to decide what gets shared or used to further train models.


The Intimacy of AI

Another area of data privacy concern is the rise of AI girlfriends, boyfriends, and emotional support bots, which are blurring the line between technology and intimacy. These digital companions are designed to provide emotional support, companionship, or even romantic interaction, filling a void for many who struggle with loneliness or social anxiety. Unlike humans, these AI relationships are available 24/7, never tire of listening, and can be customized to meet specific emotional needs. From chatbots that offer daily check-ins and words of encouragement to more sophisticated virtual partners that engage in deep conversations and role-playing scenarios, these AI companions are becoming more and more popular.

One of the benefits of these AI companions can be their non-judgmental nature. People often find it easier to open up and share their deepest secrets, fears, and desires with an AI that doesn’t have preconceived notions or societal biases. This perception of safety can lead to a level of vulnerability and honesty that some users might not achieve even with their closest friends. For those dealing with mental health issues, trauma, or stigmatized experiences, the ability to express themselves freely without fear of judgment can be cathartic and potentially therapeutic. The consistency and availability of these AI companions can provide a stability and level of support that human relationships sometimes lack.


[Image: a woman talking to a robot] How private and confidential are those intimate chats with AI?

However, the risks of entrusting such sensitive and intimate information to AI companies are significant and typically underestimated by users. While interacting with an AI might feel private, these conversations are typically stored, analyzed, and used to improve the company’s AI systems. That means a user’s most personal thoughts, their deepest secrets and desires, are being collected and processed by companies, often with unclear policies on data retention and usage. There’s also the question of who has access to the data within the company and how it might be used beyond improving the AI. Tesla engineers were caught reviewing and sharing people’s private videos from their cars; the same could happen to your chat logs. Could this intimate knowledge be used for targeted advertising, sold to third parties, or even accessed by the NSA, FBI, or CIA? The lack of transparency and regulation leaves users vulnerable to exploitation of their most personal information.


Even if companies have the best of intentions with your sensitive data, the reality is that data leaks are still common. Imagine the impact of a breach that exposes users’ intimate conversations, sexual fantasies, or mental health struggles. That kind of leak could lead to personal embarrassment, denied insurance coverage, lost jobs, or even blackmail. The detailed psychological profiles that could be built from these interactions could be used for manipulation, such as influencing voting behavior or extracting financial gain. There’s also the risk of AI companies using the data to create increasingly addictive and manipulative AI companions, whose makers might benefit from exacerbating social isolation and dependency on virtual relationships. As AI companions become more sophisticated and widely used, we’ll need to grapple with the implications of outsourcing our most intimate conversations and emotional needs to artificial entities controlled by profit-driven companies.


Protecting Your Privacy in the AI Age

Hopefully you’re a bit riled up about the importance of data privacy and security like I am. What can we do? Here are a few recommendations.

  • Review the Terms of Service and Privacy Policies. When you sign up for a new AI-based service, read the Terms of Service and Privacy Policy (or use a summary service like Frontpage to get the main points and explanations). They’re usually lengthy, complex documents that we gloss over, but they’re important to review: they should outline how companies collect, use, and share our data. Pay particular attention to sections about data collection, storage, and sharing. Look for information on how long data is retained, whether it’s sold to third parties, and what options you have for permanently deleting your data. If a policy is unclear or overly invasive, that’s a dealbreaker! If you do continue, be careful about what you share.


[Image: Tina Fey from 30 Rock] If companies aren’t going to take our data privacy seriously, that’s a dealbreaker!
  • Adjust the settings and permissions of any AI-powered devices to take control of your privacy. Most browsers, smartphones, laptops, smart home devices, and AI assistants offer granular controls over what data they can access and collect. Take the time to review these settings and tailor them to your comfort level. For example, you might disable location tracking and limit microphone access to specific apps. On social media platforms, review your privacy settings to control who can see your posts and personal information. For AI assistants, consider turning off features like an “always listening” mode when not in use, or preventing your chat history from being saved. I wish handing over our privacy were opt-in, but unfortunately it’s usually opt-out, so exercise your power by opting out where possible.

  • Consider privacy-focused alternatives. Options include open-source AI models such as Meta’s Llama series, Microsoft’s Phi-3, and Google’s Gemma 2, all of which can run locally on your own hardware (see the sketch after this list for what that can look like), as well as encrypted messaging apps that prioritize user privacy. We should encourage privacy-first options, both open- and closed-source.

  • Advocate for better data protection laws. We need a long-term strategy for protecting privacy in the face of AI services. As individuals, we can support and vote for politicians and policies that prioritize data protection and digital rights. We can engage with our local representatives to express our concerns about data privacy and the need for regulation on our behalf. And we can support organizations like the EFF that advocate for digital rights and privacy protections. The GDPR and CCPA have set strong precedents for data protection laws and should serve as frameworks for future regulations.
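
As promised above, here’s a minimal sketch of what running an open-weight model locally can look like, using the Hugging Face transformers library. The model name and generation settings are just examples (any open-weight model such as Llama, Phi-3, or Gemma 2 works similarly), and you’ll need a recent transformers release, the accelerate package for device placement, and enough memory for the weights. The point is that after the one-time download, your prompts never leave your machine.

```python
# A minimal sketch of local, private inference with Hugging Face transformers.
# After the one-time weight download, nothing you type is sent to a remote
# server. The model choice and parameters are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example open-weight model
    device_map="auto",  # uses a GPU if available (requires the accelerate package)
)

prompt = "Summarize this private note so I can file it: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

Local inference trades some convenience (setup, hardware, slower responses) for the strongest privacy guarantee there is: your data never leaves your possession.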


Do what you can to protect your privacy, and push companies to take serious measures to safeguard your personal data.


Conclusion

The AI Age is bringing us amazing new capabilities but also significant privacy concerns. From data collection practices to potential vulnerabilities, our digital footprints are larger and more detailed than ever. The intimacy of our interactions with AI companions adds another layer of complexity as we entrust our deepest thoughts and feelings to systems that, despite their apparent understanding, are ultimately tools of data-driven companies. These privacy challenges are exacerbated by the potential for data breaches, misuse of personal information, and the long-term implications of building comprehensive digital profiles of individuals. We need to make informed decisions about AI and take an active role in protecting our privacy. That means educating ourselves about the technologies we use, carefully considering the trade-offs between convenience and privacy, and making conscious choices about what information we’re willing to share. It also means advocating for stronger data protection laws and supporting companies that prioritize user privacy. The future of AI privacy depends on finding a balance that preserves convenience without sacrificing our fundamental right to privacy.



