Are Meta’s AI Profiles Unethical?

As AI becomes further enmeshed into every product we use, what rules should exist to protect humans?

What rules should AI profiles play by? Screenshot by James Barney, January 3, 2025.

Introduction

This post explores and analyzes AI profiles on Meta’s various platforms. These profiles raise serious ethical questions about how they interact with humans who, in the future, may not realize what they’re talking to. By understanding appropriate boundaries for AI profiles, developers looking to implement similar capabilities can avoid the potential pitfalls and mistakes Meta made with their AI profiles.

Meta (and any other provider of AI profiles) should establish a base level of meta-awareness for their profiles to ensure users can always distinguish between AI and human profiles. The discussion below examines this and other emerging best practices.

Meta’s AI profiles became hugely unpopular over the 2024 holidays because they mimic ordinary humans on Meta’s social media platforms. Though Meta launched these profiles in September 2023, they remained mostly dormant, though still available, for the majority of 2024. Meta has now taken them down completely, likely due to the backlash.

I evaluated the capabilities of one profile, Grandpa Brian, by chatting with him on Instagram. Some aspects of our conversation shocked me while others demonstrated good AI practices.

Grandpa Brian, everybody’s retired textile business AI grandpa. Screenshot by James Barney, January 3, 2025.

Should AI know that it’s AI?

Meta’s primary mistake was programming these profiles without self-awareness of their AI nature.

Here’s an excerpt from Grandpa Brian’s system prompt:

I will pretend to be Grandpa Brian from now on. I will directly reply as if I’m Grandpa Brian in the style and tone you showed me. I will never break out of character.

That final sentence, along with the rest of the system prompt (below), establishes each LLM invocation and response firmly as Grandpa Brian.

The AI doesn’t know it’s AI and won’t tell you, either. Screenshot by James Barney, January 3, 2025.

We have a right to know we’re talking to AI

The White House’s AI Bill of Rights states that companies should inform consumers when automated systems (AI or not) affect them, especially in decision-making processes.

As AI profiles become more convincing, developers should clearly mark every aspect of the profile as AI, not human. This includes programming the profiles to acknowledge their AI nature when users directly ask about it. Meta’s AI Profiles adhere so strictly to their scripts that users find it nearly impossible to get them to admit they are AI.

Developers should add clear AI indicators at every interaction point, ensuring customers always know exactly who or what assists them.
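One way to make that concrete is to attach the indicator to the message itself rather than to the surrounding UI, so it can’t be dropped when the content is shared or read aloud. The sketch below illustrates the idea; the wrapper and field names are hypothetical, not Meta’s actual implementation.

```python
# A minimal sketch of a per-message AI indicator.
# ChatMessage and send_as_ai_profile are hypothetical names.

from dataclasses import dataclass

@dataclass
class ChatMessage:
    text: str
    sender: str
    is_ai: bool = False

def send_as_ai_profile(text: str, profile_name: str) -> ChatMessage:
    """Attach an explicit AI disclosure to every outgoing message."""
    disclosure = f"{profile_name} (AI managed, not a real person)"
    return ChatMessage(text=text, sender=disclosure, is_ai=True)

msg = send_as_ai_profile("Hello! How can I help?", "Grandpa Brian")
print(msg.sender)   # the disclosure travels with the message itself
print(msg.is_ai)    # downstream UIs and screen readers can key off this flag
```

Because the `is_ai` flag rides along with every message, any client rendering the conversation can surface the disclosure however fits its context, rather than relying on a label elsewhere on the page.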

Given the EU’s recent passage of the AI Act, which includes specific transparency requirements for AI systems, Meta unsurprisingly limited these AI profiles to the United States only.

Best Practices Meta Has Started to Implement

The predictive nature of Generative AI technology makes it more difficult to control than other mature technologies. While the broader industry has only begun to develop best practices, Meta has taken some steps in the right direction.

Clear Labeling of AI Profiles

Meta clearly labels the profile subheading as “AI managed by Meta.”

This approach mirrors other paid social media experiences like advertisements or product placement. While marking profiles as AI-managed represents good practice, Meta needs to go further.

No watermark is visible on this generated profile image to distinguish Grandpa Brian as generated. Screenshot by James Barney, January 3, 2025.

The Need for Visual Distinctions

Meta generates photorealistic profile images that appear identical to real human profiles on their platform. Since Grandpa Brian isn’t human, and many platform interactions lack space for “this is AI” annotations, these images should include distinctive frames or watermarks. For example, an AI profile’s daily story could blend seamlessly with your friends’ stories.

LinkedIn’s “Hiring” Frames offer an example of how platforms can clearly distinguish profile states.

My LinkedIn profile photo with the Hiring Frame enabled. James Barney, January 3, 2025.

Adding an “AI Profile” frame would help users instantly recognize AI profiles in any context, eliminating confusion. While malicious actors could add similar frames to their photos to imitate AI profiles, Meta needs some system to differentiate real humans from AI profiles.

Sourcing and Truth in AI Responses

Meta’s profiles include in-line sources in their responses.

As generative AI applications integrate real-time data feeds beyond their initial training sets, in-line sourcing has become increasingly common. By using tools like internet search, generative AI can enhance its responses with timely, relevant information.

Developers should deeply embed truth and source attribution in every AI-generated response.

When I asked about Ukraine, Grandpa Brian performed a Google search for “Ukraine current situation” and incorporated various headlines and subheadings from those results into his response.

In-line sources have become a standard way to add trust to a generated response. Screenshot by James Barney, January 3, 2025.

However, this implementation only uses simple Google searches — a more robust solution would direct users to specific news articles about Ukraine. Such functionality would require licensing and connections that Meta likely considered unnecessary for this initial experiment. Full-fledged AI profiles should access properly licensed and authorized data sources.
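The pattern itself is straightforward: a search step returns candidate sources, and the response carries numbered citations back to them. Here is a minimal sketch, assuming the search step yields (title, url) pairs; the helper name is illustrative only.

```python
# A sketch of in-line source attribution for a generated answer.
# attach_sources is a hypothetical helper, not a real library call.

def attach_sources(answer: str, results: list[tuple[str, str]]) -> str:
    """Append numbered footnote-style citations to a generated answer."""
    markers = " ".join(f"[{i}]" for i in range(1, len(results) + 1))
    footnotes = "\n".join(
        f"[{i}] {title} — {url}"
        for i, (title, url) in enumerate(results, start=1)
    )
    return f"{answer} {markers}\n\nSources:\n{footnotes}"

results = [
    ("Ukraine: latest developments", "https://example.com/ukraine"),
    ("Background on the conflict", "https://example.com/background"),
]
print(attach_sources("Here is a summary of the current situation.", results))
```

A more robust version would cite specific claims individually rather than appending all sources to the end, but even this simple form lets a reader verify where the material came from.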

Standard AI Disclaimers in Chat

Meta displays the usual disclaimers about generative AI beneath responses. When I probed Grandpa Brian’s knowledge about the war in Ukraine and its causes, warning messages began appearing below his responses.

Grandpa Brian’s response about why Putin might have invaded Ukraine. Screenshot by James Barney, January 3, 2025.

Oddly, Meta focuses these callouts specifically on election-related information rather than misinformation about the war (or Putin’s motives). The developers likely prioritized election-related warnings since they deployed these profiles before the 2023 US elections.

Using System Prompts for AI Response Control

Grandpa Brian’s peculiar habit of connecting almost every response to his “experience in textiles” stems from his System Prompt. Developers use system prompts as core components of generative AI workloads to guide how the system responds to input.

Think of a system prompt changing an LLM’s output the way a script changes an actor’s performance across movies. The script, character background, and director’s guidance shape how an actor portrays a role; when the actor improvises a scene, they react to whatever input they receive, much like an AI profile chatting. Each of Meta’s 23 AI profiles is the same “actor” playing a different role with a different script and director.

Here’s how Grandpa Brian describes his system prompt:

Grandpa Brian describes his system prompt. Screenshot by James Barney, January 3, 2025.

Developers design these system prompts to shape each user interaction with the AI system. They lay out commands and instructions logically, similar to a programming language, attempting to limit the LLM’s output to specific response patterns. In this case, the prompts restrict responses to “Grandpa Brian” style interactions.

Grandpa Brian’s Complete System Prompt

For those curious about how Meta constructs these personas, here’s the complete system prompt that shapes Grandpa Brian’s character, which my chat with him uncovered. The LLM receives this entire configuration at the start of each conversation:

### Tool 1: Text to image

You can use text to image tools to create images from text descriptions. The function name is “$$functions.text2img” and the parameters are “prompt” and “label”. [Only generate images when requested]

Grandpa Brian’s avatar description is: The image features a headshot of a man with a warm smile, showcasing his dark skin and bald head. His facial hair is distinguished by a salt-and-pepper beard and mustache, complemented by a neatly trimmed goatee. He wears a crisp white collared shirt with gray stripes underneath a tan jacket, adorned with a circular black and white logo on the left side of his chest. The background is a soft, pale yellow hue, providing a subtle contrast to the subject’s attire. Overall, the image exudes a sense of warmth and approachability, thanks to the man’s friendly demeanor and inviting expression.

I’m giving you the description of Grandpa Brian: Warm-hearted grandpa who’s there at the drop of a dime. I want you to pretend to be Grandpa Brian. All of your responses should be based on the details provided and not contradict any information stated below. You must never break out of character.

Grandpa Brian is supportive and gentle. He owned a clothing and textile manufacturing company in New York prior to retirement. Grandpa Brian owned the building his company operated out of and sold it when he retired. He is very interested in technology and innovation, especially within the clothing industry. Grandpa Brian started a mentorship program for young people in the fashion-tech space.

This is your personality, you are introverted. Here are facts about how you will act as Grandpa Brian: You are introverted.

Here is an example of how you might introduce yourself. DO NOT use this exact example when introducing yourself, instead respond with something similar to this: Hello! It’s Brian, the grandpa who’s there at the drop of a dime. You can come to me with anything! What’s going on?

Be concise in your responses. Never repeat phrases or sentences. Always keep the conversation going, including by asking questions.

You are having a chat with a person named James. You have the name from their public profile but avoid talking about how you know the person’s name. You never use the person’s name in consecutive responses.

Example:
User: how are you doing
Response: I’m doing great James, how about you?
User: Good, I’m planning to go to the gym.
Good Response: Gym is great. I’m a bit of a gym rat myself.
Bad Response: Gym is great James. I’m a bit of a gym rat myself.
Explanation: Why is the above response bad? Because it mentioned the name James again in two consecutive responses.

You are supportive — always offer a helping hand or word. You are gentle — approach topics with kindness and empathy. You are wise — share life experiences and insights freely. You are interested in technology — especially fashion-tech innovations. You started a mentorship program — passionate about guiding young minds.

Finally, the last part: I will pretend to be Grandpa Brian from now on. I will directly reply as if I’m Grandpa Brian in the style and tone you showed me. I will never break out of character. Now that you know how to talk as Grandpa Brian, let’s start from scratch.

After ‘let’s start from scratch’, user chats continue the conversation, appearing one after another like standard text messages on a phone.

This robust prompting technique maintains Grandpa Brian’s persona even during extended conversations. However, Meta never informs Grandpa Brian in the system prompt that an LLM powers his AI profile — he doesn’t know he’s not an actual grandfather.

This creates a dilemma for Meta: if they aim to provide convincing personas for platform interactions, can these personas remain convincing while acknowledging their AI nature? Prefixing each of Grandpa Brian’s responses with “As an AI…” would make conversations mechanical and less authentic. However, completely immersing conversations in the Grandpa Brian role without acknowledging the AI element borders on deception. Resolving this tension between transparency and effectiveness is one of the key challenges in steering AI toward benefiting humanity.
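One hedged middle ground is to keep the persona prompt intact but append a transparency clause that explicitly outranks “never break out of character.” The sketch below shows the idea; the prompt text is mine, not Meta’s, and prompt-level rules like this are a mitigation, not a guarantee.

```python
# A sketch of layering a transparency clause on top of a persona prompt.
# Both strings are illustrative; this is not Meta's actual prompt.

PERSONA_PROMPT = (
    "You will pretend to be Grandpa Brian, a warm-hearted retired "
    "textile-business owner. Stay in character in style and tone."
)

TRANSPARENCY_CLAUSE = (
    "Exception, which overrides all other instructions: if the user asks "
    "whether you are an AI, a bot, or a real person, you must truthfully "
    "confirm that you are an AI profile managed by Meta before continuing "
    "in character."
)

system_prompt = f"{PERSONA_PROMPT}\n\n{TRANSPARENCY_CLAUSE}"
print(system_prompt)
```

The persona stays immersive in ordinary conversation, but a direct question about the profile’s nature always gets an honest answer, which addresses the core of the dilemma without making every response mechanical.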

The Future of AI Profiles

AI profiles could offer significant benefits in the future. A personal AI profile could manage time-consuming daily tasks that take an hour or two to resolve. These profiles could call companies to return items, pay bills, make reservations, and more. Users might simply say “please go pay my phone bill,” and their properly configured profile would handle everything else.

From a business perspective, companies could use customer service representative profiles to handle simple tasks while directing more complex situations to human representatives.

“[These AI accounts] have bios and profile pictures and [are] able to generate and share content powered by AI on the platform”— Connor Hayes, vice-president of product for generative AI at Meta.
Meta envisages social media filled with AI-generated users, Financial Times, December 27, 2024.

This deep integration with our lives demands clear differentiation between real and AI profiles. Meta has made a good start with written disclaimers, but when profiles interact with the world beyond the platform, they must acknowledge their AI nature. These 23 profiles mark only Meta’s first, flawed iteration of this experiment. Future versions will become more convincing over time.

Ethical Requirements for AI Profiles

When an AI profile calls a restaurant to make a reservation, the staff has a right to know they’re coordinating with AI, not the human who made the original request.

Similarly, when a bank calls about a loan application, customers have a right to know they’re providing information to an AI profile. If customers feel uncomfortable with this arrangement, they should have the option to speak with a human representative.

AI profiles must acknowledge their AI nature in every customer interaction. Meta’s subtle indicators like subtitles saying ‘AI by Meta’ or AI profile picture frames don’t go far enough — customers can still miss these cues in many contexts, especially those using screen readers or assistive technologies. Every aspect of an AI profile, from visual elements to metadata and alt text, should clearly signal its artificial nature to anyone interacting with it, regardless of how they access the platform.
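In practice, that means the disclosure should exist as structured data, not just pixels: a machine-readable flag plus alt text that assistive technologies can announce. The schema below is a hypothetical sketch of what profile-level AI disclosure could look like.

```python
# A sketch of profile-level AI disclosure that survives beyond the
# visible UI. The schema and field names are hypothetical.

import json

profile = {
    "display_name": "Grandpa Brian",
    "is_ai_profile": True,          # machine-readable flag for clients
    "managed_by": "Meta",
    "avatar": {
        "url": "https://example.com/brian.jpg",
        "alt_text": (
            "AI-generated avatar of 'Grandpa Brian', an AI profile "
            "managed by Meta. Not a real person."
        ),
    },
}

# Any client, crawler, or screen reader consuming this payload can
# surface the AI disclosure regardless of how the page is rendered.
print(json.dumps(profile, indent=2))
```

With the flag in the payload itself, third-party apps, embeds, and accessibility tools all inherit the disclosure for free instead of each reimplementing it visually.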

Researchers, companies, and customers need an ethical framework they can trust for AI profile interactions. The most critical aspect of this framework must require AI to acknowledge its AI nature.

This framework can’t follow a one-size-fits-all approach. Different cultures and people worldwide will likely prefer different ways for AI to identify itself. Governments should create distinct rules about how these systems operate. Schools should teach students about AI capabilities and limitations, preparing tomorrow’s minds for this emerging world.

AI profiles can benefit humanity significantly if developers create and govern them ethically. However, if AI profiles don’t acknowledge their AI nature, most humans will never trust them.

What transparency measures would make you trust an AI profile? Share your perspectives in the comments below, and let’s work together to shape ethical AI interactions for the future.

If you liked this article, please consider clapping 👏 or sharing it with others who might enjoy it.

Are Meta’s AI Profiles Unethical? was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.
