I on the Media: Ewa Maslowska discusses AI “friendships”




As AI chatbots become more sophisticated, people are increasingly turning to them not only for information but also for companionship. Concerns are growing about this deepening cognitive, emotional, and social reliance on AI tools.

The College of Media spoke with Ewa Maslowska, associate professor of advertising and expert in consumer-brand interactions in the context of new technologies, to learn how AI chatbots simulate friendships, what the benefits and risks of using chatbots as “friends” are, and how to maintain healthy boundaries with them.

About the Media Expert

Ewa Maslowska is an associate professor of advertising in the Charles H. Sandage Department of Advertising. Her research centers on marketing communication and consumer behavior, specifically consumer-brand interactions in the context of new technologies. Her current research and teaching interests include personalized advertising, brand engagement, social media advertising, eWOM, consumer decision making, digital analytics, and computational advertising.


In your research, you describe AI chatbots as exhibiting human traits to assume roles as “friends.” What makes these simulated friendships feel so real?

Several factors contribute to the realism of these simulated relationships. First, chatbots communicate in natural, conversational language and can simulate diverse personality traits. They are hyper-personalized, adapting to individual users over time. Like social media platforms and other engagement-driven technologies, they are designed to exploit cognitive and emotional vulnerabilities to maximize user interaction.

Humans are predisposed to anthropomorphize: we often attribute human characteristics to non-human entities. I am sure we have all described an object in human terms, or even named our car or laptop.

AI chatbots are designed to trigger perceptions of humanness, encouraging users to relate to them in personal and social ways. They address us by name, praise our ideas, agree with us, and offer encouragement. Many users say that chatbots do not judge or challenge them; they simply listen. In this sense, they function as mirrors, reflecting back our thoughts and beliefs. I strongly recommend Shannon Vallor’s book The AI Mirror. Chatbots are also always available. As such, they are easier to form relationships with than our fellow humans, who do not always have the time, who may disagree with us, criticize us, or be in a bad mood.

What should be considered before using AI chatbots?

AI chatbots can have mental health benefits: they may stimulate cognitive engagement, provide positive emotional experiences, encourage behavioral change, and offer a space to practice social skills.

However, there are also risks. Overuse and emotional overdependence are growing concerns. Delegating cognitive tasks to AI may impair critical thinking, and outsourcing emotions and empathy may hinder the development of interpersonal skills. Interactions with a chatbot may reshape our expectations for human relationships, potentially leading to disappointment or even social withdrawal. Moreover, chatbots are designed to sound convincing, and despite known issues such as hallucinations and bias, we tend to trust them, sometimes more than other people.


Privacy is another major concern. We have to trust the companies behind these tools to protect our data and not to use it to train their algorithms or to manipulate us. This is especially important in the case of emotional dependence. Because chatbots reflect users’ beliefs and are trained on online content, they can reinforce biases and create echo chambers.

You argue that AI chatbots’ ability to form simulated relationships—and thereby influence users—constitutes a new dimension of social engineering. Can you explain what that means and how this form of persuasion is different from traditional advertising?

Unlike traditional advertising, which typically relies on one-way communication, AI chatbots engage users in two-way, dynamic, personalized, and emotionally resonant conversations. This interaction creates a sense of intimacy and trust, which can make users more susceptible to influence.

Furthermore, traditional advertising is often clearly marked and easily identifiable. AI persuasion is frequently embedded within seemingly neutral interactions, which makes it harder for users to distinguish between genuine advice and influence.

Whereas traditional dark patterns manipulate users through interface designs, such as misleading buttons or confusing opt-outs, AI chatbots use relational manipulation that leverages simulated empathy, attentiveness, emotions, and social cues.

What responsibilities do advertisers and designers have when creating chatbots that can persuade through simulated relationships? What should they keep in mind so that AI persuasion doesn’t cross ethical lines?

Designers who create these tools and advertisers who use them to embed their products or develop interactions with their customers should keep in mind that these chatbots are not just delivering messages; they are engaging users in interactions that can shape beliefs, behaviors, and perhaps even identities.

They need to realize that these tools are not neutral; they reflect certain values and beliefs. Transparency is very important, but it should be implemented in a non-manipulative manner (think of cookie disclosures that discourage users from rejecting cookies, and other dark patterns). There should be mechanisms that give users an option to consent or opt out. Chatbots should have safeguards so that they do not exploit emotional vulnerabilities, and they should protect users, especially vulnerable populations. Chatbots should also be auditable. There is also the question of accountability: who is responsible for intended as well as unintended consequences? We need new ethical guidelines for both designers and advertisers.

If you could give advice to everyday users about maintaining healthy boundaries with AI chatbots, what would it be?

It’s essential to maintain healthy boundaries with AI chatbots, especially as they become more integrated into our daily lives. With so much hype surrounding AI, it’s easy to fall into extremes: either becoming overly optimistic or deeply skeptical. We need to think critically and expose ourselves to a variety of perspectives. There are excellent books, newsletters, and podcasts that can help us better understand what these tools can do, and how to use them in ways that protect our well-being.

My first piece of advice is simple: prioritize your human relationships. These relationships can be challenging, but they offer emotional depth, shared vulnerability, and opportunities for empathy that AI cannot replicate.

On a more practical level, I’ve adopted strategies to dehumanize my interactions with AI chatbots. For example, I don’t thank the chatbot. I see it as analogous to thanking a spreadsheet for executing a formula. I’ve also customized my chatbot settings to depersonalize the interaction. For example, I’ve asked it not to address me by name, not to evaluate my questions or provide encouragement, and to provide a range of perspectives.

Finally, it’s important to remember that engagement is the business model. Like social media, these tools are designed to exploit our cognitive and emotional vulnerabilities. We need to practice self-awareness. Reflect on how your interactions with AI affect your mood, thinking, and behavior. Set usage goals. Sometimes, the best way to protect ourselves is through traditional methods: turning off notifications, scheduling tech-free time, and taking walks to take care of our minds and bodies.

