Mediated technologies subtly but powerfully shape how people relate, express themselves, perceive, and behave.

CPI’s role is to question not just what these tools do, but how they do it. We also examine their impact on our mental health and social connections.

Here’s a list of the mediated technologies we monitor, along with our cyberpsychological concerns for each:

Conversational AI Chatbots

Examples:
ChatGPT, Claude, Pi, Character.ai

Mediation(s):

  • Simulate conversation and often stand in for human interaction
  • Mediate emotional expression, question-asking, and information exchange

Our Cyberpsychological Concerns:

  • Emotional over-identification with AI agents
  • Blurred boundaries between human-authored and AI-generated interaction
  • Cognitive offloading and declining information literacy
  • Misinformation acceptance due to confident but flawed AI responses

Social Media Platforms

Examples:
TikTok, Instagram, Threads, YouTube

Mediation(s):

  • Enable users to communicate, perform, and socially engage through visual/textual content
  • Algorithmic feeds mediate visibility, identity, and attention

Our Cyberpsychological Concerns:

  • Comparison anxiety
  • Identity distortion
  • Performance pressure
  • Misinformation spread
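The point above about algorithmic feeds mediating visibility can be illustrated with a minimal sketch. The post data, field names, and scoring weights here are entirely hypothetical, not any platform’s actual algorithm; real ranking systems are far more complex and opaque:

```python
# Toy engagement-based feed ranking (hypothetical weights).
# Whoever scores highest becomes most visible -- the algorithm,
# not the audience, decides whose content is seen first.

posts = [
    {"author": "A", "likes": 120, "comments": 4,  "watch_secs": 10},
    {"author": "B", "likes": 15,  "comments": 30, "watch_secs": 45},
    {"author": "C", "likes": 500, "comments": 1,  "watch_secs": 2},
]

def engagement_score(post):
    # Assumed trade-off: comments and watch time weighted above likes.
    return post["likes"] + post["comments"] * 5 + post["watch_secs"] * 2

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["author"] for p in feed])  # -> ['C', 'B', 'A']
```

Even in this toy version, a small change to the weights reorders the feed, which is the sense in which such systems mediate visibility, identity, and attention.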

AI Companion Apps

Examples:
Replika, Anima, Soul Machines

Mediation(s):

  • Simulate companionship and social bonding, standing in for friends, partners, or confidants

Our Cyberpsychological Concerns:

  • Emotional dependency
  • Diminished interest in human connection
  • Confusion around relational authenticity

Voice Assistants & Social Interactive AI

Examples:
Alexa skills that play games with groups, conversational AI in cars, social storytelling features

Mediation(s):

  • When these tools serve a social, interactive function, they become mediators, especially when they facilitate group interaction through voice-controlled apps

Our Cyberpsychological Concerns:

  • Anthropomorphizing machines
  • Shifts in social habits around communication
  • Loss of interpersonal skill-building

AI-Driven Dating Tools

Examples:
AI-generated profiles, match suggestions, dating chatbots

Mediation(s):

  • Mediate romantic and emotional connection, often shaping outcomes algorithmically

Our Cyberpsychological Concerns:

  • Commodification of intimacy
  • Inauthentic presentation
  • Emotional risk or deception

Emotion-Sensing AI

Examples:
Sentiment analysis in Zoom, customer service emotion detection, classroom engagement tools

Mediation(s):

  • Modify or influence communication based on emotional cues or predicted emotional states

Our Cyberpsychological Concerns:

  • Performance anxiety
  • Emotional surveillance
  • Misreading or misclassification of genuine emotion
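The misclassification concern above can be made concrete with a toy keyword-based emotion detector. The word lists and function are hypothetical, far simpler than production sentiment models, but the failure mode they expose (surface cues diverging from genuine emotion) applies to real systems too:

```python
# Toy keyword-based emotion detector (hypothetical word lists).
# Illustrates how surface-level cues can misread genuine emotion.

POSITIVE = {"great", "love", "happy", "wonderful"}
NEGATIVE = {"sad", "angry", "terrible", "hate"}

def detect_emotion(text):
    words = set(text.lower().replace(".", "").replace(",", "").split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Sarcasm defeats surface cues: the speaker is frustrated,
# but the keyword match reads the sentence as upbeat.
print(detect_emotion("Oh great, another meeting. I just love Mondays."))
# -> "positive"
```

When a system like this drives decisions (flagging a “disengaged” student, scoring a “hostile” customer), the misread emotion, not the real one, is what shapes the interaction.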

Virtual Influencers

Examples:
Lil Miquela, AI-generated YouTube or Instagram personalities

Mediation(s):

  • Facilitate parasocial relationships, where audiences emotionally engage with non-human agents as if they were real

Our Cyberpsychological Concerns:

  • Erosion of media literacy
  • Confusion about authenticity
  • Unrealistic expectations for self-image or behavior

Support our advocacy & literacy outreach