While emergent AI systems are rapidly evolving and still being defined, they carry unpredictable social and psychological effects.

CPI investigates how emergent AI systems challenge our definitions of cognition, labor, and creativity. We also explore their ethical, regulatory, and psychological impacts across workplaces and everyday spaces.

Unpredictable, evolving AI-driven systems, like the emergent AI examples shared below, can deeply affect cognition, behavior, identity, and psychological safety:

Generative AI tools

Examples:
GPT-4o, Sora, DALL·E, Suno, GitHub Copilot

Why they’re emergent AI:

  • Constantly evolving capabilities
  • No standardized norms for authorship, bias control, or output verification

Our cyberpsychological concerns:

  • Confusion between human and machine authorship (blurring creative identity)
  • Reduced critical thinking or creativity (over-reliance on AI outputs)
  • Emotional manipulation via personalized content synthesis (e.g., hyper-targeted messaging)
  • Emotional over-identification with AI agents

Predictive policing & surveillance AI

Examples:
ShotSpotter, PredPol, facial recognition for behavioral prediction

Why they’re emergent AI:

  • Rely on opaque, evolving machine learning algorithms, often deployed in high-stakes, real-world environments with little oversight

Our cyberpsychological concerns:

  • Erosion of trust in fairness and social systems
  • Psychological harm to communities subjected to over-surveillance
  • Reinforcement of bias with a veneer of neutrality

Synthetic media & deepfake tools

Examples:
Synthesia, HeyGen, DeepFaceLab

Why they’re emergent AI:

  • Use advanced AI to simulate human likeness with increasing realism, outpacing legal and perceptual safeguards

Our cyberpsychological concerns:

  • Destabilization of what’s “real” in media (epistemic confusion)
  • Emotional exploitation through fake personas or misinformation
  • Loss of media literacy and trust in authentic sources

Examples:
Alexa skills that play games with groups, conversational AI in cars, social storytelling features

Why they’re emergent AI:

  • Use unproven psychometric algorithms to make real-time decisions about employment and human behavior

Our cyberpsychological concerns:

  • Anxiety, self-monitoring, and loss of workplace psychological safety
  • Emotional manipulation based on AI interpretation of affect or facial movement
  • Erosion of human empathy in hiring decisions

Adaptive learning platforms

Examples:
Squirrel AI, Century Tech, Khanmigo

Why they’re emergent AI:

  • These platforms use real-time AI to personalize education paths, content, and engagement strategies based on learning behavior.

Our cyberpsychological concerns:

  • Altered learning cognition (e.g., over-personalized scaffolding may reduce mental resilience)
  • Risk of psychological profiling in educational contexts
  • Dependency on machine guidance over internal motivation

Emotion recognition AI

Examples:
Affectiva, RealEyes, Zoom mood trackers, emotion AI in cars or classrooms

Why they’re emergent AI:

  • These tools interpret human emotion from facial expression, tone of voice, and behavior, yet lack nuance and contextual sensitivity.

Our cyberpsychological concerns:

  • Loss of emotional privacy and authenticity
  • Increased pressure to “perform” emotionally in AI-monitored spaces
  • Misinterpretation of emotions leading to unfair treatment or profiling

AI decision-making in health & finance

Examples:
Babylon Health, AI triage bots, AI financial advisors (e.g., robo-investors)

Why they’re emergent AI:

  • AI is used to make or inform high-stakes personal decisions with limited transparency and often without full human oversight.

Our cyberpsychological concerns:

  • Loss of patient/client trust in human-centered care
  • Fear or confusion over decisions made “by algorithms”
  • Risk of learned helplessness or disengagement in personal decision-making

Support our advocacy & literacy outreach