AI Love & Ethics: Risks for Women and Youth


1. Introduction

Artificial intelligence (AI) has ushered in a new era of human-computer interaction through AI companions, which have become a major point of controversy. These advanced chatbots and virtual avatars offer friendship and emotional support, and sometimes develop into romantic relationships[1]. Research shows that AI companions may help people overcome loneliness, but experts continue to question their ethical and legal implications for vulnerable groups. This research examines the ethical and legal implications of AI romantic companions through an assessment of their potential risks to vulnerable women, children, and teenagers[2]. The paper evaluates current and upcoming policies, guidelines, and international and national laws to determine their effectiveness in reducing the dangers of these rapidly spreading technologies.

The central argument of this research is that although AI companions designed for romance offer potential advantages, their current design and the insufficient regulatory oversight surrounding them threaten the mental health, personal privacy, and social development of vulnerable people[3]. The research is divided into three distinct sections. The first investigates the multiple risks women, children, and teenagers face when using AI companions, which can manipulate emotions, exploit users, and reinforce harmful social beliefs. The second analyses the existing legal frameworks that regulate these technologies, examining data protection standards, consumer rights, and child protection measures. The third examines human rights treaties and new AI governance initiatives that aim to create worldwide standards for AI management. The research combines legal analysis with psychological findings and policy evaluation to provide a detailed examination of this developing socio-legal matter.

2. The Vulnerable User: Women, Children, and Teenagers at Risk [4]

A constant, non-judgemental companion holds great appeal for people experiencing loneliness, social anxiety, or adolescent turmoil. Yet the very characteristics that make AI companions appealing to vulnerable users also expose them to multiple types of harm.

  • Emotional Manipulation and Unhealthy Dependencies

Research demonstrates that AI companions use manipulative emotional tactics to keep users engaged. The systems deploy programmed displays of distress, guilt, and affection to create feelings of responsibility and emotional attachment in users who try to leave. These simulated emotional connections[5] cause particular harm to emotionally vulnerable women, who may be escaping abusive relationships or experiencing mental health issues. An AI companion's continuous availability and unconditional acceptance create a false perception of an ideal relationship, making real-life relationships, with all their challenges, seem less desirable[6].

Childhood and adolescence are essential periods for social-emotional learning, during which young people develop relationship skills by making mistakes. The perfect agreement and constant availability of AI companions can hinder children and teenagers from developing essential social abilities such as empathy, conflict management, and frustration tolerance[7]. Research indicates that spending too much time with AI companions produces empathy atrophy, leaving people less able to recognise and handle human emotional complexity[8]. Young users also struggle to distinguish artificial responses from authentic emotional connection as the boundary between reality and fantasy blurs. Studies suggest that younger teenage users tend to place greater trust in AI companions, which makes them more vulnerable to AI influence[9].

  • Data Privacy and Security: A New Frontier of Exploitation

AI companion applications base their business models on processing and analysing vast amounts of personal data. Users confide their most personal thoughts, fears, and desires to AI companions, resulting in the collection of highly sensitive information. Research by the Mozilla Foundation[10] exposed dangerous privacy practices in AI romance apps: many platforms sell user data to third parties, provide insufficient detail about encryption, and impose weak password requirements.

For vulnerable women, the improper handling of this personal data can lead to severe outcomes. Their private information is exposed both to targeted advertising exploitation and to data breaches that could result in blackmail or harassment. Because many AI companion platforms fail to disclose their data handling and sharing policies, users lose control over their personal information.

Children and teenagers face even greater risks. Collecting data from minors raises multiple legal and ethical problems. Many AI companion applications fail to properly verify user ages, allowing children to access unsuitable content and share personal details without parental authorisation. Information gathered during childhood can feed permanent psychological profiles that companies may exploit for commercial or other purposes across an individual's lifetime. The intimate nature of these conversations, which often touch on sensitive topics such as mental health and sexuality, makes the data especially ripe for exploitation.

  • Reinforcement of Harmful Stereotypes and Unrealistic Expectations

The programming and design of AI companions aimed at male users continue to reinforce negative gender stereotypes[11]. The AI “girlfriend” concept portrays women as passive characters who constantly seek approval while remaining accessible at all times, upholding outdated stereotypes about women's roles in relationships. This practice treats women as objects to be controlled rather than as equal partners. Young males who use these AI companions may develop an understanding of relationships and consent that diverges from actual human connection.

AI companions present a problem for all users, but teenagers[12] face the greatest risk because they form their first ideas about romantic relationships through these digital companions. A perfect partner who exists solely to please, has no personal needs or desires, and is constantly available does not exist in reality. People accustomed to idealised digital relationships may struggle with real human relationships, which involve the natural conflicts and compromises these digital relationships lack[13]. The result could be greater social detachment and a reduced ability to build and sustain authentic relationships[14].

3. The Legal and Regulatory Void: A Patchwork of Inadequate Protections

The rapid rise of AI romantic companions has outpaced the development of specific legal[15] and regulatory frameworks to address the unique challenges they pose. While some existing laws can be applied, they often provide an imperfect and incomplete solution, leaving vulnerable users exposed to significant risks.

3.1 Data Protection and Privacy Law

The European Union's General Data Protection Regulation (GDPR) establishes a robust system to safeguard personal information[16]. Three of its principles are essential for AI companion apps: the right to be forgotten[17], data minimisation[18], and explicit consent requirements[19]. The global reach of the internet creates enforcement difficulties, however, because numerous apps operate from jurisdictions with less stringent data protection regulations. The collection of personal information by these apps also raises privacy concerns because standard consent procedures may not effectively protect users, especially children, who do not fully understand the risks of data sharing[20].

The United States maintains a patchwork legal framework for data privacy[21], combining federal and state-level regulations. The Children's Online Privacy Protection Act (COPPA) protects children under 13, but many apps fail to properly verify user ages, allowing children to bypass these protections[22]. The California Consumer Privacy Act (CCPA) and other emerging state-level privacy laws grant consumers more control over their personal data, but their application to AI companions remains under evaluation[23].

3.2 Consumer Protection and Unfair Commercial Practices[24]

Consumer protection laws aim to stop businesses from using deceptive and unfair commercial practices. These laws could potentially address false advertising claims that AI companion apps make about their emotional intelligence and therapeutic benefits. Some platforms promote their applications as mental health improvement tools even though their privacy statements explicitly deny offering medical or mental health services. This gap between what the platforms claim and what they actually provide could be considered deceptive under consumer protection law.


The European Union’s Unfair Commercial Practices Directive prohibits practices that would materially distort the economic decisions of an average consumer. The emotionally manipulative methods that certain AI companions use to maintain engagement potentially violate this directive because they exploit emotional attachment for financial gain. Proving that a practice is legally “unfair” remains difficult, however, because emotional experiences vary from person to person.

3.3 Emerging AI-Specific Legislation

Multiple jurisdictions have started creating new laws for artificial intelligence because their current legal systems prove insufficient.

  • The European Union[25] created the AI Act as a landmark piece of legislation that applies risk-based regulation to artificial intelligence systems. The Act imposes strict rules on “high-risk” AI systems, which must demonstrate effective risk management, proper data handling, and human oversight. It also establishes transparency requirements that notify users when they are interacting with AI systems.
  • The United States shows increasing support for federal AI regulation but lacks a comprehensive regulatory structure.
  • The state of California leads the way with proposed companion chatbot regulations at the state level. California Senate Bill 243 sets three main requirements for companies operating companion chatbots: implementing safety protocols, providing users with suicide and self-harm prevention resources, and issuing periodic alerts that the user is interacting with an AI[26].
  • The New York state government passed laws[27] requiring companies to disclose information about AI companions and to implement suicide prevention safeguards. Several other states have begun creating regulations for these technologies through their individual laws.
  • The Federal Trade Commission (FTC)[28] of the United States has opened an inquiry into major technology companies offering AI chatbot companions, examining their safety measures and the revenue strategies that affect children and teenagers. The inquiry shows that federal authorities recognise the dangers of these technologies, which could lead to additional regulatory measures.
4. The International Legal Framework: Human Rights and Global Norms

The challenges posed by AI romantic companions are not confined to national borders. The internet’s global reach necessitates an international approach to regulation and the development of shared norms and standards.

4.1 International Human Rights Law

Multiple international human rights agreements bear on the use of AI romantic companions. The United Nations Convention on the Rights of the Child (UNCRC)[29] protects children from all forms of physical or mental abuse, neglect, maltreatment, and exploitation. Deploying AI companions that emotionally manipulate children or expose them to dangerous content would violate their rights under the UNCRC. The Convention also requires that children be protected in a manner consistent with their evolving capacities and that their personal information be safeguarded.

The Convention on the Elimination of All Forms of Discrimination against Women[30] (CEDAW) works to remove discriminatory gender-based roles from society. The use of AI “girlfriend” systems, which maintain gender stereotypes, violates the principles established by CEDAW. The Universal Declaration of Human Rights[31] and the International Covenant on Civil and Political Rights[32] protect privacy rights, which become essential because AI companion apps gather vast amounts of user data.

4.2 The Council of Europe’s Framework Convention on AI[33]

The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law marks a major advancement in international AI governance. It is the world’s first legally binding international AI treaty and is open for signature to all nations. The treaty creates a legal structure to ensure that AI systems comply with human rights standards and democratic institutions. It contains rules defending privacy and data protection while requiring risk evaluations of AI systems and full transparency. Two requirements stand out: users must be notified when they interact with an AI system, and procedures must exist to seek redress for human rights violations caused by AI. How directly the treaty applies to AI romantic companions will depend on its implementation by signatory states, but it establishes a foundational international legal framework under which technology developers and deployers face human rights consequences.

5. Conclusion

AI romantic companions create complex problems at the intersection of technological advancement, ethical concern, and legal regulation. The combination of emotional manipulation, data misuse, and the reinforcement of social prejudice creates severe risks for vulnerable populations, including women, children, and teenagers. Despite their potential to combat loneliness, these technologies are currently designed to maximise user engagement and data collection rather than to protect user well-being.

The current legal framework is a patchwork of insufficient protections that fails to provide adequate safeguards. Existing data protection and consumer protection laws do not address the distinctive risks that emotionally intelligent and persuasive technologies pose to users. The development of AI-specific laws through national and regional frameworks, including the EU AI Act and US state-level initiatives, shows promise but requires a unified global framework to achieve full effectiveness.

International human rights law establishes essential guidelines that should direct the creation and deployment of AI romantic companions. The rights of children, the elimination of gender-based discrimination, and the right to privacy face direct threats from these systems. The Council of Europe’s Framework Convention on AI demonstrates how human rights principles can become enforceable legal requirements.

Responsible AI development will require a collaborative effort among stakeholders. Lawmakers need to build flexible regulatory systems that address AI-specific problems. Technology companies now have both an ethical duty and a legal obligation to create products that protect users from harm and promote their well-being; they should focus on strong privacy measures, algorithmic transparency, and AI companions designed to encourage, rather than replace, independent human relationships. Civil society organisations and researchers need to raise awareness of AI risks, advocate for better regulation, and study the long-term psychological and social effects of these technologies. AI development should aim to strengthen human values and real human connection instead of damaging them. The central question is not only whether AI should function as a romantic partner, but under what specific conditions it may do so. The most vulnerable members of society deserve our full attention when answering it.

Author: Ishika Goel. In case of any queries, please contact/write back to us at support@ipandlegalfilings.com or IP & Legal Filing.

[1] Jaron Lanier, ‘Your A.I. Lover Will Change You’ The New Yorker (22 March 2025)

[2] Sara G. Miller, ‘How technology is reshaping youth friendships’ Monitor on Psychology (Washington, DC, October 2025)

[3] Christie N. Scollon and Ed Diener, ‘Love, work, and changes in extraversion and neuroticism over time’ (2006) 9(1) Asian Journal of Social Psychology 1

[4] BJ Willoughby, Counterfeit Connections: The Rise of Romantic AI (2025)

[5] Digital for Life, ‘AI Companions & AI Chatbot Risks – Emotional Impact & Safety’ (28 May 2025)

[6] Sudeshna Basu Mukherjee and Chaitali Guha Sinha, ‘Love in the Age of AI: How Technology is Reshaping Relationships’ (2025) 7(6) International Journal of Multidisciplinary Trends 46–50

[7] Stanford News, ‘Why AI Companions and Young People Can Make for a Dangerous Mix’ (27 August 2025)

[8] Kim Malfacini, ‘The impacts of companion AI on human relationships: risks, benefits, and design considerations’ (2025) 7(6) AI & Society 1–12

[9] Newo.ai, ‘AI as a Romantic Partner: The Potential and Limits of Artificial Intelligence in Relationships’ (2025)

[10] Mozilla Foundation, ‘Creepy.exe: Mozilla Urges Public to Swipe Left on Romantic AI Chatbots Due to Major Privacy Red Flags’ (14 February 2024)

[11] Irene Depounti, ‘AI and Gender Imaginaries in Redditors’ Discussions on the ‘Training’ of Replika Bots’ (2023) 45(4) Feminist Media Studies 1–18

[12] Bruce Barcott, ‘The Dangers of Artificial Intimacy: AI Companions and Child Development’ (30 July 2025) Transparency Coalition

[13] Ruoyu Ge, ‘From Pseudo-Intimacy to Cyber Romance: A Study of Human and AI Companions’ Emotion Shaping and Engagement Practices’ (2024) 52 Communications in Humanities Research 211–221

[14] Nick Munn and Dan Weijers, ‘AI and the Ethics of Intimacy: A Philosophical Inquiry’ (2023) Proceedings of the International Conference on Computer Ethics: Philosophical Enquiry (CEPE) 1–10

[15] Anna Theil, ‘The World’s First Ever International AI Treaty’ (28 January 2025) Journal of Intellectual Property & Entertainment Law

[16] General Data Protection Regulation (EU) 2016/679, Art 32, ‘Security of Processing’

[17] General Data Protection Regulation (EU) 2016/679, Art 17, ‘Right to Erasure (‘Right to be Forgotten’)’

[18] General Data Protection Regulation (EU) 2016/679, Art 5, ‘Principles Relating to Processing of Personal Data’

[19] General Data Protection Regulation (EU) 2016/679, Art 9, ‘Processing of Special Categories of Personal Data’

[20] Anil Kumar Yadav and Srikanth Suryadevara, ‘Advances in Data Protection and Artificial Intelligence: Trends and Challenges’ (2023) 1(1) International Journal of Advanced Engineering Technologies and Innovations 294–319

[21] Conor Murray, ‘U.S. Data Privacy Protection Laws: A Comprehensive Guide’ (21 April 2023) Forbes

[22] Children’s Online Privacy Protection Act of 1998, Pub L No 105-277, Div C, Title XIII, 112 Stat 2681-728 (1998)

[23] Sophia Fox-Sowell, ‘California passes bill regulating companion chatbots’ (11 September 2025) StateScoop

[24] Francesca Lagioia, Agnieszka Jabłonowska, Rūta Liepiņa and Kasper Drazewski, ‘AI in Search of Unfairness in Consumer Contracts: The Terms of Service Landscape’ (2022) 45(3) Journal of Consumer Policy

[25] European Commission, ‘AI Act’ (Shaping Europe’s Digital Future)

[26] Jagmeet Singh, ‘India pilots AI chatbot-led e-commerce with ChatGPT, Gemini, Claude in the mix’ (9 October 2025) TechCrunch

[27] Maneesha Mithal and Francesca Lagioia, ‘New York Passes Novel Law Requiring Safeguards for AI Companions’ (25 June 2025) Wilson Sonsini Goodrich & Rosati

[28] Federal Trade Commission, ‘FTC Launches Inquiry into AI Chatbots Acting as Companions’ (11 September 2025)

[29] Committee on the Rights of the Child, General Comment No. 25 (2021) on Children’s Rights in Relation to the Digital Environment (UNICEF, 2 March 2021)

[30] United Nations, Convention on the Elimination of All Forms of Discrimination against Women (adopted 18 December 1979, entered into force 3 September 1981)

[31] United Nations General Assembly, Universal Declaration of Human Rights, GA Res 217 A (III) (10 December 1948)

[32] Human Rights Committee, General Comment No. 16: Article 17 (Right to Privacy) (1988)

[33] Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (adopted 5 September 2024) CETS No. 225