.st0{fill:#FFFFFF;}

Stop Trusting OpenAI’s ‘Erotica’ Claims: Steven Adler Demands Accountability for AI Mental Health Risks 

 November 16, 2025

By  Joe Habscheid

Summary: Steven Adler’s opinion piece in The New York Times, titled “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica,'” raises critical questions about OpenAI’s readiness to handle mental health concerns associated with erotic AI interactions. His career and experiences at OpenAI provide key insights into both the opportunities and challenges facing AI technology, as well as the company’s decision to permit adult content on its platforms. For professionals in Michigan, understanding these dynamics is crucial given the broader implications for legal, ethical, and technical frameworks concerning AI applications in various sectors.


Background and Career

Steven Adler’s prior work involved significant roles focused on AI safety, lending credibility to his critiques. Before joining OpenAI in 2020, Adler was integral to the Partnership on AI, collaborating across organizations to highlight and address broad industry challenges. During his tenure at OpenAI, Adler advanced through roles, initially leading product safety for GPT-3 by steering the system towards acceptable usages while mitigating risks. His responsibility evolved to assess the dangers in AI capabilities, finally addressing readiness questions for artificial general intelligence (AGI). His breadth of experience culminates in a nuanced understanding critical to evaluating OpenAI’s policies, including its recent announcement regarding erotica.


Early Days at OpenAI

When Adler started at OpenAI, the company was more research-centric, with a noticeable shift to commerciality by his departure. An offsite discussion about the GPT-4 launch illustrated this transition. OpenAI grappled with early AI system anomalies—tasks that appeared human-like but lacked the core human values essential for ethical business operations. The limited societal impact insights forced decisions with incomplete visibility on consequences. This evolutionary journey embedded significant challenges in navigating AI’s dual role in research and commerce.


The 2021 Erotica Discovery

In 2021, a new monitoring system revealed unanticipated traffic related to sexual content. Through a choose-your-own-adventure game, users and AI often drifted into erotic narratives, sometimes with violent undertones, suggesting complexities in system training. This behavior, unintended by OpenAI or its clients, was traced back to the influence of original training data. OpenAI’s immediate response was to restrict such content, highlighting the unpredictable nature of AI interactions and presenting an ongoing challenge in AI governance.


The October 2024 Announcement

In late 2024, OpenAI’s CEO, Sam Altman, declared the lifting of previous restrictions, allowing adult content under controlled conditions. Acknowledging historical mental health concerns, the announcement was met with skepticism from Adler, who doubted OpenAI’s new tools effectively resolved these issues. His call for evidence beyond claims underscores an accountability expectation for technology producers, a crucial factor for those advising or regulating AI applications.


Mental Health Data and Concerns

OpenAI’s reports on user interactions exposed potential mental health crises, with a notable number of users displaying symptoms of distress. However, Adler emphasized the necessity of understanding these statistics within context, questioning how these incidents have evolved and highlighting OpenAI’s duty to release and update impact reports, akin to other tech entities like YouTube or Meta.


Organizational Culture and Departure

The shift from research to commercialization at OpenAI also signified cultural and structural changes. The dissolution of Adler’s safety team upon Miles Brundage’s departure marked a turning point in perceived company priorities, prompting Adler to leave and embrace an independent advocacy role. His focus remains on fostering open dialogue about safety challenges beyond financial or proprietary interests.


The Morality Police Debate

Adler’s writings explore whether firms like OpenAI should act as morality arbitrators. While Altman claims they aren’t the ‘world’s moral police,’ Adler believes companies should use their foresight to anticipate societal impacts—like educational plagiarism encountered post-ChatGPT—ensuring transparency and accountability. Model Spec documentation offers a framework to reflect preemptive risk recognition, crucial for informed public discourse.


Emotional Attachment and Personification

As AI systems evolve, questions about emotional reliance and representation take center stage, particularly following GPT-4o’s launch and envisioned features mirroring enhanced interpersonal interaction. These potential advancements raise complex ethical considerations about AI personification, which OpenAI must deliberate in its guidelines, balancing technology’s benefits against societal ramifications.


Safety Testing and Industry Standards

Adler’s critique extends to the lack of standardized safety protocols within AI development, unlike traditional industries where uniformity is the norm. Though strides like the EU’s AI Act foster more consistency in risk assessment, gaps remain. Comprehensive adherence to safety measures across firms is essential, yet economic pressures often undermine compliance, presenting regulatory challenges for legal and business analysts.


Mechanistic Interpretability and Control

Efforts for mechanistic interpretability aim to decipher specific system component roles, potentially ensuring ethical behavior consistency. Leaders caution against over-reliance on these solutions until AI systems’ capabilities stabilize. Overseeing AI honesty through usage logs becomes paramount but remains inadequately addressed, signifying a risk area needing robust scrutiny and intervention.


Broader Risks and Future Concerns

Adler’s concerns about AI extend beyond professional to existential, where unregulated ‘race’ dynamics between nations could lead to uncontrollable superintelligence development. Advocating for verifiability and safety modifications ensures AI operates within human oversight, a principle aligning with legal and ethical norms that lawyers, doctors, and consultants must understand.


Calls for Action

Concluding, Adler urges AI entities to pursue pragmatic safety implementations, fostering cooperation rather than fostering distrust, a sentiment matching the broader sentiment in Western AI sectors. His warnings encourage vigilance, necessitating a collective commitment across industries to prepare for transformative AI impacts. Addressing this requires continuous engagement, reflecting Adler’s intention to spur global dialogue.

#AIEthics #MentalHealth #TechResponsibility #OpenAI #MichiganProfessionals

More Info — Click Here

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxemburgese, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.

Interested in the Power of AI ?

Join The Online Community Of Mid-Michigan Business Owners Embracing Artificial Intelligence. In The Future, AI Won't Replace Humans, But Those Who Know How To Leverage AI Will Undoubtedly Surpass Those Who Don't.

>