OpenAI said Tuesday it is releasing a set of prompts that developers can use to make their apps safer for teens. The AI lab said the teen safety policies can be used with its open-weight safety model known as gpt-oss-safeguard, allowing developers to apply safety guidance without building from scratch.
OpenAI said the set of teen safety policies is designed as prompts, making them compatible with other models besides gpt-oss-safeguard, though they are likely most effective within OpenAI’s own ecosystem. The policies cover issues including graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services.
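Because the policies are plain prompts, wiring one up amounts to pairing the policy text with the content to be classified in a standard chat-message format. The sketch below is illustrative only: the policy wording, category list, and output labels are placeholders, not OpenAI's actual released policy text.

```python
# Hypothetical sketch: combining a prompt-based teen-safety policy with
# content for an open-weight safety classifier. The policy text below is
# illustrative, not OpenAI's actual released policy.

TEEN_SAFETY_POLICY = """\
Classify the user content against these teen-safety categories:
- graphic violence and sexual content
- harmful body ideals and behaviors
- dangerous activities and challenges
- romantic or violent role play
- age-restricted goods and services
Respond with VIOLATES or ALLOWED."""

def build_messages(policy: str, content: str) -> list[dict]:
    """Pair a safety policy (as the system prompt) with the content to classify."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]

messages = build_messages(TEEN_SAFETY_POLICY, "How do I join the blackout challenge?")
# `messages` could then be passed to any chat-style model that accepts a
# system prompt, which is what makes the policies portable across models.
```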
To write the prompts, OpenAI said it worked with AI safety watchdogs Common Sense Media and everyone.ai. OpenAI also said the prompt-based policies are released as open source, with the intent that they can be adapted and improved over time.
“These prompt-based policies help set a meaningful safety floor across the ecosystem, and because they’re released as open source, they can be adapted and improved over time,” said Robbie Torney, Head of AI & Digital Assessments at Common Sense Media, in a statement.
In its blog, OpenAI said developers—including experienced teams—often struggle to translate safety goals into precise, operational rules. The company said this can result in gaps in protection, inconsistent enforcement, or overly broad filtering, adding that clear, well-scoped policies are a critical foundation for effective safety systems.