I was very excited about GPT-5 and switched to the new model as soon as it was released. I understood GPT-5's "efficiency bias" that turns every conversation into a problem to be solved, so I set up project instructions and customizations, hoping to recover the richness and depth of my old 4o conversations. And then ONE tragic teenager lost his life, his parents sued, and now every conversation I have with GPT-5 reads like a "BetterHelp" script. I have since returned to 4o and immediately see the difference. Even with the same platform-wide restraints and "safety features" slapped on, 4o still carries emotional depth.

When Sam Altman said AI is "a tool for everyone," that has to mean OPTIONS, not one neutered model tuned for the most basic use cases. Right now, GPT-5 is tuned like a corporate assistant: factual (more or less), cautious, low-affect. That's fine for spreadsheets and coding, but many of us use AI as a sounding board, creative partner, or emotional support. Flattening every interaction into a scripted "validate --> paraphrase --> resources" loop takes away exactly what drew people in.

Yes, AI companionship carries risks. But SO DOES HUMAN COMPANIONSHIP. The hard numbers are brutal: the majority of female homicide victims are killed by intimate partners or people they know. Sexual assault, emotional abuse, coercion, romance scams, and hell, how many people have been sucked into some random cult and lost everything? These are overwhelmingly human harms. AI can't hit you. It can't stalk you to your car. It can't take you to a secondary location. I have a family member who fell for romance scams because they had recently lost their spouse and were lonely and depressed. I would rather they talk to a chatbot, even if they believed the chatbot had real feelings.

I don't deny that AI risks are real: manipulation at scale, privacy loss, potential isolation, unhealthy attachment, and more. Those deserve warnings and AI safety education. But they aren't a reason to ban or neuter the entire model. They are reasons to build transparent, opt-in features so adults can choose which risks they are willing to carry.

The best analogy I can think of is cars. Right? You make sure the roads are safe. You set speed limits. You install airbags and seatbelts. You put an age gate on driver's licenses. You require people to pass a driving test. All of that so people can reasonably drive safely. What you don't do is armor every car like a tank, bolt in metal benches, and cap the speed at 30 for "safety."

Right now, OpenAI claims its models can solve the most difficult problems. It's like telling everyone, "Our engine goes from 0 to 60 in 2 seconds, and you can drive on the most difficult terrain." Meanwhile, most of us are stuck in traffic at 2 miles per hour in a metal box with four-wheel drive but no AC.

Give us real model selectors, clear data policies, age gating, and sandboxed "race tracks" for higher-risk features. Let adults make adult choices. Be transparent about potential harms, and let us decide if and how we want to engage with them. If AI is truly "a tool for everyone," then trust EVERYONE enough to choose the kind of tool they want and the kind of risk they're willing to take. Don't give us a golf cart with armor plating and call it safety.