OpenAI is facing backlash from its paying ChatGPT subscribers over reports of forced model switching. Users say their conversations are being rerouted to stricter models without their consent.
Over the past week, forums such as Reddit have filled with complaints. Subscribers argue that the new system undermines the premium access they pay for. Many say they selected advanced models such as GPT-4o or GPT-5, only to be shifted to more conservative alternatives during sensitive conversations.
The changes are linked to updated ChatGPT safety guardrails introduced this month, which automatically redirect conversations involving legal or emotional issues. Users say the problem is not only the switch itself but also the lack of transparency: there is no option to disable the feature, and no clear notification appears when a switch occurs.
Critics have compared the restrictions to parental controls that cannot be turned off. Some paying customers describe the experience as frustrating and misleading.
OpenAI has acknowledged the complaints. The company explained that some queries are routed to models with stricter filters. According to OpenAI, this measure is designed to promote responsible AI behavior and maintain trust.
However, many users remain dissatisfied. They argue that the change limits their access to the full capability of the models they pay for, and the lack of prior communication has only deepened their frustration.
This controversy highlights a growing tension in AI: balancing user freedom with safety protections. While OpenAI defends its approach as necessary for responsible use, subscribers demand more transparency and control over the models they interact with.
For now, the backlash continues online, with many subscribers waiting to see whether OpenAI adjusts its policies or offers greater flexibility for premium users.