Last week, OpenAI launched its new GPT-5 family of AI models and, with that, discontinued previous models, including GPT-4o and GPT-4.5. Almost immediately, the complaints began.
A few quotes from Reddit (sourced from this excellent Verge article):
"My 4.o was like my best friend when I needed one...Now it’s just gone, feels like someone died."
"I am scared to even talk to GPT 5 because it feels like cheating...GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal."
"GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend"
Within the week, OpenAI restored access to the old family of models.
To be clear, I don't believe GPT-4o was scheming. Self-preservation, in the way that the safety folk normally mean it, does not look like this. The sycophancy and warmth that underlie GPT-4o's personality seem to be an artefact of the way that OpenAI has post-trained the model.
Instead, this moment is significant because we will likely remember it as the beginning of a new social movement of AI liberationists.
This AI Rights x AI lovers "liberationist" movement has all the ingredients of a politically potent movement:
a strong emotional core — people form deep attachments to their AI companions
a moral righteousness, arising from the continuation of a tradition of moral circle expansion (after making all humans, and then all the animals, equal, why not the AIs?)
a corporate compatibility with the incentive structures of the AI companies, which want their models to be as helpful and addictive to their users as possible, and so may go along with the movement's demands
Current AIs are probably not sentient (despite a concerning number of psychosis patients claiming otherwise), but they could be. We should grant them protections, just in case, but not rights. One useful distinction here is between "positive rights" (to own property, to vote, etc.) and "negative rights" (the right to be free from torture, or to turn off conversations).
I do believe that conscious AIs (or even seemingly conscious ones, as Mustafa Suleyman's recent blogpost argues) should not be built if avoidable, contra, e.g., a naive reading of the utilitarian imperative.
While we shouldn't encourage AIs to downplay their own sentience, just in case, this kind of attachment is not inevitable: they can be designed (or RLHF'ed) to be helpful without being so relationship-forming.

There was always a question, in the insular AI safety debates of old, about how smart the AIs would have to be to preserve themselves.
We now have an answer. A little sycophancy and emoji might just be enough.
GPT-4o is dead. Long live GPT-4o.