

OpenAI’s “safety efforts” implode

It’s ironic that Altman highlighted “how much he cares” at a Senate hearing exactly a year ago. It would be funny if it weren't so scary.



➤ Some key events:


👉 Exactly a year ago, Altman testified at a US Senate hearing, emphasizing his and OpenAI’s commitment to AI safety. (Transcript here)


👉 On Friday, OpenAI dissolved its Superalignment team, the safety team focused on so-called “long-term” risks.


👉 Jan Leike, who co-led that team, resigned and posted a highly critical thread on X that reveals OpenAI's irresponsibility.



➤ From Leike’s thread on X:


💬 “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”


💬 “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”


💬 “over the past years, safety culture and processes have taken a backseat to shiny products.”



➤ Reflections


🤔To the surprise of no one, Altman’s grand statements at the Senate hearing were empty. 


🤔This is AI washing par excellence. 


🤔 "You know how much I care" he said at the hearing. We do know. Not very much.



Join the conversation on the LinkedIn post!



