

OpenAI still fails to meet basic responsibility standards

OpenAI released new versions of ChatGPT and DALL-E. They still fail to meet basic responsibility standards. We should call them out on it.


➤Background:


👉I’m reading so many starry-eyed posts about the new versions. If I see that promo video of ChatGPT with the bike one more time… :)


👉The technological advances are impressive.


👉But the lack of progress on basic responsibility standards is underwhelming.


👉The gap between the two is jarring. OpenAI could easily have done a better job. They chose not to.




➤Here are a few examples:



❌Marking AI-generated content


There is currently no reliable way to tell whether a piece of content came from these models. This increases the risks of disinformation, fraud, etc.


OpenAI could have mitigated the risk by embedding invisible watermarks or by recording provenance information in the image’s metadata. They could have used one of several existing standards or created a new one.
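To make the metadata route concrete, here is a minimal sketch in Python using Pillow. The file names and field names are hypothetical, and a real deployment would follow a provenance standard such as C2PA rather than inventing ad-hoc fields:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Open a generated image (hypothetical file name).
image = Image.open("generated.png")

# Attach provenance fields as PNG text chunks.
# The field names are made up for illustration.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model")

image.save("generated_labeled.png", pnginfo=meta)

# Anyone (platforms, journalists, users) can read the label back:
labeled = Image.open("generated_labeled.png")
print(labeled.text.get("ai_generated"))  # -> "true"
```

The caveat: metadata like this is trivially stripped by a screenshot or a re-encode, which is exactly why invisible watermarks embedded in the pixels themselves are the more robust complement.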



❌Transparency


We are still in the dark about the training dataset. What is its composition? How much of it is copyright-protected? How diverse is it?
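None of these questions require exotic tooling to answer. Here is a minimal sketch of the kind of composition report a lab could publish, assuming a per-document manifest; the manifest.jsonl file and its fields are hypothetical:

```python
import json
from collections import Counter

# Hypothetical manifest: one JSON object per training document,
# with curator-declared "license" and "language" fields.
with open("manifest.jsonl") as f:
    records = [json.loads(line) for line in f]

total = len(records)
for field in ("license", "language"):
    counts = Counter(r.get(field, "unknown") for r in records)
    print(f"{field} breakdown:")
    for value, count in counts.most_common():
        print(f"  {value}: {count / total:.1%}")
```

The hard part is curating the metadata in the first place, but that is a resourcing choice, not a technical barrier.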



❌Fairness


We’ve seen many examples of bias in large language models.


For example, a recent Stanford HAI study found that LLMs are particularly unrepresentative of several groups, such as people who regularly attend religious services (link below).


OpenAI has more than enough resources to conduct similar studies and address the concerns they raise. There is no indication that they have.
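For a sense of how such a study works: the Stanford team compared a model’s answers on opinion questions to real survey responses from demographic groups. Here is a minimal sketch of that kind of comparison, using one minus total variation distance as a stand-in score (the study defines its own metric, and the data below is made up):

```python
from collections import Counter

def representativeness(model_answers, group_answers):
    """Score in [0, 1]: how closely the model's answer distribution
    matches a human group's distribution on one question.
    Uses 1 - total variation distance as a simple stand-in."""
    options = set(model_answers) | set(group_answers)
    m, g = Counter(model_answers), Counter(group_answers)
    n_m, n_g = len(model_answers), len(group_answers)
    tvd = 0.5 * sum(abs(m[o] / n_m - g[o] / n_g) for o in options)
    return 1 - tvd

# Made-up data: 100 sampled model answers vs. one group's survey answers.
model = ["agree"] * 80 + ["disagree"] * 20
group = ["agree"] * 50 + ["disagree"] * 50
print(representativeness(model, group))  # -> 0.7
```

A low score for a group (people who regularly attend religious services, in the study’s finding) means the model’s “opinions” systematically diverge from that group’s, and a lab with OpenAI’s resources could run this kind of audit across many groups before release.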



➤The signal I get:


OpenAI just doesn’t care about AI responsibility.


They need to be called out on it.



➤Join the discussion in my LinkedIn post!



Links:


Stanford HAI’s study


OpenAI's Safety page


The new DALL-E version


The new ChatGPT version
