
AI ETHICS RESOURCES

PRI blog: What do investors need to know about AI risks?

A blog post about a workshop on responsible investing that I led at the UN-supported Principles for Responsible Investment (PRI). The post is co-authored with Daram Pandian. You can read the full text here.


➤ AI is booming.


Most companies already employ AI, and investment in AI is on the rise. The vast majority of companies are likely to use AI in the coming years.


➤ AI presents many environmental, social, and governance (ESG) risks.


Examples include exacerbating social inequalities, disrupting democratic processes, and generating high carbon emissions. Responsible AI is part of ESG.


➤ Regulation efforts are picking up steam, and some are already in effect.


➤ Here are three approaches to evaluating the environmental and social risks posed by a particular AI system or company:


1. By application type - assign coarse-grained risk levels based on the application type, following the risk classification presented in the EU AI Act.


2. By responsible AI maturity - evaluate the company's responsible AI practices; companies that develop and deploy AI responsibly are more likely to detect and fix AI ethics problems.


3. Third-party evaluation - for mature companies, consider using third-party auditors with relevant technical, ethical, and legal expertise.




➤ Thank you to Peter Dunbar, CFA, who invited me to lead this workshop, to Eline Sleurink, who helped organize it, and to the workshop's participants for a wonderful discussion.

