
AI ETHICS
RESOURCES

New report: Early-stage AI governance



➤ Download the report below, and see more resources about this framework here.





➤ In the report:



🎁 A method for evaluating and improving AI responsibility, building on Dotan et al. (2024) and the NIST AI Risk Management Framework. 



🎁 Detailed examples of governance evaluations from early-stage projects developed in the AI4Gov master's program, as part of a workshop/competition. Thank you to Marzia Mortati and Martina Sciannamè for hosting me!



🎁 The examples include risk triage, evaluating current governance activities, and improvement plans. 



🎁 Insights arising from the examples, including the top prioritized risks, common governance strengths and weaknesses, and strategic plans for early-stage projects.




➤ Context:



👉 Figuring out how to govern AI responsibly, maximizing its benefits while minimizing its risks, is a barrier for many organizations.



👉 For example, in a recent BCG survey, 52% of executives said that they actively discourage generative AI adoption. The lack of a responsible AI strategy was the second most common reason.



👉 These concerns are appropriate because the stakes for organizations are high: product quality, reputation, client and employee attraction, and compliance all carry both risks and rewards.



👉 The stakes for end users, related communities, and society at large are also high, including mass disinformation, discrimination, privacy violations, and physical and psychological harm, to name just a few.



👉 The report focuses on the earliest stage of the development lifecycle, the ideation phase, demonstrating that AI responsibility is crucial even at this stage.




➤ I'm looking for additional venues for this workshop and pilot partners for the framework. Get in touch if you're interested!



FOR UPDATES

Join my newsletter for tech ethics resources.

I will never use your email for anything else.
