The Proliferation of AI Ethics Principles: What’s Next? (MAIEI article)
In this entry on the Montreal AI Ethics Institute blog, I discuss the proliferation of AI ethics principles. I argue that overviews of existing AI ethics principles help us see that a core set of principles is unlikely to be found and that, even if it were found, adopting it universally runs the risk of exacerbating power imbalances.
Highlights:
►There are currently dozens of AI ethics documents out there, articulating hundreds of AI ethics principles, written by governments, corporations, non-profits, and academics.
►This proliferation of principles presents challenges. For example, should organizations continue to produce new principles, or should they endorse existing ones? If organizations are to endorse existing principles, which ones? And which of the principles should inform regulation?
►Five different studies search for unifying themes in the existing principles. I summarize their results.
►Spoiler: Each finds somewhat different themes.
►Another spoiler: the existing AI ethics principles were written primarily by powerful men with institutional interests in the Global North.
My take:
►I think that it is unlikely that a unique set of AI ethics principles will be found.
►I also think that, even if a unique set were to be found, adopting it universally runs the risk of exacerbating existing power imbalances by subjugating large populations to principles that were formulated by a small elite.
►What's next? Should we seek to create new AI ethics principles that incorporate more perspectives? What if that effort doesn't yield a unique set of principles and only adds to the multiplicity? Is it possible to develop approaches to AI ethics governance that don't rely on general AI ethics principles?
►Read the article here