

Regulation of Generative AI

How will generative AI be regulated? Here is a summary of the rules the EU is working on as part of the EU AI Act. I divided the rules into three categories, relying on a recent Stanford report. All the links are below.


Join the conversation about this list on my LinkedIn page, here.


➤ BACKGROUND


The EU AI Act is the leading bill to regulate AI. It is expected to pass into law in early 2024. While its official jurisdiction is the EU, it is likely to have a global effect similar to GDPR.


As a result of the recent generative AI hype, the European regulators added new proposed rules for what they call "foundation models". This is a broader category that includes generative AI. The draft defines foundation models in the following way:


"Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained". [Amendment 99, 60e]


Another name people use for foundation models is "general purpose AI systems" (e.g., here). Some of the rules are explicitly about generative AI (the ones about generated content), but the rest apply more broadly.



➤ RISK MITIGATION


🛠️Appropriate levels [Article 28b, paragraph 2c] - The model must achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity.


🛠️Data governance [Article 28b, paragraph 2b] - The datasets the model uses must be subject to appropriate data governance, including measures to examine and mitigate biases.


🛠️Adherence to general principles [Article 4a, paragraph 1] - The company must make best efforts to abide by the following principles (there are details on each in the draft itself):


a) Human agency and oversight

b) Technical robustness and safety

c) Privacy and data governance

d) Transparency

e) Diversity, non-discrimination and fairness

f) Social and environmental well-being


🛠️Law-abiding generated content [Article 28b, paragraph 4b] - The company must safeguard against generating illegal content.


🛠️Energy [Article 28b, paragraph 2d] - The company must implement standards for reducing energy consumption.



➤ TRANSPARENCY


👀Registry - Models must be registered in an official public registry, including the following information [Annex VIII, Section C]:


a) Which data sources the company used during the development

b) Capabilities and limitations

c) Foreseeable risks

d) Which risk mitigation measures the company implemented

e) Which risks remain unmitigated and why

f) Computing resources the model used during training (influences carbon and water footprints)

g) How well the model performs relative to industry benchmarks

h) How the company tested and optimized the model

i) In which EU states the model is on the market


👀Training on copyrighted data [Article 28b, paragraph 4c] - If some of the training data is protected under copyright law, the company must provide a public summary of that data.


👀Labeling [Article 52, paragraph b] - Deepfakes (content depicting people appearing to do or say something they didn't, without their consent) must be clearly labeled.

(Thank you to Dr. Till Klein for letting me know that this one was missing from the list!)


👀Transparency [Annex VIII, 60g] - Generative foundation models must make it transparent that the content is generated by an AI system, not by humans.


👀System is designed so users know it's an AI [Article 28b, paragraph 4a] - People must be informed that they are interacting with an AI system (except for some authorized law enforcement models).



➤ DEMONSTRATE COMPLIANCE


🖨️Quality management [Article 28b, paragraph 2f] - The company must establish a quality management system to ensure and document compliance with this law.


🖨️Pre-market compliance [Article 28b, paragraph 1] - The company must ensure compliance with this law before putting the model on the market.


🖨️Downstream documentation [Annex VIII, 60g] - The company should prepare documentation that downstream providers will need.


🖨️Upkeep [Article 28b, paragraph 3] - The company must keep relevant technical documentation for at least 10 years.


------


References:


1. A recent Stanford report curated the parts of the EU AI Act that are relevant to foundation models. The rules I summarized in this post are the ones in that report. Here's the link to it:

2. Here are links to the draft of the updated EU AI Act that contains these rules.


FOR UPDATES

Join my newsletter for tech ethics resources.

I will never use your email for anything else.
