The actual implementation of new EU regulations for AI should follow the common sense we apply to any technology and, specifically, should be checked against a few key questions:
- Is the problem we are addressing currently happening, or is it likely to happen frequently enough to be worth regulating?
- Should the problem happen, are the consequences serious enough to justify the costs of regulatory compliance?
- Is the regulation effectively addressing the problem, or merely the legal liability of some stakeholders?
- Is the AI regulation trying to address a perceived problem that we did not consider worth addressing with previous technologies?
I call this list a “common-sense checklist”.
We can use it to score a given regulation, but it is also a valuable framework for discussing the balance between costs and benefits.
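As a toy illustration, the checklist can even be turned into a rough scoring tool. The sketch below is my own invention: the 0–3 answer scale and the example scores are assumptions for illustration, not part of the Act or any official framework.

```python
# A toy scoring sketch for the "common-sense checklist".
# The 0-3 answer scale and the example assessment below are
# illustrative assumptions, not part of any official framework.

CHECKLIST = [
    "Is the problem happening, or likely at a frequency worth regulating?",
    "Are the consequences serious enough compared to compliance costs?",
    "Does the rule address the problem, or only someone's legal liability?",
    "Would we have considered this worth regulating before AI?",
]

def score_regulation(answers: list[int]) -> float:
    """Average the answers (0 = clear no ... 3 = clear yes) into a 0-1 score."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist question is required")
    return sum(answers) / (3 * len(answers))

# Hypothetical assessment of the deepfake-disclosure rule discussed below.
print(round(score_regulation([3, 3, 3, 2]), 2))  # -> 0.92
```

Of course, the real value lies in the discussion each question triggers, not in the number itself.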
Let’s look at some examples and see how humans have dealt with these questions in common cases.
Is the problem we are addressing with AI regulation currently happening, or is it likely to happen frequently enough to be worth regulating?
Some people like to foresee and prepare for any conceivable threat: a survival instinct, or, as the saying goes, “better safe than sorry”.
However, very few people consider an invasion by alien species from deep space a likely or imminent danger. We do not regulate interactions with these alien species, or the building requirements of underground bunkers, simply because the alien-landing scenario is, empirically, extremely improbable.
Back in the AI domain, developing Artificial General Intelligence (AGI) might create undesired issues. Still, its actual development is yet to come and is unlikely to happen any time soon. As has happened with every technology so far, we should trust humans’ capability to develop solutions to problems as they come along.
Should the problem happen, are the consequences serious enough to justify the costs of regulatory compliance?
So, should we ignore risks whenever the risk event is rare enough?
In modern commercial planes, a failure of the primary actuator circuit (controlling essential parts such as the engines or the wing flaps) is very infrequent. Still, a failure in flight could have catastrophic consequences. In this case, we gladly accept the reasonable cost of adding a redundant circuit, effectively addressing a rare risk with a clear societal benefit.
On the contrary, we usually don’t regulate access to standard kitchen knives, not because they can’t be used to stab somebody (a chef’s knife is 8 inches, or 20 cm, long) but because the huge societal cost and impracticality of regulating them far exceed the benefit.
If you train a generative AI application on 200,000 images, there is a possibility that a few of them are protected by copyright, and therefore that you are “somehow exploiting” the work of the original authors. On the other hand, the cost of checking 200,000 images for copyright is enormous and could lead the AI industry to skip an innovative area entirely. Where is the balance here? I’ll get back to this specific example later.
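To make the order of magnitude concrete, here is a rough back-of-the-envelope estimate; the per-image review time and the hourly rate below are pure assumptions for illustration:

```python
# Back-of-the-envelope cost of manually clearing a training set for copyright.
# All figures are illustrative assumptions, not measured data.
images = 200_000
minutes_per_image = 5        # assumed time to verify the rights of one image
hourly_rate_eur = 30         # assumed cost of a rights-clearance reviewer

hours = images * minutes_per_image / 60
cost_eur = hours * hourly_rate_eur
print(f"{hours:,.0f} hours, ~EUR {cost_eur:,.0f}")
# -> 16,667 hours, ~EUR 500,000
```

Even if automation cut these assumed figures by an order of magnitude, the bill would still be out of reach for most start-ups.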
Is the regulation effectively addressing the problem, or merely the legal liability of some stakeholders?
Soon after ChatGPT was released to the public, the regulatory authority of my country (Italy) banned it because it failed to check, among other things, the user’s age. Eventually, after a few weeks, access was restored by asking users to self-certify that they were older than 13.
If the intent was to prevent kids from using ChatGPT, the solution was clearly NOT effective for the stated goal; it addressed OpenAI’s legal liability instead. It also gave the director of our regulatory agency 15 minutes of fame and exposure.
Let’s not burden AI research and development with useless, window-dressing, hypocritical “solutions” like this.
The EU AI Act contains a rule requiring creators to declare when a video or image has been AI-generated. This requirement is a perfect example of good regulation: it directly addresses the problem of deepfakes with transparency, a serious issue when the main subject of a fake video is somebody we rely upon for information or decision-making (e.g. the president of a country).
It is a good rule because, at the same time, compliance is neither costly nor burdensome: the creator must simply declare that the video was AI-generated, and this declaration is enough to keep viewers from being fooled by the deepfake.
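As a purely hypothetical sketch of how lightweight such compliance can be, a creator could ship a tiny machine-readable disclosure next to the video file. The field names below are invented for illustration; real provenance standards such as C2PA define much richer schemas.

```python
import json

# Hypothetical one-line disclosure attached to a generated video.
# Field names are invented for illustration only.
disclosure = {
    "file": "campaign_clip.mp4",
    "ai_generated": True,
    "generator": "example-model-v1",      # hypothetical model name
    "declared_by": "creator@example.com",
}

# Write a side-car file that platforms or viewers could check.
with open("campaign_clip.disclosure.json", "w") as f:
    json.dump(disclosure, f, indent=2)
```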
Beware – such a rule does NOT prevent the circulation of undeclared deepfake videos, exactly as a speed limit does not prevent somebody from driving much faster. However, the regulation creates the right framework for realistic compliance and practical implementation, and it makes it possible to pursue those who do not comply.
Is the AI regulation trying to address a perceived problem that we did not consider worth addressing with previous technologies?
Wilson Mizner famously said: “If you steal from one author, it’s plagiarism; if you steal from many, it’s research.”
Any product of the human mind is the fruit of continuous improvement over previous insights and ideas, of reworkings and connections between different ideas. Famous fiction writers, such as J.K. Rowling, are often voracious readers: how much of what they read influences their writing? Should they report the novels they read before writing their own works of fiction?
In reality, we stick to common sense. If a writer includes a full quotation from another author, she will put it in quotation marks and give credit. If she writes a novel by copy-pasting entire pages from Ken Follett’s latest bestseller, she will probably be sued.
But if the writer reads dozens of books and rephrases some of the concepts, adding her own insights, nobody asks her to report what she has read in the past. Why should AI be held to a different standard?
The EU AI Act requires any business to provide full visibility into the origin of the data on which their AI models were trained, including what might be covered by copyright. This requirement is exactly like asking an author to document the books she used to learn a subject, or the novels she read for leisure in the past. Copyright infringement should not be assumed because the AI is learning from books or paintings, but rather established when there is evidence of plagiarism in the final result.
Why do we bother questioning regulation implementations?
There are multiple reasons to question the regulatory framework.
The most obvious reason, often mentioned, is that overly burdensome rules may slow down innovation. Start-ups and inventors may shift to less regulated areas that are not necessarily the most beneficial to society (e.g. talent moving from healthcare and medicine to gaming or fintech).
Also, when regulatory requirements pile up or become unrealistic in their reach, the end result is a shallow effort spread across too many hurdles: the “legal window-dressing” that adds zero value, as previously mentioned.
People and companies have only so much energy and so many resources, and it would be far more beneficial to focus them on a few critical, addressable areas that are likely to generate real problems. There is, therefore, an opportunity cost whenever we add a new rule to an already complex regulatory backdrop.
If you want to know more about the EU AI Act, I found the TechRepublic article here very insightful.
Conclusions
In this post, I propose a simple, four-question “Common Sense Checklist” to check whether a regulation’s implementation will add true value to society or drag innovation down with excess burden.
Rules and regulations are fundamental: they define the playing field and ensure that society as a whole benefits from new technologies, that weaker groups are protected from abuse, and that risks are kept to a reasonable minimum.
AI is a technology like any other: we must address the risk/benefit balance of any regulation by adopting the same common sense we use for any other technology, and avoid over-burdening its development.
In particular, we should be aware that:
- It is impossible, or at least impractical, to bring any risk to zero. There will always be risks associated with any technology; we must understand and minimize them with well-designed strategies and, where appropriate, regulations.
- Any regulation’s implementation must balance residual risks and stakeholder interests against compliance costs and societal benefits.
- Avoid regulations whose compliance is impractical, too costly, or too burdensome for any stakeholder. Otherwise, legal workarounds will be developed, or the effort will be abandoned altogether, at the cost of missed innovation for society.
- Especially for developing technologies such as AI, don’t over-engineer regulation beyond what is ordinarily sensible and already done by stakeholders without AI. The requirement to track training data, for instance, is costly and impractical, going well beyond the final goal of preventing plagiarism.