David Haber

Building mission-critical AI with Lakera. Based in Zurich, Switzerland.

Why We Need Concrete Use Cases for Successful AI Regulation

Artificial Intelligence (AI) regulation is currently a hot topic across industries, even more so following the EU’s recent release of its updated AI regulation plans for the Union’s single market. Unfortunately, the conversation has polarized into two “camps”: either “we are already massively over-regulating” or “we should have introduced new AI regulations yesterday”. Neither of these positions addresses the complexity of today’s AI. This article summarizes where we stand with the recent EU proposal, as well as what we have learned are the key elements in translating the proposal into concrete guidance.

On reflection, the truth is that regulations are fundamentally a good thing. They are the reason we no longer have to worry about many aspects of everyday life. We can trust that tap water is safe to drink, that over-the-counter drugs will not harm us, and we have even managed to make flying an order of magnitude safer over the last half century.1 All of these improvements have raised our safety and quality of life. Like other innovations that directly affect people’s lives, AI needs to be carefully regulated for it to be safe, trustworthy, and effective. The real question is how we can regulate AI efficiently – so let’s take a look at that!

European Collaboration on AI Regulations

The EU recently released an update to its coordinated plan for AI, together with the first legal framework.2 Creating human-centered AI that is ethical, trustworthy, world-leading, and innovative is part of the Union’s ambitious plan for the future. The proposal has since been heavily discussed in the press and on social media. We have had a chance to speak with the team that created the proposal and will discuss some of the key points below.

The European Commission proposed new rules and actions which aim to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). Photo: Unsplash/Christian Lue


While this proposal is “horizontal”, meaning it applies across industries and Member States, it is unlikely to be adopted as such. Particularly in traditionally regulated sectors such as healthcare or aviation, the new regulations have to “fit in” with what’s already there. Furthermore, the proposal has already been met with criticism for its unclear definition of AI, the categorization of applications into the proposed risk groups, and its protections and penalties, especially when it comes to surveillance and law enforcement use. So, we expect to see a few refinements – and likely some additions – to the proposal in the coming months and years.

The success of this package will ultimately depend on how it is translated into a practical setup for innovators. The most important goal is to ensure that application developers and startups can keep innovating, so that Europe does not lose ground to less-regulated regions such as China and the US. But that is only one perspective to take! Done right, regulations create an opportunity to foster innovation rather than stifle it. Europe has a chance to create a regulatory edge. Finding the right balance will be difficult – but not impossible!

Collaboration, principles, and concrete guidance

Having previously worked extensively with regulators across various industries, including aviation and healthcare, we have selected three learning points for how the new proposal can be successfully translated into concrete guidance for innovators:

  1. Startups, AI experts, and regulators need to team up
    Collaboration is crucial. Creating lasting success requires co-location, mutual learning, and a joint vision. In particular, internal knowledge building is necessary on all sides. We need to work on reducing language barriers and enhancing project compatibility. It is important that regulators “speak a bit more AI” and that innovators “speak a bit more regulations”.

  2. Industries should focus on adaptive governance
    Disciplines from healthcare to aviation adopted principles-based regulation decades ago, and now we should (and can) do the same for AI. This will allow us to leave behind our reliance on exhaustive rules and unbending guidelines and instead put more stock in broader values, or principles, with which interested parties must comply. We need to work towards final guidelines that are adaptive and at the same time provide concrete suggestions for implementation. That would give AI developers both the space and the efficiency they need to innovate.

  3. Concrete use cases are key
    Most importantly, to create regulations for something as complex as AI, we need many more concrete and future-oriented use cases. While it’s fine to start at a high level, discussions cannot stay abstract. We have to take high-level proposals like the EU’s and work through real, concrete use cases. By combining expertise from all sides in this way, we can test-drive those proposals, identify relevant issues, and translate them into efficient, clear, and adaptive guidance for innovators.

Concrete use cases are key to successful regulations

It is important not to jump blindly into one of the “AI regulation camps”. But if we accept that regulations are necessary and good, we can start thinking about how to create the right environment for efficient regulatory frameworks. Taking this approach will make AI globally more user-focused, ethical, and efficient, and will foster innovation.

Reaching this standard will only be possible if we move away from abstract conversations and towards concrete use cases. So, we need to invest in advancing the available technologies on national and international levels across industries. With purposeful risk financing and a clear focus on innovation, we can create efficient regulatory frameworks around concrete guidance and then generalize them to other use cases.

Do reach out on LinkedIn if you want to discuss anything about AI regulations, the EU’s current plans, or the future of AI development! You might also be interested in my recent article on what makes building safe AI so hard in the first place.


  1. Aviation safety evolution (2019 update) ↩︎

  2. ‘Coordinated Plan on Artificial Intelligence 2021 Review’ and ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)’. For more information and the official press release, see: https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682 and for FAQs, have a look at: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683. ↩︎