
EU AI Act: How AI regulation could affect your startup

Editor’s note: This article has been contributed by guest writer Robin Röhm.

With the EU AI Act soon to be finalised, and countries like the UK and the US recently publishing their national AI development strategies, the topic of regulation is being hotly debated.

When the EU AI Act takes effect, likely later this year, companies and startups building AI models for the EU market will need to prepare for change. Those that fail to comply with the new rules risk substantial fines.

The mood in the European AI community is one of caution. According to recent research by a group of European AI associations, three-quarters of VCs expect the Act to reduce, or significantly reduce, the competitiveness of European AI startups, and half of the AI startups surveyed expect it to slow European AI innovation. Entrepreneurs must therefore get ahead of AI regulation before it affects their company.

Assigning risk

The EU AI Act is based on a risk model, assigning AI applications to one of three risk categories.

The highest of these concerns AI applications and systems bearing “unacceptable risk”. Solutions such as social scoring by governments, the use of subliminal techniques, and most live biometric identification systems used for law enforcement in public spaces will be banned outright.

The “high-risk” category covers applications such as recruitment tools, creditworthiness assessments, and biometric identification in private settings; all of these will be subject to strict monitoring and auditing.

Finally, “low”, “limited”, or “minimal” risk applications, such as spam filters, will be largely unaffected, beyond a requirement to make users aware that they are interacting with a machine rather than a person.

What does this mean for businesses and their AI products, then?

The “unacceptable risk” category is self-explanatory, but the five principles set out in the UK’s AI whitepaper offer a broad view of what’s required for compliance with the two other risk categories outlined in the EU AI Act.

  • Safety, security, and robustness: AI applications should function in a secure, safe, and robust way, with risks carefully managed throughout their use.
  • Transparency and explainability: Any organisation developing and deploying AI should be able to communicate when and how it is used, and should explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI.
  • Fairness: AI should be used in a way that complies with existing laws, such as GDPR, and must not discriminate against individuals or create unfair commercial outcomes.
  • Accountability and governance: Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.
  • Contestability and redress: People need to have clear routes to dispute harmful outcomes of decisions generated by AI.

The importance of data

Both the whitepaper and the EU AI Act are concerned with enforcing the legal, ethical, and safe use of AI and the data on which it runs. As illustrated by the five points above, in most cases the answer to compliance lies in following good data governance and privacy practices.

Businesses must rethink how they use their own data and how they access data from others. A federated data infrastructure, for instance, keeps data where it resides while allowing organisations to work across organisational and geographical boundaries in line with regulatory and compliance requirements. This lets businesses harness the power of AI to its fullest, without fear of being penalised under new and emerging regulations.

This federated infrastructure serves both sides of the data relationship. Data custodians keep their data where it resides and control which computations may run on it, ensuring accountability, safety, governance, traceability, and purpose-based access control. Data consumers, in turn, gain access to the large and diverse data sets, often spanning organisational boundaries, that fair, robust, and secure AI applications depend on.
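To make the pattern concrete, below is a minimal, hypothetical sketch of federated averaging in Python with NumPy. Each simulated site computes a model update on data that never leaves its premises, and only the resulting parameters are shared with a coordinator. The site data, model, and update rule are invented for illustration; this shows the general technique, not any particular platform's implementation.

```python
import numpy as np

# Hypothetical setup: three "sites" each hold private data that never
# leaves their premises. Only model parameters, not raw records, are shared.
rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def make_site_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site_data(n) for n in (100, 250, 80)]

def local_update(w, X, y, lr=0.1):
    """One gradient step on a site's private data; only w is returned."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Federated averaging: the coordinator only ever sees parameter vectors,
# weighted by each site's sample count.
w = np.zeros(2)
for _ in range(50):
    updates = [local_update(w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w = np.average(updates, axis=0, weights=sizes)

print("learned:", w.round(3), "true:", true_w)
```

In a production setting, a platform would layer purpose-based access controls, auditing, and privacy safeguards on top of this basic exchange of parameters.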

Companies need to ensure their AI products are compliant with the new EU regulations before they’re enforced. Otherwise, the European tech ecosystem as a whole could suffer the effects. Therefore, founders must carefully assess how they integrate AI and use data in order to maintain their competitive advantage.

Robin Röhm is co-founder and CEO of Apheris, and a Guest Writer for EU-Startups. Apheris is a platform for powering federated machine learning and analytics. Apheris’s mission is to unlock the power of data in use cases that will have a positive impact on the planet. Robin has founded three startups, worked in financial services and strategy, and has degrees in medicine, philosophy, and mathematics.