With the growing artificial intelligence hype over the past year, led by the explosion of interest in generative AI, it is little surprise that governments around the world are laying out plans to regulate the technology.
While enterprises and AI companies know that regulation is coming – how could they not? – many are woefully underprepared for the complexity of what lies ahead. With regulation about to roll through jurisdictions across the world, it is time to stop talking about what it might look like – and start acting.
The proposed EU AI Act is looming large for companies operating in Europe. This legal framework will significantly bolster the power of EU authorities to monitor the development of AI, with the potential to levy fines on companies that breach the rules. The Act centers on a classification system that evaluates the risk posed by AI developments to the “health and safety or fundamental rights of a person”, according to an analysis by the World Economic Forum.
Although there is a two-year implementation period, the EU AI Act includes fines for non-compliance – a genuine threat for companies that have not yet begun preparing for the new regulation. These fines can total up to €30 million or up to 6% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher; for a business with, say, €1 billion in annual turnover, that is exposure of up to €60 million. These are not sanctions to be taken lightly, which makes the lack of preparedness even more difficult to understand.
Fail to prepare, prepare to fail
Why are so many companies not properly prepared for the coming regulation? In part, the regulators are to blame. Although we have some guidance and, in the case of the EU AI Act, a rough structure, there is still so much that we don’t know. It is still a moving target, making it very difficult for compliance departments to understand the steps they need to take to ensure they are in line with the new rules.
To be sure, regulators face a difficult task: AI development is happening so fast that it is hard to write practical legislation that does not hinder innovation.
There are other issues too, such as a lack of policy experts with a deep understanding of AI, and no current technical products that allow companies to remain compliant while continuing to focus on their core business. Both of those factors might change, but for now, companies do not have the tools they need to comply with AI regulations.
And let’s not pretend any of this is easy. There is a complex web of interconnectedness with existing regulations like GDPR and industry-specific requirements such as Software as a Medical Device (SaMD), all of which must be understood and incorporated into business decisions – none of which makes compliance straightforward. Inevitably, there will be breaches and problems as regulation is rolled out, both in terms of AI-specific regulation and in the way existing, industry-specific guidelines are adapted for AI.
The risks of not complying
Under the EU AI Act, serious breaches will lead to fines of up to €30 million for companies, although regulators are likely to be lenient at first to accommodate teething problems. Still, it is a risk companies cannot ignore.
At larger organizations, there is a danger of stasis taking hold, where the risk of not complying is seen as too high to act and therefore nothing happens. Standing still means competitors can gain market share from individual companies, but there is a wider risk for whole ecosystems if regulation prevents companies from innovating. If businesses are unable to collaborate for fear of breach, they will not be able to access the necessary data to build AI products that have the potential to improve the world.
Finally, there is a broader societal risk if companies are not ready for AI regulation. A recent open-access paper on catastrophic AI risks by researchers from the Center for AI Safety identified four categories of risk.
First is “malicious use”, in which individuals or groups intentionally use AI to cause harm. Second is the “AI race”, in which competitive environments compel actors to deploy unsafe AI or cede control to AI. Third is “organizational risks”, which highlights how human factors and complex systems can increase the chance of catastrophic accidents. Fourth is “rogue AI”, which describes the inherent difficulty of controlling agents far more intelligent than humans.
This might sound terrifying, but it underscores the reason for increasing AI regulation – and the urgency for companies to make sure they are prepared.
What can AI companies do?
Although preparing for the impending AI regulations may seem like a daunting prospect, the good news is that there is still time to get organized and compliant. The key stakeholders in this process – engineers, data custodians, and business owners – need not only to be aware of what is coming and the broader risks around AI security and privacy, but also to be proactive in implementing solutions that enable developers to remain compliant. Below are essential steps that companies can follow:
- Prioritize Data Privacy and Intellectual Property: Machine learning models can inadvertently “memorize” the data they’re trained on, raising significant privacy concerns, especially if sensitive personal information is involved. It’s crucial to identify and mitigate this risk to preserve both data privacy and intellectual property rights (see the first sketch after this list).
- Implement Secure Machine Learning Models: Machine learning models are not immune to malicious attacks that could compromise data security. Companies adopting AI must be vigilant in utilizing only secure, vetted models.
- Establish Robust Data Governance: Data custodians should implement governance frameworks to maintain control over the data sets used in model training. This includes setting guidelines on which algorithms can be deployed on these data sets, ensuring alignment with broader compliance requirements (see the second sketch after this list).
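To make the memorization risk in the first item above more concrete, here is a minimal, hypothetical pre-release check: it compares a model’s average confidence on records it was trained on with its confidence on held-out records, using synthetic scikit-learn data. The 0.10 threshold is an illustrative assumption rather than an established standard, and a large gap is only a rough warning sign, not proof of a privacy problem.

```python
# Minimal sketch: does the model behave very differently on data it has seen?
# A large train/holdout confidence gap is a rough signal of memorization risk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def mean_true_label_confidence(model, X, y):
    """Average probability the model assigns to each record's true label."""
    probs = model.predict_proba(X)
    return float(np.mean(probs[np.arange(len(y)), y]))

gap = (mean_true_label_confidence(model, X_train, y_train)
       - mean_true_label_confidence(model, X_holdout, y_holdout))
print(f"train/holdout confidence gap: {gap:.3f}")

# Hypothetical review threshold: a large gap triggers a closer privacy review,
# e.g. stronger regularization or differentially private training.
if gap > 0.10:
    print("Large gap - review the model for memorization risk before release.")
```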
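For the data-governance item, the second sketch shows one way a governance gate could be expressed in code, assuming a hypothetical internal policy registry that records which algorithms are approved for which data sets and whether a data set contains personal data. The data set name, algorithm names, and rules are all illustrative.

```python
# Minimal sketch of a data-governance gate checked before any training job runs.
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetPolicy:
    contains_personal_data: bool
    allowed_algorithms: frozenset

# Hypothetical policy registry maintained by data custodians.
POLICIES = {
    "claims_2023": DatasetPolicy(
        contains_personal_data=True,
        allowed_algorithms=frozenset({"logistic_regression", "gradient_boosting"}),
    ),
}

def check_training_request(dataset: str, algorithm: str, dp_enabled: bool) -> None:
    """Raise before training starts if the request breaks the declared policy."""
    policy = POLICIES.get(dataset)
    if policy is None:
        raise PermissionError(f"No governance policy registered for '{dataset}'")
    if algorithm not in policy.allowed_algorithms:
        raise PermissionError(f"'{algorithm}' is not approved for '{dataset}'")
    if policy.contains_personal_data and not dp_enabled:
        raise PermissionError("Personal data requires privacy-preserving training")

# Example: rejected because the requested algorithm is not on the approved list.
try:
    check_training_request("claims_2023", "deep_neural_net", dp_enabled=True)
except PermissionError as err:
    print(f"Training job blocked: {err}")
```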
Companies building AI products need to think about mitigating risks in those products around areas such as security, bias, safety, and robustness. They can achieve this by ensuring they train models on representative and granular real-world data, including sensitive data that is currently going unused.
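What “representative” data means will vary by product, but a simple starting point is to compare the make-up of the training set against known reference figures for the population the model is meant to serve. The sketch below does exactly that for a single sensitive attribute; the attribute, shares, and 0.8 threshold are hypothetical placeholders.

```python
# Minimal sketch: flag groups that are strongly under-represented in training data
# relative to assumed reference-population shares.
from collections import Counter

reference_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}    # assumed figures
training_rows = ["18-34"] * 620 + ["35-54"] * 330 + ["55+"] * 50  # toy training set

counts = Counter(training_rows)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    status = "UNDER-REPRESENTED" if observed / expected < 0.8 else "ok"
    print(f"{group:>6}: observed {observed:.2f} vs expected {expected:.2f} -> {status}")
```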
It is also important that they are transparent in their actions by registering models and providing summaries of copyright information, along with quality management and documentation. Doing so supports accountability and lifts standards in the development of AI models.
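How model registration will work in practice depends on the final text of the regulation, but as a rough sketch, the kind of information involved can already be captured today as a structured record per released model. Every field name and value below is an illustrative assumption, not a prescribed format.

```python
# Minimal sketch of a model registration record kept in an internal registry.
import json
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    copyright_summary: str
    known_limitations: list = field(default_factory=list)
    registered_on: str = field(default_factory=lambda: date.today().isoformat())

record = ModelRecord(
    model_name="claims-triage",
    version="1.2.0",
    intended_use="Routing insurance claims to human reviewers; not for automated denial.",
    training_data_summary="Anonymised claims from 2019-2023, EU business units only.",
    copyright_summary="No third-party copyrighted corpora used in training.",
    known_limitations=["Not evaluated on non-EU claim formats"],
)

# In practice this document would be stored in a registry; here it is printed.
print(json.dumps(asdict(record), indent=2))
```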
Ultimately, it comes down to data security. With governed, private, and secure computational access to data, it is possible to build innovative AI products without sacrificing security – while, crucially, remaining compliant with regulation.
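As a deliberately simplified illustration of what “governed computational access” can mean, the sketch below exposes only aggregate answers over a data set and refuses to answer for small groups, so analysts never see raw rows. The data, field names, and minimum group size are hypothetical; a real deployment would add proper authentication, auditing, and privacy tooling.

```python
# Minimal sketch: analysts get aggregates, never raw records, and small groups
# are suppressed so individuals are harder to single out.
from statistics import mean

RAW_RECORDS = [
    {"region": "north", "claim_value": 1200.0},
    {"region": "north", "claim_value": 900.0},
]  # stands in for a much larger data set held inside the governed environment

MIN_GROUP_SIZE = 20  # hypothetical suppression threshold

def average_claim_value(region: str) -> float:
    """Return an aggregate only; refuse to answer for very small groups."""
    values = [r["claim_value"] for r in RAW_RECORDS if r["region"] == region]
    if len(values) < MIN_GROUP_SIZE:
        raise PermissionError("Group too small to release an aggregate safely")
    return mean(values)

try:
    print(average_claim_value("north"))
except PermissionError as err:
    print(f"Query refused: {err}")
```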
Regulation is coming, like it or not. In order to deliver innovation and build safe products, we need to introduce the tools that allow companies to comply with these new rules. That will be beneficial for everyone.