Recently, we demonstrated how AI was born in the EU, but raised in the US. Across the pond, AI has grown up in the absence of regulation, which has led to fundamental problems in the ethical use of technology, from compromising individual rights and privacy to weaponization. Now, the EU is leading the way in creating a framework for ethical AI, which, if implemented, will protect individuals, benefit the common good, and create more investments and opportunities for European startups.
On 18 December of last year, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI-HLEG) released its Draft Ethics Guidelines for Trustworthy AI, calling for an EU-wide human-centric approach to AI based on five principles: “Do Good”, “Do no Harm”, “Preserve Human Agency”, “Be Fair”, and “Operate Transparently”. The document addresses core concerns of AI use, including identification without consent, covert AI systems, citizen mass scoring, and lethal autonomous weapons systems (LAWS).
The unregulated US roll-out of digital technology and AI has allowed tech giants like Google and Facebook to unethically collect massive amounts of private information. Much of that data has subsequently been widely shared with governments as well as private third parties, often without proper authorization. Not only has the US government been incompetent at regulating the development of digital technology and AI, it has also demonstrated that it cannot be trusted to ethically manage its own technology, as evidenced by Edward Snowden's revelations, including NSA spying on German and other EU leaders. While that spying may have been by design, the NSA went on to demonstrate its internal incompetence when US cyber weapons were leaked or pilfered by hackers.
Under these circumstances, the EU had little choice but to move aggressively to regulate the development of digital and other advanced technologies. The GDPR was a necessary first step, and it is only logical for the EU to follow it up with a framework that both regulates AI and calls for its ethical use.
Final guidelines were published in April, and on 26 June AI-HLEG released its Policy and Investment Recommendations for Trustworthy AI, which call for increased research as well as investment in building ethical AI in the EU. The Commission’s vision is based on three pillars: increasing public and private investments in AI, preparing for socio-economic changes, and ensuring an appropriate ethical and legal framework to protect and strengthen European values.
Among the proposal’s 33 recommendations are automating dangerous jobs where human lives are put at risk; following in Estonia’s footsteps by creating accessible, end-to-end digital public services for individuals and businesses at all levels of public administration, accessible regardless of economic class while respecting individual rights and privacy; strengthening research, education, and funding mechanisms within the EU; banning or restricting unjustified personal, physical, or mental tracking or identification by AI; and proposing an international moratorium on the development of lethal autonomous weapons.
The recommendations proposed by AI-HLEG would, if put into practice, create opportunities for startups in building government systems for data collection as well as protection and governance, and support investment in AI-based solutions to address social needs in the wake of pressing challenges such as climate change.
One hard lesson of the US experience is that allowing tech companies to become tech giants results in too much centralization of power. A system of checks and balances with widely dispersed expertise is essential to prevent abuses of power. It is therefore encouraging that the recommendations specifically call for creating “public-private partnerships to foster sectoral AI systems” and “an easy avenue for startups and SMEs to funding and advice”, through InvestEU, the European Innovation Council, Digital Innovation Hubs, and other programmes. The European startup community, and the universities and incubators that nourish it, should become actively involved in AI-HLEG’s follow-on activities to promote the implementation of these recommendations and to guarantee that evolving standards maintain public access and do not become further barriers to entry.