Artificial intelligence and the regulatory landscape
26 March 2020
On 19 February 2020 the European Commission released a white paper on Artificial Intelligence (AI) which sets out the approach the Commission proposes to take as this new technology evolves. This provides an opportunity to reflect on the approach which may be taken in New Zealand and other jurisdictions.
Introduction to AI
AI is defined by the AI Forum of New Zealand as "advanced digital technologies that enable machines to reproduce or surpass abilities that would require intelligence if humans were to perform them. This includes technologies that enable machines to learn and adapt, to sense and interact, to reason and plan, to optimise procedures and parameters, to operate autonomously, to be creative and to extract knowledge from large amounts of data."
The benefit of AI is that it can deliver efficiency and accuracy beyond human capability. But as with all new technologies, there are also disadvantages, ranging from discrimination based on the data input to the risk of death caused by an error in reading input data (such as a driverless car misreading an object on the road and causing an accident).
It is therefore important that this technology and the opportunities it presents are well understood and, where appropriate, regulated, so that benefits can be capitalised on while avoiding or mitigating the most significant risks.
Different types of AI
Though it makes for compelling Friday night viewing, most experts appear to agree that we are still a long way off a Terminator world full of self-realising artificially intelligent robots of our own design. That sort of AI, being AI that can contemplate its own existence, remains theoretical. It is commonly referred to as 'General AI'.
Even though General AI remains in the imagination of Hollywood producers, AI does exist and is rapidly advancing. Indeed, AI is used on a day-to-day basis: whether through Google Maps (street name recognition and business identification) or self-driving cars, AI is already a regular part of modern society.
This more common form of AI is 'Narrow AI' where the AI cannot contemplate its own existence but is able to comprehend various inputs of data, analyse the data, and output a solution or decision without any human involvement or interaction. Often Narrow AI has a pre-defined set of parameters within which this process takes place. This is the technology that most legislative and governing bodies are looking to regulate. However, each jurisdiction struggles, and is likely to continue to struggle, to strike the right balance between risk mitigation and stifling innovation.
The European Union
In June 2018 the European Commission set up the independent High-Level Expert Group on AI to provide guidelines on how AI can achieve trustworthiness.
In light of the guidelines, the Commission has since published its own 'White Paper on Artificial Intelligence – A European approach to excellence and trust' on 19 February 2020.
The Commission's white paper identified that the most pressing risks that need to be addressed regarding AI are the risks to fundamental rights, privacy of data, safety and effective performance, and liability identification. The Commission noted that the best approach to regulation should be risk-based to ensure responses to AI development are proportionate and do not stifle innovation.
Instead of providing proposed regulations at this stage, the Commission has set out legal requirements that any regulatory framework must cover to ensure that AI remains trustworthy and respectful of the values and principles of the European Union.
Ultimately, the requirements set out by the Commission contemplate that existing EU law will still apply to AI, with future changes predominantly required to clarify the application of existing law to specific AI-related scenarios. In summary, the Commission provided the following key requirements for AI:
- Any training of AI must ensure that the AI will not breach any rules or laws on safety, that there will be prohibitions on discrimination, and that all privacy and personal data is protected and regulated
- All records, data sets, and documentation of training methodologies and processes are kept and maintained
- The AI system, including its benefits and weaknesses, should be easily explainable to, and understood by, those who use it
- AI systems must be robust, accurate and resistant to both overt and subtle attacks and have an ability to deal with errors at all stages of the AI system life cycle
- The AI system must allow for human intervention and, in certain circumstances, must only proceed after receiving human approval at any or all stages of the AI system life cycle, depending on the purpose of the AI system
- AI should only be used for remote biometric identification where expressly justified, proportionate and subject to adequate safeguards.
Currently, the European Union does not have any specific legislative instrument or standard to regulate the use and development of AI. However, these requirements are likely to set the stage for future legislation, similar in scope and effect to the General Data Protection Regulation (GDPR) for privacy, suggesting that the European Union may be on the cusp of enacting AI-specific regulatory legislation.
Australia
Australia has been an active participant in the AI regulation discussion, with a number of bodies seeking comment on how best to approach AI regulation:
- The Australian Human Rights Commission published a white paper in 2019 seeking comments on the proposed method of regulation. The paper suggests an independent regulatory body be developed, either from an existing organisation or a new one, called a 'responsible innovation organisation'. This body would provide guidance on how to approach AI and possibly even have some enforcement powers to ensure that AI is used appropriately and in accordance with Australian law and some governing principles
- In April 2019, the Department of Industry, Innovation and Science (in conjunction with Data61, an arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO)), released the AI Ethics Framework and the AI Technology Roadmap setting out Australia's core principles in relation to AI
- In June 2019, Standards Australia released a discussion paper seeking input on how standards can be developed and used to regulate AI.
However, Australia currently has no specific regulatory framework for the development and use of AI, and so relies on existing legislation and standards until new standards are developed. The wealth of discussion papers and the existence of a current set of AI principles indicate that Australia may not be far from AI-specific regulatory instruments.
The United Kingdom
The UK appears to be taking a positive approach to the development of AI, with a focus on encouraging innovation in the sector.
In 2017, an independent review was carried out by Professor Dame Wendy Hall and Jerome Pesenti recommending an increase in education around AI and the development of guidance on how to implement and regulate AI. In May 2019, the AI Sector Deal was published as the UK's national AI strategy, implementing many of the ideas presented by the 2017 independent review.
In February 2020, the Committee on Standards in Public Life published 'Artificial Intelligence and Public Standards', commenting on the role of public standards in the AI sector. According to the Committee, the tools and principles already established in the UK are sufficient to address the risks that come with AI development. It is not a matter of establishing new regulatory bodies and laws, but of clarifying and tweaking current laws and standards so they can be more clearly applied to circumstances involving AI.
The UK government recently established the Centre for Data Ethics and Innovation (CDEI) as a specific statutory body aimed at researching issues of AI and its regulation. The CDEI regularly publishes papers and reports on the status of AI regulation within the UK on its website.
New Zealand
New Zealand does not yet have a formal national strategy dealing with a regulatory approach to AI. However, the New Zealand government already uses algorithms across a number of policy areas, according to the Algorithm Assessment Report released by Statistics NZ in October 2018. Added to this, AI innovation is predicted to add $54 billion to New Zealand's GDP by 2035, according to a paper published by the AI Forum of New Zealand.
The lack of a regulatory framework was raised as a concern by the University of Otago in its 'Government Use of Artificial Intelligence in New Zealand' report, funded by the New Zealand Law Foundation and published in early 2019. The report recommends a number of principles that should guide government agencies in their use and regulation of AI, including clarity of scope, accuracy, control and human input, transparency, fairness, information privacy, oversight, and consultation. These principles reflect the Australian and European Commission positions and indicate a gradual move towards a regulatory framework for New Zealand's approach to AI.
New Zealand is currently partnering with the Centre for the Fourth Industrial Revolution, an international organisation under the World Economic Forum. The partnership aims to develop a roadmap, now underway, to help policy makers facilitate discussion on AI regulation methodology.
It is likely that New Zealand's position on the regulation of AI will change in the near future, with the roadmap set to be published sometime this year. Further, given the economic impact of the technology and the already wide governmental use of algorithms, there is a risk that issues will arise that current laws do not cover. As such, and as noted by the Otago University report and a more recent report from the AI Forum of New Zealand in September 2019, there is a need to develop guidelines that clearly mark the path of AI regulation in New Zealand so that those issues can be addressed.
Conclusion
Currently, most jurisdictions are responding cautiously to the development and advancement of AI. There is a strong desire to mitigate the risks, but an equal desire not to stifle innovation. With possible European Union legislation around the corner and the pace at which AI technology is advancing, it may not be long before jurisdictions settle on their regulatory approaches, and a diversity of approaches may well emerge. Until then, watch this space…