NZ Government Use Of Algorithms

In recent decades the New Zealand government has made openness and transparency key policy objectives regarding its collection, use, and sharing of data.  Since 2011, a framework of principle-led policies has been developed that builds on these core principles, including policies aimed at ensuring that government agencies can responsibly harness the considerable benefits of "big data" and algorithmic analysis, while preserving public trust in government processes and systems.

This update focuses on the guardrails that the government has put in place to regulate its own use of data algorithms in public processes.  It also looks at how these guardrails have evolved in parallel with the explosion of investment in algorithms and AI globally, and the signals of what is coming next.

Declaration on Open and Transparent Government (2011)

The Declaration on Open and Transparent Government established the New Zealand Data and Information Management Principles (NZDIMP) to inform policymakers on how to balance two often competing objectives: maximizing value creation and protecting against unintended harms.  In basic terms, the principles state that government-held data must be:

  • Open, unless there are grounds for refusing public access under the Official Information Act 1982 or other government policy
  • Protected, in the case of confidential or classified information
  • Readily available in a discoverable form (online)
  • Trusted and authoritative, meaning care is taken to ensure accuracy and attribution to authoritative sources
  • Well managed, meaning good practice should be followed for the collection, storage and preservation of the data (eg custodial protection from technological obsolescence)
  • Expected to be free, but in any event reasonably priced so as to avoid any unreasonable barrier to access
  • Provided with the highest-practicable level of interoperability and granularity (including appropriate metadata) to maximise reusability and the multiplier effect of combining datasets.

The sheer amount of data held by government agencies and breadth of the purposes for which that data can be used make central management of the government's data use impractical, and even make effective central oversight difficult.  For this reason, cross-government policies built on NZDIMP have typically preserved the discretion of each agency to determine what its own adherence to NZDIMP looks like. 

While this self-determination has been welcomed by some (especially sophisticated, data-savvy agencies), other agencies have reported feeling like they do not have the resources or understanding to comply with the all-of-government approach and have asked for clearer guidance.

SEUDA (2018)

In 2018, the Government Chief Data Steward (a position held by the Chief Executive of Stats NZ) and the Privacy Commissioner jointly developed the Principles for Safe and Effective Use of Data and Analytics Guidance (SEUDA).  SEUDA comprises six often-overlapping principles and provides further detail about what implementing NZDIMP could look like in practice.  The SEUDA principles direct agencies to:

  • Ensure data use delivers clear, demonstrable public benefits
  • Understand how data was collected and analysed to know whether it is fit for purpose
  • Retain focus on the people behind the data and their privacy rights
  • Document data use, clearly explaining decisions, and consulting with Māori as partners
  • Understand limitations to the data, such as risks of amplifying discriminatory outcomes over time (and feeding learnings about limitations back into the design of data processes)
  • Ensure that analytics are tools only and do not replace human oversight.

Applying SEUDA to use of algorithms (2018 - 2020)

The principles of SEUDA were an important foundation to the development of the New Zealand government's central policy on the use of algorithms by its agencies.  The "Algorithm Charter for Aotearoa New Zealand" (Algorithm Charter) was first published in July 2020 by Stats NZ as a key deliverable of the 2018-2020 Open Government Partnership National Action Plan.

At the time the Algorithm Charter was being developed, most other jurisdictions had yet to enact regulation of their own use of algorithms.  France was the sole identified example where algorithmic transparency had been legislated for, and French agencies had subsequently struggled with how to apply the new legislation to existing algorithms.  Canada was publicly proposing a softer approach by creating non-statutory guidelines, as well as the creation of a national forum to consider issues relating to digital freedoms and the impacts of digitalisation on social cohesion.

New Zealand's Algorithm Charter was developed iteratively, with multiple draft versions being circulated between both government stakeholders and the wider public for consultation.  The resulting document requires signatories to make six commitments.

The Algorithm Charter commitments

When using algorithms to which the Algorithm Charter applies, signatories commit to:

  • Maintaining transparency by publishing plain language information about how the algorithm informs decision-making
  • Embracing tangata whenua partnership and incorporating Māori perspectives into the development and use of algorithms
  • Keeping focus on people
  • Making sure the data inputted is fit for purpose, and understanding limitations and biases
  • Ensuring that privacy, ethics and human rights are safeguarded via peer-review of use cases
  • Retaining human oversight by nominating a responsible point of contact and providing avenues for appealing decisions influenced by the use of algorithms.

However, the Algorithm Charter is noteworthy for four key characteristics of its design:

  • It is voluntary for New Zealand government agencies
  • It does not provide a definition of which algorithms or analytical tools it applies to, instead allowing signatories to "self-assess" whether the commitments should apply for each algorithm
  • No new oversight authority was established to monitor signatories' compliance
  • It was published with the express intention that it would be independently reviewed 12 months later, to assess next steps and further changes.

Voluntary participation

A number of agencies (including some reportedly heavy data users) expressed concerns during stakeholder consultation that they had already adopted policy frameworks that specifically suited the needs of their existing algorithm use.  For these agencies, overlaying the general-purpose commitments of the Algorithm Charter on top of existing practices could result in an additional compliance burden for minimal added value.  This in turn could strain agency resources, cancelling out some of the efficiencies that use of algorithms may have helped the agency to achieve in the first place, and could have a stifling effect on future innovation.

These concerns echoed issues raised during consultation on the Accessibility Charter, launched by the Ministry of Social Development in February 2018.  The Accessibility Charter requires agencies to provide accessible information to the public; however, some agencies noted that the nature and extent of data they held meant that consistent compliance would not be practical.  Rather than water down the Accessibility Charter commitments to a level that would allow for universal application, it was decided that the requirements would be voluntary.

In the face of similar constraints, the Algorithm Charter followed the same blueprint.  At the time of writing, the list of published signatories of the Algorithm Charter includes approximately 30 government agencies.  The first independent review of the Algorithm Charter (published in July 2021) indicated that a number of data-heavy agencies that had raised the initial concerns had not signed up.

Self-determination of scope by agencies

The Algorithm Charter notes that there are "a wide range of advanced analytical tools that can fit under the term ‘algorithm’", but it does not impose a technical definition of the algorithms to which the Algorithm Charter applies.  The rationale is that it is not necessarily important what algorithm is being used, as even rudimentary algorithms could provide significant benefits or cause material harm, depending on the nature of the data or business process for which they are used. 

The question of scope elicited much stakeholder input, and each successive draft of the Algorithm Charter set out a different proposed scope.  Ultimately, the Algorithm Charter requires signatories to apply a risk assessment to each analytical process or tool they use and make their own decision according to a risk matrix.  If the matrix suggests a high risk of unintended consequences or potential significant impacts from use of the tool or process, then the Algorithm Charter requirements must be applied.  If the risk is moderate, it merely "should" be applied, and if the risk is low, it "could" be applied.
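The tiered obligation produced by the risk matrix can be illustrated with a short sketch.  The mapping of risk to "must"/"should"/"could" reflects the Algorithm Charter as described above; the function name, inputs, and scoring labels are hypothetical illustrations and not part of the Charter itself:

```python
# Illustrative sketch only: maps a signatory's self-assessed risk levels
# to the tier of obligation described in the Algorithm Charter's risk
# matrix.  The function and its inputs are hypothetical, not from the
# Charter or the Toolkit.

def charter_obligation(likelihood: str, impact: str) -> str:
    """Return whether the Charter commitments 'must', 'should', or
    'could' apply, given self-assessed likelihood of unintended
    consequences and potential impact ('high'/'moderate'/'low')."""
    # A high risk of unintended consequences OR potential significant
    # impacts places the use in the mandatory tier.
    if likelihood == "high" or impact == "high":
        return "must apply"
    if likelihood == "moderate" or impact == "moderate":
        return "should apply"
    return "could apply"

print(charter_obligation("high", "low"))      # must apply
print(charter_obligation("low", "moderate"))  # should apply
print(charter_obligation("low", "low"))       # could apply
```

In practice each agency makes this call for itself, which is precisely the "self-assessment" design feature discussed below.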

Allowing each agency to determine the scope of the Algorithm Charter sacrifices regulatory rigour, but theoretically allows agencies to steer themselves away from situations where compliance is costly and benefits are minimal.  However, this design feature continues to divide opinion.  Stats NZ acknowledged at the time of publication that this perceived leniency could allow agencies to avoid reasonable scrutiny about harmful algorithm use.  In addition, the results of the first independent review into the operation of the Algorithm Charter noted that there was confusion among some agencies as to how to properly apply the risk matrix.

Oversight and accountability

The Algorithm Assessment Report (undertaken in October 2018 as one of the first steps in developing the Algorithm Charter) suggested that many operational algorithms in use by agencies were already subject to oversight under overlapping policy frameworks, including requirements for established processes to resolve complaints or requests for information about an algorithm.  This formed a key part of the rationale for launching the Algorithm Charter without also establishing a specialised body to monitor compliance and accountability, and also helped limit the overall implementation costs of the Algorithm Charter.

However, it cannot be ignored that concerns about a lack of effective accountability were a key theme of public submissions on the draft of the Algorithm Charter and that these concerns persisted in the results of the first independent review in 2021.  The review suggested a number of potential options to increase accountability in a more tailored way, such as the establishment of a dedicated oversight body, or the development of a single public register of algorithms to which the Algorithm Charter applies.

Results of the first independent review (2020 - 2021)

The initial design of the Algorithm Charter was characterised by a majority of signatories as being at the "less stringent end of the regulatory spectrum".  This perception was a key part of why an independent review was scheduled to take place one year after its adoption to identify further required changes.  This effectively made the initial 12 months from July 2020 to July 2021 a continuation of the Algorithm Charter's development and consultation process.  The year one review was undertaken by Taylor Fry and published in July 2021.

The year one review noted that support among agencies for having the Algorithm Charter remained "almost universal", and notably no agencies reported that the Algorithm Charter had had any "stifling" effect on their use of algorithms, as some had feared.  Most signatories reported they had made progress in terms of conducting stocktakes of the algorithms they had in use and applying the risk matrix to them.  Some agencies also reported they were developing internal policies, establishing review committees, disclosing algorithm information on their websites, and hiring new staff specifically to work on these activities.  New Zealand Police were an early example of an agency that published the results of their algorithm stocktake on their website, including details of the life-cycle management of their listed algorithms.

However, the report highlighted that agencies were implementing the Algorithm Charter in isolation and there was little sharing of information and resources, which could lead to missed opportunities to maximise efficiency, or inconsistent risk tolerances being applied.

Agencies specifically identified the commitment about embracing tangata whenua as partners in algorithm development as an area of difficulty, noting that it introduces a range of complex considerations that require expert input.  Agencies found that capacity within the relevant community of experts was limited, and often the same small group of people were regularly being called on to advise.

In total, the review noted 24 practical considerations for making improvements, with some of the key recommendations being to:

  • Maintain the current approach of not dictating a technical definition for "algorithm", but provide additional guidance (suggesting this could be done by listing examples of what to include and what not to include)
  • Consider developing a more detailed tool for triaging risk (suggesting this could be based on assessment tools under similar New Zealand frameworks, such as impact assessments under the Privacy, Human Rights and Ethics framework, or by looking at the recently developed Canadian Algorithm Impact Assessment)
  • Facilitate a community of practice for agencies to share learnings and best practices
  • Develop an annually updated central register of algorithms to which the Algorithm Charter commitments apply (suggesting this could be administered by the Government Chief Data Steward's office)
  • Consider the creation of a dedicated oversight body.

Initiatives following the review (2021 - present)

The government is taking a phased approach to implementing the recommendations of the review.  The first phase (which was scheduled to complete in June 2024) focused on providing further guidance and establishing best practices for agencies.  The second phase will promote transparency and engagement, while the third phase will introduce stricter oversight and governance measures.

The first phase yielded a number of deliverables that address the first three of the bullet-pointed recommendations above.  The most significant of these deliverables came in December 2023, with the publication of the Algorithm Impact Assessment Toolkit - a standardised two-step process for guiding agencies through the risk and impact assessment of a proposed algorithm use.  This two-step process is for guidance only and is not mandatory for signatories.

The first step involves a screening questionnaire for agencies to complete when designing or planning the use of algorithms.  If the answers to the screening questions suggest the proposed use may be "high risk", the agency is directed to the second step, which is to conduct a more in-depth "Algorithm Impact Assessment" (AIA).  The AIA comprises 40 open-ended questions that are designed to get decision-makers within agencies thinking about risks through the lens of the Algorithm Charter commitments.  There is also an AIA report template that agencies can use to document the findings of their risk assessment.
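The two-step flow described above can be sketched as follows.  The structure (screening questionnaire, then a full AIA only where screening suggests high risk) is from the Toolkit as described here; the function names, screening flags, and wording are hypothetical examples, not actual Toolkit content:

```python
# Illustrative sketch of the Toolkit's two-step process.  All names
# and screening flags below are hypothetical, not taken from the
# actual Algorithm Impact Assessment Toolkit.

def screen(answers: dict) -> bool:
    """Step 1: return True if any screening answer flags the proposed
    algorithm use as potentially high risk."""
    return any(answers.values())

def assess(use_case: str, answers: dict) -> str:
    """Route a proposed use through the two-step process."""
    if not screen(answers):
        return f"{use_case}: screening complete; full AIA not indicated"
    # Step 2: the full AIA works through open-ended questions framed
    # around the Algorithm Charter commitments, documented in a report.
    return f"{use_case}: proceed to full Algorithm Impact Assessment"

flags = {"significantly_affects_individuals": True,
         "uses_sensitive_data": False}
print(assess("eligibility-scoring model", flags))
```

As the update notes, this process is guidance only; nothing obliges a signatory to run it for any particular algorithm.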

The questionnaires were accompanied by the publication of an Algorithm Impact Assessment User Guide (AIA User Guide).  The AIA User Guide is a comprehensive document (spanning almost 80 pages) and comprises guidance on navigating the AIA process, implementing the Algorithm Charter commitments, and finding additional support and guidance resources.  The AIA User Guide covers many of the issues that agencies identified as being confusing or complex in the first review, such as the actions an agency can take to honour its commitment to partner with tangata whenua in relation to its algorithm uses, or how to identify biases and appropriately assess their risk of causing harm.

The AIA User Guide is also a rich source of helpful case studies from both New Zealand and around the world.  A number of these case studies are particularly helpful for understanding risks or capabilities associated with specific types of algorithm or emerging technologies, while other case studies are more regulatory in nature.  The latter underscore the massive regulatory progress that has been made globally when compared with the relatively sparse information that was available when the Algorithm Charter was first conceived in 2018.

The second notable deliverable resulting from the recommendations of the first review has been the establishment of the Algorithm Charter Community of Practice (CoP).  This is a forum that is open to staff across all government agencies (whether signatory agencies or not), with the aim of generating a body of knowledge of "what works" for members to draw on, and to present opportunities for agencies to collaborate on the use of algorithms.  The CoP meets quarterly, but there is also a digital forum where members can share knowledge at any time.

What comes next

In its fourth (and current) National Action Plan for Open Government Partnership, the government sets out its goals for 2024 in relation to its algorithm use commitments.  Phase two, which targets transparency and accountability, was slated to begin in June 2024.  The plan identifies an initial milestone in December 2024 for providing "tools, guidance and other supports" to signatories to help them meet the transparency and accountability objectives of the Algorithm Charter.  The plan does not identify any specific deliverables, but based on the review recommendations some potential phase two deliverables are:

  • Development of a centrally administered register of all regulated algorithms used across government, which signatories may be required to contribute to; or
  • Further tools and guidance for agencies to use, including how agencies can:
    • follow the lead of New Zealand Police and others by publishing information on the algorithms they use on their websites, or even coding of algorithms on repositories like GitHub
    • build effective frameworks (or leverage existing ones) to allow people or organisations affected by the operation of algorithms to challenge or appeal the results of algorithm use (the AIA User Guide briefly touches on this principle, but not in any great detail).

Updates on the regulatory workstreams led by Stats NZ on government use of algorithms discussed in this update will be made available on  It is worth noting that these workstreams are part of a wider eco-system, and work together with other tools and overlapping frameworks, both domestically and globally.  Further information about these existing tools and frameworks can be found on

This article was co-written by Andy Dysart (senior associate) and Renee Stiles (partner).