Algemene Rekenkamer

Netherlands Court of Audit (NCA)
Background

Central government uses algorithms in implementing its policies. Algorithms are sets of rules and instructions that a computer follows automatically when making calculations to solve a problem or answer a question. The challenges here include preventing and detecting cyber attacks such as digital sabotage, espionage and cyber crime.
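The definition above can be made concrete with a minimal sketch. The rules, names and threshold below are invented for illustration and are not taken from any government system; the point is only that an algorithm is a fixed set of instructions a computer follows automatically to answer a question.

```python
# Illustrative only: a minimal rule-based "algorithm" in the report's sense.
# The eligibility rule and all numbers are hypothetical.

def eligible_for_benefit(income: int, household_size: int) -> bool:
    """Apply a fixed rule to decide a yes/no question automatically."""
    income_ceiling = 20_000 + 5_000 * household_size  # hypothetical rule
    return income < income_ceiling

print(eligible_for_benefit(24_000, 1))  # ceiling 25,000 -> True
print(eligible_for_benefit(24_000, 0))  # ceiling 20,000 -> False
```

Once such rules are automated, the same instructions are applied to every case without further human judgement, which is why the report asks who checks the rules themselves.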

Limited understanding

Clear, consistent definitions and quality requirements will foster knowledge sharing, streamline processes and prevent misinterpretations. The officials participating in our brainstorming session provided more detailed information about this need for clear, consistent definitions in central government, and in doing so laid the foundations for a ‘common language’ for algorithms.

Recommendation 1 In order to ensure that the cabinet has a clear understanding of both the extent to which algorithms are used by central government and how they are used, and in order to provide a clear point of reference, we urge the cabinet to publish clear, consistent definitions of algorithms and quality requirements for algorithms.
Anomalies produced

The algorithms supporting risk predictions carry the risk that the assumptions underlying the risk profile are not consistent with the law, or that they produce (undesirable) anomalies due to hidden limitations in the input data. The result may be a form of discrimination or the use of special category personal data. There is also a risk of the recommendation made by the algorithm influencing the official’s decision.
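The mechanism described above can be sketched in a few lines. Everything here is invented for illustration (the fields, records and weights are hypothetical, not taken from any audited system): a variable such as postcode acts as an unintended proxy in the input data, so the risk profile flags citizens for reasons unrelated to their own behaviour.

```python
# Illustrative sketch of how a hidden limitation in input data can skew a
# risk profile. All fields, weights and records are hypothetical.

RECORDS = [
    {"id": 1, "postcode": "1011", "late_filings": 0},
    {"id": 2, "postcode": "1011", "late_filings": 0},
    {"id": 3, "postcode": "9999", "late_filings": 0},
]

# Suppose historical data over-represented postcode 9999, so the weight
# attached to it is high even though the recorded behaviour is identical.
POSTCODE_WEIGHT = {"1011": 0.1, "9999": 0.8}  # hypothetical learned weights

def risk_score(record: dict) -> float:
    return 0.5 * record["late_filings"] + POSTCODE_WEIGHT[record["postcode"]]

for r in RECORDS:
    print(r["id"], risk_score(r))
# Record 3 scores higher purely because of its postcode: an anomaly produced
# by the data, not by the citizen's behaviour.
```

This is the kind of anomaly a human official may not notice when the score arrives as a single number, which is why the recommendation influencing the official's decision is itself listed as a risk.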

Our analysis of the three algorithms shows that not all relevant specialist disciplines are involved in the development of algorithms. While privacy experts, programmers or data specialists are often involved, legal experts and policy advisers tend to be left out. This may result in an algorithm failing to comply with all legal and ethical standards or not furthering the policy objective in question. Equally, in many cases no action is taken to limit ethical risks such as biases in the selected data.

Recommendation 3 Ensure that all relevant disciplines are involved in the development of algorithms
Transparency needed

Transparency about algorithms and control of their operation must become the rule rather than the exception.

Ministries that have outsourced the development and management of algorithms have only limited knowledge of these algorithms. The outsourcing ministry assumes that the supplier is in control and complies with the ITGC and other standards included in our assessment. We found no proof of this: the responsible minister has no information on the quality of the algorithm in question, nor on the documents underlying compliance with the relevant standards, and refers to the supplier instead.

Recommendation 4 Ensure that clear information is produced now and in the future on the operation of IT general controls
Difficult adjustment

Ensure adequate documentation of the terms of reference, organisation, monitoring (e.g. in terms of life cycle management: maintenance and compliance with current legislation) and evaluation of the algorithm, as this makes clear whether the algorithm is and remains fit for purpose. This also enables the algorithm to be adjusted, if necessary. Especially if algorithms are outsourced to or purchased from an outside supplier, it is important to ensure that all arrangements relating to liability are laid down in a contract. Our audit framework contains a number of key requirements that can be used as input for documenting such agreements.

Recommendation 5 Document agreements on the use of algorithms and make effective arrangements for monitoring compliance with these agreements on an ongoing basis
Audit framework needed

Despite the widespread public interest in algorithms, no specific tools for auditing or analysing algorithms have yet been developed. This is why we developed our own audit framework. It incorporates the standards that are currently used to limit the potential risks inherent in the use of algorithms. We link the aspects that are tested, and the audit questions, to these risks. The likelihood of the risks actually materialising in relation to a specific algorithm, and the extent of the resulting damage, depend on whether sophisticated techniques are used; on the source, method of collection and quality of the data used; and on the impact that the algorithm has on private citizens. Our audit framework is intended to assist in the assessment of algorithms and to foster debate on their potential risks. Assessors and auditors will be able to use this audit framework in the future to assess algorithms in a consistent and uniform manner.
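The structure the paragraph describes, linking each risk to the aspect tested and to an audit question, can be sketched as a simple lookup. The entries below are invented examples for illustration; they are not the NCA's actual framework.

```python
# Hypothetical sketch of an audit framework entry: each identified risk is
# linked to the aspect being tested and to an audit question.
# The entries are illustrative, not the NCA's real framework.

AUDIT_FRAMEWORK = [
    {"risk": "bias in input data",
     "aspect": "data quality",
     "question": "How were the input data selected and validated?"},
    {"risk": "non-compliance with legislation",
     "aspect": "governance",
     "question": "Which legal experts reviewed the algorithm's assumptions?"},
]

def questions_for(risk: str) -> list[str]:
    """Look up the audit questions linked to a given risk."""
    return [e["question"] for e in AUDIT_FRAMEWORK if e["risk"] == risk]

print(questions_for("bias in input data"))
```

Keeping the mapping explicit is what allows different assessors to apply the same questions to the same risks, which is the consistency the framework aims for.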

Recommendation 2 Ensure that the audit framework is translated into practical quality requirements for algorithms
Guarantees for citizens

There is no easy way for citizens to obtain information on the privacy guarantees applying to the use of algorithms.

Recommendation 6 Provide citizens with insight into algorithms and explain where and how they can obtain more information about algorithms
Objectives

We wanted to demystify these algorithms by finding out what they actually do and what they don't do. We intended to answer questions such as: How does the government avoid bias when it is using algorithms? Does the government oversee the consequences that the use of algorithms has on private individuals and companies that are affected by government policies?

We analysed the activities and processes for which central government and its associated organisations use algorithms, classified these into categories, and identified the risks involved in the use of algorithms. In addition, we examined how central government and its associated organisations manage the operation and control the quality of algorithms.

The items above were selected and named by the e-Government Subgroup of the EUROSAI IT Working Group on the basis of publicly available reports of the author Supreme Audit Institutions (SAIs). In the same way, the Subgroup prepared the analytical assumptions and headings. All readers are encouraged to consult the original texts by the author SAIs (linked).