Algemene Rekenkamer NCA

Understanding algorithms - 2021

SCALE
  • Virtually all the ministries are either developing or already using algorithmic applications. Some of these involve highly innovative algorithms using artificial intelligence.
  • The government is planning to invest €23.5 million in 2021 in the Dutch AI Coalition, a public-private partnership in artificial intelligence. This money has been earmarked for research into artificial intelligence and the development of applications.
COMPLIANCE FOCUS
  • Strategic Action Plan for Artificial Intelligence
  • IT General Controls (ITGC)
  • General Data Protection Regulation (GDPR)
  • Government Information Security Baseline
PERFORMANCE ASPECT
  • transparency
  • bias and other risks
  • learning effects
  • supplier management

Clear, consistent definitions and quality requirements will foster knowledge sharing, streamline processes and prevent misinterpretations. The officials participating in our brainstorming session provided more detailed information about this need for clear, consistent definitions in central government, and in doing so laid the foundations for a ‘common language’ for algorithms.

The algorithms supporting risk predictions carry a risk that the assumptions underlying the risk profile are not consistent with the law or may produce (undesirable) anomalies due to certain hidden limitations in the input data. The result may be a form of discrimination or the use of special category personal data. There is also a risk of the recommendation made by the algorithm influencing the official’s decision. (...) Our analysis of the three algorithms shows that not all relevant specialist disciplines are involved in the development of algorithms. While privacy experts, programmers or data specialists are often involved, legal experts and policy advisers tend to be left out. This may result in an algorithm failing to comply with all legal and ethical standards or not furthering the policy objective in question. Equally, in many cases no action is taken to limit ethical risks such as biases in the selected data.
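One of the risks named above — hidden limitations in the input data producing undesirable anomalies — can be made concrete with a simple disparity check. The sketch below is our own illustration (the data, function names and the "four-fifths" threshold are not from the report): it compares the rates at which a hypothetical risk model flags two groups.

```python
# Hypothetical illustration (not from the report): how a hidden
# limitation in input data can surface as disparate outcomes.
# We compare the selection rates of a risk model across two groups
# using a disparate-impact ratio as a rough warning indicator.

def selection_rate(flags):
    """Fraction of cases the model flagged as high risk."""
    return sum(flags) / len(flags)

def disparate_impact_ratio(flags_group_a, flags_group_b):
    """Ratio of the lower selection rate to the higher one.
    Values well below 0.8 are a common (informal) warning sign."""
    rate_a = selection_rate(flags_group_a)
    rate_b = selection_rate(flags_group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Invented example data: 1 = flagged as high risk, 0 = not flagged.
group_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # 60% flagged
group_b = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 20% flagged

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.60 ≈ 0.33
```

A ratio this low would not prove discrimination, but it is exactly the kind of signal that legal experts and policy advisers — the disciplines the report finds are often left out — would need to weigh.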

There is no easy way for citizens to obtain information on the privacy guarantees applying to the use of algorithms.

Transparency about algorithms and control of their operation must become the rule rather than the exception. Ministries that have outsourced the development and management of algorithms have only a limited knowledge of these algorithms. The outsourcing ministry assumes that the supplier is in control and complies with the ITGC and other standards included in our assessment. We found no proof of this: the responsible minister does not have any information on the quality of the algorithm in question nor on the documents underlying compliance with the relevant standards, and refers to the supplier instead.

Ensure adequate documentation of the terms of reference, organisation, monitoring (e.g. in terms of life cycle management: maintenance and compliance with current legislation) and evaluation of the algorithm, as this makes clear whether the algorithm is and remains fit for purpose. This also enables the algorithm to be adjusted, if necessary. Especially if algorithms are outsourced or purchased from another (outside) supplier, it is important to ensure that all arrangements relating to liability are laid down in a contract. Our audit framework contains a number of key requirements that can be used as input for documenting such agreements.
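The lifecycle documentation recommended above could, for instance, take the shape of a structured record kept per algorithm. The sketch below is our own illustration of such a record; the field names and example values are invented and are not prescribed by the report or its audit framework.

```python
# Illustrative documentation record for an algorithm in use.
# All field names and example values are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AlgorithmRecord:
    name: str
    purpose: str                                # terms of reference / policy objective
    owner: str                                  # responsible organisational unit
    supplier: Optional[str] = None              # external supplier, if outsourced
    liability_contract: Optional[str] = None    # reference to contractual arrangements
    legal_basis: str = ""                       # applicable legislation (e.g. a GDPR article)
    last_evaluated: str = ""                    # date of last fitness-for-purpose review
    open_findings: List[str] = field(default_factory=list)

record = AlgorithmRecord(
    name="risk-profile-x",                      # invented example
    purpose="prioritise inspections",
    owner="Ministry Y",
    supplier="Vendor Z",
    liability_contract="contract 2021/001",
    legal_basis="GDPR art. 6(1)(e)",
    last_evaluated="2021-01-15",
)
print(record.supplier)  # prints "Vendor Z"
```

A record like this makes the report's point operational: if `supplier` is set but `liability_contract` is empty, the arrangement the report calls for is missing.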

Despite the widespread public interest in algorithms, no specific tools for auditing or analysing algorithms have yet been developed. This is why we developed our own audit framework. It incorporates the standards currently used to limit the potential risks inherent in the use of algorithms. We link the aspects that are tested, and the audit questions, to these risks. Whether the risks actually materialise in relation to a specific algorithm, and the extent of the resulting damage, depend on whether or not sophisticated techniques are used; on the source, method of collection and quality of the data used; and on the impact that the algorithm has on private citizens. Our audit framework is intended to assist in making algorithms transparent and to foster debate on their potential risks. Assessors and auditors will be able to use this audit framework in the future to assess algorithms in a consistent and uniform manner.
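The structure described here — risks linked to the aspects tested and to audit questions — can be pictured as a simple mapping. The entries below paraphrase themes named in this summary; they are our own illustration, not the framework itself.

```python
# Illustrative sketch of the framework's structure: each entry links
# a risk to the aspect tested and to an audit question. The entries
# paraphrase themes from this summary; they are not the actual framework.
audit_framework = {
    "governance": {
        "risk": "outsourced algorithm not under the ministry's control",
        "aspect": "supplier management",
        "question": "Are liability and quality arrangements laid down in a contract?",
    },
    "data_and_model": {
        "risk": "bias in the selected input data",
        "aspect": "bias and other risks",
        "question": "Were legal experts and policy advisers involved in development?",
    },
    "privacy": {
        "risk": "use of special category personal data",
        "aspect": "transparency",
        "question": "Can citizens obtain information on the privacy guarantees?",
    },
}

for perspective, entry in audit_framework.items():
    print(f"{perspective}: {entry['question']}")
```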

Ensure that the audit framework is translated into practical quality requirements for algorithms

The items above were selected and named by the e-Government Subgroup of the EUROSAI IT Working Group on the basis of publicly available reports of the authoring Supreme Audit Institutions (SAIs). In the same way, the Subgroup prepared the analytical assumptions and headings. All readers are encouraged to consult the original texts by the authoring SAIs (linked).