According to the 2021 Final Report of the National Security Commission on Artificial Intelligence, “agencies should institute specific oversight and enforcement practices, including…a mechanism that would allow thorough review of the most sensitive/high-risk AI systems to ensure auditability and compliance with responsible use and fielding requirements…” National Security Commission on Artificial Intelligence, Final Report (Washington, D.C.: Mar. 1, 2021). In addition, according to one forum participant, entities should consider mitigating risks by limiting the scope of the AI system when there is not sufficient confidence that the stated goals and objectives can be achieved.
Entities should regularly reassess the utility of the AI system to confirm that it remains useful. For example, as one forum participant noted, an AI system trained on traffic patterns in 2019 might not be useful in 2020 because of reduced traffic during the COVID-19 pandemic. In assessing utility, entities should also consider the extent to which the AI system is still needed to address the goals and objectives. In addition, changing laws, operational environments, resource levels, or risks could affect the utility of the AI system compared to other alternatives. Therefore, entities should also establish metrics for determining when to retire the system and a process for doing so.
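One way such a retirement metric might work in practice is sketched below. This is a hypothetical illustration, not drawn from the report: it assumes the entity logs a performance score (for example, prediction accuracy) each review period, and it flags the system for retirement review when the score stays more than a set tolerance below its original baseline for several consecutive periods. The function name, threshold values, and traffic-model scenario are all illustrative.

```python
def should_retire(recent_scores, baseline, tolerance=0.10, periods=3):
    """Flag a model for retirement review when its performance metric
    has fallen more than `tolerance` below `baseline` for `periods`
    consecutive review periods. All parameters are illustrative."""
    if len(recent_scores) < periods:
        return False  # not enough history to judge a sustained decline
    window = recent_scores[-periods:]
    return all(score < baseline * (1 - tolerance) for score in window)

# Illustrative scenario: a traffic model trained on 2019 data,
# evaluated monthly in 2020 as pandemic conditions reduce its utility.
baseline_2019 = 0.90
scores_2020 = [0.88, 0.76, 0.71, 0.69]
print(should_retire(scores_2020, baseline_2019))  # → True
```

A single bad period does not trigger the flag; only a sustained decline does, which separates ordinary fluctuation from the kind of lasting environmental change (such as the pandemic traffic shift) that the forum participant described.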