15 U.S. Code § 278h–1 - Standards for artificial intelligence

(a) Mission
The Institute shall—
(1) advance collaborative frameworks, standards, guidelines, and associated methods and techniques for artificial intelligence;
(2) support the development of a risk-mitigation framework for deploying artificial intelligence systems;
(3) support the development of technical standards and guidelines that promote trustworthy artificial intelligence systems; and
(4) support the development of technical standards and guidelines by which to test for bias in artificial intelligence training data and applications.
(b) Supporting activities
The Director of the National Institute of Standards and Technology may—
(1) support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems, which may include—
(A) privacy and security, including for datasets used to train or test artificial intelligence systems and software and hardware used in artificial intelligence systems;
(B) advanced computer chips and hardware designed for artificial intelligence systems;
(C) data management and techniques to increase the usability of data, including strategies to systematically clean, label, and standardize data into forms useful for training artificial intelligence systems and the use of common, open licenses;
(D) safety and robustness of artificial intelligence systems, including assurance, verification, validation, security, control, and the ability for artificial intelligence systems to withstand unexpected inputs and adversarial attacks;
(E) auditing mechanisms and benchmarks for accuracy, transparency, verifiability, and safety assurance for artificial intelligence systems;
(F) applications of machine learning and artificial intelligence systems to improve other scientific fields and engineering;
(G) model documentation, including performance metrics and constraints, measures of fairness, training and testing processes, and results;
(H) system documentation, including connections and dependences within and between systems, and complications that may arise from such connections; and
(I) all other areas deemed by the Director to be critical to the development and deployment of trustworthy artificial intelligence;
(2) produce curated, standardized, representative, high-value, secure, aggregate, and privacy protected data sets for artificial intelligence research, development, and use;
(3) support one or more institutes as described in section 9431(b) of this title for the purpose of advancing measurement science, voluntary consensus standards, and guidelines for trustworthy artificial intelligence systems;
(4) support and strategically engage in the development of voluntary consensus standards, including international standards, through open, transparent, and consensus-based processes; and
(5) enter into and perform such contracts, including cooperative research and development arrangements and grants and cooperative agreements or other transactions, as may be necessary in the conduct of the work of the National Institute of Standards and Technology and on such terms as the Director considers appropriate, in furtherance of the purposes of this division.[1]
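Subparagraph (b)(1)(G) above enumerates the elements of model documentation: performance metrics and constraints, measures of fairness, training and testing processes, and results. As an illustrative sketch only — not part of the statute, and with every field name being an assumption rather than a statutory or NIST-defined term — such documentation is often captured in a structured "model card"-style record:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    """Hypothetical model-documentation record covering the elements
    listed in subsec. (b)(1)(G); field names are illustrative only."""
    model_name: str
    performance_metrics: dict   # e.g. accuracy, F1 on held-out data
    constraints: list           # known limits on intended use
    fairness_measures: dict     # e.g. performance gaps across subgroups
    training_process: str       # summary of how the model was trained
    testing_process: str        # summary of the evaluation protocol
    results: dict               # headline evaluation results

# Hypothetical example record.
card = ModelCard(
    model_name="example-classifier-v1",
    performance_metrics={"accuracy": 0.94, "f1": 0.91},
    constraints=["not validated for medical use"],
    fairness_measures={"subgroup_accuracy_gap": 0.03},
    training_process="supervised training on a labeled example corpus",
    testing_process="held-out test split, stratified by subgroup",
    results={"test_accuracy": 0.92},
)
print(asdict(card)["performance_metrics"]["accuracy"])  # → 0.94
```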
(c) Risk management framework
Not later than 2 years after January 1, 2021, the Director shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for trustworthy artificial intelligence systems. The framework shall—
(1) identify and provide standards, guidelines, best practices, methodologies, procedures and processes for—
(A) developing trustworthy artificial intelligence systems;
(B) assessing the trustworthiness of artificial intelligence systems; and
(C) mitigating risks from artificial intelligence systems;
(2) establish common definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors;
(3) provide case studies of framework implementation;
(4) align with international standards, as appropriate;
(5) incorporate voluntary consensus standards and industry best practices; and
(6) not prescribe or otherwise require the use of specific information or communications technology products or services.
(d) Participation in standard setting organizations
(1) Requirement

The Institute shall participate in the development of standards and specifications for artificial intelligence.

(2) Purpose
The purpose of this participation shall be to ensure—
(A) that standards promote artificial intelligence systems that are trustworthy; and
(B) that standards relating to artificial intelligence reflect the state of technology and are fit-for-purpose and developed in transparent and consensus-based processes that are open to all stakeholders.
(e) Data sharing best practices

Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies, including options for partnership models between government entities, industry, universities, and nonprofits that incentivize each party to share the data they collected.

(f) Best practices for documentation of data sets
Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop best practices for datasets used to train artificial intelligence systems, including—
(1) standards for metadata that describe the properties of datasets, including—
(A) the origins of the data;
(B) the intent behind the creation of the data;
(C) authorized uses of the data;
(D) descriptive characteristics of the data, including what populations are included and excluded from the datasets; and
(E) any other properties as determined by the Director; and
(2) standards for privacy and security of datasets with human characteristics.
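Paragraph (f)(1) above enumerates concrete metadata fields for training datasets: origins, intent, authorized uses, and descriptive characteristics of included and excluded populations. As an illustrative sketch only — the schema and field names below are assumptions, not drawn from the statute or from any published NIST standard — such metadata might be recorded as a structured record:

```python
from dataclasses import dataclass

@dataclass
class DatasetMetadata:
    """Hypothetical metadata record mirroring subsec. (f)(1)(A)-(D);
    field names are illustrative only."""
    origins: str                 # (A) where the data came from
    intent: str                  # (B) why the data was created
    authorized_uses: list        # (C) permitted uses of the data
    populations_included: list   # (D) who is represented
    populations_excluded: list   # (D) who is not represented

# Hypothetical example record.
meta = DatasetMetadata(
    origins="public web text collected in 2020 (hypothetical)",
    intent="benchmarking sentiment classifiers (hypothetical)",
    authorized_uses=["research", "evaluation"],
    populations_included=["English-language social media users"],
    populations_excluded=["non-English speakers"],
)
print(len(meta.authorized_uses))  # → 2
```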
(g) Testbeds

In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 1001 of title 20), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.

(h) Authorization of appropriations
There are authorized to be appropriated to the National Institute of Standards and Technology to carry out this section—
(1) $64,000,000 for fiscal year 2021;
(2) $70,400,000 for fiscal year 2022;
(3) $77,440,000 for fiscal year 2023;
(4) $85,180,000 for fiscal year 2024; and
(5) $93,700,000 for fiscal year 2025.
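The five authorization figures in subsection (h) follow a pattern the statute does not state explicitly: each fiscal year's amount is the prior year's amount increased by 10 percent, rounded to the nearest $10,000. This observation (an inference from the figures, not statutory text) can be checked directly:

```python
# The five authorization figures from subsec. (h), in dollars.
authorized = {
    2021: 64_000_000,
    2022: 70_400_000,
    2023: 77_440_000,
    2024: 85_180_000,
    2025: 93_700_000,
}

# Inferred rule (not stated in the statute): each year is the prior
# year's figure increased by 10 percent, rounded to the nearest $10,000.
projected = {2021: authorized[2021]}
for year in range(2022, 2026):
    projected[year] = round(projected[year - 1] * 1.10 / 10_000) * 10_000

print(projected == authorized)  # → True
```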
(Mar. 3, 1901, ch. 872, § 22A, as added Pub. L. 116–283, div. E, title LIII, § 5301, Jan. 1, 2021, 134 Stat. 4536; amended Pub. L. 117–167, div. B, title II, § 10232(b), Aug. 9, 2022, 136 Stat. 1484.)


[1]  See References in Text note below.
Editorial Notes
References in Text

This division, referred to in subsec. (b)(5), probably means div. E of Pub. L. 116–283, Jan. 1, 2021, 134 Stat. 4523, which is classified principally to chapter 119 of this title.

Amendments

2022—Subsecs. (g), (h). Pub. L. 117–167 added subsec. (g) and redesignated former subsec. (g) as (h).