According to 15 U.S. Code § 9401, artificial intelligence is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The notes accompanying 10 U.S. Code § 2358 define artificial intelligence as:
- “Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
- An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
- An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
- A set of techniques, including machine learning, that is designed to approximate a cognitive task.
- An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.”
In April 2023, the EEOC, DOJ, CFPB, and FTC issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. The statement acknowledged that automated systems may perpetuate bias and discrimination and clarified that each agency’s enforcement authority applies to automated systems.
For more information on artificial intelligence research, see the National Institute of Standards and Technology (NIST) overview on artificial intelligence (nist.gov) and the National Artificial Intelligence Initiative (ai.gov).
[Last updated in November of 2023 by the Wex Definitions Team]