Glossary
DHS says it “provides this inventory of unclassified and non-sensitive AI use cases within DHS in accordance with the Advancing American AI Act (December 2022), Executive Order 13960 Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020) and Office of Management and Budget (OMB) Memorandum M-25-21 Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (April 2025).” Earlier versions of the DHS AI Use Case Inventory (2022–2024) are available from the DHS AI Use Case Inventory Library.
The following defines key terms used in the DHS AI Use Case Inventories. Some terms appear in only one inventory version, as indicated by the version tags (e.g., "2025" or "2024 Rev").
Classification & Impact
- CAIO Determination / Justification 2025
- The Chief AI Officer (CAIO) reviews each use case and determines whether it qualifies as "high-impact" under OMB guidance. The justification field explains the reasoning behind that determination. This replaced the separate rights-impacting and safety-impacting classifications used in the 2024 Inventory July Revision.
- High-Impact AI 2025
- An AI use case that could significantly affect the rights, safety, or civil liberties of individuals. Under the 2025 Inventory schema, use cases are classified as "High-impact," "Presumed high-impact but determined not high-impact," or "Not high-impact."
- Presumed High-Impact 2025
- A use case that falls into a category presumed to be high-impact (such as law enforcement or immigration screening), but which the CAIO has reviewed and determined does not meet the threshold for the high-impact designation, with justification provided.
- Rights-Impacting 2024 Rev
- A classification used in the 2024 Inventory July Revision indicating whether an AI use case could affect individual rights, such as due process, privacy, or equal protection. Replaced by the "high-impact" framework in the 2025 Inventory.
- Safety-Impacting 2024 Rev
- A classification used in the 2024 Inventory July Revision indicating whether an AI use case could affect physical safety, critical infrastructure, or emergency response. Replaced by the "high-impact" framework in the 2025 Inventory.
- AI Classification 2025
- Categorizes the type of AI technology used: Agentic AI (systems that take autonomous actions), Generative AI (systems that produce new content), or other AI/ML approaches.
- Agentic AI
- AI systems capable of taking autonomous actions in the real world with limited human oversight, such as making decisions, executing tasks, or interacting with external systems on behalf of a user or organization.
Lifecycle & Status
- Stage (2025 Inventory)
- Pre-deployment — Under development or testing, not yet in operational use.
- Pilot — Being tested in a limited operational setting.
- Deployed — Fully operational and in active use.
- Retired — No longer in use; decommissioned.
- Stage (2024 Inventory July Revision)
- Initiated — Concept or planning phase.
- Acquisition/Development — Being procured or built.
- Implementation/Assessment — Being deployed and evaluated.
- Operation & Maintenance — Fully operational with ongoing support.
- Retired — No longer in use.
Security & Compliance
- ATO (Authority to Operate)
- A formal authorization granted by a federal agency's authorizing official that permits an IT system to operate. It indicates that the system has undergone a security assessment and that the associated risks have been accepted.
- PIA (Privacy Impact Assessment)
- A formal analysis of how personally identifiable information (PII) is collected, stored, shared, and protected within a system. Required by the E-Government Act of 2002 when a federal system handles PII.
- PII (Personally Identifiable Information)
- Information that can be used to identify an individual, such as name, Social Security number, biometric data, or other distinguishing characteristics.
- SAOP (Senior Agency Official for Privacy) 2024 Rev
- The senior official designated by DHS to oversee privacy compliance and policy. The 2024 Inventory July Revision tracks whether the SAOP has assessed each AI use case for privacy implications.
Procurement & Infrastructure
- Procurement Method
- How the AI capability was acquired. Common values include commercially procured (purchased from a vendor), developed in-house by government staff, developed under contract, or open-source.
- Vendor
- The commercial entity providing the AI technology or service. This field is populated when the AI capability was procured from an external provider rather than built in-house.
- HISP (High Impact Service Provider) 2024 Rev
- A designation by the Office of Management and Budget (OMB Circular A-11, Section 280) for federal programs that deliver high-volume, high-impact public-facing services. The 2024 Inventory July Revision tracks whether each AI use case supports a HISP-designated service.
Oversight & Risk Management
- Impact Assessment
- An evaluation of the potential effects of an AI system on individuals, communities, or operations. May include algorithmic impact assessments, civil rights assessments, or other formal reviews.
- Independent Review
- Whether the AI system has undergone evaluation by a reviewer not involved in its development, such as an inspector general, external auditor, or independent testing body.
- Ongoing Monitoring
- Continuous or periodic evaluation of an AI system's performance, accuracy, bias, and compliance after deployment.
- Appeal Process
- Whether individuals affected by an AI-assisted decision have a mechanism to challenge or seek review of that decision.
- Fail-Safe 2025
- Mechanisms built into an AI system to ensure safe behavior in the event of a malfunction, unexpected input, or system failure.
Other Terms
- Bureau
- The DHS sub-agency or component operating the AI use case, such as CBP (U.S. Customs and Border Protection), ICE (U.S. Immigration and Customs Enforcement), TSA (Transportation Security Administration), CISA (Cybersecurity and Infrastructure Security Agency), or USCIS (U.S. Citizenship and Immigration Services).
- Topic Area
- The functional category of the AI use case, such as law enforcement, cybersecurity, immigration, disaster response, or administrative operations.
- Demographic Variables
- Whether the AI system uses or processes data related to demographic characteristics such as race, ethnicity, gender, or age. Tracking this field helps assess potential bias risks.