AI Should Not Receive Excessive Attention from U.S. Law Enforcement Agencies, Claims Center for Data Innovation
In a recent statement, Hodan Omaar, senior policy analyst at the Center for Data Innovation, urged the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) to enforce laws equally on AI systems and human decision-making. The call comes amid widespread concern over AI-enabled risk assessment tools used to predict the likelihood that an accused person will miss a future court appearance, a prediction that informs bail decisions.
Omaar emphasized that the tools these agencies already possess to address potential bias should be applied equally to human decision-making and automated decision-making, arguing that existing laws are sufficient to cover emerging AI technology.
The FTC, DOJ, CFPB, and EEOC have already committed to enforcing their respective laws and regulations on AI systems. The FTC, for instance, enforces consumer protection laws such as the FTC Act, prohibiting deceptive and unfair practices, including those involving AI-driven products or services. It has taken action against companies using AI tools in deceptive advertising or misleading consumer interactions.
The DOJ enforces civil rights laws that prohibit discrimination in employment and housing, which can apply to AI systems used in automated decision-making if those systems result in discriminatory outcomes. The CFPB oversees fair lending laws and consumer financial protection statutes, which affect AI systems used in credit scoring, loan approval, and other financial services. The EEOC enforces employment discrimination laws such as Title VII, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA), focusing on AI tools used in hiring, promotions, and other employment decisions to ensure they do not perpetuate bias or discrimination.
However, ensuring equal enforcement between human and automated decision-making poses challenges. It requires:
- applying existing legal standards of fairness and non-discrimination equally to AI outputs and human decisions;
- requiring transparency and explainability in AI decision-making;
- mandating robust testing and impact assessments;
- updating data privacy standards to reflect AI's capabilities, especially for sensitive information such as biometric data;
- harmonizing enforcement across agencies; and
- implementing regulatory "sandboxes" or experimental frameworks.
Although enforcement focused on AI systems could divert attention from the root causes of unfairness, bias and discrimination arising from automated systems still need to be addressed. Transparency over algorithms and data alone will not solve the underlying social problems. The FTC, DOJ, CFPB, and EEOC must therefore apply their enforcement tools fairly and consistently when addressing potential AI bias.
- Hodan Omaar, in a recent statement, suggested that existing laws, including those enforced by the FTC, DOJ, CFPB, and EEOC, can be applied equally to human decision-making and automated decision-making in AI systems.
- The FTC enforces consumer protection laws on AI-driven products or services, prohibiting deceptive and unfair practices, while the DOJ enforces civil rights laws that prohibit discrimination, both of which can be applied to AI systems.
- The CFPB oversees fair lending laws and consumer financial protection statutes, which affect AI systems used in credit scoring, loan approval, and other financial services, and the EEOC enforces employment discrimination laws, focusing on AI tools used in hiring, promotions, and other employment decisions.
- Ensuring equal enforcement between human and automated decision-making requires transparency and explainability in AI decision-making, robust testing and impact assessments, updating data privacy standards, harmonizing enforcement across agencies, and implementing regulatory "sandboxes" or experimental frameworks.