Version History: The Algorithmic Transparency in Public Services Act
Merged amendment: "even shorter effective date" (round 1)
Section 1 — Purpose
When a government agency uses an algorithm or AI system to make decisions that affect people's lives — benefits eligibility, criminal sentencing recommendations, child welfare risk scores, hiring for public jobs, loan approvals for government-backed programs — citizens have a right to know how those decisions are being made.
Section 2 — Requirements
Any federal, state, or local government agency that uses an automated decision-making system must:
(a) Publish a plain-language explanation of what the system does, what data it uses, and how it reaches its decisions.
(b) Provide any individual affected by an automated decision with a written explanation of the factors that influenced their specific outcome, upon request.
(c) Conduct and publish an annual bias audit examining whether the system produces disparate outcomes across race, gender, age, disability status, or income level.
(d) Maintain a human appeal process — no person shall receive from a government agency a final adverse decision made solely by an algorithm without the option of human review.
Section 3 — Enforcement
Agencies that fail to comply shall have their authority to use the non-compliant system suspended until compliance is achieved. An independent oversight board, appointed by a bipartisan process, shall review complaints and audit reports.
Section 4 — Effective Date
This act shall take effect 18 months after passage to allow agencies time to inventory existing systems and establish compliance procedures.
Amendment (Round 1): even shorter effective date
replace:
This act shall take effect 18 months after passage to allow agencies time to inventory existing systems and establish compliance procedures.
with:
This act shall take effect 3 months after passage to allow agencies time to inventory existing systems and establish compliance procedures.
Initial version
Section 1 — Purpose
When a government agency uses an algorithm or AI system to make decisions that affect people's lives — benefits eligibility, criminal sentencing recommendations, child welfare risk scores, hiring for public jobs, loan approvals for government-backed programs — citizens have a right to know how those decisions are being made.
Section 2 — Requirements
Any federal, state, or local government agency that uses an automated decision-making system must:
(a) Publish a plain-language explanation of what the system does, what data it uses, and how it reaches its decisions.
(b) Provide any individual affected by an automated decision with a written explanation of the factors that influenced their specific outcome, upon request.
(c) Conduct and publish an annual bias audit examining whether the system produces disparate outcomes across race, gender, age, disability status, or income level.
(d) Maintain a human appeal process — no person shall receive from a government agency a final adverse decision made solely by an algorithm without the option of human review.
Section 3 — Enforcement
Agencies that fail to comply shall have their authority to use the non-compliant system suspended until compliance is achieved. An independent oversight board, appointed by a bipartisan process, shall review complaints and audit reports.
Section 4 — Effective Date
This act shall take effect 18 months after passage to allow agencies time to inventory existing systems and establish compliance procedures.