Draft Legislation
Extends fiduciary obligations to AI systems making decisions about citizens. Requires pre-deployment assessments. Establishes the right to algorithmic explanation. Classifies autonomous AI agents as statutory secondary fiduciaries. Closes the accountability gap that VIDA and PDTA cannot reach.
If a government algorithm decides whether you qualify for a benefit, selects you for audit, or shapes any other decision about your life, you have the right to know how it reached that conclusion in terms a non-expert can understand, and to challenge it. The algorithm is subject to the same fiduciary duties as the human official it replaced.
No government agency may deploy an automated decision system in a rights-affecting context unless it has completed and filed an Algorithmic Deployment Assessment (ADA) at least 60 days before deployment, documenting the system's decision logic, training data, accuracy rates, and disparate impact analysis.
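As a minimal sketch of what the filing requirement implies in practice, the ADA could be modeled as a structured record plus a deadline check. Everything below, including the `AlgorithmicDeploymentAssessment` class, its field names, and `may_deploy`, is an illustrative assumption, not statutory text.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

MIN_LEAD_DAYS = 60  # from the 60-day pre-deployment filing requirement

@dataclass
class AlgorithmicDeploymentAssessment:
    """Illustrative ADA record; field names are assumptions, not statutory terms."""
    system_name: str
    decision_logic_summary: str      # plain-language account of how decisions are reached
    training_data_provenance: str    # sources, collection dates, known gaps
    accuracy_rates: dict             # e.g. {"overall": 0.94, "false_positive_rate": 0.03}
    disparate_impact_findings: str   # results of the required disparate impact analysis
    filed_on: date = field(default_factory=date.today)

def may_deploy(ada: AlgorithmicDeploymentAssessment, deployment_date: date) -> bool:
    """True only if the ADA was completed and filed at least 60 days before deployment."""
    return ada.filed_on + timedelta(days=MIN_LEAD_DAYS) <= deployment_date
```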
When an automated decision system makes or substantially influences an adverse decision affecting a citizen's rights, that citizen is entitled to a specific, intelligible explanation accessible to a person of reasonable intelligence without technical expertise. 'Substantially influences' is defined to include rubber-stamped human-in-the-loop review, so nominal human sign-off does not defeat the entitlement.
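To make the standard concrete, here is a hedged sketch of what an intelligible explanation could look like as output: an ordered list of plain-language reasons, each paired with what would change the result. The helper below and its factor format are assumptions for illustration; the statute sets the standard, not the mechanism.

```python
def explain_adverse_decision(factors: list[tuple[str, str]]) -> str:
    """Render an adverse decision as numbered plain-language reasons.

    factors: (reason, what_would_change_it) pairs, ordered from most
    to least influential. Both strings are written for non-experts.
    """
    lines = ["This decision was reached for the following reasons:"]
    for i, (reason, remedy) in enumerate(factors, start=1):
        lines.append(f"{i}. {reason} To address this factor: {remedy}")
    lines.append("You have the right to challenge this decision.")
    return "\n".join(lines)

print(explain_adverse_decision([
    ("Reported income exceeded the program limit by $1,240.",
     "provide documentation of deductible expenses."),
    ("Two required employment verifications were missing.",
     "submit the missing verification forms."),
]))
```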
Tier 1 covers AI that makes or substantially influences rights-affecting decisions; it requires an ADA, Accountable Human Official review, and the right to explanation. Tier 2 covers internal administrative AI. Tier 3 covers citizen-facing information and navigation AI.
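A minimal sketch of the three-tier structure as routing logic, assuming two illustrative predicates; the real classification turns on the statutory definitions, not boolean flags.

```python
from enum import Enum

class Tier(Enum):
    TIER_1 = "makes or substantially influences rights-affecting decisions"
    TIER_2 = "internal administrative use"
    TIER_3 = "citizen-facing information and navigation"

def classify(rights_affecting: bool, citizen_facing: bool) -> Tier:
    """Route a proposed AI use to its tier; the predicates are illustrative
    stand-ins for the statutory definitions."""
    if rights_affecting:
        return Tier.TIER_1  # triggers ADA, Accountable Human Official review, explanation right
    if citizen_facing:
        return Tier.TIER_3
    return Tier.TIER_2
```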
Autonomous AI agents deployed by government to query, process, or act upon citizen data are classified as statutory secondary fiduciaries subject to PDTA's full fiduciary duty regime. Deploying a black-box model in a rights-affecting context is a legal choice with legal consequences.
For Tier 1 AI uses, agencies must prefer interpretable models whose decision logic can be explained in plain language. Black-box models in rights-affecting contexts are permitted only upon demonstrated operational necessity and are subject to enhanced oversight.
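The interpretability preference can be read as a deployment gate: interpretable models pass by default, black-box models only on a documented showing. A sketch under that reading, with all field and function names hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier1DeploymentRequest:
    model_is_interpretable: bool  # decision logic explainable in plain language
    operational_necessity_finding: Optional[str] = None  # documented, if black-box
    enhanced_oversight_plan: Optional[str] = None        # required alongside necessity

def approve_tier1(req: Tier1DeploymentRequest) -> bool:
    """Interpretable models pass by default; black-box models need both a
    demonstrated necessity finding and an enhanced-oversight plan."""
    if req.model_is_interpretable:
        return True
    return bool(req.operational_necessity_finding) and bool(req.enhanced_oversight_plan)
```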
Agencies must conduct ongoing systematic testing of all Tier 1 deployed systems for bias, disparate impact, accuracy, and fiduciary compliance. Identified systematic errors require mandatory remediation and trigger reopener rights for citizens adversely affected during the period of identified bias.
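The statute leaves testing methodology to implementation, but one widely used disparate-impact metric, the four-fifths rule from US employment-selection guidance, illustrates how a remediation trigger might be computed. The 0.8 threshold and the function names below are assumptions, not statutory requirements.

```python
def selection_rate(favorable: int, total: int) -> float:
    """Share of a group receiving the favorable outcome."""
    return favorable / total if total else 0.0

def disparate_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's favorable-outcome rate to the most-favored group's."""
    return group_rate / reference_rate if reference_rate else 0.0

def triggers_remediation(ratio: float, threshold: float = 0.8) -> bool:
    """Below the threshold, remediation is mandatory and reopener rights attach."""
    return ratio < threshold

# Example: 120/400 favorable outcomes in one group vs. 210/400 in the reference group.
ratio = disparate_impact_ratio(selection_rate(120, 400), selection_rate(210, 400))
print(f"{ratio:.2f}", triggers_remediation(ratio))  # 0.57 True
```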
A government agency can be fully compliant with VIDA's identity architecture and PDTA's fiduciary obligations and still deploy a black-box AI system that makes the actual rights-affecting determination from data VIDA never touches, through a process PDTA's private right of action cannot reach, with no explanation available to the citizen whose life the determination affects. GAAFA closes that gap. It is not an extension of VIDA and PDTA; it is the statute that makes them complete.
A human official who approves an automated output without access to the system's decision rationale, or who approves outputs in batches without individual case review, does not satisfy the definition of Accountable Human Official. This definition is designed to foreclose the compliance theater pattern in which agencies assign nominal human review to automated outputs without providing the reviewer with the information needed to exercise genuine independent judgment.
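Read as a checklist, the definition reduces to conditions a review record either satisfies or does not. A sketch with illustrative field names; treating authority to overturn as a strict requirement is an assumption here, inferred from the demand for genuine independent judgment.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    saw_decision_rationale: bool  # reviewer had access to the system's reasoning
    reviewed_individually: bool   # False for batch approvals
    could_overturn: bool          # assumed: reviewer had authority to reject the output

def satisfies_aho_definition(record: ReviewRecord) -> bool:
    """All three conditions must hold; nominal sign-off fails on the first two."""
    return (record.saw_decision_rationale
            and record.reviewed_individually
            and record.could_overturn)
```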
The complete draft legislation is available to download as a Word document or to read directly on Google Drive. Critiques identifying a specific incoherence in any provision should be directed to the scholarship address.
The most useful engagement is precise: a specific provision that does not do what it claims, a statutory requirement that is technically unimplementable, or a constitutional argument that a court has already rejected in a relevant context.