GAAFA · Layer 3 of 3: AI accountability layer
Status: Initial draft · March 2026
Author: Michael G. Leahy

Draft Legislation

Government Algorithmic Accountability and AI Fiduciary Act

Extends fiduciary obligations to AI systems making decisions about citizens. Requires pre-deployment assessments. Establishes the right to algorithmic explanation. Classifies autonomous AI agents as statutory secondary fiduciaries. Closes the accountability gap that VIDA and PDTA cannot reach.

This statute is a working draft. All provisions are subject to revision based on legal scholarship, technical review, and legislative engagement. Critique identifying specific incoherence in any provision is actively sought.
In plain terms

If a government algorithm decides whether you qualify for a benefit, selects you for audit, or affects any decision about your life, you have the right to know how it reached that conclusion in terms a non-expert can understand — and to challenge it. The algorithm is subject to the same fiduciary duties as the human official it replaced.

Section 5

Algorithmic Deployment Assessment

No government agency may deploy an automated decision system in a rights-affecting context unless it has completed and filed an Algorithmic Deployment Assessment at least 60 days before deployment, documenting the system's decision logic, training data, accuracy rates, and disparate impact analysis.
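As a purely illustrative sketch of how an agency's compliance tooling might track the Section 5 filing elements, the following uses a hypothetical record schema (the field names and structure are assumptions, not part of the statute):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DeploymentAssessment:
    """Hypothetical record of the Section 5 filing elements."""
    system_name: str
    decision_logic_doc: bool     # decision logic documented?
    training_data_doc: bool      # training data documented?
    accuracy_rates_doc: bool     # accuracy rates documented?
    disparate_impact_doc: bool   # disparate impact analysis documented?
    filed_on: date
    planned_deployment: date

    def satisfies_section_5(self) -> bool:
        """Complete documentation, filed at least 60 days before deployment."""
        complete = all([self.decision_logic_doc, self.training_data_doc,
                        self.accuracy_rates_doc, self.disparate_impact_doc])
        timely = self.planned_deployment - self.filed_on >= timedelta(days=60)
        return complete and timely
```

A filing that is complete but made fewer than 60 days before the planned deployment date would fail the check, mirroring the statutory timing requirement.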

Section 6

Right to Algorithmic Explanation

When an automated decision system makes or substantially influences an adverse decision affecting a citizen's rights, that citizen is entitled to a specific, intelligible explanation accessible to a person of reasonable intelligence without technical expertise. 'Substantially influences' is defined to reach decisions where nominal human-in-the-loop review amounts to rubber-stamping the automated output.

Section 7

Three Tiers of AI Use

Tier 1 covers AI that makes or substantially influences rights-affecting decisions; it requires an Algorithmic Deployment Assessment, review by an Accountable Human Official, and the right to algorithmic explanation. Tier 2 covers internal administrative AI. Tier 3 covers citizen-facing information and navigation AI.
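The tier scheme above can be sketched as a simple classification with obligations attached to Tier 1 (the Tier 2 and Tier 3 obligations are not enumerated in this summary, so the sketch leaves them empty; this is an illustration, not the statutory text):

```python
from enum import Enum

class Tier(Enum):
    RIGHTS_AFFECTING = 1  # makes or substantially influences rights-affecting decisions
    INTERNAL_ADMIN = 2    # internal administrative AI
    CITIZEN_INFO = 3      # citizen-facing information and navigation AI

# Section 7 obligations the summary attaches to Tier 1.
TIER_1_REQUIREMENTS = (
    "Algorithmic Deployment Assessment",
    "Accountable Human Official review",
    "Right to algorithmic explanation",
)

def requirements(tier: Tier) -> tuple[str, ...]:
    """Obligations triggered by a system's tier (Tier 2/3 left unspecified here)."""
    return TIER_1_REQUIREMENTS if tier is Tier.RIGHTS_AFFECTING else ()
```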

Section 8

Agentic AI as Statutory Secondary Fiduciary

Autonomous AI agents deployed by government to query, process, or act upon citizen data are classified as statutory secondary fiduciaries subject to PDTA's full fiduciary duty regime. Deploying a black-box model in a rights-affecting context is a legal choice with legal consequences.

Section 4(b)

Interpretable Model Preference

For Tier 1 AI uses, agencies must prefer interpretable models whose decision logic can be explained in plain language. Black-box models in rights-affecting contexts are permitted only upon demonstrated operational necessity and are subject to enhanced oversight.
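To make the interpretability preference concrete, here is a minimal sketch of the property Section 4(b) favors: a model whose every decision can be rendered as plain-language reasons. The eligibility rules and thresholds are entirely hypothetical:

```python
# Hypothetical eligibility rules: (field, test, plain-language description).
RULES = [
    ("income_monthly", lambda v: v <= 2500, "monthly income is at or below $2,500"),
    ("household_size", lambda v: v >= 2,    "household has two or more members"),
]

def decide_with_explanation(applicant: dict) -> tuple[bool, list[str]]:
    """Return a decision plus the plain-language reason for each rule,
    so an adverse outcome arrives with an intelligible explanation."""
    reasons, passed = [], True
    for field, test, description in RULES:
        if test(applicant[field]):
            reasons.append(f"Met: {description}.")
        else:
            reasons.append(f"Not met: {description}.")
            passed = False
    return passed, reasons
```

A black-box model cannot produce this rule-by-rule account of its own logic, which is why the draft treats choosing one in a rights-affecting context as a choice requiring demonstrated operational necessity.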

Section 9

Algorithmic Impact Assessments

Agencies must conduct ongoing systematic testing of all Tier 1 deployed systems for bias, disparate impact, accuracy, and fiduciary compliance. Identified systematic errors require mandatory remediation and trigger reopener rights for citizens adversely affected during the period of identified bias.
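One widely used disparate-impact benchmark that testing under Section 9 might draw on is the four-fifths rule: flag a system when one group's favorable-outcome rate falls below 80% of another's. The statute's actual test would be set by regulation; this is a sketch of the arithmetic only:

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of favorable outcomes (True) within a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

def flags_bias(group_a: list[bool], group_b: list[bool],
               threshold: float = 0.8) -> bool:
    """Four-fifths rule: a ratio below 0.8 is a common flag threshold."""
    return disparate_impact_ratio(group_a, group_b) < threshold
```

A flagged disparity would not itself establish a violation; it would trigger the remediation and reopener machinery the section describes.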

Why VIDA and PDTA Are Not Enough

A government agency can be fully compliant with VIDA's identity architecture and PDTA's fiduciary obligations and still deploy a black-box AI system that makes the actual rights-affecting determination from data VIDA never touches, through a process PDTA's private right of action cannot reach, with no explanation available to the citizen whose life the determination affects. GAAFA closes that gap. It is not an extension of VIDA and PDTA; it is the statute that makes them complete.

Section 3(f) · The Accountable Human Official Definition

A human official who approves an automated output without access to the system's decision rationale, or who approves outputs in batches without individual case review, does not satisfy the definition of Accountable Human Official. This definition is designed to foreclose the compliance theater pattern in which agencies assign nominal human review to automated outputs without providing the reviewer with the information needed to exercise genuine independent judgment.

The complete draft legislation is available to download as a Word document or to read directly on Google Drive. Critiques identifying specific incoherence in any provision should be directed to the scholarship address.

Companion statutes:

VIDA — Verifiable Identity and Digital Autonomy Act
PDTA — Personal Data Trusteeship Act
All three statutes

This draft needs your critique

The most useful engagement is precise: a specific provision that does not do what it claims, a statutory requirement that is technically unimplementable, or a constitutional argument that a court has already rejected in a relevant context.