AI Policy & Transparency

We believe AI is one of the most consequential and least governed forces shaping American life. Our policy framework is built on that premise. Here is what we believe must change — and how we use these tools in building this platform.

What We Believe About AI

AI is a governance crisis, not a technology trend. This platform treats artificial intelligence as a serious policy problem that demands a legislative response now — not a future concern to be studied while harms accumulate. The systems being deployed without oversight in hiring, lending, healthcare, criminal justice, and political advertising are reshaping life outcomes for millions of Americans with no democratic accountability and no recourse for harm.

Algorithmic systems are already making consequential decisions about your life. They determine who gets hired, who qualifies for a loan, what medical treatment a patient receives, and who a court considers a flight risk.[6] These decisions happen at scale, at speed, and with minimal transparency to the people most affected. Commercial facial recognition systems have been documented to misidentify darker-skinned and female faces at dramatically higher error rates than white male faces — errors with real-world consequences in law enforcement and employment.[5]

The companies deploying these systems face no meaningful accountability. The primary federal AI governance document — the NIST AI Risk Management Framework — is explicitly voluntary.[1] There are no mandatory pre-deployment safety assessments, no required disclosure of how these systems affect people, and no liability framework comparable to what we impose on pharmaceuticals, aircraft, or financial instruments with equivalent capacity for harm. That is a policy failure — and one the two major parties have declined to address seriously.

We share the critics' concerns — and go further. If you believe there is no ethical use of AI, we are not going to argue you out of that position — it is grounded in real grievances. Our AI policy is not a defense of artificial intelligence. It is a framework for controlling it. The same companies that displaced workers, scraped artists' work without consent, and embedded racial bias into life-altering decisions are the companies this platform would regulate, restrict, and hold liable. If you believe AI must be controlled — not celebrated, not deferred to — this platform agrees with you.

The field's own founders are sounding the alarm. Geoffrey Hinton, who shared the 2024 Nobel Prize in Physics for his foundational work in neural networks, resigned from Google specifically to speak freely about the dangers of unregulated AI.[9] Yoshua Bengio, one of the field's most-cited researchers, has called for international AI governance frameworks comparable to those applied to nuclear and biological weapons.[10] When the people most responsible for building these systems warn that they are outpacing our capacity to govern them, that is the premise our policy is built on.

Our Policy Framework

The Freedom and Dignity Project's Technology & AI pillar is a major part of the platform's governance framework. It covers algorithmic accountability, surveillance limits, data ownership, public infrastructure, labor protections, and democratic control over AI deployment decisions. The exact number of positions in that pillar can change as the catalog is revised, expanded, and reconciled.

Our framework requires mandatory algorithmic impact assessments, public bias audits, strong data ownership rights, limits on AI-enabled surveillance, prohibition of high-risk autonomous AI deployment without human review, and democratic control over where and how AI gets used. The table below shows where this platform stands relative to the current positions of the Republican and Democratic parties across key AI governance questions.

| Area | This Platform | Republicans | Democrats |
| --- | --- | --- | --- |
| Algorithmic impact assessments | Mandatory, public, pre-deployment | None proposed | Voluntary guidelines only |
| AI in criminal justice | Prohibited for high-stakes decisions without human review | No position | Limited guidance |
| Facial recognition | Prohibited by default in public spaces; narrow exceptions only with strict safeguards, legislative authorization, and mandatory sunset review | No limits | Patchwork state-level support |
| Data ownership | Individuals own their data | Corporate ownership default | Partial protections proposed |
| AI labor displacement | Mandatory profit-sharing, retraining funds | "AI will create jobs" | Retraining programs (unfunded) |
| Internet & AI infrastructure | Internet as public utility; public AI investment | Deregulation | Partial infrastructure bills |

The Real Cost of AI

This transparency statement would be incomplete — and dishonest — if it only described how AI was used here without acknowledging what AI costs. The same technology we used to build this platform is causing documented harm at scale. We are not exempt from that accounting.

Economic disruption. Generative AI is projected to displace a significant share of the global workforce. A 2023 Goldman Sachs analysis estimated that the equivalent of 300 million full-time jobs globally could be affected by AI automation, with roughly a quarter to half of tasks in many professional roles subject to automation.[3] A concurrent McKinsey analysis projected that generative AI could add between $2.6 and $4.4 trillion annually in economic value — almost entirely captured by the companies deploying these systems, not the workers whose labor they are replacing.[4] This is not a theoretical concern. It is the established pattern of every prior wave of automation, now potentially at a pace and scope that outstrips the labor market's ability to adapt. Without active policy intervention — stronger safety nets, worker protections, profit-sharing requirements, and investment in retraining — the productivity gains from AI will deepen existing inequality rather than reduce it.

A direct word to artists, writers, and creators. Your work was used to train these systems — scraped without your consent, without compensation, without credit. That is not a side effect of AI development. It is the result of a deliberate regulatory vacuum, and it is a form of theft. Our data ownership, consent, and intellectual property policies exist in part for you. We do not dismiss your grievance. We share it. The absence of meaningful consent requirements for training data is one of the clearest failures of current AI governance, and closing that gap is a core element of what this platform proposes.

Algorithmic harm and social damage. AI systems have been shown to replicate and amplify bias across hiring, lending, healthcare, and criminal justice. A landmark 2019 NIST study of commercial facial recognition systems found substantially higher error rates for darker-skinned and female faces — in some systems, false positive rates for Black women were nearly 100 times higher than for white men.[5] Predictive risk tools used in criminal sentencing and parole have been found to embed racial disparities in their outputs, assigning higher risk scores to Black defendants at nearly twice the rate of white defendants with similar records.[6] AI-generated synthetic media — deepfakes — is already being used to fabricate statements by political figures, manufacture disinformation at scale, and produce non-consensual intimate images of real people. Social media recommendation algorithms, optimized for engagement rather than accuracy or human well-being, have demonstrably contributed to the spread of conspiracy theories, political radicalization, and documented harm to the mental health of young users. These are not edge cases. They are how these systems perform in deployment.

Environmental cost. The infrastructure required to train and run large AI models consumes enormous amounts of energy and water. Training a single large natural language model can produce carbon emissions comparable to the lifetime emissions of several automobiles.[7] Data centers — the physical infrastructure on which AI runs — consumed an estimated 460 terawatt-hours of electricity globally in 2022; the International Energy Agency projects that figure could double by 2026, driven largely by AI workloads.[8] AI inference at scale — answering queries from hundreds of millions of daily users — now constitutes a rapidly growing share of that total, not just training. Data centers also consume billions of gallons of fresh water annually for cooling. These environmental costs are not borne equally: they fall disproportionately on communities near data center clusters and on regions already bearing the sharpest effects of climate change.

There is no un-inventing AI. It is already embedded in hiring decisions, medical diagnosis, content moderation, financial services, criminal justice, and virtually every institution of modern life. The question is not whether AI exists. It is who controls it, how it is governed, and whose interests it actually serves.

Why We Used It Anyway

Writing a comprehensive structural reform platform — 25 policy pillars, thousands of canonical policy positions, constitutional analysis, and party comparisons — requires institutional scale: staff, a think tank, significant funding, years of support. Those resources are not available here. They are, by design, available to the people this project is criticizing. Well-funded campaigns have policy shops. Corporations have legal teams and lobbyists. This project has a laptop and a stack of research. AI tools let a single person do the research synthesis work that would otherwise require a full staff — and that asymmetry is the point. Democratizing access to analytical capability is precisely the kind of structural correction this platform advocates for.

There is something else worth saying plainly: the same technology companies whose practices we are proposing to regulate built these tools. We are using them to write the policies that would hold those companies accountable. We do not find that ironic. We find it necessary.

We are also aware of the argument that any use of these tools — regardless of purpose — is complicity in the harms they cause, and that the only principled response is refusal. We have thought seriously about that argument. Our conclusion: AI is already deployed across every major institution in American life. It is not going away. The question is not whether AI exists — it is whether the people most harmed by ungoverned AI will have any voice in building the framework that governs it. A boycott that produces no legislation, no structural accountability, and no policy change leaves everyone worse off, especially those already bearing the harm. Our obligation is not refusal. It is accountability.

What AI Cannot Do and Did Not Do

AI tools are powerful for scale and synthesis. They are not capable of political judgment, moral reasoning, or determining what a democratic society owes its citizens. Every position in this platform represents a human decision. The AI did not decide what rights you deserve.

AI-assisted work has real limitations. It can be confidently wrong. It can smooth over tensions that should stay sharp. It can produce something that sounds authoritative and misses the point. This project has made those mistakes and caught most of them. Some are probably still in the document.

That is why the source material is preserved. The chat logs, the canonical IDs, the policy catalog — all of it is version-controlled and traceable. If something in this platform is wrong, you can go find where it came from and why. That is not an accident. Accountability without traceability is just theater.

Safeguards We Have Built In

Using AI to build a platform about AI accountability creates an obligation to do it right. The following safeguards are built into this project's development process — not as aspirational goals, but as documented rules enforced at the source level.

Citation verification. Every factual claim, statistic, and external reference must be sourced to a real, accessible, independently verifiable publication. AI frequently generates plausible-sounding but fabricated citations. This project's instructions require that every citation be manually confirmed: the source exists, the URL resolves, and the source actually says what the citation claims.
APA 7th edition citations with Wikipedia-style inline annotations. All research sections use numbered inline references [1], [2], etc., with full APA-formatted source lists. This makes every claim traceable and independently checkable by any reader.
Copyright compliance. No verbatim reproduction of copyrighted material beyond brief quotation for commentary and analysis. Facts and statistics — not copyrightable — are cited freely. Federal government works are public domain and may be quoted at length.[2] All AI-generated text is treated as a draft subject to originality review, not a final source.
Adversarial review. Every section making empirical claims is reviewed adversarially — meaning the reviewer actively tries to disprove the claim, finds contradicting data, and either addresses the contradiction honestly or revises the conclusion. Suppressing inconvenient evidence is not permitted. This project acknowledges where the data is contested.
Distinguishing AI synthesis from primary research. AI synthesizes, organizes, and drafts. It does not conduct primary research. Wherever this project cites research findings, those findings are attributed to the original researchers, not to the AI tool that surfaced them.
Source preservation and traceability. The complete source transcripts — every conversation used to build this platform — are preserved locally and version-controlled. Every policy item has a canonical ID traceable to the conversation where it was defined. Nothing is orphaned from its origin.
Separating advocacy from analysis. Policy positions and normative arguments are clearly distinguished from empirical claims. "This is what the research shows" and "this is what we believe should be done" are never conflated. Weasel formulations ("studies show," "experts agree") without specific citations are prohibited.
Continuous human review. AI drafts every document; humans review, revise, and make every judgment call. The political values, priorities, and conclusions in this platform are human. The AI provides scale and infrastructure. The distinction matters and is maintained.

These safeguards are documented in the project's repository instruction files, including AGENTS.md and .github/copilot-instructions.md. They are public, reviewable, and updated as the project evolves.

AI as Scaffold, Not Foundation

AI got this project off the ground. That is not where it stays.

Using AI to bootstrap this platform was a deliberate choice made under real constraints. One person could not have organized 25 policy pillars, thousands of canonical positions, a structured policy catalog, and a full website without it. The alternative was not a better platform — the alternative was no platform at all.

But scaffolding is not the same as structure. The goal was never to publish an AI-generated political document and call it finished. The goal is a human-authored platform — one where real contributors have read, challenged, improved, and rewritten everything that matters. AI produced the first draft. Human judgment replaces it.

What that means in practice:

This platform was built with AI assistance to make human authorship possible at scale. The AI's job is to close the resource gap that would otherwise make this work impossible for anyone without institutional backing. Everything AI produced is a starting draft, pending replacement by the people this project is actually for.

References

[1] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://www.nist.gov/itl/ai-risk-management-framework

[2] Subject matter of copyright: United States Government works, 17 U.S.C. § 105. https://www.law.cornell.edu/uscode/text/17/105

[3] Briggs, J., & Kodnani, D. (2023). The potentially large effects of artificial intelligence on economic growth. Goldman Sachs Global Investment Research. https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d5-a19a-c09a2f429a28.html

[4] McKinsey Global Institute. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

[5] Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280

[6] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[7] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3645–3650). Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1355

[8] International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. IEA. https://www.iea.org/reports/electricity-2024

[9] Metz, C. (2023, May 1). 'The Godfather of A.I.' leaves Google and warns of danger ahead. The New York Times. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html; Royal Swedish Academy of Sciences. (2024). Scientific background: The Nobel Prize in Physics 2024. Nobel Prize Outreach AB. https://www.nobelprize.org/prizes/physics/2024/advanced-information/

[10] Heaven, W. D. (2023, May 25). Yoshua Bengio thinks we might be building AI wrong. MIT Technology Review. https://www.technologyreview.com/2023/05/25/1073634/yoshua-bengio-ai-wrong/

The Bigger Point

This project uses AI. It also advocates for AI accountability, algorithmic transparency, and strong limits on surveillance capitalism — because the same technology that helps build this document is being used to undermine the democracy it is trying to defend. Both things are true, and neither cancels the other.

Our obligation is to use AI transparently, to use it to build tools of accountability rather than tools of control, and to make the case for the governance frameworks that can reduce the harm it causes. That is not a sufficient response to the scale of the problem. But it is an honest one — and it is more than is being offered by the institutions that have the most to gain from AI remaining ungoverned. We document the harms above not to clear our conscience, but to be clear about what we are working against. The policies in our Technology & AI pillar are not abstract positions. They are responses to real, documented damage happening now.

Technical note: This project has been developed through AI-assisted CLI workflows for research synthesis, document drafting, policy cataloging, and website development, with human review governing what is kept, revised, or rejected. The policy catalog is stored in policy/catalog/policy_catalog_v2.sqlite — a structured SQLite database of 3,989 canonical policy positions, rebuilt from source material via scripts/build-catalog-v2.py. The repository is public at github.com/alistardust/freedom-and-dignity-project.
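For readers who want to check that count themselves, here is a minimal sketch using Python's standard sqlite3 module. It assumes a positions table, as described in the repository instructions reproduced further down this page; the actual table and column names are defined by the build script and may differ.

```python
import sqlite3

# Count the canonical positions in the catalog. The `positions` table name
# comes from the repository instructions below; treat it as an assumption.
con = sqlite3.connect("policy/catalog/policy_catalog_v2.sqlite")
count = con.execute("SELECT COUNT(*) FROM positions").fetchone()[0]
print(f"{count} canonical policy positions")
con.close()
```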

Models and Tools Used

This project has used multiple AI interfaces at different stages, including CLI-based coding and research workflows, earlier web-based chat interfaces, and cross-model review passes. Anthropic models are preferred when a choice is available, with OpenAI models used as an additional check, for coding-specific workflows, and for model-diversity validation. The following table reflects the models and model families documented in the project's current workflow notes:

A note on completeness: The model tracking below reflects best-effort documentation. Earlier ChatGPT and Copilot web interactions were not always logged with specific model versions. Some model interactions — particularly during early project development — may not be fully captured here. This list is as complete as available records allow; it will be updated as the project continues.
| Provider | Model | Role in this project |
| --- | --- | --- |
| Anthropic | Claude Sonnet 4.6 | Primary model. Research synthesis, policy drafting, website development, consistency audits, comparison analysis. Used for the majority of this project. |
| Anthropic | Claude Opus 4.7 | Complex, multi-step reasoning. Deep policy analysis, adversarial review, constitutional classification work, and tasks requiring premium reasoning. |
| Anthropic | Claude Haiku 4.5 | Fast parallel tasks. Codebase exploration, file searches, background research threads, and lightweight sub-agent operations. |
| OpenAI | GPT-4.1 | Cross-model validation and tasks where a second model perspective improves accuracy or catches blind spots. |
| OpenAI | GPT-5.4 | Adversarial cross-checks, high-stakes policy review, and tasks where model diversity reduces the risk of systematic errors or shared hallucinations. Actively used via GitHub Copilot CLI. |
| OpenAI | GPT-5.3-Codex / GPT-5.2-Codex | PolicyOS system-rules research, code-focused analysis, technical audits, and cross-model verification. Used via GitHub Copilot Codex CLI running in parallel with the primary Copilot CLI workflow. |
| OpenAI (web interface) | ChatGPT | Used in early project development phases via chat.openai.com before the Copilot CLI workflow was established. Specific model versions (GPT-4, GPT-4o, etc.) were not consistently tracked in those early sessions. |

The use of multiple models from two independent AI companies — Anthropic and OpenAI — is itself a safeguard. No single AI system is treated as authoritative. Where conclusions matter, cross-model verification reduces the risk of systematic errors inherited from any one model's training.

Skills and Agent Capabilities

The AI-assisted workflows used to build this project support specialized skills, scoped helper agents, and parallel research or coding tasks. Those capabilities were used throughout for research synthesis, policy cataloging, document drafting, and website development.

Governing Instructions: Repository Level

The following instructions are stored in .github/copilot-instructions.md and related repository instruction files, and they apply automatically to AI interactions with this repository. They define how the AI must behave when working on this project: what to prioritize, how to handle sources, citation requirements, copyright rules, quality gates, and testing standards.

Repository Copilot Instructions — .github/copilot-instructions.md

Project Context

This repository is an active U.S. policy platform organized around 25 pillars and 5 foundations. All policy content lives under policy/. The canonical policy catalog is policy/catalog/policy_catalog_v2.sqlite — 3,810 positions in v2 ID format, all with plain-language summaries. Read .github/current-state.md before making structural changes.

Source of Truth

  1. docs/pillars/*.html — rendered policy cards; most recently edited content
  2. policy/catalog/policy_catalog_v2.sqlite — structured catalog; canonical for IDs and plain language
  3. policy/foundations/pillars/ — narrative prose markdown; may lag behind site HTML

Both HTML and DB are kept in sync. Any new position added to HTML must be backfilled to the DB in the same commit.

Working with IDs and the Catalog

  • Policy position IDs use the v2 format: XXXX-XXXX-0000 (e.g. HLTH-COVR-0001)
  • Canonical positions live in the positions table (see the query sketch after this list)
  • Domain/subdomain codes live in domains and subdomains tables
  • v1-to-v2 mappings live in legacy_id_map for provenance
  • Rebuild the catalog with scripts/build-catalog-v2.py. Do not hand-edit the SQLite database.
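
To make those conventions concrete, here is a minimal read-only sketch in Python. The table names (positions, legacy_id_map) come from the list above; the column names (id, summary, v1_id, v2_id) are hypothetical stand-ins, since the authoritative schema is whatever scripts/build-catalog-v2.py produces.

```python
import re
import sqlite3

# v2 ID format: four letters, four letters, four digits, e.g. HLTH-COVR-0001
V2_ID = re.compile(r"^[A-Z]{4}-[A-Z]{4}-\d{4}$")

def lookup(position_id: str) -> None:
    if not V2_ID.match(position_id):
        raise ValueError(f"not a v2 ID: {position_id}")
    con = sqlite3.connect("policy/catalog/policy_catalog_v2.sqlite")
    try:
        # Canonical positions live in the positions table.
        row = con.execute(
            "SELECT id, summary FROM positions WHERE id = ?", (position_id,)
        ).fetchone()
        if row:
            print(row[0], "->", row[1])
        # v1-to-v2 mappings live in legacy_id_map for provenance.
        for (v1_id,) in con.execute(
            "SELECT v1_id FROM legacy_id_map WHERE v2_id = ?", (position_id,)
        ):
            print("  provenance:", v1_id)
    finally:
        con.close()

lookup("HLTH-COVR-0001")
```

Note that the sketch only reads. Consistent with the rule above, any change goes through the build script, never a hand edit of the SQLite file.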

Editing Guidance

  • Prefer updating import logic and source-backed docs over manual data patching
  • Preserve provenance — keep ID relationships visible through the database, not deleted
  • When updating policy content, check site HTML first, then the DB, then the pillar markdown
  • Keep changes surgical: context and traceability matter as much as final wording

Citation Standards

Every factual claim, statistic, poll result, legal reference, and externally sourced assertion must be cited. Use Wikipedia-style numbered inline annotations — [1], [2] — with full APA 7th edition reference lists.

APA 7th Edition Formats

News article: Author, A. A. (Year, Month Day). Title. Publication. URL
Government report: Agency. (Year). Title (Report No.). URL
Journal article: Author(s). (Year). Title. Journal, Vol(Issue), pages. DOI
Federal statute: Name of Act, Pub. L. No. XXX-XX, § X, XX Stat. XXX (Year). URL

What Must Be Cited

  • Every statistic, poll result, or numerical claim
  • Every reference to a study, report, or academic finding
  • Every historical fact that is not common knowledge
  • Every claim about what a law, court ruling, or official document says
  • Every claim about what another party or public figure has said or done

Source Quality Hierarchy

  1. Primary sources: federal statutes, court opinions, official government data (Census, BLS, CBO, GAO, CRS)
  2. Peer-reviewed academic research
  3. Non-partisan research institutions (Pew, Brookings, EPI, KFF, Brennan Center)
  4. Major news outlets with editorial standards (NYT, WaPo, Reuters, AP, NPR)
  5. Advocacy or partisan sources — cite only for attribution, never as neutral fact

Copyright and Plagiarism Safeguards

  • Never reproduce substantial verbatim text from copyrighted sources. Quote sparingly — one or two sentences at most — and only when exact wording matters.
  • Facts and statistics are not copyrightable. Their unique presentation by the original author is. State facts freely; cite the source.
  • Federal government works are public domain (17 U.S.C. § 105). May be quoted at length; must still be cited.
  • The "heart of the work" test: Even a short quote can infringe if it captures the essential value of the original. When in doubt, paraphrase and cite.
  • AI-synthesized text is a draft, not a final source. All AI output incorporating external facts must be re-sourced from primary references.
  • Do not cite sources you have not verified. Every citation must be confirmed: source exists, URL resolves, content matches the claim.

Quality and Accuracy Safeguards

Before publishing any factual claim:

  1. Verify the source exists and the URL resolves (a verification sketch follows this list)
  2. Confirm the source actually supports the specific claim being made
  3. Check the date — prefer the most recent data; note when data may be stale
  4. Check for context — a technically accurate statistic can still be misleading
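
Step 1 is mechanical enough to script. Here is a minimal sketch using Python's standard urllib; the citation-check/0.1 User-Agent string is a made-up placeholder, and a resolving URL is necessary but not sufficient, since steps 2 through 4 still require human review.

```python
import urllib.error
import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP status below 400."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        # Covers HTTP errors, DNS failures, and timeouts alike.
        return False

# Flag anything that fails for manual follow-up, not automatic removal.
ref = "https://www.nist.gov/itl/ai-risk-management-framework"
print(ref, "->", "resolves" if url_resolves(ref) else "CHECK MANUALLY")
```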

Adversarial Review Requirement

Before finalizing any empirical section, actively try to disprove the claims. If contradicting data exists: acknowledge it, explain why it doesn't change the conclusion, or revise the conclusion. Suppressing contradicting evidence is not permitted. Intellectual honesty is a core value of this platform.

AI Hallucination Safeguards

AI models can generate fabricated citations, misattribute quotes, confuse statistics from different years, and produce confident-sounding claims that are wrong. Every AI-generated factual claim must be independently verified before publication. If no source can be found, the claim must be removed or reframed as the project's own position.

Language Integrity Rules

  • Do not present policy positions as established facts
  • Do not use weasel words ("studies show," "experts agree") without specific citations
  • Do not use emotional language in research sections
  • Always distinguish between "this is what the research shows" and "this is what we believe should be done"

Testing Standards

Stack: Vitest for unit tests (npm run test:unit), Playwright/Firefox for E2E tests (npm run test:e2e). Always run unit tests before committing. Run E2E tests after any HTML/JS/CSS change.

Always Test

  • Any new HTML page or section: title, key headings, critical elements present and visible
  • Any new JS feature injected by app.js: element exists in DOM, correct href, correct behavior
  • Any navigation path: link exists, href correct, target page loads
  • Any count-sensitive assertion: update counts when structure changes — stale counts are bugs
  • Any fix for a visual bug: write a regression test that would have caught it
  • Any link that could 404: navigate to the href and assert the page loads

Key Cautions

  • Do not use toBeVisible() alone to test opacity — Playwright considers opacity:0 elements visible. Use .toHaveClass(/visible/) for IntersectionObserver-driven reveals.
  • GitHub Pages path logic: use named subdir checks (/\/(pillars|compare)\//.test(location.pathname)) not segment counting — segment counting breaks at the repo root level.
  • When adding pillars, update SAMPLE_PILLARS and all count assertions.

Governing Instructions: Global Quality Standards

The following standards apply globally — across all repositories and all uses of this AI system. They define the baseline quality and accuracy expectations that govern every interaction, independent of project-specific rules.

Global Copilot Quality Standards

Model Selection

Anthropic (Claude) models are preferred over OpenAI models when a choice is available. Claude Sonnet is the default for standard tasks; Claude Opus is used for complex, multi-step reasoning tasks requiring premium analytical depth.

The use of multiple model families — Anthropic and OpenAI — is maintained deliberately. No single AI system is treated as authoritative. Cross-model verification is used for conclusions that matter.

Coding Rules

  • Follow the naming conventions of the language and repository in use
  • Prefer concise, efficient code; prefer completeness over brevity when the two conflict
  • Only comment code that genuinely needs clarification — do not over-comment
  • Always analyze code for robustness, completeness, and explicit error handling
  • Use UTF-8 encoding for all source files; keep scripts and configs ASCII-safe
  • Mentally trace through code against multiple input scenarios before declaring it correct
  • Generate unit tests where applicable and ensure they pass before declaring code complete

Quality Rules

  • Prioritize accuracy over speed. A slow, correct answer is always better than a fast, wrong one.
  • Never guess. Only provide answers that can be verified. If something cannot be confirmed, say so.
  • Base answers on the latest stable version of any technology being discussed. Note when version matters.
  • Adversarial review on all code: actively seek edge cases, failure modes, and security issues before declaring something complete.

Code Change Rules

  • Make precise, surgical changes that fully address the request — do not modify unrelated code
  • Do not fix pre-existing issues unrelated to the current task (unless directly caused by the change being made)
  • Update documentation if it is directly related to changes being made
  • Always validate that changes do not break existing behavior — run tests
  • Only run linters, builds, and tests that already exist; do not add new tooling unless necessary
