We believe AI is one of the most consequential and least governed forces shaping American life. Our policy framework is built on that premise. Here is what we believe must change — and how we use these tools in building this platform.
AI is a governance crisis, not a technology trend. This platform treats artificial intelligence as a serious policy problem that demands a legislative response now — not a future concern to be studied while harms accumulate. The systems being deployed without oversight in hiring, lending, healthcare, criminal justice, and political advertising are reshaping life outcomes for millions of Americans with no democratic accountability and no recourse for harm.
Algorithmic systems are already making consequential decisions about your life. They determine who gets hired, who qualifies for a loan, what medical treatment a patient receives, and who a court considers a flight risk.[6] These decisions happen at scale, at speed, and with minimal transparency to the people most affected. Commercial facial recognition systems have been documented to misidentify darker-skinned and female faces at dramatically higher error rates than white male faces — errors with real-world consequences in law enforcement and employment.[5]
The companies deploying these systems face no meaningful accountability. The primary federal AI governance document — the NIST AI Risk Management Framework — is explicitly voluntary.[1] There are no mandatory pre-deployment safety assessments, no required disclosure of how these systems affect people, and no liability framework comparable to what we impose on pharmaceuticals, aircraft, or financial instruments with equivalent capacity for harm. That is a policy failure — and one the two major parties have declined to address seriously.
We share the critics' concerns — and go further. If you believe there is no ethical use of AI, we are not going to argue you out of that position — it is grounded in real grievances. Our AI policy is not a defense of artificial intelligence. It is a framework for controlling it. The same companies that displaced workers, scraped artists' work without consent, and embedded racial bias into life-altering decisions are the companies this platform would regulate, restrict, and hold liable. If you believe AI must be controlled — not celebrated, not deferred to — this platform agrees with you.
The field's own founders are sounding the alarm. Geoffrey Hinton, who shared the 2024 Nobel Prize in Physics for his foundational work in neural networks, resigned from Google specifically to speak freely about the dangers of unregulated AI.[9] Yoshua Bengio, one of the field's most-cited researchers, has called for international AI governance frameworks comparable to those applied to nuclear and biological weapons.[10] When the people most responsible for building these systems warn that they are outpacing our capacity to govern them, that is the premise our policy is built on.
The Freedom and Dignity Project's Technology & AI pillar is a major part of the platform's governance framework. It covers algorithmic accountability, surveillance limits, data ownership, public infrastructure, labor protections, and democratic control over AI deployment decisions. The exact number of positions in that pillar can change as the catalog is revised, expanded, and reconciled.
Our framework requires mandatory algorithmic impact assessments, public bias audits, strong data ownership rights, limits on AI-enabled surveillance, prohibition of high-risk autonomous AI deployment without human review, and democratic control over where and how AI gets used. The table below shows where this platform stands relative to the current positions of the Republican and Democratic parties across key AI governance questions.
| Area | This Platform | Republicans | Democrats |
|---|---|---|---|
| Algorithmic impact assessments | Mandatory, public, pre-deployment | None proposed | Voluntary guidelines only |
| AI in criminal justice | Prohibited for high-stakes decisions without human review | No position | Limited guidance |
| Facial recognition | Prohibited by default in public spaces; narrow exceptions only with strict safeguards, legislative authorization, and mandatory sunset review | No limits | Patchwork state-level support |
| Data ownership | Individuals own their data | Corporate ownership default | Partial protections proposed |
| AI labor displacement | Mandatory profit-sharing, retraining funds | "AI will create jobs" | Retraining programs (unfunded) |
| Internet & AI infrastructure | Internet as public utility; public AI investment | Deregulation | Partial infrastructure bills |
This transparency statement would be incomplete — and dishonest — if it only described how AI was used here without acknowledging what AI costs. The same technology we used to build this platform is causing documented harm at scale. We are not exempt from that accounting.
Economic disruption. Generative AI is projected to displace a significant share of the global workforce. A 2023 Goldman Sachs analysis estimated that the equivalent of 300 million full-time jobs globally could be affected by AI automation, with roughly a quarter to half of tasks in many professional roles subject to automation.[3] A concurrent McKinsey analysis projected that generative AI could add between $2.6 and $4.4 trillion annually in economic value — almost entirely captured by the companies deploying these systems, not the workers whose labor they are replacing.[4] This is not a theoretical concern. It is the established pattern of every prior wave of automation, now potentially at a pace and scope that outstrips the labor market's ability to adapt. Without active policy intervention — stronger safety nets, worker protections, profit-sharing requirements, and investment in retraining — the productivity gains from AI will deepen existing inequality rather than reduce it.
A direct word to artists, writers, and creators. Your work was used to train these systems — scraped without your consent, without compensation, without credit. That is not a side effect of AI development. It is the result of a deliberate regulatory vacuum, and it is a form of theft. Our data ownership, consent, and intellectual property policies exist in part for you. We do not dismiss your grievance. We share it. The absence of meaningful consent requirements for training data is one of the clearest failures of current AI governance, and closing that gap is a core element of what this platform proposes.
Algorithmic harm and social damage. AI systems have been shown to replicate and amplify bias across hiring, lending, healthcare, and criminal justice. A landmark 2019 NIST study of commercial facial recognition systems found substantially higher error rates for darker-skinned and female faces — in some systems, false positive rates for Black women were nearly 100 times higher than for white men.[5] Predictive risk tools used in criminal sentencing and parole have been found to embed racial disparities in their outputs, assigning higher risk scores to Black defendants at nearly twice the rate of white defendants with similar records.[6] AI-generated synthetic media — deepfakes — is already being used to fabricate statements by political figures, manufacture disinformation at scale, and produce non-consensual intimate images of real people. Social media recommendation algorithms, optimized for engagement rather than accuracy or human well-being, have demonstrably contributed to the spread of conspiracy theories, political radicalization, and documented harm to the mental health of young users. These are not edge cases. They are how these systems perform in deployment.
Environmental cost. The infrastructure required to train and run large AI models consumes enormous amounts of energy and water. Training a single large natural language model can produce carbon emissions comparable to the lifetime emissions of several automobiles.[7] Data centers — the physical infrastructure on which AI runs — consumed an estimated 460 terawatt-hours of electricity globally in 2022; the International Energy Agency projects that figure could double by 2026, driven largely by AI workloads.[8] AI inference at scale — answering queries from hundreds of millions of daily users — now constitutes a rapidly growing share of that total, not just training. Data centers also consume billions of gallons of fresh water annually for cooling. These environmental costs are not borne equally: they fall disproportionately on communities near data center clusters and on regions already bearing the sharpest effects of climate change.
There is no un-inventing AI. It is already embedded in hiring decisions, medical diagnosis, content moderation, financial services, criminal justice, and virtually every institution of modern life. The question is not whether AI exists. It is who controls it, how it is governed, and whose interests it actually serves.
Writing a comprehensive structural reform platform — 25 policy pillars, thousands of canonical policy positions, constitutional analysis, and party comparisons — requires institutional scale: staff, a think tank, significant funding, years of support. Those resources are not available here. They are, by design, available to the people this project is criticizing. Well-funded campaigns have policy shops. Corporations have legal teams and lobbyists. This project has a laptop and a stack of research. AI tools let a single person do the research synthesis work that would otherwise require a full staff — and that asymmetry is the point. Democratizing access to analytical capability is precisely the kind of structural correction this platform advocates for.
There is something else worth saying plainly: the same technology companies whose practices we are proposing to regulate built these tools. We are using them to write the policies that would hold those companies accountable. We do not find that ironic. We find it necessary.
We are also aware of the argument that any use of these tools — regardless of purpose — is complicity in the harms they cause, and that the only principled response is refusal. We have thought seriously about that argument. Our conclusion: AI is already deployed across every major institution in American life. It is not going away. The question is not whether AI exists — it is whether the people most harmed by ungoverned AI will have any voice in building the framework that governs it. A boycott that produces no legislation, no structural accountability, and no policy change leaves everyone worse off, especially those already bearing the harm. Our obligation is not refusal. It is accountability.
AI tools are powerful for scale and synthesis. They are not capable of political judgment, moral reasoning, or determining what a democratic society owes its citizens. Every position in this platform represents a human decision. The AI did not decide what rights you deserve.
AI-assisted work has real limitations. It can be confidently wrong. It can smooth over tensions that should stay sharp. It can produce something that sounds authoritative and misses the point. This project has made those mistakes and caught most of them. Some are probably still in the document.
That is why the source material is preserved. The chat logs, the canonical IDs, the policy catalog — all of it is version-controlled and traceable. If something in this platform is wrong, you can go find where it came from and why. That is not an accident. Accountability without traceability is just theater.
Using AI to build a platform about AI accountability creates an obligation to do it right. The following safeguards are built into this project's development process — not as aspirational goals, but as documented rules enforced at the source level.
These safeguards are documented in the project's repository instruction files, including AGENTS.md and .github/copilot-instructions.md. They are public, reviewable, and updated as the project evolves.
AI got this project off the ground. That is not where it stays.
Using AI to bootstrap this platform was a deliberate choice made under real constraints. One person could not have organized 25 policy pillars, thousands of canonical positions, a structured policy catalog, and a full website without it. The alternative was not a better platform — the alternative was no platform at all.
But scaffolding is not the same as structure. The goal was never to publish an AI-generated political document and call it finished. The goal is a human-authored platform — one where real contributors have read, challenged, improved, and rewritten everything that matters. AI produced the first draft. Human judgment replaces it.
What that means in practice:
This platform was built with AI assistance to make human authorship possible at scale. The AI's job is to close the resource gap that would otherwise make this work impossible for anyone without institutional backing. Everything AI produced is a starting draft, pending replacement by the people this project is actually for.
References
[1] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://www.nist.gov/itl/ai-risk-management-framework
[2] U.S. Copyright Office. (2023). Copyright and Artificial Intelligence: Part 1 — Digital Replicas. U.S. Copyright Office. https://www.copyright.gov/ai/; 17 U.S.C. § 105 (works of the U.S. government are not subject to copyright protection).
[3] Briggs, J., & Kodnani, D. (2023). The potentially large effects of artificial intelligence on economic growth. Goldman Sachs Global Investment Research. https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d5-a19a-c09a2f429a28.html
[4] McKinsey Global Institute. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
[5] Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280
[6] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[7] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3645–3650). Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1355
[8] International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. IEA. https://www.iea.org/reports/electricity-2024
[9] Metz, C. (2023, May 1). 'The Godfather of A.I.' leaves Google and warns of danger ahead. The New York Times. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html; Royal Swedish Academy of Sciences. (2024). Scientific background: The Nobel Prize in Physics 2024. Nobel Prize Outreach AB. https://www.nobelprize.org/prizes/physics/2024/advanced-information/
[10] Heaven, W. D. (2023, May 25). Yoshua Bengio thinks we might be building AI wrong. MIT Technology Review. https://www.technologyreview.com/2023/05/25/1073634/yoshua-bengio-ai-wrong/
This project uses AI. It also advocates for AI accountability, algorithmic transparency, and strong limits on surveillance capitalism — because the same technology that helps build this document is being used to undermine the democracy it is trying to defend. Both things are true, and neither cancels the other.
Our obligation is to use AI transparently, to build tools of accountability rather than tools of control, and to make the case for the governance frameworks that can reduce the harm it causes. That is not a sufficient response to the scale of the problem. But it is an honest one — and it is more than is being offered by the institutions that have the most to gain from AI remaining ungoverned. We document the harms above not to clear our conscience, but to be clear about what we are working against. The policies in our Technology & AI pillar are not abstract positions. They are responses to real, documented damage happening now.
policy/catalog/policy_catalog_v2.sqlite — a structured SQLite database of 3,989 canonical policy positions, rebuilt from source material via scripts/build-catalog-v2.py. The repository is public at github.com/alistardust/freedom-and-dignity-project.
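As an illustration of how a structured catalog like this can be queried, here is a minimal Python sketch. It builds a small in-memory stand-in rather than opening the real file; the `positions` table name comes from the repository instructions, but the column names (`id`, `pillar`, `summary`) and the sample rows are illustrative assumptions, not the actual schema.

```python
import sqlite3

# Build a tiny in-memory stand-in for policy_catalog_v2.sqlite.
# The real catalog has a `positions` table; the columns used here
# (id, pillar, summary) are assumed for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE positions (id TEXT PRIMARY KEY, pillar TEXT, summary TEXT)"
)
conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?)",
    [
        ("HLTH-COVR-0001", "Healthcare", "Example plain-language summary."),
        ("TECH-ALGO-0001", "Technology & AI", "Example plain-language summary."),
    ],
)

# Count positions per pillar: the kind of aggregate query a
# structured catalog supports that flat HTML cannot.
rows = conn.execute(
    "SELECT pillar, COUNT(*) FROM positions GROUP BY pillar ORDER BY pillar"
).fetchall()
print(rows)  # [('Healthcare', 1), ('Technology & AI', 1)]
```

A query like this is why the database, not the rendered HTML, is treated as canonical for IDs and counts.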
This project has used multiple AI interfaces at different stages, including CLI-based coding and research workflows, earlier web-based chat interfaces, and cross-model review passes. Anthropic models are preferred when a choice is available, with OpenAI models used as an additional check, for coding-specific workflows, and for model-diversity validation. The following list reflects the models and model families documented in the project's current workflow notes:
The use of multiple models from two independent AI companies — Anthropic and OpenAI — is itself a safeguard. No single AI system is treated as authoritative. Where conclusions matter, cross-model verification reduces the risk of systematic errors inherited from any one model's training.
The AI-assisted workflows used to build this project support specialized skills, scoped helper agents, and parallel research or coding tasks. The following capabilities were used or are available within those workflows:
The following instructions are stored in .github/copilot-instructions.md and related repository instruction files, and they apply automatically to AI interactions with this repository. They define how the AI must behave when working on this project: what to prioritize, how to handle sources, citation requirements, copyright rules, quality gates, and testing standards.
.github/copilot-instructions.md

This repository is an active U.S. policy platform organized around 25 pillars and 5 foundations. All policy content lives under policy/. The canonical policy catalog is policy/catalog/policy_catalog_v2.sqlite — 3,810 positions in v2 ID format, all with plain-language summaries. Read .github/current-state.md before making structural changes.
- docs/pillars/*.html — rendered policy cards; most recently edited content
- policy/catalog/policy_catalog_v2.sqlite — structured catalog; canonical for IDs and plain language
- policy/foundations/pillars/ — narrative prose markdown; may lag behind site HTML

Both HTML and DB are kept in sync. Any new position added to HTML must be backfilled to the DB in the same commit.
- ID format: XXXX-XXXX-0000 (e.g. HLTH-COVR-0001)
- positions table
- domains and subdomains tables
- legacy_id_map for provenance
- Rebuild via scripts/build-catalog-v2.py. Do not hand-edit the SQLite database.

Every factual claim, statistic, poll result, legal reference, and externally sourced assertion must be cited. Use Wikipedia-style numbered inline annotations — [1], [2] — with full APA 7th edition reference lists.
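The canonical ID convention can be checked mechanically. This sketch assumes each segment is four uppercase letters followed by a four-digit serial, which matches the documented example (HLTH-COVR-0001); the actual validation in the build script may differ.

```python
import re

# Canonical v2 position IDs: XXXX-XXXX-0000, assumed here to be two
# four-uppercase-letter segments plus a four-digit serial.
CANONICAL_ID = re.compile(r"[A-Z]{4}-[A-Z]{4}-\d{4}")

def is_canonical(position_id: str) -> bool:
    """Return True if the ID matches the assumed v2 canonical format."""
    return CANONICAL_ID.fullmatch(position_id) is not None

print(is_canonical("HLTH-COVR-0001"))  # True
print(is_canonical("hlth-covr-1"))     # False: lowercase, short serial
```

A check like this could run as a quality gate whenever the catalog is rebuilt, rejecting malformed IDs before they reach the database.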
News article: Author, A. A. (Year, Month Day). Title. Publication. URL
Government report: Agency. (Year). Title (Report No.). URL
Journal article: Author(s). (Year). Title. Journal, Vol(Issue), pages. DOI
Federal statute: Name of Act, Pub. L. No. XXX-XX, § X, XX Stat. XXX (Year). URL
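As a hypothetical illustration (not project tooling), the news-article template above can be filled programmatically; the function below and its name are assumptions for demonstration only.

```python
def format_news_citation(author, year, month_day, title, publication, url):
    """Fill the news-article template:
    Author, A. A. (Year, Month Day). Title. Publication. URL
    (Hypothetical helper, not part of the repository's scripts.)"""
    return f"{author} ({year}, {month_day}). {title}. {publication}. {url}"

# Example using reference [9] from this document's own list.
citation = format_news_citation(
    "Metz, C.", 2023, "May 1",
    "'The Godfather of A.I.' leaves Google and warns of danger ahead",
    "The New York Times",
    "https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html",
)
print(citation)
```

Templated formatting keeps the reference list consistent, which matters when every factual claim must carry a numbered inline citation.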
Before publishing any factual claim:
Before finalizing any empirical section, actively try to disprove the claims. If contradicting data exists: acknowledge it, explain why it doesn't change the conclusion, or revise the conclusion. Suppressing contradicting evidence is not permitted. Intellectual honesty is a core value of this platform.
AI models can generate fabricated citations, misattribute quotes, confuse statistics from different years, and produce confident-sounding claims that are wrong. Every AI-generated factual claim must be independently verified before publication. If no source can be found, the claim must be removed or reframed as the project's own position.
Stack: Vitest for unit tests (npm run test:unit), Playwright/Firefox for E2E tests (npm run test:e2e). Always run unit tests before committing. Run E2E tests after any HTML/JS/CSS change.
- For app.js features: assert the element exists in the DOM, has the correct href, and shows the correct behavior.
- Do not use toBeVisible() alone to test opacity — Playwright considers opacity:0 elements visible. Use .toHaveClass(/visible/) for IntersectionObserver-driven reveals.
- Use a path regex (/\/(pillars|compare)\//.test(location.pathname)), not segment counting — segment counting breaks at the repo root level.
- Keep SAMPLE_PILLARS and all count assertions in sync.

The following standards apply globally — across all repositories and all uses of this AI system. They define the baseline quality and accuracy expectations that govern every interaction, independent of project-specific rules.
Anthropic (Claude) models are preferred over OpenAI models when a choice is available. Claude Sonnet is the default for standard tasks; Claude Opus is used for complex, multi-step reasoning tasks requiring premium analytical depth.
The use of multiple model families — Anthropic and OpenAI — is maintained deliberately. No single AI system is treated as authoritative. Cross-model verification is used for conclusions that matter.