This pillar establishes comprehensive governance frameworks for artificial intelligence, data systems, algorithmic accountability, digital infrastructure, and emerging technology to protect rights, ensure transparency, prevent abuse, and align technological power with democratic values and human dignity.
Technology must serve people, not surveil, manipulate, or replace them. AI and automated systems must be transparent, accountable, contestable, and subject to meaningful human oversight — especially when they affect rights, access to essential services, or the exercise of liberty. No system, however sophisticated, may operate outside democratic accountability or constitutional constraint.
AI and digital technology touch every domain of public life. The rules in this pillar do not exist in isolation — they directly intersect with, reinforce, and constrain policy across the entire platform. Where AI operates in a domain, domain-specific rules apply in addition to the governance frameworks here.
AI infrastructure is among the fastest-growing sources of industrial carbon emissions, freshwater consumption, and electronic waste on Earth. The environmental cost of AI is not peripheral — it is one of the defining environmental policy challenges of the coming decade. This pillar's 21 ENV rules establish binding obligations across three dimensions:
AI must also actively support climate efforts — climate modeling, environmental monitoring, grid optimization, renewable energy forecasting — all under transparent, accountable frameworks. Where AI is used to assist environmental enforcement or compliance, transparency and public accountability are non-negotiable.
The largest single family in this pillar (TECH-JUD) covers AI in courts, criminal sentencing, bail, probation, family court, eviction, and administrative proceedings. No AI system may determine a sentence, assign a risk score that determines detention, or replace a judge. Algorithmic evidence faces strict admissibility standards. Every affected person retains rights to explanation, adversarial challenge, and human review. These rules work in tandem with the Equal Justice & Policing pillar.
Automated hiring, firing, and performance management have expanded workplace power asymmetries to unprecedented scale. This pillar's 24 LAB rules ban fully automated employment decisions, prohibit intrusive monitoring and emotion inference, protect workers from outside-hours surveillance, require transparency about algorithmic management, and preserve the right to a human manager. AI may not be used to identify or suppress union activity. Collective bargaining over AI workplace systems is explicitly protected.
AI in clinical decision-making carries life-or-death stakes. This pillar prohibits AI from replacing clinicians in high-risk decisions, bans systems designed to manipulate patients emotionally, requires evaluation of mental health harms (especially for minors), and mandates human accountability for AI-assisted diagnoses and treatment recommendations. These rules extend the Healthcare pillar's rights framework into the algorithmic layer.
Automated systems now make credit, mortgage, and insurance decisions at scale with minimal transparency. This pillar's 29 FIN rules ban automated denial without human review, prohibit opaque scoring criteria, require anti-discrimination audits, protect against vulnerability profiling and social scoring, and ensure rights to explanation and appeal. No algorithm may redline a community or deny housing access through discriminatory proxy variables.
Educational AI shapes what students learn, how they are evaluated, and what opportunities they access. This pillar's 24 EDU rules ensure AI may assist but never replace human instruction, may not be the sole basis for grading or placement decisions, must be evaluated for bias, and must protect student data from commercial exploitation. Invasive surveillance tools, emotion inference systems, and behavioral scoring are prohibited in educational settings.
AI-generated deepfakes and synthetic media pose direct threats to electoral integrity. This pillar's 15 SYN rules ban deceptive synthetic media for political manipulation, prohibit false depictions of public officials, require provenance markers and disclosure, and establish liability for distribution of manipulative AI content. These rules underpin the Elections & Representation pillar's protections against disinformation campaigns.
The most consequential cross-pillar intersection: autonomous weapons systems and AI-driven intelligence. This pillar's 58 MIL rules ban autonomous lethal targeting, require identified human decision-makers for all uses of lethal force, prohibit AI from generating nuclear launch parameters, mandate congressional authorization for new capabilities, require logging and auditability, restrict exports of military AI, and support international treaty frameworks. Democratic accountability does not end at the battlefield.
AI-driven immigration enforcement magnifies the stakes of algorithmic error to life-altering extremes. This pillar bans opaque AI in enforcement decisions and prohibits AI risk scoring for detention or family separation. Every person in immigration proceedings retains the right to contest automated decisions and to human review. These rules extend due process guarantees into the algorithmic systems that increasingly operate immigration enforcement.
The following areas have been identified as requiring additional policy development. Proposed rules are marked below in the ENV family. Community and legislative input is invited on each.
No binding caps on data center water usage exist yet. Large facilities can consume millions of gallons daily for cooling. Rules TECH-ENV-022 through TECH-ENV-027 propose caps, water-stressed region restrictions, recycling mandates, and drought contingency requirements.
Current rules do not restrict data center construction in drought-prone or water-scarce regions. AI companies are actively expanding into areas like Arizona, Nevada, and other parts of the U.S. Southwest, where freshwater is already critically constrained.
Current rules cover operational emissions but do not explicitly require Scope 3 accounting — supply chain emissions from chip manufacturing, hardware shipping, and data center construction. These can dwarf direct operational footprints.
Major AI companies are pursuing private nuclear power agreements to meet AI's explosive energy demand. No governance framework yet exists for private AI-driven nuclear procurement, public safety review, or grid impact assessment.
No rules require software-level efficiency — training and inference optimization as an environmental obligation. Larger models are not always better; compute minimization should be a design requirement for high-consumption applications.
Data center clusters generate significant waste heat, raising local temperatures, stressing urban cooling systems, and increasing residential energy use. No thermal pollution standards or heat island assessments exist for AI infrastructure.
Large data center campuses can displace habitats, disrupt wildlife corridors, and increase light pollution. No biodiversity impact assessments are currently required for AI infrastructure siting decisions.
Communities facing simultaneous automation-driven job displacement and climate disruption need coordinated policy response. No rules yet address the intersection of AI labor displacement and environmental just transition.
This pillar's rules operate in addition to domain-specific rules in each pillar listed above. Where AI is present in a domain, both the domain pillar and these technology governance rules apply concurrently.
Modern technology has created a new layer of unaccountable power. Algorithmic systems shape access to credit, employment, housing, healthcare, education, and public services with minimal transparency or recourse. AI systems make consequential decisions affecting liberty, safety, and dignity without meaningful human accountability. Mass surveillance has become cheap, scalable, and pervasive, turning every digital interaction into potential evidence. Biometric tracking turns public space into a persistent identification grid. Synthetic media and deepfakes undermine the shared reality necessary for democracy. Platform algorithms optimize for engagement at the expense of mental health, truth, and social cohesion. Government agencies and corporations use technology to circumvent constitutional limits, purchase surveillance data to avoid warrant requirements, and automate rights violations at scale.
This pillar closes those gaps. It establishes that emerging technology cannot become a shortcut around old rights. It ensures that AI systems affecting consequential outcomes are governed by strict standards for transparency, testing, bias mitigation, auditability, and human accountability. It bans high-risk uses of AI in contexts where automated errors cause irreversible harm — including criminal sentencing, benefits termination, immigration enforcement, insurance denial, and autonomous weapons targeting. It restricts mass surveillance, prohibits warrantless bulk data collection, and bans government purchase of commercial tracking data. It protects anonymous internet access, limits age verification systems that function as mass identification infrastructure, and ensures net neutrality. It requires that platforms disclose algorithmic ranking, that users retain access to non-algorithmic alternatives, and that manipulative engagement optimization targeting psychological vulnerabilities is prohibited. It establishes liability for AI harms, mandates independent auditing, and guarantees rights to explanation, appeal, and human review.
AI Governance and Accountability: Comprehensive frameworks ensuring that AI systems affecting rights, access, or liberty are transparent, auditable, subject to human oversight, and tested for bias and safety before and after deployment.
Algorithmic Transparency: Platforms and systems must disclose when algorithms materially influence outcomes, provide non-algorithmic alternatives, and prohibit manipulative optimization targeting psychological vulnerabilities.
Surveillance Limits: Ban warrantless mass surveillance, AI-powered public tracking, persistent location monitoring, biometric crowd surveillance, and government purchase of commercial data to evade constitutional protections.
Biometric Protections: Ban mass facial recognition in public spaces, real-time biometric tracking of demonstrations, and use of biometrics as general identity infrastructure; require strict necessity, proportionality, and deletion rules where permitted.
Data Rights and Privacy: Prohibit cross-agency surveillance fusion, secret behavioral dossiers, and use of bulk commercial data in AI surveillance; protect anonymous internet access and pseudonymous use.
Criminal Justice and Policing: Ban AI risk scoring in sentencing, bail, and punishment; prohibit predictive policing based on biased data; ban suspicion scoring and network mapping absent individualized cause.
Courts and Legal Systems: Comprehensive restrictions on AI in judicial contexts: no AI-determined sentencing, credibility assessment, or jury influence; ban on generative AI in evidence and opinions; strict admissibility standards; protections in family court, eviction, probation/parole, and administrative proceedings.
Government Services: AI may not independently deny, reduce, or terminate benefits, services, or legal status; human review required before harm; rights to explanation and appeal; bans on behavioral scoring and forced AI-only service channels.
Employment and Labor: No fully automated hiring, firing, or promotion; ban intrusive monitoring, emotion inference, and outside-hours surveillance; protections for union activity; transparency and human review requirements.
Financial Systems: No automated credit, insurance, or mortgage denial without human review; prohibition on opaque scoring and discriminatory proxies; protection against vulnerability profiling and social scoring.
Education Technology: AI may assist but not replace teachers; may not be sole basis for grading or evaluation; prohibits invasive surveillance, emotion inference, and behavioral scoring; protects student data and requires bias audits.
Healthcare AI: AI may assist but not replace clinicians in high-risk decisions; prohibition on systems designed for emotional manipulation; evaluation for mental health harms.
Military and Weapons Systems: Ban autonomous lethal targeting, nuclear AI, and AI-driven target generation; all lethal force requires identified human decision-maker; logging and congressional authorization for new capabilities; support for international treaties.
Synthetic Media and Deepfakes: Ban deceptive use for fraud, impersonation, non-consensual sexual content, political manipulation, and false depiction causing harm; require disclosure and provenance markers; allow parody and consensual use.
Environmental Impact: AI infrastructure must meet carbon neutrality requirements, disclose energy and water usage, internalize environmental costs, undergo impact assessments, and not disproportionately burden disadvantaged communities.
Internet Infrastructure: Treat ISPs as common carriers; enforce net neutrality; prohibit content-based discrimination; protect against administrative rollback; ensure open access.
Age Verification: Prohibit mandatory identity-based systems that create tracking databases; allow privacy-preserving alternatives; prevent expansion to general internet access; ban use as surveillance proxy.
The Technology and AI pillar is the largest and most technically comprehensive domain in the Freedom and Dignity Project, encompassing 362 policy positions across 22 distinct family codes. This reflects the breadth and complexity of modern digital systems and their intersection with nearly every aspect of public and private life.
The pillar is organized into the following families:
This structure reflects several key design principles:
Risk-Based Approach: Higher-risk uses (military, judicial, immigration, surveillance) face stricter prohibitions and oversight than lower-risk applications.
Human Accountability: AI may assist but not replace human judgment in consequential decisions affecting rights, liberty, or access to essential services.
Transparency and Contestability: People must know when AI affects them and have meaningful rights to explanation, appeal, and human review.
Prevention of Circumvention: Rules explicitly prevent government and corporations from using AI, automation, or private vendors to bypass constitutional limits or accountability structures.
Environmental and Social Costs: AI infrastructure must internalize environmental impacts and may not externalize costs to communities or ecosystems.
Democratic Control: High-risk AI capabilities require explicit authorization, sunset provisions, independent oversight, and public disclosure.
The 362-rule scale reflects the reality that AI and digital systems now touch nearly every domain of policy. Rather than a single "AI policy," this pillar provides domain-specific rules tailored to the particular risks and governance needs of education, healthcare, criminal justice, employment, finance, military use, and more. This granular approach ensures that AI governance is not abstract but operationally specific, enforceable, and protective of rights in context.
Every rule in this pillar, organized by policy area. Active rules are current platform commitments. Partial rules are in development. Proposed rules are planned for future inclusion.
TECH-AGES-0001
Proposed
Prohibit mandatory identity-based age verification
Age verification systems that require government ID, biometrics, or persistent tracking to access legal online content are prohibited. People have the right to read and access information without being identified or tracked.
Prohibit mandatory identity-based age verification systems that require government ID, biometric data, or persistent tracking to access lawful content
TECH-AGES-0002
Proposed
No centralized databases
Companies cannot build centralized databases of people's identities or browsing history just to verify age. Any age-check system must work without storing or sharing personal information.
Age verification systems must not create centralized databases of identity or browsing activity
TECH-AGES-0003
Proposed
Allow privacy-preserving alternatives
Privacy-preserving age checks — like on-device verification or anonymous credential systems — are permitted alternatives. These methods can confirm a user is old enough without revealing who they are.
Allow age assurance mechanisms that preserve privacy, such as local device verification, zero-knowledge proofs, or non-identifying credential systems
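The data-minimization idea behind such credentials can be sketched in a few lines. The following Python toy is illustrative only: it uses a shared HMAC key where a real deployment would use blind signatures or zero-knowledge proofs so the verifier cannot forge tokens, and the `issue_age_token` / `verify_age_token` names are hypothetical. What it demonstrates is the core property the rule requires: the token carries one age bit, an expiry, and an unlinkable nonce, never an identity.

```python
import hashlib
import hmac
import json
import secrets
import time

ISSUER_KEY = secrets.token_bytes(32)  # held by a hypothetical credential issuer

def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> dict:
    """Issue a short-lived token attesting ONLY to an age threshold.
    No name, ID number, or browsing context is included."""
    claim = {
        "over_18": over_18,
        "exp": int(time.time()) + ttl_seconds,
        "nonce": secrets.token_hex(8),  # fresh per token: prevents cross-site linkage
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """Verifier learns one bit (over 18 or not) and nothing about identity."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged
    return bool(token["claim"]["over_18"]) and token["claim"]["exp"] > time.time()

token = issue_age_token(True)
print(verify_age_token(token))  # True
```

Note that nothing in the token can be stored to rebuild a browsing profile, which is exactly what TECH-AGES-0002 and TECH-AGES-0004 require.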
TECH-AGES-0004
Proposed
Data minimization and deletion
Any permitted age verification system must collect the least information possible and delete it immediately after the check. Holding on to identity data longer than necessary is prohibited.
Any permitted age verification system must minimize data collection and prohibit retention beyond immediate verification
TECH-AGES-0005
Proposed
Narrow scope requirements
Age verification requirements can only apply to specific, high-risk types of content — not to the internet in general. This prevents age checks from becoming a tool for broad access control.
Age verification requirements must be narrowly scoped to specific high-risk content categories and may not be expanded to general internet access
TECH-AGES-0006
Proposed
Ban surveillance proxy use
Neither governments nor private companies may use age verification systems as cover for tracking, censoring, or monitoring people's online behavior.
Governments and private entities may not use age verification systems as a proxy for surveillance, censorship, or behavior tracking
TECH-AINL-0001
Proposed
High-risk AI requires review
AI systems with significant potential to affect people's lives or rights must go through a formal safety and accountability review before they can be used publicly.
High-risk AI systems must be subject to heightened governance review before deployment
TECH-AINL-0002
Proposed
No automation without accountability
AI tools used in critical areas like healthcare, housing, employment, or policing must always have a human who is responsible for their decisions. No AI system can operate in these areas without human accountability.
AI systems that affect rights, liberty, healthcare, education, employment, benefits, policing, or housing must not operate without meaningful human accountability
TECH-AINL-0003
Proposed
Rights to notice and review
When an automated system makes an important decision about you, you have the right to know about it, get an explanation, appeal it, and have a human review it.
No fully automated decisions in critical domains without a right to notice, explanation, appeal, and human review
TECH-AINL-0004
Proposed
Transparency and auditability
When government agencies use AI to make decisions, those systems must be publicly documented, understandable, and open to independent inspection.
AI systems used in public-sector decision-making must be documented, transparent, and independently auditable
TECH-AINL-0005
Proposed
Liability for harms
The companies and people who build and deploy AI systems are legally responsible for harms those systems cause. They cannot avoid liability just because the decision was made by a machine.
Operators and deployers of AI systems must retain legal liability for harms caused by those systems
TECH-AINL-0006
Proposed
Testing for bias and safety
AI systems must be tested for problems like bias, safety failures, and potential misuse — both before they launch and after they are in use.
AI systems must be tested for bias, safety, reliability, and misuse risks before and after deployment
TECH-AINL-0007
Proposed
Documentation disclosure
Companies operating covered AI systems must publish documentation about those systems — what they do, how they were trained, and what risks they carry — appropriate to how dangerous they are.
Covered AI systems must maintain model cards, documentation, or equivalent public-facing disclosures appropriate to risk level
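To make the disclosure obligation concrete, here is a minimal, hypothetical model-card fragment. The field names are illustrative assumptions, not mandated by the rule text; a real disclosure regime would specify its own schema scaled to risk level.

```json
{
  "model": "eligibility-screener-v2",
  "purpose": "Flags applications for human review; never issues final denials",
  "training_data": ["2018-2023 application records (de-identified)"],
  "known_limitations": ["Lower accuracy for applicants with thin credit files"],
  "bias_evaluation": {"last_audit": "2025-01", "auditor": "independent third party"},
  "risk_tier": "high",
  "human_oversight": "All adverse recommendations reviewed by a caseworker"
}
```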
TECH-AINL-0008
Proposed
No secret law
Governments and agencies cannot use secret AI decision systems to determine your rights, obligations, or access to services. The rules that govern decisions about you must be knowable.
Secret law or undisclosed AI decision systems must not be used to determine rights, obligations, or access to public services
TECH-AISA-0001
Proposed
All High-Risk AI Systems Must Undergo Mandatory Pre-Deployment Impact Assessments and Receive Federal Certification Before Deployment
AI systems used in high-stakes areas — like healthcare, policing, or employment — must pass a federally certified safety review before they can be deployed. This protects people from harm caused by untested systems.
Congress must direct a federal AI Safety Board — established within an independent agency — to: (1) define "high-risk AI systems" as those used in: criminal justice (recidivism scoring, predictive policing), employment (hiring, firing, performance evaluation), housing (tenant screening, mortgage approval), healthcare (diagnostic or treatment decisions), credit (underwriting, pricing), education (grading, admissions), critical infrastructure (power grids, water systems), and any AI system that makes decisions affecting 1 million or more people; (2) require all deployers of high-risk AI systems to complete a pre-deployment impact assessment covering: accuracy and bias testing across demographic groups, documentation of training data sources, adversarial robustness testing, and a civil rights impact analysis; (3) require all high-risk AI systems to receive federal certification before deployment — certification lapses annually and must be renewed; (4) prohibit deployment of any high-risk AI system that demonstrates statistically significant disparate impact on a protected class unless the deployer can demonstrate the system is the least discriminatory available alternative for the stated purpose; and (5) establish criminal penalties for knowingly deploying an uncertified high-risk AI system, and a private right of action for any individual harmed by an uncertified or non-compliant high-risk AI system, with statutory damages of $10,000 per violation.
Voluntary industry self-assessment has proven structurally inadequate for governing high-risk AI: developers have economic incentives to minimize identified risks, there is no independent verification mechanism, and harmed individuals bear the entire burden of proving that a system they cannot access caused harm they may not be able to trace. The EU AI Act (2024) established a mandatory conformity assessment regime for high-risk AI applications — covering credit scoring, employment, education, and other high-stakes domains — that requires pre-market documentation, bias testing, and registration. The United States has no equivalent. Pre-deployment certification for high-risk AI is analogous to the pre-market approval requirements applied to medical devices and pharmaceuticals: the regulatory burden is proportionate to the potential for harm, and the public interest in preventing harm before it occurs outweighs the commercial interest in rapid deployment. The "least discriminatory available" standard is drawn from the Title VII disparate impact framework and ensures that even technically effective AI systems do not impose avoidable discriminatory burdens. The $10,000 per violation statutory damages provision is necessary to make claims viable for individual harms — rejected housing applications, denied credit — that would otherwise be too small to litigate.
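The "statistically significant disparate impact" trigger in clause (4) can be made concrete with the standard selection-rate comparison used in employment analysis. The sketch below (function name hypothetical) pairs the EEOC four-fifths ratio with a two-proportion z-test; a real certification audit would use more rigorous methods, confidence intervals, and corrections for multiple comparisons.

```python
from math import sqrt

def disparate_impact_audit(sel_a: int, n_a: int, sel_b: int, n_b: int):
    """Compare favorable-outcome rates between a protected group (a)
    and a reference group (b).

    Returns (ratio, z): the selection-rate ratio (EEOC four-fifths rule
    flags values below 0.8) and a two-proportion z statistic
    (|z| > 1.96 is significant at the 5% level)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    ratio = p_a / p_b
    p_pool = (sel_a + sel_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return ratio, z

# Hypothetical audit data: 50/200 approvals for group a, 90/200 for group b.
ratio, z = disparate_impact_audit(sel_a=50, n_a=200, sel_b=90, n_b=200)
print(round(ratio, 2), round(z, 2))  # 0.56 -4.19: flagged and significant
```

A system failing both tests would, under this card, be barred from certification unless the deployer shows it is the least discriminatory available alternative.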
TECH-AISA-0003
Proposed
AI Systems Used in Criminal Justice Must Be Explainable, Subject to Challenge, and Predictive Policing Systems Must Be Banned
AI tools used in the criminal justice system must be able to explain their decisions in plain terms. Predictive policing systems — which try to predict who might commit a crime — are banned.
Congress must: (1) ban the use of predictive policing AI systems — algorithms that identify individuals or geographic areas as likely to commit future crimes based on historical crime data — by any federal, state, or local law enforcement agency receiving federal funds; (2) prohibit the use of any AI-generated risk score in bail, sentencing, parole, or probation decisions unless: (a) the defendant is provided full disclosure of the algorithm, training data, and their individual score before any hearing at which it will be considered; (b) the defendant has the right to challenge the score with independent expert analysis at government expense; and (c) the risk score is treated as advisory only — judges must independently justify in writing any decision based in part on an AI score; (3) require all law enforcement agencies using any AI tool to publish an annual public audit of the tool's accuracy, error rates, and demographic disparities, with criminal penalties for willful failure to disclose; and (4) prohibit any law enforcement use of AI tools built on training data that was generated through or disproportionately reflects discriminatory policing practices, with a private right of action for affected individuals for injunctive relief, compensatory damages, statutory damages of $5,000 per violation, and attorney's fees.
Predictive policing systems encode historical bias into forward-looking predictions: because communities of color have historically been over-policed, algorithms trained on arrest and stop data predict higher crime rates in those same communities, justifying additional policing that generates more data confirming the prediction. ProPublica's 2016 analysis of the COMPAS recidivism algorithm found it falsely labeled Black defendants as high-risk at nearly twice the rate of white defendants. Due process requires that defendants be able to examine and challenge evidence used against them, but proprietary AI systems are routinely withheld from defendants on trade-secret grounds, making meaningful challenge impossible. The advisory-only requirement and independent expert access at government expense are the minimum procedural protections necessary to make AI risk scores constitutionally sound. Annual demographic audits create accountability and allow independent researchers, civil rights organizations, and affected communities to identify and challenge systems that are producing biased outcomes. The ban on AI tools built on discriminatory training data prevents agencies from laundering the legacy of biased policing practices through a veneer of algorithmic objectivity.
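The annual public audit of "accuracy, error rates, and demographic disparities" required by clause (3) centers on exactly the metric ProPublica examined: false positive rates by group, i.e., the share of people who did not reoffend but were flagged as high risk. A minimal sketch, in which the function name and record layout are illustrative assumptions:

```python
def error_rates_by_group(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples.

    Returns each group's false positive rate: among people who did NOT
    reoffend, the fraction the tool wrongly flagged as high risk."""
    stats = {}  # group -> [false_positives, total_non_reoffenders]
    for group, predicted, actual in records:
        fp, neg = stats.setdefault(group, [0, 0])
        if not actual:            # person did not reoffend
            neg += 1
            if predicted:         # but the tool flagged them anyway
                fp += 1
        stats[group] = [fp, neg]
    return {g: fp / neg for g, (fp, neg) in stats.items() if neg}

# Hypothetical audit sample: among non-reoffenders, group_a is flagged
# twice as often as group_b -- the COMPAS-style disparity pattern.
records = (
    [("group_a", True, False)] * 40 + [("group_a", False, False)] * 60 +
    [("group_b", True, False)] * 20 + [("group_b", False, False)] * 80
)
print(error_rates_by_group(records))  # {'group_a': 0.4, 'group_b': 0.2}
```

Publishing this table annually, as the rule requires, lets defendants, researchers, and courts see the disparity directly rather than inferring it through discovery fights over proprietary systems.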
TECH-ALGO-0001
Proposed
Disclosure of algorithmic influence
When an AI or algorithm meaningfully affects what you see, whether you qualify for something, or what actions are taken on your account, you must be told that an algorithm is involved.
People must be told when AI or algorithmic systems materially influence ranking, eligibility, triage, moderation, or access decisions
TECH-ALGO-0002
Proposed
Right to contest decisions
If an algorithm makes a decision that seriously harms you, you have the right to challenge it and ask for a review.
People must have the right to contest materially harmful algorithmic decisions
TECH-ALGO-0003
Proposed
Non-algorithmic alternatives
Where feasible, platforms must offer users a version of their service that is less personalized or not driven by algorithmic recommendations.
Users must have access to non-algorithmic or less personalized alternatives where practical
TECH-ALGO-0004
Proposed
Ban manipulative optimization
Algorithms designed to manipulate users by targeting psychological weaknesses — like fear, outrage, or addiction — for profit or political influence are banned.
Ban manipulative algorithmic optimization that targets psychological vulnerabilities for profit or political influence
TECH-ALGO-0005
Proposed
Disclose ranking objectives
Platforms must explain in plain terms what their recommendation and ranking systems are trying to optimize for — for example, engagement, recency, or relevance.
Platforms must disclose core ranking and recommendation objectives in understandable terms
TECH-ALGO-0006
Proposed
Protected researcher access
Independent researchers must be able to access platform data to study how algorithms affect people, subject to privacy protections.
Independent researchers must have protected access to platform data needed to study algorithmic harms, subject to privacy safeguards
TECH-AUDT-0001
Proposed
Independent auditing requirement
AI systems that have significant potential to affect people must allow independent auditors to inspect them for safety, fairness, reliability, and potential for misuse.
Covered AI systems must support independent auditing for safety, bias, reliability, and misuse
TECH-AUDT-0002
Proposed
Meaningful redress for harms
If an AI-assisted decision harms you, you have the right to a meaningful explanation of how that decision was made and a path to seek redress.
Affected individuals must have access to meaningful explanation and redress when harmed by AI-assisted decisions
TECH-AUDT-0003
Proposed
Forensic logging
Organizations using high-risk AI systems must keep detailed records of decisions so those decisions can be reviewed, investigated, or appealed later.
Organizations deploying high-risk AI must keep logs sufficient for forensic review, investigation, and appeals
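One way to make decision logs "sufficient for forensic review" is to make them tamper-evident via hash chaining, where each entry commits to its predecessor, so any later alteration breaks the chain. A minimal Python sketch; the class and field names are hypothetical, not specified by the rule, and a production system would also anchor the chain externally (e.g., with periodic signed checkpoints).

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of automated decisions. Each entry
    includes the hash of the previous entry, so modifying or deleting a
    past record is detectable on verification."""

    def __init__(self):
        self.entries = []          # list of (entry_dict, entry_hash)
        self.prev_hash = "0" * 64  # genesis value

    def record(self, subject_id: str, decision: str, inputs_digest: str):
        entry = {
            "ts": time.time(),
            "subject": subject_id,
            "decision": decision,
            "inputs": inputs_digest,   # digest of the inputs, for later appeal
            "prev": self.prev_hash,    # chain link to the prior entry
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, entry_hash))
        self.prev_hash = entry_hash

    def verify(self) -> bool:
        """Recompute every hash and chain link; False means tampering."""
        prev = "0" * 64
        for entry, stored_hash in self.entries:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != stored_hash:
                return False
            prev = stored_hash
        return True

log = DecisionLog()
log.record("case-001", "benefits_approved", "sha256:abc")
log.record("case-002", "benefits_denied", "sha256:def")
print(log.verify())  # True
```

On appeal, an auditor can replay the chain to confirm the record of what the system decided, and from what inputs, has not been quietly rewritten.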
TECH-AUDT-0004
Proposed
Public AI registry
The government must maintain a public list of all AI systems it uses, with exceptions only for narrowly classified programs — and even those must be overseen by independent reviewers.
Government use of AI must be cataloged in a public registry, except for narrowly defined classified exceptions subject to independent oversight
TECH-AUDT-0005
Proposed
Oversight of classified systems
Even AI systems used in classified government settings must be independently reviewed by authorized oversight bodies. Being classified is not an excuse for avoiding accountability.
Classified or sensitive AI systems must still be subject to independent review under secure oversight procedures
TECH-AUDT-0006
Proposed
Mandatory pre-deployment algorithmic audit for high-risk sectors
Before an AI system is deployed in a high-stakes area like healthcare, criminal justice, or housing, it must pass an independent audit to ensure it is fair and accurate.
All AI systems used in criminal justice, consumer credit, employment screening, housing, healthcare, and education must undergo independent third-party algorithmic auditing before deployment and on a recurring basis at least every two years thereafter; audit results must be published in full; systems that produce statistically significant disparate impact across race, sex, disability, or national origin must be presumptively treated as violating applicable civil rights and anti-discrimination law, with the burden shifting to the deployer to rebut; and individuals harmed by a non-audited system in a covered sector have a private right of action for injunctive relief, actual damages, and attorney's fees.
Independent pre-deployment auditing is the essential enforcement mechanism for algorithmic anti-discrimination law. Without it, deployers self-certify compliance, and the burden falls on individual victims to prove discrimination in systems they cannot access or understand. The EEOC's 2023 AI guidance, the FTC's algorithmic accountability work, and New York City's Local Law 144 (automated employment decision tool audits) all recognize that mandatory independent auditing is necessary to make non-discrimination obligations enforceable in algorithmic contexts. The disparate impact presumption aligns with Title VII's disparate impact framework (Griggs v. Duke Power Co.) but extends it explicitly to algorithmic decision systems, closing the gap created when algorithms operate as proxies for protected characteristics. A private right of action is essential because agency enforcement resources are inadequate to cover the volume of deployed systems.
TECH-AUDT-0007
Proposal
Algorithmic systems that produce disparate impact trigger a private civil right of action
If an AI system produces decisions that disproportionately harm a particular group, the people harmed have the right to sue in civil court.
Any individual or class of individuals subject to a material adverse decision by an algorithmic system in a covered sector — and who can demonstrate, using publicly disclosed audit data or expert analysis, that the system produced statistically significant disparate impact against a protected class — must have a private civil right of action against the deploying entity for injunctive relief, compensatory damages, statutory damages of not less than $1,000 per violation, and attorney's fees; and class actions aggregating disparate impact claims must be permitted notwithstanding that individual damages may be small.
Disparate impact law is only as effective as its enforcement mechanisms. Current doctrine requires individual plaintiffs to obtain discovery of proprietary algorithmic systems, bear expert witness costs, and meet a causation burden that is nearly impossible to satisfy without the very data the defendant controls. This card creates a direct private enforcement mechanism keyed to published audit data, converting the transparency obligations of AUDT-0006 into actionable rights. The statutory minimum damages provision ensures that small-dollar harms — a rejected rental application, a denied credit extension — are economically viable to litigate. Class action permission is essential because algorithmic harms are inherently systemic: a biased system injures thousands of similarly situated people through the same mechanism, making class treatment the only efficient enforcement vehicle. This approach mirrors the structure of consumer protection and civil rights statutes where individual harm may be small but systemic injury is large.
TECH-AUTS-0001
Proposed
Strict safeguards for high-risk systems
Automated systems with a high risk of serious harm — such as restricting liberty or denying rights — must not be deployed unless strict protections are in place to prevent and address failures.
High-risk autonomous systems whose failure could cause loss of liberty, bodily harm, or deprivation of rights must not be deployed without strict safeguards
TECH-AUTS-0002
Proposed
No replacement of accountable humans
AI cannot replace the accountable human decision-maker in areas like healthcare, education, housing benefits, or employment. A human must always be responsible.
AI systems used in healthcare, education, benefits, housing, or employment decisions must not replace accountable human decision-makers
TECH-AUTS-0003
Proposed
Prohibit automated denial systems
Fully automated systems that deny people benefits, care, housing, or legal status are prohibited unless human review and the right to appeal are guaranteed before any harm can occur.
Automated denial systems for benefits, care, housing, or legal status must be prohibited unless human review and appeal rights are guaranteed before harm occurs
TECH-AUTS-0004
Proposed
Human override authority
Government agencies must always retain the ability for a human to override or escalate any automated decision. There must be clear procedures for when and how humans step in.
Government agencies must maintain human override authority and documented escalation procedures for automated systems
TECH-BIOS-0001
Proposed
Ban mass facial recognition
Police and government agencies cannot use facial recognition technology to scan crowds or public spaces for general law enforcement purposes. This prevents widespread surveillance of people who have done nothing wrong.
Ban mass facial recognition in public spaces for general law-enforcement or administrative use
TECH-BIOS-0002
Proposed
Ban real-time crowd tracking
Real-time scanning of crowds or protesters using facial recognition or other biometric technology is banned. People have the right to gather and protest without being identified by the government.
Ban real-time biometric tracking of crowds or demonstrations
TECH-BIOS-0003
Proposed
No general identity infrastructure
Biometric systems — like facial recognition or fingerprint scans — cannot be used as the standard way to verify identity when accessing lawful content or participating in public life.
Biometric systems may not be used as general identity infrastructure for access to lawful content or public life
TECH-BIOS-0004
Proposed
Strict safeguards where permitted
When biometric technology is permitted for a specific purpose, it must be strictly necessary, proportionate to the goal, open to independent review, and governed by clear rules about when data must be deleted.
Where biometrics are permitted, they must be subject to strict necessity, proportionality, auditability, and deletion rules
TECH-BIOS-0005
Proposed
Heightened data protection
Biometric data — like your face scan, fingerprints, or voice — is among the most sensitive kinds of personal information. It requires stronger consent requirements and stricter limits on how long it can be kept than regular personal data.
Biometric data must be treated as highly sensitive protected data with stronger consent and retention rules than ordinary personal data
TECH-BIOS-0006
Proposal
Facial recognition in targeted criminal investigations requires an individualized warrant and independent expert review
If police want to use facial recognition to investigate a specific suspect, they need an individual court-issued warrant and must have an independent expert review the results.
Law enforcement use of facial recognition technology to identify a specific individual in connection with a criminal investigation must be authorized by an individualized judicial warrant supported by probable cause; must be reviewed by an independent technical expert who is not employed by or contracted with the investigating agency before any identification is treated as accurate; must not be used as the sole basis for arrest, charging, or prosecution; and must be disclosed to defense counsel in any criminal proceeding where facial recognition output was a contributing factor in the investigation.
Facial recognition technology misidentifies Black, Brown, and female faces at dramatically higher rates than white male faces — documented error differentials of up to 100x in NIST testing. Multiple wrongful arrests of Black men in Detroit, New Orleans, and Atlanta resulted directly from facial recognition misidentification. The existing BIOS-0001 card bans mass facial recognition in public spaces; this card addresses the distinct scenario of targeted investigative use, which requires heightened procedural protections rather than a blanket ban. The warrant requirement addresses the Fourth Amendment dimension; the independent expert requirement addresses the reliability dimension (courts have no basis to evaluate facial recognition accuracy without independent technical review); the disclosure requirement addresses the due process dimension. Brady v. Maryland and its progeny require disclosure of material evidence, but law enforcement has concealed facial recognition's role in investigations, depriving defendants of the ability to challenge its reliability.
TECH-BIOS-0007
Proposal
Genetic data is the most sensitive protected data category; it may not be sold to insurers, employers, or law enforcement without a warrant
Genetic data is the most sensitive type of protected data. It cannot be sold to insurers, employers, or law enforcement without a court warrant.
Genetic data — including raw genomic sequences, genetic variants, family relationship inferences, disease predisposition predictions, and data derived from consumer DNA testing — must be classified as the most sensitive protected category of personal data under federal law; may not be collected, processed, or shared without explicit written consent for each specified use; may not be sold, licensed, or transferred to insurers, employers, or commercial data brokers for any purpose; may not be accessed by law enforcement without a warrant issued by a judge based on individualized probable cause; and any entity holding genetic data of U.S. persons must report any data breach affecting genetic data to affected individuals within 30 days; with violations enforceable through criminal penalties for knowing violations and a private right of action for individuals whose genetic data is unlawfully disclosed.
Genetic data is uniquely sensitive because it is immutable, inherited, and predictive of conditions the individual may not yet have — making genetic discrimination a permanent threat unlike most other forms of data misuse. The Genetic Information Nondiscrimination Act (GINA, 2008) prohibits genetic discrimination in employment and health insurance but excludes life insurance, disability insurance, and long-term care insurance — gaps that leave millions exposed. Consumer DNA testing companies (23andMe, AncestryDNA) have collected data from tens of millions of Americans; 23andMe's 2023 data breach and subsequent bankruptcy filing raised alarm about the fate of that data under asset-sale scenarios. Law enforcement has used genealogical databases, including voluntary consumer DNA databases, to identify criminal suspects without the consent of the individuals whose data made identification possible. This card expands GINA's protections across all genetic data holders, closes the insurance gap, requires a warrant for law enforcement access, and creates private enforcement rights.
TECH-DATA-0001
Proposed
Prohibit unauthorized cross-agency surveillance fusion
Government agencies cannot combine surveillance datasets from different sources into a unified system unless it is specifically authorized by law, subject to a judge's oversight, and disclosed publicly in general terms.
Prohibit cross-agency fusion of surveillance datasets except where specifically authorized by statute, subject to judicial oversight, publicly disclosed in general terms, and subject to periodic reauthorization
TECH-DATA-0002
Proposed
Ban secret dossiers
Government and private actors cannot use AI to secretly build detailed behavioral profiles on individuals. Building such a profile requires specific legal cause and judicial approval.
Ban creation of secret AI-generated behavioral dossiers on individuals absent individualized legal cause and judicial process
TECH-DATA-0003
Proposed
Ban government use of commercial data
Government surveillance systems cannot be fed with bulk location data, biometric data, or behavioral data bought from commercial brokers. This closes a loophole that bypasses constitutional warrant requirements.
Ban use of commercially acquired bulk location, biometric, or behavioral data in AI surveillance systems
TECH-EDUS-0001
Proposed
AI must enhance not replace learning
AI tools in schools must help students learn — not replace teachers, undermine critical thinking, or create unfair advantages. Students' access to quality education must not depend on technology.
AI systems in education must enhance learning without replacing human instruction, undermining critical thinking, or compromising equitable access to education
TECH-EDUS-0002
Proposed
No replacement of human educators
AI cannot take over the job of human teachers in K–12 or higher education. Direct teaching and mentorship require a human educator who can be held accountable.
AI systems may not replace human educators in primary, secondary, or higher education contexts where direct instruction is required
TECH-EDUS-0003
Proposed
Support not substitute
AI systems may assist teachers and students, but they cannot substitute for qualified instruction or mentoring. Support tools are allowed; replacement is not.
AI systems may be used to support educators and students but may not substitute for qualified teaching or mentorship
TECH-EDUS-0004
Proposed
No sole-basis grading
Grades or evaluation outcomes that affect a student's future — like advancing to the next grade, earning a diploma, or disciplinary action — cannot be based solely on AI decisions. A human must be involved.
AI systems may not be the sole basis for grading or evaluation in high-stakes academic decisions including advancement, certification, or disciplinary action
TECH-EDUS-0005
Proposed
Right to human review
If an AI system affects your grade or academic evaluation, you have the right to request a human review of that decision.
Students have the right to human review of AI-influenced grading or evaluation decisions
TECH-EDUS-0006
Proposed
Clear policies on AI use
Schools must have clear policies on how AI may and may not be used in learning. These policies must support genuine learning and cannot rely on AI detection tools that are known to be unreliable.
Educational institutions must establish clear policies on AI use that promote learning and prevent misuse without relying on unreliable detection systems
TECH-EDUS-0007
Proposed
Evaluate for bias
AI tools used in education must be tested to ensure they treat all students fairly. Tools that disadvantage students based on race, gender, disability, income, or language background are prohibited.
AI systems used in education must be evaluated for bias and must not disadvantage students based on race, gender, disability, socioeconomic status, or language background
TECH-EDUS-0008
Proposed
Accessibility and equity
Every student must be able to use AI tools in the classroom, regardless of their background or resources. AI cannot make educational inequality worse.
AI tools used in education must be accessible to all students and may not create or reinforce educational inequality
TECH-EDUS-0009
Proposed
Data minimization
When AI systems collect data about students, they may only collect what is actually needed for educational purposes. Gathering extra data about minors is not allowed.
Student data collected by AI systems must be limited to what is necessary for educational purposes
TECH-EDUS-0010
Proposed
No commercial use of student data
Data collected from students through AI tools cannot be sold, shared with advertisers, or used to build profiles for non-educational purposes.
Student data may not be sold, shared, or used for advertising, profiling, or unrelated commercial purposes
TECH-EDUS-0011
Proposed
Enhanced protections for minors
Children using AI in schools have stronger privacy protections than adults. Stricter rules apply to how student data is collected, stored, and used.
Enhanced privacy protections are required for minors using AI systems in educational settings
TECH-EDUS-0012
Proposed
Ban invasive surveillance
Schools cannot use AI to continuously monitor students, track their behavior, or analyze their bodies through biometric surveillance without a strong specific justification and protective safeguards.
Ban use of AI systems for invasive surveillance of students including continuous monitoring, behavioral tracking, or biometric analysis without strict justification and safeguards
TECH-EDUS-0013
Proposed
No emotion inference
AI systems cannot be used to infer how students are feeling, how attentive they are, or their psychological state as part of grading or discipline decisions.
AI systems may not be used to infer student emotions, attention, or psychological state for evaluation or discipline
TECH-EDUS-0014
Proposed
No behavioral scoring
Schools cannot use AI to assign behavioral compliance scores to students. Students are not subjects to be ranked by their conformity to automated expectations.
Educational institutions may not use AI systems to assign behavioral or compliance scores to students
TECH-EDUS-0015
Proposed
Disclose AI-generated content
When educational content is AI-generated, students must be told. The limitations of AI-generated material must also be disclosed so students can evaluate it critically.
AI-generated educational content must be clearly identified and its limitations disclosed
TECH-EDUS-0016
Proposed
No ideological manipulation
AI systems in education cannot impose a particular political or ideological viewpoint. Any attempt to use AI to shape students' beliefs without transparency or oversight is prohibited.
AI systems may not be used to impose ideological viewpoints or manipulate educational content without transparency and oversight
TECH-EDUS-0017
Proposed
Support critical thinking
Education technology should strengthen students' ability to think critically and form their own judgments — not encourage passive acceptance of AI-generated answers.
Educational use of AI must support development of critical thinking rather than passive consumption of generated answers
TECH-EDUS-0018
Proposed
Accessibility for disabilities
AI tools that help students with disabilities — like transcription, translation, or adaptive learning — are encouraged. Accessibility benefits are a legitimate and important use of AI in education.
AI may be used to improve accessibility for students with disabilities including translation, transcription, and adaptive learning tools
TECH-EDUS-0019
Proposed
Multilingual support
AI tools that help students learn in multiple languages are encouraged. Reducing language barriers improves equity — as long as the tools are accurate and fair across languages.
AI systems should support multilingual education and reduce language barriers without compromising accuracy or fairness
TECH-EDUS-0020
Proposed
Disclosure requirement
Schools must tell students, families, and the public when AI is being used in teaching, grading, or administrative decisions.
Educational institutions must disclose use of AI systems in teaching, evaluation, and administration
TECH-EDUS-0021
Proposed
Regular independent audits
AI tools used in education must be regularly inspected by independent reviewers for accuracy, fairness, and whether they are actually helping students learn.
AI systems used in education must be subject to regular independent audits for bias, accuracy, and educational impact
TECH-EDUS-0022
Proposed
No proprietary opacity
Technology companies that sell AI tools to schools cannot use business secrets as an excuse to avoid accountability or oversight of how their products work.
Vendors providing AI systems to educational institutions may not use proprietary claims to avoid oversight or accountability
TECH-EDUS-0023
Proposed
Demonstrate educational benefit
Before AI tools are rolled out to classrooms at scale, schools must have evidence that they actually improve learning outcomes. Unproven tools must not be tested on students without safeguards.
AI systems must demonstrate educational benefit through evidence before widespread deployment in classrooms
TECH-EDUS-0024
Proposed
Informed consent for testing
Students cannot be used as unwitting test subjects for AI products. If a school wants to use students to test a new AI tool, families must be informed and given the opportunity to consent.
Students may not be used as unwitting test subjects for AI systems without informed consent and appropriate safeguards
TECH-ENVS-0001
Proposed
Sustainable operations required
AI companies must not pass their environmental costs — like power consumption, water use, or pollution — onto the public or the environment. They must operate within sustainable limits.
AI systems and infrastructure must not externalize environmental costs and must operate within sustainable limits for energy, water, materials, and ecological impact
TECH-ENVS-0002
Proposed
Carbon neutrality requirement
Large AI systems and data centers must be carbon neutral or carbon negative over their full life cycle. This means accounting for all emissions from building and running these systems, not just electricity use.
Large-scale AI systems and data centers must meet strict carbon neutrality or carbon-negative requirements over their full lifecycle
TECH-ENVS-0003
Proposed
Energy disclosure
AI companies must publicly disclose how much energy their systems use, where it comes from, and how much pollution they produce. Transparency is required, not optional.
Operators of large AI systems must publicly disclose energy usage, sources, emissions, and efficiency metrics
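The "efficiency metrics" this rule requires already have industry-standard forms for data centers: PUE (power usage effectiveness) and WUE (water usage effectiveness), both defined by The Green Grid. As a minimal sketch of how a disclosure might compute them — the function name and all figures below are invented for illustration:

```python
def facility_efficiency(total_energy_kwh, it_energy_kwh, water_liters):
    """Compute standard data-center efficiency metrics.

    PUE = total facility energy / IT equipment energy (ideal value: 1.0)
    WUE = liters of water consumed per kWh of IT equipment energy
    """
    pue = total_energy_kwh / it_energy_kwh
    wue = water_liters / it_energy_kwh
    return {"PUE": round(pue, 3), "WUE_L_per_kWh": round(wue, 3)}

# Invented annual figures for illustration only.
report = facility_efficiency(total_energy_kwh=50_000_000,
                             it_energy_kwh=40_000_000,
                             water_liters=72_000_000)
# PUE = 1.25, WUE = 1.8 L/kWh
```

Reporting ratios like these alongside raw totals is what makes disclosures comparable across operators, which is the point of the standardized-metrics requirement later in this family (ENVS-0017).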
TECH-ENVS-0004
Proposed
Renewable energy supply
Large AI data centers must generate or procure their own clean energy, rather than drawing more from the shared electric grid and driving up costs and emissions for everyone else.
High-consumption AI infrastructure must supply or offset its energy usage through dedicated renewable or low-impact sources rather than relying on general grid extraction
TECH-ENVS-0005
Proposed
Water usage disclosure
AI infrastructure operators must publicly disclose how much water their facilities use — including for cooling — and what impact that has on local water supplies.
AI infrastructure operators must disclose water usage, including cooling consumption, and local environmental impact
TECH-ENVS-0006
Proposed
Water resource protection
AI data centers cannot be built or expanded in ways that strain local water supplies, especially in regions already facing drought or water scarcity.
AI systems must not disproportionately strain local water resources, particularly in drought-prone or vulnerable regions
TECH-ENVS-0007
Proposed
Materials sourcing standards
The rare earth metals and other materials used to build AI hardware must be sourced responsibly, with strong environmental and labor protections throughout the supply chain.
Materials used in AI hardware, including rare earth elements, must be sourced under strict environmental and labor standards
TECH-ENVS-0008
Proposed
Lifecycle responsibility
AI hardware makers are responsible for the environmental impact of their products from production through disposal — not just during the use phase.
AI hardware producers must be responsible for full lifecycle impacts including manufacturing, durability, and end-of-life disposal
TECH-ENVS-0009
Proposed
Durability and repairability
AI hardware must be designed to last, be repairable, and be reusable. Planned obsolescence that generates unnecessary electronic waste is prohibited.
AI-related hardware must meet durability, repairability, and reuse standards to reduce electronic waste
TECH-ENVS-0010
Proposed
Recycling programs
Companies must provide responsible recycling and safe disposal programs for AI hardware and infrastructure components when those items reach end of life.
Operators must implement responsible recycling and disposal programs for AI hardware and infrastructure components
TECH-ENVS-0011
Proposed
Internalize environmental costs
AI companies must pay for their own environmental impacts rather than shifting those costs to local communities, future generations, or ecosystems.
Companies developing or deploying AI systems must internalize environmental costs rather than shifting them to the public or ecosystems
TECH-ENVS-0012
Proposed
Environmental impact assessments
Large new AI data centers and infrastructure expansions must undergo formal environmental impact reviews before they can be approved and built.
Large-scale AI deployments must undergo environmental impact assessments prior to construction or expansion
TECH-ENVS-0013
Proposed
Support climate efforts
AI can and should be used to help address climate change — for example, through climate modeling, environmental monitoring, and optimizing clean energy systems — but these uses must be transparent and accountable.
AI should be used to support climate modeling, environmental monitoring, and mitigation efforts under transparent and accountable frameworks
TECH-ENVS-0014
Proposed
Resource optimization
AI tools can be used to reduce energy and water waste in operations — but only if those optimizations don't create hidden environmental harms or unfairly shift burdens onto other communities.
AI may be used to optimize energy, water, and resource usage provided it does not create hidden environmental tradeoffs or inequitable impacts
TECH-ENVS-0015
Proposed
Grid modernization support
AI systems may help upgrade and modernize the electrical grid and expand renewable energy — but these tools must operate transparently and serve the public interest, not just corporate efficiency goals.
AI systems may support modernization of electrical grids, microgrids, and renewable energy systems while maintaining transparency and public accountability
TECH-ENVS-0016
Proposed
No misrepresentation
AI companies cannot misrepresent their environmental performance by cherry-picking data, using unverifiable carbon offsets, or making sustainability claims that don't hold up to scrutiny.
AI companies may not misrepresent environmental impact through selective reporting, unverifiable offsets, or unsupported sustainability claims
TECH-ENVS-0017
Proposed
Standardized reporting
Environmental reporting by AI companies must follow standardized, verifiable measurements. Inconsistent or self-selected metrics can hide real impacts.
Environmental reporting for AI systems must follow standardized verifiable metrics to prevent manipulation or selective disclosure
TECH-ENVS-0018
Proposed
Environmental justice protection
AI data centers and infrastructure cannot be systematically placed in lower-income or marginalized communities to avoid scrutiny or accountability for their environmental harms.
AI infrastructure may not disproportionately locate environmental burdens in disadvantaged communities or regions
TECH-ENVS-0019
Proposed
No global offloading
The environmental costs of building AI hardware — like mining and manufacturing — cannot be pushed onto lower-income countries or regions that lack equivalent environmental protections.
Environmental costs of AI supply chains must not be offloaded to developing regions without equivalent protections and standards
TECH-ENVS-0020
Proposed
Integration with national policy
Planning for AI infrastructure must be integrated with national strategies for grid modernization and renewable energy investment. AI's energy needs cannot be treated as separate from public energy policy.
AI infrastructure policy must integrate with national energy-grid modernization, renewable investment, and public infrastructure strategies
TECH-ENVS-0021
Proposed
Noise pollution standards
Data centers must meet noise pollution standards that protect nearby communities. Local residents must have access to monitoring data and the ability to demand mitigation.
Establish standards for noise pollution from AI data centers and other large data centers including monitoring, mitigation, and local community protections
TECH-ENVS-0022
Proposed
Water consumption caps
AI data centers that exceed defined water consumption limits must reduce usage, compensate affected communities, or face regulatory penalties.
Large-scale AI data centers must operate within binding water consumption limits based on regional availability and must report actual usage against permitted caps on a publicly accessible basis
TECH-ENVS-0023
Proposed
Water-stressed region restrictions
AI data centers cannot be built or expanded in water-stressed regions without a rigorous review of the impact on local water availability and community needs.
AI infrastructure requiring significant water cooling may not be sited in regions classified as water-stressed or drought-prone under federal or state water scarcity designations without demonstrated water-neutral mitigation plans and public review
TECH-ENVS-0024
Proposed
Groundwater and aquifer protection
AI data centers must not endanger local groundwater or underground water reserves. Protection of underground water sources is required.
AI data centers may not draw on groundwater or aquifer sources in ways that degrade water tables, reduce flows to downstream users, or damage ecosystems without full environmental review, public notice, and binding mitigation commitments
TECH-ENVS-0025
Proposed
Water recycling and reclamation
AI data centers must use water recycling and reclamation systems where technically feasible. Wasting water that could be recovered is not acceptable.
New large-scale AI data centers must implement water recycling, reclamation, or closed-loop cooling systems to the maximum extent technically feasible and must report recycling rates alongside gross consumption figures
TECH-ENVS-0026
Proposed
Drought contingency requirements
AI data centers must have drought contingency plans that specify how they will reduce water consumption during water shortages without shifting burdens onto local communities.
Operators of high-water-consumption AI infrastructure must maintain and publicly file drought contingency plans specifying operational curtailment thresholds during declared water emergencies, priority for municipal and agricultural water users, and timelines for transitioning to low-water or air-cooled alternatives
TECH-ENVS-0027
Proposed
Community water impact review
Before a large AI data center is built or expanded, the local community must have a formal review process to assess the impact on water supply and other shared resources.
Before approving large data center water permits, regulators must conduct a community water impact review assessing effects on residential water access, municipal supply systems, agricultural users, and environmental flows, with mandatory public comment periods and binding mitigation requirements
TECH-FINC-0001
Proposed
Fairness and transparency in finance
AI tools used in finance, credit, and insurance must be fair, transparent, and non-discriminatory. Access to essential financial services cannot be undermined by opaque automated systems.
AI systems in finance, credit, and insurance must not undermine fairness, transparency, equal access, or protection from discrimination in essential economic systems
TECH-FINC-0002
Proposed
No automated denial of credit
AI cannot independently deny you a loan, mortgage, or other financial service. A qualified human must review and be accountable for any denial before it takes effect.
AI systems may not independently deny credit, loans, mortgages, refinancing, or other essential financial services without direct human review and accountability
TECH-FINC-0003
Proposed
Human review before denial
Any decision to deny, restrict, or worsen your access to credit must be made directly by a human reviewer — not delegated to an AI. The human must make an independent judgment.
Any decision that would deny, restrict, or materially worsen access to credit or lending must be made directly and independently by a qualified human reviewer before harm occurs
TECH-FINC-0004
Proposed
No opaque criteria
AI systems that assess your creditworthiness must be able to explain how they reach their conclusions. Opaque or unexplainable criteria that affect financial decisions are not permitted.
AI systems used in underwriting or creditworthiness assessments may not rely on opaque or unexplainable criteria that materially affect outcomes
TECH-FINC-0005
Proposed
Anti-discrimination requirement
AI systems in lending and insurance cannot discriminate based on your race, gender, religion, or other protected characteristics — or based on zip code, language, or other stand-ins for those characteristics.
AI systems in finance and insurance must not discriminate based on protected characteristics or proxies for those characteristics including zip code, language, behavior, or network data
TECH-FINC-0006
Proposed
Regular audits for bias
AI systems used in lending, underwriting, and insurance must be regularly audited to check for discriminatory outcomes — including patterns that disproportionately harm particular groups.
AI systems used in lending, underwriting, pricing, or claims decisions must be regularly audited for disparate impact and discriminatory outcomes
TECH-FINC-0007
Proposed
No automated insurance denial
AI cannot independently deny, restrict, or cancel your insurance coverage. A human must review and be directly accountable for any adverse coverage or claims decision.
AI systems may not independently deny, restrict, reduce, or terminate insurance coverage or claims without direct and independent human review
TECH-FINC-0008
Proposed
Independent human judgment required
Human reviewers making insurance decisions cannot simply rubber-stamp what an AI recommends. They must apply their own independent judgment.
Human reviewers in insurance decisions may not rely solely on AI-generated recommendations and must exercise independent judgment
TECH-FINC-0009
Proposed
AI for approval not denial
AI tools can be used to help approve insurance claims or coverage more efficiently — but they cannot be the primary reason a claim is denied, reduced, or restricted.
AI systems may be used to assist or expedite approval of insurance claims or coverage but may not be used as the primary basis for denial, restriction, or reduction
TECH-FINC-0010
Proposed
No negative inference from AI
The fact that an AI did not approve a claim cannot be used — even implicitly — as a reason to deny insurance or delay coverage. AI non-approval is not a valid justification for denial.
Absence of AI approval or recommendation may not be used as evidence or implicit justification for denial, restriction, or delay of insurance claims or coverage
TECH-FINC-0011
Proposed
No behavioral surveillance scoring
AI systems in finance and insurance cannot use broad behavioral surveillance or assign social scores to determine your eligibility, pricing, or access. Your digital behavior cannot be used against you without specific legal authority.
AI systems may not use generalized behavioral surveillance or social scoring to determine eligibility, pricing, or access in finance, credit, or insurance
TECH-FINC-0012
Proposed
Transparent risk scoring
Risk scoring in finance and insurance must use transparent, lawful, and relevant factors. Systems that rely on opaque behavioral inference to score people are prohibited.
Risk scoring in finance and insurance must be based on transparent, lawful, and relevant factors and may not rely on opaque behavioral inference
TECH-FINC-0013
Proposed
No vulnerability profiling
AI systems cannot use detailed profiles of your personal vulnerabilities to charge you higher prices, fees, or unfavorable terms. Exploiting individual circumstances for profit is prohibited.
AI systems may not use individualized vulnerability profiling to impose exploitative pricing, rates, fees, or terms
TECH-FINC-0014
Proposed
Fair housing access
AI tools used in mortgage lending and housing finance cannot reproduce patterns of historical discrimination. Fair access to housing-related financial services must be preserved.
AI systems used in mortgage, housing finance, or rental screening must not undermine fair access to housing or reproduce historical discrimination
TECH-FINC-0015
Proposed
Tenant screening limits
Tenant screening and other housing-related AI systems cannot rely on opaque scoring systems or unverifiable data. People's housing access cannot be blocked by black-box algorithms.
Tenant screening and housing-related AI systems may not rely on opaque scoring or unverifiable data that materially affect housing access
TECH-FINC-0016
Proposed
Right to explanation
When an AI system meaningfully influences a financial, credit, or insurance decision affecting you, you have the right to a plain-language explanation of how that decision was made.
People have the right to a meaningful explanation of AI-influenced financial, credit, or insurance decisions affecting them
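One established way to deliver such an explanation is the reason-code approach used in adverse-action notices: translate the factors that most hurt an applicant's score into plain language. The sketch below assumes a scored model exposes per-factor point impacts; the factor names and phrasings are invented for illustration.

```python
# Hypothetical mapping from internal factor codes to plain language.
REASONS = {
    "high_utilization": "Your balances are high relative to your credit limits.",
    "short_history": "Your credit history is relatively short.",
    "recent_delinquency": "A recent late payment lowered your score.",
}

def explain(factor_impacts, top_n=2):
    """factor_impacts: {factor_code: points lost}. Returns the top_n
    most impactful factors as plain-language sentences."""
    worst = sorted(factor_impacts, key=factor_impacts.get, reverse=True)
    return [REASONS[f] for f in worst[:top_n]]

notice = explain({"high_utilization": 42,
                  "short_history": 10,
                  "recent_delinquency": 25})
```

Whether reason codes alone satisfy a "meaningful explanation" standard is a policy question; the point is that explanations tied to actual model factors are technically routine, so opacity is a choice, not a necessity.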
TECH-FINC-0017
Proposed
Right to appeal
When an AI-influenced decision negatively affects your financial, credit, or insurance situation, you have the right to a timely appeal heard by a human reviewer.
People must have access to a timely human appeal process for materially adverse AI-influenced financial, credit, or insurance decisions
TECH-FINC-0018
Proposed
Disclosure requirement
Companies using AI in consequential financial or insurance decisions must tell you when those systems played a meaningful role.
Entities using AI in consequential financial or insurance decisions must disclose when such systems materially influence outcomes
TECH-FINC-0019
Proposed
Data minimization
AI systems in finance and insurance can only collect data about you that is actually necessary for a lawful and relevant decision. Collecting more than needed is prohibited.
AI systems in finance and insurance may collect only data that is strictly necessary for lawful and relevant decision-making
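Data minimization is enforceable mechanically: maintain a per-purpose allow-list of necessary fields and reject any intake that collects beyond it. The sketch below is a minimal illustration; the purposes, field names, and allow-lists are hypothetical and would in practice be fixed by regulation or internal compliance policy.

```python
# Hypothetical per-purpose allow-lists: the fields deemed strictly
# necessary for each lawful decision type.
ALLOWED_FIELDS = {
    "credit_underwriting": {"income", "debt", "payment_history"},
    "auto_insurance_pricing": {"vehicle_type", "driving_record", "annual_mileage"},
}

def check_minimization(purpose, collected):
    """Return the fields collected beyond the allow-list for this
    purpose; a non-empty result means the intake over-collects."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return sorted(set(collected) - allowed)

excess = check_minimization(
    "credit_underwriting",
    ["income", "debt", "payment_history", "social_graph", "browsing_history"],
)
```

Running such a check at the point of collection, rather than auditing after the fact, is what turns "only strictly necessary data" from an aspiration into a gate.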
TECH-FINC-0020
Proposed
No data sales or repurposing
Financial and insurance data collected by AI systems cannot be sold, shared with advertisers, or repurposed for behavioral profiling unrelated to the original decision.
Financial and insurance data used by AI systems may not be sold, shared, or repurposed for unrelated commercial targeting or behavioral profiling
TECH-FINC-0021
Proposed
No secret profile enrichment
Companies cannot secretly enrich your financial or insurance profile by adding data from third-party brokers, social media, or surveillance sources without explicit disclosure and legal authority.
Entities may not secretly enrich financial or insurance profiles with third-party consumer data, social-graph data, or surveillance-derived data without explicit lawful authority and disclosure
TECH-FINC-0022
Proposed
No cross-system denial loops
AI systems cannot trap people in self-reinforcing denial loops where a bad credit score blocks insurance, which blocks housing, which blocks benefits — all without independent human review and due process at each step.
AI systems may not be used to create cross-system denial loops where credit, insurance, housing, or public-benefit decisions reinforce each other without independent review and due process
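A denial loop of this kind is, structurally, a cycle in the graph of which systems consume other systems' adverse decisions, so a regulator or auditor could screen for it with ordinary cycle detection. The sketch below is illustrative only; the system names and dependency map are invented.

```python
# Sketch: detect a cross-system denial loop as a cycle in the graph of
# "a denial in X feeds the decision made by Y".
def find_cycle(deps):
    """deps: {system: [systems whose adverse decisions it consumes]}.
    Returns True if any denial path feeds back on itself."""
    state = {}  # 0 = currently visiting, 1 = fully explored

    def visit(node):
        if state.get(node) == 0:
            return True            # back edge: a loop exists
        if state.get(node) == 1:
            return False
        state[node] = 0
        if any(visit(n) for n in deps.get(node, [])):
            return True
        state[node] = 1
        return False

    return any(visit(n) for n in deps)

deps = {
    "insurance": ["credit"],   # insurance pricing consumes credit score
    "housing": ["insurance"],  # tenant screening consumes insurance record
    "credit": ["housing"],     # credit model consumes housing record
}
looped = find_cycle(deps)
```

Mapping real decision dependencies is the hard part; once mapped, the loop itself is trivially detectable, which is why the rule can reasonably demand that such loops not exist.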
TECH-FINC-0023
Proposed
Independent audits
AI systems in finance and insurance must be regularly, independently audited for bias, fairness, legal compliance, and harm to consumers.
AI systems used in finance, credit, and insurance must be subject to regular independent audits for bias, fairness, legality, and consumer harm
TECH-FINC-0024
Proposed
No proprietary opacity
Companies cannot use business secrets or proprietary claims to prevent regulators, courts, or affected people from understanding, challenging, or appealing AI-influenced financial decisions.
Trade secrecy or proprietary claims may not be used to prevent meaningful oversight, explanation, or appeal in consequential financial or insurance AI systems
TECH-FINC-0025
Proposed
Documentation requirement
Companies deploying consequential AI in finance and insurance must maintain detailed records sufficient to allow independent review, auditing, and accountability.
Entities deploying consequential AI systems in finance and insurance must maintain documentation sufficient for review, audit, and accountability
TECH-FINC-0026
Proposed
No exploitation of vulnerability
AI systems cannot be used to identify and exploit people who are financially vulnerable — targeting them with predatory loans, fees, or insurance terms.
AI systems may not be used to identify and exploit financially vulnerable individuals through targeted loans, fees, insurance terms, or sales practices
TECH-FINC-0027
Proposed
No impersonation
AI systems in finance and insurance cannot pretend to be human advisors or misrepresent what authority they have or what obligations they carry. You have the right to know you are dealing with an automated system.
AI systems in finance and insurance may not impersonate human advisors or misrepresent their authority, capabilities, or obligations to consumers
TECH-FINC-0028
Proposed
Public-interest standards for essential systems
When financial or insurance systems function as essential gateways to housing, healthcare, or basic economic participation, they must be governed by the highest public-interest standards — not just market incentives.
Where finance, credit, or insurance systems function as essential gateways to housing, healthcare, transportation, or economic participation, they must be governed by heightened public-interest standards
TECH-FINC-0029
Proposed
No automatic exclusion from essential systems
AI systems cannot automatically exclude people from financial systems that are necessary for basic participation in society. Access to essential economic infrastructure must be protected.
AI systems must not be allowed to automatically exclude people from essential economic systems needed for basic social participation
TECH-GOVN-0001
Proposed
Preserve due process and rights
Government and public-service AI systems must uphold due process, equal protection, and transparency. AI cannot be used to undermine people's legal rights or access to services.
AI systems used by government or public-service entities must not undermine due process, equal protection, transparency, accountability, or access to rights and services
TECH-GOVN-0002
Proposed
No automated denial of benefits
Government AI cannot independently cut off, reduce, or deny your access to public benefits, services, or legal status. Only a human official can make that call.
AI systems may not independently deny, terminate, reduce, or suspend access to public benefits, services, or legal status determinations
TECH-GOVN-0003
Proposed
Human decision-maker before harm
Any decision by the government to deny, reduce, or delay benefits or services must be made by a qualified human decision-maker — not delegated entirely to an automated system.
Any decision that would deny, reduce, terminate, or delay benefits, services, or rights must be made directly and independently by a qualified human decision-maker before harm occurs
TECH-GOVN-0004
Proposed
Independent human judgment required
When humans review government AI decisions, they cannot simply repeat what the AI recommended. They must think independently and exercise real judgment.
Human reviewers may not rely solely on AI-generated recommendations when making public-service or benefits decisions and must exercise independent judgment
TECH-GOVN-0005
Proposed
AI for approval not denial
Government can use AI to help identify who might be eligible for benefits or to process approvals more quickly — but AI cannot be the primary driver of denials or terminations.
AI systems may be used to help identify likely eligibility or expedite approvals but may not be used as the primary basis for denial, reduction, or termination of benefits or services
TECH-GOVN-0006
Proposed
Right to explanation
If a government AI system affects your benefits, services, legal status, or rights, you have the right to a clear explanation of how that decision was made.
Individuals have the right to a meaningful explanation of any AI-influenced government decision affecting benefits, services, legal status, or access to rights
TECH-GOVN-0007
Proposed
Right to appeal
When a government AI system makes an adverse decision affecting you, you must have a realistic and timely way to appeal that decision to a human decision-maker.
Individuals must have a timely, accessible appeal process before a human decision-maker for any materially adverse AI-influenced government decision
TECH-GOVN-0008
Proposed
Disclosure requirement
Government agencies must openly disclose when AI systems materially shape decisions that affect the public. Hidden use of AI in government decision-making is prohibited.
Government agencies must clearly disclose when AI systems materially influence decisions affecting the public
TECH-GOVN-0009
Proposed
No behavioral scoring
The government cannot use AI to assign you a generalized risk score or trustworthiness rating to decide your access to public services or rights. Individual scoring for compliance or suspicion is prohibited.
Government may not use AI systems to assign generalized risk, trustworthiness, fraud, or compliance scores to individuals for access to public services or rights
TECH-GOVN-0010
Proposed
No discriminatory profiling
Government AI systems cannot use your race, religion, gender, or other protected characteristics — or proxies for them — to infer whether you are eligible, suspicious, or dangerous.
Government AI systems may not use protected characteristics or their proxies to infer eligibility, suspicion, dangerousness, or worthiness
TECH-GOVN-0011
Proposed
No social conformity scoring
The government cannot use AI to score or monitor your behavior, ideological views, or social conformity as a condition of receiving public services or benefits. Social scoring by government is prohibited.
Government may not use AI systems to monitor or score behavioral conformity, ideological alignment, or social desirability as a condition of access to services or benefits
TECH-GOVN-0012
Proposed
Access to human representatives
Everyone must be able to speak with a human government representative for matters involving benefits, legal status, healthcare, housing, education, or other essential services. AI-only channels are not sufficient.
People must retain access to human government representatives for matters involving benefits, legal status, healthcare, housing, education, or essential public services
TECH-GOVN-0013
Proposed
No forced AI-only channels
The government cannot force you into an AI-only service channel where the absence of a human would make it harder to be treated fairly, understand decisions, or contest them.
Government may not force individuals into AI-only service channels where lack of human access would impair fairness, comprehension, or ability to contest decisions
TECH-GOVN-0014
Proposed
Accessibility for vulnerable populations
AI-powered public services must work for people with disabilities, low-income users, elderly people, and those with limited English proficiency. AI cannot create new barriers to accessing services.
AI-enabled public services must be accessible to disabled, low-income, elderly, and limited-English-proficiency users and may not create new access barriers
TECH-GOVN-0015
Proposed
No automated disability denials
AI systems cannot deny you disability benefits or override medical evidence from a licensed provider without transparent human review and the opportunity to challenge the decision.
AI systems may not be used to deny disability benefits or override licensed medical evidence without transparent human review and due process
TECH-GOVN-0016
Proposed
No mass fraud sweeps
The government cannot use AI to conduct mass sweeps looking for fraud or suspicion-based enforcement against people receiving benefits. These actions require individualized legal standards and human review.
Government may not use AI systems to conduct mass fraud sweeps or suspicion-based benefits enforcement without individualized legal standards and human review
TECH-GOVN-0017
Proposed
No automated termination of essential benefits
AI systems cannot automatically terminate housing assistance, food assistance, healthcare coverage, disability support, or income benefits based on a statistical anomaly or behavioral prediction alone.
AI systems may not automatically terminate housing, food, healthcare, disability, or income-support benefits based on scoring anomalies or probabilistic inference
TECH-GOVN-0018
Proposed
No automated immigration decisions
AI cannot independently decide someone's immigration status, order detention or deportation, rule on asylum claims, or determine family reunification. These decisions require a human official.
AI systems may not independently determine immigration status, detention, deportation, asylum credibility, or family reunification outcomes
TECH-GOVN-0019
Proposed
No credibility inference in immigration
The government cannot use AI to infer whether someone applying for immigration or asylum status is telling the truth, presenting a danger, or has hidden intentions. These judgments require human review.
Government may not use AI systems to infer truthfulness, credibility, intent, or dangerousness in immigration or asylum contexts
TECH-GOVN-0020
Proposed
Human review for immigration decisions
Any government decision about someone's immigration status or detention that was influenced by AI must be reviewed and finalized by a human official subject to due process and judicial oversight.
Any AI-influenced immigration or detention decision must be reviewed and decided by a human official subject to due process and judicial oversight
TECH-GOVN-0021
Proposed
Vendor accountability
Private companies that supply AI systems to government must follow the same legal, constitutional, and ethical rules as the government itself. Government cannot outsource its obligations to private vendors.
Private vendors and contractors supplying AI systems to government are subject to the same legal, constitutional, and ethical constraints as the government entities using them
TECH-GOVN-0022
Proposed
No proprietary opacity
A company's claim that its AI is a proprietary trade secret cannot be used to block oversight, prevent explanation of decisions, or prevent appeals in public-sector AI systems.
Trade secrecy or proprietary claims may not be used to prevent meaningful oversight, explanation, or appeal for AI systems used in public decision-making
TECH-GOVN-0023
Proposed
Procurement disclosure
When government buys AI systems, the procurement process must be public, disclosing what the system does, its limitations, who built it, and how it will be overseen.
Government procurement of AI systems must include public disclosure of purpose, capabilities, limitations, oversight structure, and vendor conflicts of interest
TECH-GOVN-0024
Proposed
Regular independent audits
All major government AI systems must be regularly and independently audited for legal compliance, accuracy, bias, and impact on people's rights.
Government AI systems must undergo regular independent audits for legality, accuracy, bias, rights impacts, and administrative fairness
TECH-GOVN-0025
Proposed
Public AI registry
All government AI systems that affect rights, benefits, or legal status must be listed in a publicly accessible registry that describes what each system does and how it is overseen.
All materially consequential government AI systems must be listed in a public registry with clear descriptions of use, authority, and oversight
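A registry of this kind is ultimately a published schema plus a publication pipeline. The sketch below shows one plausible shape for a registry entry; every field name here is an assumption for illustration, since an actual schema would be fixed by statute or regulation.

```python
from dataclasses import dataclass, asdict, field

@dataclass
class RegistryEntry:
    """One entry in a public AI registry (illustrative schema only)."""
    system_id: str
    agency: str
    purpose: str
    legal_authority: str     # the statute or rule authorizing the system
    oversight_body: str      # who audits it and receives complaints
    affects: list = field(default_factory=list)  # e.g. rights, benefits
    sunset_date: str = ""    # reauthorization deadline (ISO date)

# Hypothetical entry for a benefits-eligibility screening tool.
entry = RegistryEntry(
    system_id="BEN-ELIG-01",
    agency="Dept. of Social Services",
    purpose="Flag likely-eligible applicants for expedited approval",
    legal_authority="(enabling statute citation)",
    oversight_body="Independent AI Audit Office",
    affects=["public benefits"],
    sunset_date="2028-12-31",
)
record = asdict(entry)  # plain dict, ready to publish as JSON
```

Note how the schema itself encodes several neighboring rules: `legal_authority` operationalizes the explicit-authority requirement, and `sunset_date` operationalizes the sunset-and-reauthorization requirement.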
TECH-GOVN-0026
Proposed
Sunset and reauthorization
Government AI systems that affect rights or services must have expiration dates and must be periodically reauthorized through a public process. Indefinite authorization is not acceptable.
Government AI systems affecting rights, benefits, or legal status must include sunset dates and periodic reauthorization requirements
TECH-GOVN-0027
Proposed
Explicit legal authority required
Government agencies cannot deploy AI in ways that affect the public without specific legal authority to do so and clear limits on how the system can be used.
Government agencies may not deploy consequential AI systems without explicit legal authority and defined limits on scope and use
TECH-GOVN-0028
Proposed
No unwitting test subjects
Government cannot use the general public as uninformed test subjects for consequential AI experiments. Any testing must be lawful, transparent, and subject to ethical oversight.
Government may not use the public as unwitting test subjects for consequential AI systems without lawful authority, transparency, and ethical safeguards
TECH-GOVN-0029
Proposed
Right to challenge system legality
People have the right not only to challenge a specific AI decision made against them, but also to challenge whether the AI system itself was legally authorized and valid.
Individuals must be able to challenge not only a government AI decision but also the legality and validity of the system used to make it
TECH-IMMS-0001
Proposed
Ban opaque immigration AI
AI systems used to make immigration, detention, or deportation decisions must not operate as opaque black boxes. These decisions are too serious to be made by systems that cannot be reviewed or challenged.
Ban use of opaque or unreviewable AI systems in immigration enforcement, detention, or deportation decisions
TECH-IMMS-0002
Proposed
Ban risk scoring for detention
AI-generated risk scores used to justify holding someone in immigration detention, denying their release, or separating families are banned.
Ban AI risk scoring systems used to justify immigration detention, denial of release, or family separation
TECH-INTL-0004
Proposed
Common carrier treatment
Internet service providers and the core infrastructure of the internet must be treated as neutral carriers — like telephone or postal services. They must carry all traffic equally.
Internet service providers and core digital network infrastructure must be treated as common carriers or equivalent neutral service providers
TECH-INTL-0005
Proposed
Non-discrimination requirement
Network neutrality rules mean ISPs cannot favor or block traffic based on what the content is, who created it, where it is going, or for competitive or political reasons.
Common carrier obligations must prohibit discrimination based on content, source, destination, or political or competitive considerations
TECH-INTL-0006
Proposed
No blocking or throttling
Internet providers cannot slow down, block, or prioritize traffic to gain a competitive advantage, earn more money from certain content providers, or advance political goals.
Network operators may not prioritize, degrade, or block traffic for competitive advantage, political influence, or economic coercion
TECH-INTL-0007
Proposed
Narrow technical exceptions
Network neutrality rules can have narrow exceptions for technical operations like managing network congestion, maintaining security, and ensuring reliability.
Net neutrality frameworks must allow narrowly scoped exceptions for technical optimization, security, reliability, and research and development
TECH-INTL-0008
Proposed
Transparent exceptions
Any exceptions to network neutrality must be publicly disclosed, independently auditable, and must not give any company or political actor an unfair advantage.
All exceptions to neutrality must be transparent, auditable, and prohibited from conferring unfair market or political advantage
TECH-INTL-0009
Proposed
Protection from rollback
Network neutrality protections must be written into law, not just regulatory rules. Regulations alone can be rolled back by a new administration; legal protections are more durable.
Core neutrality and access protections must be insulated from administrative rollback through regulatory or executive action alone
TECH-JUDS-0001
Proposed
No AI sentencing
AI cannot be used to determine how long someone goes to prison, whether they get bail, or what punishment they receive in a criminal case.
AI systems may not be used to determine sentencing, bail, or punishment in criminal justice proceedings
TECH-JUDS-0002
Proposed
No risk scoring
AI systems cannot assign someone a score predicting how likely they are to commit another crime, how dangerous they are, or how likely they are to show up for court. These scores cannot be used in judicial decisions.
AI systems may not assign risk scores for recidivism, dangerousness, or likelihood of compliance in judicial decision-making
TECH-JUDS-0003
Proposed
No jury profiling
AI systems cannot be used to identify or profile potential jurors based on their behavior, psychology, or demographic background.
AI systems may not be used to profile, influence, or select jurors based on behavioral, psychological, or demographic inference
TECH-JUDS-0004
Proposed
Strict admissibility standards
AI-generated or AI-analyzed evidence must meet strict standards for reliability and transparency, and defendants must have a full opportunity to challenge it before it is used in court.
AI-generated or AI-analyzed evidence must meet strict standards for reliability, transparency, and cross-examination before being admissible in court
TECH-JUDS-0005
Proposed
Right to examine AI evidence
Defendants and parties in legal proceedings have the right to examine and challenge any AI system used to generate or analyze evidence in their case — including how it works and what data it used.
Defendants and parties must have the right to examine, challenge, and obtain disclosure of AI systems used in evidentiary processes
TECH-JUDS-0006
Proposed
AI for wrongful conviction review
AI may be used to help find wrongful convictions, identify errors, and speed up the appeals process for people who may have been unjustly imprisoned.
AI may be used to assist in identifying wrongful convictions, inconsistencies, or procedural errors and to accelerate appeals and case reviews
TECH-JUDS-0007
Proposed
Identify systemic bias
AI can be used to identify patterns of systemic bias in sentencing, policing, and prosecution — as a tool for reform and corrective action, not for making decisions about individuals.
AI should be used to identify systemic bias in sentencing, policing, and prosecution patterns to support corrective action
TECH-JUDS-0008
Proposed
No AI final determinations
Judges and court staff cannot use AI to make final decisions about facts, the law, credibility of witnesses, or how cases should come out. These decisions must remain human.
Judges and court staff may not rely on AI systems to make final determinations of fact, law, credibility, or case outcome
TECH-JUDS-0009
Proposed
Clerical use only with disclosure
AI may be used for clerical tasks in courts — like organizing documents and summarizing routine records — but only with clear disclosure and no effect on legal outcomes.
AI may be used to assist with clerical summarization, research, and document organization only under clear disclosure and human verification requirements
TECH-JUDS-0010
Proposed
Disclose material AI use
Courts must disclose any meaningful use of AI in drafting opinions, legal analysis, or summarizing records. Undisclosed AI assistance in judicial work is prohibited.
Courts must disclose material use of AI in opinion drafting, legal analysis, record summarization, or case management where it could affect litigants' rights or case outcomes
TECH-JUDS-0011
Proposed
No unfair docket prioritization
AI systems cannot be used to prioritize which cases, motions, or hearings get scheduled or heard in ways that create unfairness or unequal access to justice.
AI systems may not be used to prioritize dockets, motions, or hearings in ways that create unfairness, bias, or unequal access to timely justice
TECH-JUDS-0012
Proposed
Preserve accountable reasoning
Any AI-assisted judicial workflow must produce a human-authored, accountable, and reviewable chain of reasoning for every consequential decision.
Any AI-assisted judicial workflow must preserve a human-authored, accountable, and reviewable chain of reasoning for consequential rulings
TECH-JUDS-0013
Proposed
Evidence authentication
AI-generated or AI-enhanced evidence must come with documented proof of where it came from, how it was created, and how it was handled.
AI-generated or AI-enhanced evidence must be authenticated with documented provenance, methodology, and chain of custody before admissibility
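The provenance and chain-of-custody requirement maps onto well-understood techniques: fingerprint the evidence with a cryptographic hash at intake, then keep a hash-linked custody log so any later alteration is detectable. The sketch below is a minimal illustration of that pattern, not a forensic standard; the actors and actions are invented.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the evidence file, fixed at intake."""
    return hashlib.sha256(data).hexdigest()

def log_custody(chain, actor, action, digest):
    """Append one tamper-evident custody record: each entry includes
    the hash of the previous entry, so rewriting history breaks the
    chain of links."""
    prev = chain[-1]["entry_hash"] if chain else ""
    entry = {
        "actor": actor,
        "action": action,
        "evidence_sha256": digest,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_entry_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

# Hypothetical lifecycle of one piece of evidence.
evidence = b"original recording bytes"
digest = fingerprint(evidence)
chain = []
log_custody(chain, "Officer A", "collected", digest)
log_custody(chain, "Lab B", "analysis copy produced", digest)

# Verification: recompute the digest and confirm every chain link.
intact = fingerprint(evidence) == digest and all(
    chain[i]["prev_entry_hash"] == chain[i - 1]["entry_hash"]
    for i in range(1, len(chain))
)
```

For AI-enhanced material, the same log would also record the model and parameters used at each transformation step, which is what makes the "methodology" part of the requirement auditable.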
TECH-JUDS-0014
Proposed
Technical disclosure for challenge
Parties in a legal case must receive enough technical information about AI-generated evidence to effectively challenge its reliability and limitations.
Parties must receive sufficient technical disclosure to challenge the reliability, validity, and limitations of AI-generated or AI-analyzed evidence
TECH-JUDS-0015
Proposed
Scientific validation required
Courts cannot allow AI outputs to be treated as expert-like evidence unless the AI's methods are scientifically valid, independently tested, and subject to challenge.
Courts may not admit AI outputs as expert-like evidence unless their methods are scientifically valid, independently testable, and subject to adversarial challenge
TECH-JUDS-0016
Proposed
Synthetic media high-risk
Deepfakes and synthetic media presented as evidence must be treated as high-risk and must meet enhanced authentication requirements before they can be used in court.
Synthetic media evidence must be presumptively treated as high-risk and require enhanced authentication standards before admission
TECH-JUDS-0017
Proposed
Preserve analysis logs
If AI is used to analyze evidence in a case, all of the data fed into the AI, the assumptions it made, and every step it took to reach its output must be preserved and available for review.
If AI is used to analyze evidence, all material logs, model assumptions, and transformation steps relevant to the output must be preservable and reviewable
TECH-JUDS-0018
Proposed
Defense access to AI disclosures
If prosecutors or the government use AI in their investigation, charging decisions, or trial preparation, defendants must have access to the same capabilities and data to ensure a fair proceeding.
If prosecutors or the state use AI systems in investigation, charging, discovery, or litigation support, defendants must have equal access to the relevant outputs, disclosures, and technical challenge mechanisms
TECH-JUDS-0019
Proposed
Defense funding for AI parity
Public defenders must be funded adequately to keep up with AI tools used by prosecutors. An AI advantage for one side but not the other is fundamentally unfair.
Public defenders and defense counsel must be provided funding and access sufficient to avoid AI-driven asymmetry between prosecution and defense
TECH-JUDS-0020
Proposed
No AI-driven plea pressure
AI cannot be used to pressure people into accepting plea deals by showing them opaque conviction probability scores or predicted punishments. Plea decisions must be made without AI-driven intimidation.
AI systems may not be used to pressure plea deals through opaque case scoring, predicted conviction metrics, or punishment leverage models
TECH-JUDS-0021
Proposed
Guard against prosecutorial AI advantage
Courts must prevent situations where prosecutors have access to powerful AI tools that defense lawyers cannot afford or access. Technological disparity in the courtroom undermines equal justice.
Courts must guard against prosecutorial advantage created by proprietary or undisclosed AI tools unavailable to defense
TECH-JUDS-0022
Proposed
No juror profiling
AI systems cannot be used to profile potential jurors based on their behavior, psychological traits, or demographic characteristics for the purpose of strategic selection.
AI systems may not be used to profile, rank, influence, or strategically shape juror selection based on behavioral, psychological, or demographic inference
TECH-JUDS-0023
Proposed
AI reconstructions must be disclosed
AI-generated reconstructions, simulations, or visualizations shown to a jury must be clearly identified as computer-generated demonstrations — not factual recordings of what actually happened.
AI-generated reconstructions, simulations, or visualizations shown to juries must be clearly disclosed as generated demonstrative material and subject to strict admissibility standards
TECH-JUDS-0024
Proposed
No AI summaries to juries
Courts cannot provide juries with AI-generated summaries, interpretations, or credibility assessments of testimony or evidence. Juries must evaluate evidence themselves.
Courts may not provide juries with AI-generated summaries, interpretations, or credibility assessments of testimony or evidence
TECH-JUDS-0025
Proposed
No distortion of evidentiary record
AI in the courtroom must not distort the actual evidence, create false impressions of certainty, or present outputs as more precise or neutral than they actually are.
Jury-facing use of AI must not distort the evidentiary record or create false impressions of certainty, precision, or neutrality
TECH-JUDS-0026
Proposed
Default to prohibition
The default rule in courts and legal proceedings is that AI use is prohibited except where it is specifically and narrowly permitted with full transparency and audit requirements.
AI use in courts and legal proceedings must default to prohibition except where narrowly permitted for transparent, auditable, non-generative analytical purposes under strict human oversight
TECH-JUDS-0027
Proposed
Ban AI-generated evidence
AI-generated evidence — including images, video, audio, text, or reconstructions — cannot be used in legal proceedings.
AI-generated or AI-fabricated evidence, including images, video, audio, text, or reconstructions, is not admissible in legal proceedings
TECH-JUDS-0028
Proposed
Ban AI-enhanced evidence
AI-enhanced evidence that changes the interpretation or meaning of the underlying evidence is not admissible in court.
AI-enhanced evidence that alters the interpretation, content, or meaning of underlying evidence is not admissible
TECH-JUDS-0029
Proposed
Analytical use only
AI may only be used for analysis on evidence that already exists — not to generate new content — and only where it does not change or reinterpret that evidence.
AI may be used only for analytical purposes on existing evidence where it does not generate new content and does not alter underlying data
TECH-JUDS-0030
Proposed
No generative AI for evidence
Generative AI systems, including large language models, cannot be used to analyze evidence in legal proceedings.
Generative AI systems including large language models may not be used for evidentiary analysis in legal proceedings
TECH-JUDS-0031
Proposed
Verifiable methods only
Analytical AI tools that are permitted in court must use verifiable, reproducible methods — such as established statistical techniques — not opaque generative processes.
Permitted analytical systems must use verifiable, reproducible methods such as statistical or traditional machine learning models rather than generative systems
TECH-JUDS-0032
Proposed
Disclose analytical AI use
Any use of AI in evidence analysis must be explicitly disclosed — including the methods used, what data went in, what the limitations are, and what the error rate is.
Any use of AI in evidence analysis must be explicitly disclosed, including methods, data inputs, limitations, and error rates
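As a sketch of what such a disclosure might capture in structured form (the class and field names here are hypothetical illustrations, not a mandated schema):

```python
from dataclasses import dataclass

@dataclass
class AIAnalysisDisclosure:
    """Illustrative disclosure record mirroring the elements the rule names.
    Field names are invented for this sketch, not prescribed by the rule."""
    methods: str          # e.g. "frequency analysis of call records"
    data_inputs: list     # description of every input source fed to the system
    limitations: list     # known failure modes and scope limits
    error_rate: float     # error rate established by independent validation

d = AIAnalysisDisclosure(
    methods="frequency analysis of call records",
    data_inputs=["call-detail records, Jan-Jun (redacted sample)"],
    limitations=["assumes complete carrier logs"],
    error_rate=0.02,
)
```

A structured record like this gives opposing counsel concrete fields to probe on cross-examination rather than a narrative summary.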
TECH-JUDS-0033
Proposed
No black-box systems
AI systems that cannot be meaningfully explained, tested, or challenged — so-called black boxes — are not admissible for use in evidentiary contexts in court.
Black-box AI systems that cannot be meaningfully explained, tested, or challenged are not admissible in evidentiary contexts
TECH-JUDS-0034
Proposed
AI not expert witnesses
AI systems cannot be brought into court as expert witnesses or treated as authoritative sources on factual or legal matters.
AI systems may not be presented as expert witnesses or as authoritative sources in legal proceedings
TECH-JUDS-0035
Proposed
Human experts accountable
All expert testimony must come from qualified human experts who are personally accountable for their opinions and can be cross-examined.
All expert testimony must be provided by qualified human experts who are accountable for their conclusions
TECH-JUDS-0036
Proposed
Disclose AI-assisted expert analysis
A human expert who uses AI tools in their analysis must disclose that fact and remains fully responsible for every conclusion they reach, regardless of what the AI suggested.
Human experts using AI-assisted analysis must disclose such use and remain fully responsible for all conclusions
TECH-JUDS-0037
Proposed
No AI-drafted opinions
Judges and justices cannot use generative AI to write, draft, or materially shape judicial opinions or rulings. Courts must speak in a human voice, with a human responsible.
Judges and justices may not use generative AI systems to draft, write, or materially shape judicial opinions or rulings
TECH-JUDS-0038
Proposed
No AI legal reasoning
AI systems cannot substitute for a judge's own reasoning, legal analysis, or interpretation of the law. Judicial thinking must remain a human function.
AI systems may not substitute for judicial reasoning, legal analysis, or interpretation of law
TECH-JUDS-0039
Proposed
Limited clerical use
AI may only be used for limited clerical tasks in judicial settings — like document organization or formatting — that have no effect on legal content or outcomes.
AI may be used for limited clerical tasks such as document organization or formatting only when it does not affect legal reasoning or outcomes
TECH-JUDS-0040
Proposed
No AI reconstructions to juries
AI-generated reconstructions, simulations, or demonstrative materials cannot be shown to juries. Juries must see and hear actual evidence, not AI-produced dramatizations.
AI-generated reconstructions, simulations, or demonstrative materials may not be shown to juries
TECH-JUDS-0041
Proposed
No AI summaries of evidence to juries
AI cannot summarize, interpret, or present evidence or witness testimony to a jury. The jury evaluates evidence directly — AI is not an intermediary.
AI systems may not be used to summarize, interpret, or present evidence or testimony to juries
TECH-JUDS-0042
Proposed
No AI influence on jurors
AI cannot directly or indirectly influence how jurors perceive, behave, or make decisions. Jury deliberations must be free from AI manipulation.
AI systems may not be used to influence juror perception, behavior, or decision-making, directly or indirectly
TECH-JUDS-0043
Proposed
Recognize AI risks
Courts must recognize that AI systems can cause confirmation bias, produce false information, and create false certainty — and must treat all AI outputs with appropriate skepticism.
Courts must recognize that AI systems can amplify confirmation bias, hallucination, and false certainty and must treat AI outputs as inherently high-risk
TECH-JUDS-0044
Proposed
Elevated scrutiny standards
Any AI-assisted analysis permitted in court must be held to a higher standard of reliability and scrutiny than ordinary evidence — including rigorous adversarial testing.
Any permitted AI-assisted analysis must meet elevated standards of reliability, scrutiny, and adversarial testing beyond those applied to traditional evidence
TECH-JUDS-0045
Proposed
Right to challenge AI analysis
All parties must have full rights to challenge any AI-assisted analysis used in their case, including the ability to question the methods, assumptions, and outputs of the AI.
All parties must have full rights to challenge any AI-assisted analysis, including its methodology, assumptions, and outputs
TECH-JUDS-0046
Proposed
Reproducibility requirement
AI-assisted analysis used in court must be reproducible — the opposing party must be able to run the same inputs through the same system and get the same results.
AI-assisted analysis must be reproducible by opposing parties using the same inputs and methods
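The reproducibility requirement is, in effect, a demand for deterministic pipelines: the same inputs run through the same method must yield identical results. A minimal sketch of how an opposing party could verify that, using content hashes (the function names and the toy analysis are illustrative assumptions, not anything the rule prescribes):

```python
import hashlib
import json

def digest(obj) -> str:
    """Stable SHA-256 digest of JSON-serializable data (sorted keys for determinism)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def run_analysis(inputs: dict) -> dict:
    # Placeholder for a deterministic, non-generative analytical method
    # (e.g., a fixed statistical computation over the evidence data).
    values = inputs["measurements"]
    return {"mean": sum(values) / len(values), "n": len(values)}

def reproduce(inputs: dict, claimed_output_digest: str) -> bool:
    """Opposing party re-runs the same inputs and checks the output digest matches."""
    return digest(run_analysis(inputs)) == claimed_output_digest

evidence = {"measurements": [4.1, 3.9, 4.0, 4.2]}
original = run_analysis(evidence)
assert reproduce(evidence, digest(original))  # same inputs, same method, same result
```

Any non-determinism (random seeds, sampling temperature, model updates) would break this check, which is exactly why generative systems sit poorly with the reproducibility rule.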
TECH-JUDS-0047
Proposed
Preserve analysis logs
All AI-assisted evidentiary analysis must include a complete preserved log of what data was put in, what came out, and every step the system took to get there.
All AI-assisted evidentiary analysis must include preserved logs of inputs, outputs, and transformation steps
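One way to make such preserved logs tamper-evident is to chain each entry to the hash of the previous one, so any later edit invalidates the chain. A hedged sketch of the idea (class and field names are invented for illustration):

```python
import hashlib
import json

def _h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

class AnalysisLog:
    """Append-only log of an AI-assisted analysis: each entry records the step
    taken plus its inputs and outputs, chained to the previous entry's hash so
    that any after-the-fact alteration is detectable."""
    def __init__(self):
        self.entries = []

    def record(self, step: str, inputs, outputs):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps({"step": step, "inputs": inputs, "outputs": outputs,
                           "prev": prev}, sort_keys=True)
        self.entries.append({"step": step, "inputs": inputs, "outputs": outputs,
                             "prev": prev, "hash": _h(body)})

    def verify(self) -> bool:
        """Recompute every hash in order; False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = json.dumps({"step": e["step"], "inputs": e["inputs"],
                               "outputs": e["outputs"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != _h(body):
                return False
            prev = e["hash"]
        return True

log = AnalysisLog()
log.record("normalize", {"raw": [3, 5]}, {"clean": [3, 5]})
log.record("aggregate", {"clean": [3, 5]}, {"sum": 8})
```

The same chained structure gives auditing parties (the next rule) a single artifact to check rather than scattered files.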
TECH-JUDS-0048
Proposed
Audit access requirement
Courts and opposing parties must have access to all materials needed to fully audit AI-assisted evidentiary analysis used against them.
Courts and opposing parties must have access to all materials necessary to audit AI-assisted analysis
TECH-JUDS-0049
Proposed
No AI family court determinations
AI cannot be used to make decisions about child custody, visitation rights, parental fitness, or where a child should be placed without direct human review and a genuine opportunity to contest the outcome.
AI systems may not be used to determine custody, visitation, parental fitness, or family placement outcomes without direct, accountable human judicial decision-making
TECH-JUDS-0050
Proposed
No parental fitness inference
AI systems cannot draw inferences about a parent's fitness, the risk of abuse, or a witness's credibility from behavioral proxies, psychological profiling, or demographic data.
AI systems may not infer parental fitness, abuse risk, credibility, or child welfare outcomes from behavioral proxies, psychological profiling, or opaque risk models
TECH-JUDS-0051
Proposed
No AI in family court recommendations
Family courts cannot rely on AI-generated summaries, recommendations, or credibility assessments in cases involving custody or the welfare of children.
Family courts may not rely on AI-generated summaries, recommendations, or credibility assessments in matters involving custody, abuse allegations, or child welfare
TECH-JUDS-0052
Proposed
Disclose family court AI use
Any AI-assisted tool used in family court administration must be fully disclosed and must not have a material effect on outcomes unless a human reviews and is accountable for that effect.
Any AI-assisted tool used in family court administration must be fully disclosed and may not materially affect outcomes without adversarial challenge rights
TECH-JUDS-0053
Proposed
No automated evictions
AI cannot automate or drive eviction outcomes without direct human judicial review and a real opportunity to challenge the decision.
AI systems may not be used to automate or materially drive eviction outcomes without direct human judicial review and full due process protections
TECH-JUDS-0054
Proposed
No tenant risk scoring
Courts handling eviction cases cannot rely on AI-generated tenant risk scores, rental behavior predictions, or opaque housing analytics to decide cases.
Courts may not rely on AI-generated tenant risk scores, rental-behavior predictions, or opaque housing analytics in eviction or housing-access proceedings
TECH-JUDS-0055
Proposed
Disclose housing AI use
Landlords who use AI-assisted evidence or analytics in court must disclose this to the other party and provide enough information for the tenant to meaningfully challenge it.
Landlords and housing litigants using AI-assisted evidence or analytics in court must disclose such use and provide sufficient information for challenge and review
TECH-JUDS-0056
Proposed
No eviction acceleration through AI
Housing courts cannot use AI to speed up case processing in ways that reduce meaningful notice, access to hearings, or the ability to be heard by a judge.
Housing courts must not use AI systems to accelerate case throughput in ways that reduce meaningful notice, hearing access, or tenant defense rights
TECH-JUDS-0057
Proposed
No AI in administrative adjudication
AI cannot independently decide the outcome of administrative hearings involving government benefits, licensing, immigration, or other official determinations. A human must make the decision.
AI systems may not independently determine outcomes in administrative hearings involving benefits, licensing, immigration, disability, housing, or employment rights
TECH-JUDS-0058
Proposed
No opaque scoring in administrative hearings
Administrative proceedings — like hearings for benefits or licenses — cannot rely on opaque AI scoring or recommendation systems to assess credibility, eligibility, or fitness.
Administrative adjudication may not rely on opaque AI scoring or recommendation systems to determine credibility, eligibility, compliance, or sanction outcomes
TECH-JUDS-0059
Proposed
Disclose AI in administrative proceedings
When AI materially influences an administrative recommendation, record summary, or decision, the people affected have the right to know about it.
Parties in administrative proceedings have the right to know when AI materially influenced a recommendation, record summary, or proposed outcome
TECH-JUDS-0060
Proposed
Administrative appeal rights
Government agencies must provide a genuine human review and meaningful appeal process for any adverse decision that was influenced by AI.
Administrative agencies must provide meaningful human review and appeal for any AI-influenced adverse decision
TECH-JUDS-0061
Proposed
No AI probation/parole decisions
AI cannot determine conditions of probation or parole, decide whether supervision should be intensified, or recommend revocation through automated behavioral predictions or scoring.
AI systems may not be used to determine probation, parole, revocation, supervision intensity, or conditions of release through opaque risk scoring
TECH-JUDS-0062
Proposed
No proxy-based supervision scoring
AI systems used in probation and parole cannot assign risk scores based on proxies for race, class, disability, geography, or protected activities. These proxies are well-documented pathways to discriminatory outcomes.
AI systems may not assign supervision risk scores based on proxies for race, class, disability, geography, or protected activity
TECH-JUDS-0063
Proposed
Individualized human review required
Probation and parole decisions must be based on individualized human review — not automated through behavioral analytics or scoring systems.
Probation and parole decisions must be based on individualized human review and may not be automated through behavioral prediction systems
TECH-JUDS-0064
Proposed
AI to identify bias in probation
AI can be used to identify patterns of bias, inconsistency, or unlawful disparity in probation and parole systems — as a tool for reform and accountability, not for making decisions about individuals.
AI may be used to identify patterns of bias, inconsistency, or unlawful disparity in probation and parole systems under transparent oversight
TECH-JUDS-0065
Proposed
No AI fines and fees escalation
AI cannot be used to escalate fines, fees, collection actions, or penalties based on predictions about whether someone will pay. Punishing people for predicted behavior rather than actual conduct is prohibited.
AI systems may not be used to escalate fines, fees, collections, or penalties based on predictive payment scoring or behavioral profiling
TECH-JUDS-0066
Proposed
No payment coercion through AI
Courts and governments cannot use AI to pressure people into paying through automated threat scoring or penalty escalation. Legal consequences must follow actual due process.
Courts and governments may not use AI to pressure payment compliance through opaque threat scoring or automated penalty escalation
TECH-JUDS-0067
Proposed
No AI debt-to-punishment escalation
AI cannot convert an administrative debt or missed payment into a harsher legal consequence — like a warrant or license suspension — without direct human judicial review.
AI systems may not be used to convert administrative debt or missed payments into harsher legal consequences without direct human review and due process
TECH-JUDS-0068
Proposed
AI to identify predatory fine patterns
AI can be used to identify patterns of unjust or predatory fine-and-fee practices for review and correction — as a reform tool, under public oversight.
AI may be used to identify unlawful disparities or predatory fine-and-fee patterns for corrective review under public oversight
TECH-JUDS-0069
Proposed
AI for legal aid access
AI tools can help people navigate legal aid, understand documents, and access legal information — as long as they do not replace licensed attorneys or give legal advice beyond their capability.
AI may be used to assist legal-aid intake, document navigation, and access to information, provided it does not replace licensed legal counsel where counsel is required or appropriate
TECH-JUDS-0070
Proposed
Disclose legal-aid AI limits
AI tools used in legal aid contexts must be upfront about what they can and cannot do, and cannot present themselves as human lawyers.
AI tools used in legal-aid contexts must clearly disclose their limits and may not present themselves as human attorneys or as a substitute for legal representation
TECH-JUDS-0071
Proposed
Public legal-aid tools must be fair
AI tools for legal aid must be designed to be accessible, easy to understand, and fair — and must not steer people away from asserting their legal rights.
Public legal-aid AI tools must be designed for accessibility, clarity, and fairness and may not steer users away from asserting rights or seeking counsel
TECH-JUDS-0072
Proposed
Fund public legal access tools
Government and courts should invest in public-interest AI tools that help people navigate the legal system — provided these tools improve access without automating critical decisions.
Governments and courts should fund public-interest legal access tools that improve navigation and review without automating final legal judgments
TECH-JUDS-0073
Proposed
AI records systems must be auditable
AI systems used to search, classify, or summarize court records must be auditable and cannot distort which cases are visible or how they are understood.
AI systems used to classify, search, summarize, or prioritize court records must be auditable and may not distort case visibility or public access
TECH-JUDS-0074
Proposed
No unequal filing access
Court administrative AI systems cannot create unequal access to filing, scheduling, or records based on hidden preferences, priorities, or biases.
Court administrative AI systems may not create unequal access to filing, scheduling, or record retrieval based on hidden prioritization logic
TECH-JUDS-0075
Proposed
Preserve record accuracy
Court records systems that use AI must ensure accuracy, transparency, and a quick way for people to correct errors in their records.
Public court records systems using AI must preserve accuracy, transparency, and the ability to correct errors promptly
TECH-JUDS-0076
Proposed
Human override for court admin AI
Courts must maintain human oversight and override capability for all AI-assisted administrative systems that affect people who are parties to cases.
Courts must maintain human override and review mechanisms for all AI-assisted administrative systems affecting litigants or public access
TECH-JUDS-0077
Proposed
AI limited to assistive functions
Where AI is permitted in judicial settings, it must be limited to transparent, non-generative, assistive functions. It cannot author or materially shape legal reasoning or outcomes.
Where AI is permitted in judicial contexts, it must be limited to transparent, non-generative, assistive functions and may not determine rights, liabilities, punishment, credibility, or case outcome
TECH-JUDS-0078
Proposed
No credibility assessment
AI systems cannot be used to assess whether a witness, defendant, or party is telling the truth, deceiving the court, or has particular intentions. Credibility determination is a human function.
AI systems may not be used to assess the credibility, truthfulness, deception, or intent of witnesses, defendants, or parties in legal proceedings
TECH-JUDS-0079
Proposed
No behavioral inference in court
Behavioral analytics, biometric analysis, and speech-pattern AI tools cannot be used to infer a person's emotional state, honesty, or reliability in legal proceedings.
Behavioral, biometric, or speech-analysis AI tools may not be used to infer emotional state, truthfulness, or reliability in courtroom contexts
TECH-JUDS-0080
Proposed
No AI alteration of police footage
AI cannot be used to selectively reanalyze, enhance, or reinterpret police body camera or surveillance footage in ways that distort its meaning or context.
AI systems may not be used to reinterpret, enhance, or selectively analyze police body-camera or surveillance footage in ways that alter evidentiary meaning
TECH-JUDS-0081
Proposed
Preserve full context in evidence
When AI is used to analyze law enforcement evidence, it must preserve the full context and may not highlight some parts while suppressing others in ways that mislead.
AI-assisted analysis of law-enforcement evidence must preserve full context and may not selectively highlight or suppress information to support a narrative
TECH-JUDS-0082
Proposed
No sole-basis AI translation
AI translation tools cannot be the sole basis for understanding what was said in a legal proceeding. Automated translation alone is not sufficient for legal accuracy.
AI translation or interpretation systems may not be used as the sole basis for legal understanding in court proceedings where rights or outcomes are affected
TECH-JUDS-0083
Proposed
Human interpreters required
Human-certified interpreters are required for critical legal proceedings. AI translation may only be used as a supplementary tool, not as the primary interpreter.
Human-certified interpreters are required for critical proceedings, and AI translation may only be used as a supplementary tool with disclosure
TECH-JUDS-0084
Proposed
Forensic AI validation required
AI systems used in forensic laboratories — for things like DNA analysis or ballistics — must meet strict scientific validation standards. They cannot operate as unexplainable black boxes.
AI systems used in forensic laboratories must meet strict scientific validation standards and may not operate as opaque or unreviewable systems
TECH-JUDS-0085
Proposed
Test forensic AI systems
Forensic AI systems must be independently tested for accuracy, bias, and reproducibility before they can be used in any case that might go to court.
Forensic AI systems must be independently tested for accuracy, bias, and reproducibility before use in any evidentiary pipeline
TECH-JUDS-0086
Proposed
No AI reconstruction of sealed records
AI cannot be used to reconstruct, infer, or surface records that have been legally sealed, expunged, or otherwise protected from disclosure — even indirectly through data aggregation.
AI systems may not reconstruct, infer, or surface sealed, expunged, or legally protected records through data aggregation or inference
TECH-JUDS-0087
Proposed
Enforce record protections
Using AI to get around legal protections on sealed or expunged records is prohibited and subject to enforcement.
Use of AI to bypass legal protections on records, privacy, or expungement is prohibited and subject to enforcement
TECH-JUDS-0088
Proposed
No AI influence campaigns on juries
AI-generated content cannot be used to try to influence jurors, witnesses, or court proceedings outside the courtroom — for example, through targeted social media campaigns during a trial.
AI-generated content may not be used to influence jurors, witnesses, or proceedings outside the courtroom through targeted media or messaging
TECH-JUDS-0089
Proposed
Mitigate public AI influence on trials
Courts must be aware of and work to counter the risk of AI-driven public influence campaigns that could compromise the impartiality of juries or the fairness of proceedings.
Courts must recognize and mitigate risks of AI-driven public influence campaigns that could affect jury impartiality or trial fairness
TECH-JUDS-0090
Proposed
AI version control required
AI systems used in court proceedings must have version control — meaning they must not change their behavior during an active case.
AI systems used in any permitted judicial context must have version control and may not change behavior during active cases without disclosure and review
TECH-JUDS-0091
Proposed
Revalidate AI changes
If an AI system used in a legal context changes significantly, it must be revalidated before continued use, and those changes may not retroactively affect prior analyses or decisions.
Material changes to AI systems used in legal contexts must trigger revalidation and may not retroactively affect prior analyses without disclosure
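A version-control discipline like the one these two rules require can be approximated by fingerprinting the exact system artifact when a case opens and refusing to proceed if the fingerprint later changes. An illustrative sketch, with hypothetical names (real deployments would pin far more: dependencies, configuration, prompts, hardware):

```python
import hashlib

def model_fingerprint(artifact_bytes: bytes, version: str) -> str:
    """Fingerprint of the exact system artifact plus its declared version."""
    return hashlib.sha256(version.encode() + artifact_bytes).hexdigest()

class CasePin:
    """Pins the AI system version for an active case. A failed check means the
    artifact changed and, under the rule, must be disclosed and revalidated
    before continued use in that case."""
    def __init__(self, case_id: str, artifact_bytes: bytes, version: str):
        self.case_id = case_id
        self.pinned = model_fingerprint(artifact_bytes, version)

    def check(self, artifact_bytes: bytes, version: str) -> bool:
        return model_fingerprint(artifact_bytes, version) == self.pinned

pin = CasePin("case-0101", b"model-weights-v1", "1.0")
```

Because the fingerprint covers both the bytes and the version string, a silent weight update and a relabeled release both fail the check.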
TECH-JUDS-0092
Proposed
No shift of burden or standards
Using AI tools in a legal proceeding cannot shift the burden of proof, lower evidentiary standards, or reduce any party's procedural rights.
Use of AI systems may not shift the burden of proof, lower evidentiary standards, or reduce the procedural rights of any party in legal proceedings
TECH-JUDS-0093
Proposed
Transparent AI procurement
Courts must use transparent, publicly disclosed procurement processes when acquiring AI systems — including disclosing what was purchased, from whom, and at what cost.
Courts and judicial systems must use transparent procurement processes for any AI system including public disclosure and review before deployment
TECH-JUDS-0094
Proposed
Independent validation before deployment
AI systems cannot be put into use in judicial settings without independent validation that they are fair, reliable, and legally compliant.
AI systems may not be deployed in judicial contexts without independent validation for fairness, reliability, and legal compliance
TECH-JUDS-0095
Proposed
Authority to suspend AI systems
Courts must be able to suspend or prohibit the use of any AI system that proves to be biased, unreliable, or legally inappropriate.
Courts must maintain the authority to suspend or prohibit use of any AI system that demonstrates bias, unreliability, or legal risk
TECH-JUDS-0096
Proposed
Party right to request suspension
Parties to a legal case must have the right to request that an AI system being used in their proceeding be suspended if they have credible grounds for questioning its fairness or reliability.
Parties must have the right to request suspension of AI system use in a case where fairness or reliability is in question
TECH-LABS-0001
Proposed
Protect worker rights and dignity
AI tools in employment must not undermine workers' rights, dignity, privacy, or fair access to economic opportunity.
AI systems in employment must not undermine worker rights, dignity, privacy, or fair access to economic opportunity
TECH-LABS-0002
Proposed
No automated employment decisions
AI systems cannot make fully automated hiring, firing, promotion, or pay decisions without meaningful human review and accountability.
AI systems may not make fully automated hiring, firing, promotion, or compensation decisions without meaningful human review and accountability
TECH-LABS-0003
Proposed
No opaque candidate filtering
AI systems cannot filter or rank job candidates using opaque or unexplainable criteria that materially affect who gets a job or an interview.
AI systems may not be used to filter or rank job candidates using opaque or unexplainable criteria that materially affect employment opportunities
TECH-LABS-0004
Proposed
Evaluate for bias
AI systems used in hiring and employment must be tested for bias and cannot result in discrimination based on protected characteristics — or stand-ins for those characteristics.
AI systems used in employment must be evaluated for bias, with identified bias mitigated, and may not result in discrimination based on protected characteristics or their proxies
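The rule does not prescribe a test, but one widely used screening heuristic for employment-selection bias is the four-fifths (adverse impact) rule from the US Uniform Guidelines on Employee Selection Procedures: flag the system when the lowest group's selection rate falls below 80% of the highest group's. A minimal sketch (group labels and counts are made up):

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group label -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest group selection rate to the highest; values below
    0.8 flag potential disparate impact under the four-fifths rule."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 50% vs 30% selection rates -> ratio 0.6
flagged = adverse_impact_ratio({"group_a": (50, 100), "group_b": (30, 100)}) < 0.8
```

A ratio above 0.8 is not proof of fairness, and one below it is not proof of discrimination; it is a trigger for the deeper audits the surrounding rules require.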
TECH-LABS-0005
Proposed
Right to explanation
Job applicants and employees have the right to receive meaningful explanations of AI-influenced employment decisions that affect them.
Applicants and employees have the right to receive meaningful explanations of AI-influenced employment decisions
TECH-LABS-0006
Proposed
Ban intrusive monitoring
Continuous, invasive AI monitoring of workers — including biometric tracking, keystroke logging, and real-time behavioral surveillance — is banned except where strictly necessary and proportionate.
Ban use of AI systems for continuous, intrusive monitoring of workers, including biometric tracking, keystroke logging, and real-time behavioral surveillance, except where strictly necessary and proportionate
TECH-LABS-0007
Proposed
No emotion inference
AI systems cannot be used to infer or monitor workers' emotions, mood, engagement, or psychological state for purposes of employment decisions.
Ban use of AI systems to infer or monitor worker emotions, mood, engagement, or psychological state for employment decisions
TECH-LABS-0008
Proposed
No outside-hours monitoring
Employers cannot use AI to monitor or draw inferences about worker behavior outside of working hours.
Employers may not use AI systems to monitor or infer worker behavior outside of working hours
TECH-LABS-0009
Proposed
No sole-basis discipline or termination
AI systems cannot be the sole basis for disciplinary action, termination, or changes to pay. A human must be responsible for these decisions.
AI systems may not be used as the sole basis for disciplinary actions, termination, or compensation changes
TECH-LABS-0010
Proposed
Transparent productivity scoring
AI-generated productivity or performance scores must be transparent and auditable, and they cannot be used to affect employment without human review.
AI-generated productivity or performance scores must be transparent and auditable and may not be used without human review
TECH-LABS-0011
Proposed
No coercive productivity targets
AI systems cannot be used to coerce workers into unsafe, unreasonable, or exploitative productivity targets.
AI systems may not be used to coerce workers into unsafe, unreasonable, or exploitative productivity targets
TECH-LABS-0012
Proposed
Right to refuse harmful AI
Workers must be able to refuse the use of AI systems that materially harm their dignity, privacy, or safety without fear of retaliation.
Workers must have the right to refuse, without retaliation, use of AI systems that materially harm their dignity, privacy, or safety
TECH-LABS-0013
Proposed
Access to human managers
Workers must have access to human managers or decision-makers for any matter affecting their employment status, discipline, or working conditions.
Workers must have access to human managers or decision-makers for matters affecting employment status, discipline, or working conditions
TECH-LABS-0014
Proposed
Data minimization
AI systems used at work may only collect worker data that is strictly necessary for legitimate business purposes.
AI systems may only collect worker data that is strictly necessary for legitimate business purposes
TECH-LABS-0015
Proposed
No data sales or repurposing
Worker data collected through AI systems cannot be sold, shared with third parties, or used for purposes unrelated to the employment relationship.
Worker data collected through AI systems may not be sold, shared, or used for unrelated purposes
TECH-LABS-0016
Proposed
Right to access and correct data
Workers have the right to access, review, and correct data collected about them by AI systems used by their employer.
Workers have the right to access, review, and correct data collected about them by AI systems
TECH-LABS-0017
Proposed
Disclosure requirement
Employers must disclose when AI systems are used in hiring, monitoring, evaluation, and employment decision-making processes.
Employers must disclose use of AI systems in hiring, monitoring, evaluation, and decision-making processes
TECH-LABS-0018
Proposed
Regular independent audits
AI systems used in employment must be subject to regular independent audits for bias, fairness, and legal compliance.
AI systems used in employment must be subject to regular independent audits for bias, fairness, and compliance
TECH-LABS-0019
Proposed
Documentation requirement
Employers must maintain documentation of AI systems' purpose, data sources, and decision logic sufficient to allow oversight and accountability.
Employers must maintain documentation of AI system design, purpose, data sources, and decision logic sufficient for oversight and accountability
TECH-LABS-0020
Proposed
Ban behavior prediction scoring
AI systems that assign risk scores predicting employee behavior — like likelihood of quitting, unionizing, or engaging in protected activity — are banned.
Ban AI systems that assign risk scores predicting employee behavior, such as likelihood of quitting, unionizing, or engaging in protected activity
TECH-LABS-0021
Proposed
Ban anti-union AI
Using AI systems to identify, monitor, or suppress union activity, collective bargaining, or labor organizing is banned.
Ban use of AI systems to identify monitor or suppress union activity collective bargaining or labor organizing
TECH-LABS-0022
Proposed
Validate personality assessments
Using AI to infer personality traits or psychological characteristics in hiring decisions is banned unless there is strong scientific evidence and proper safeguards in place.
Ban use of AI systems that infer personality traits or psychological characteristics for hiring decisions without strong scientific validation and safeguards
TECH-LABS-0023
Proposed
Worker role in AI deployment
Workers or their representatives must have a meaningful role in reviewing and approving the deployment of high-impact AI systems in their workplace.
Workers or their representatives must have a role in reviewing and approving deployment of high-impact AI systems in the workplace
TECH-LABS-0024
Proposed
Collective bargaining over AI
When AI systems affect working conditions, their use must be subject to collective bargaining where applicable.
Use of AI systems affecting working conditions must be subject to collective bargaining where applicable
TECH-MHCS-0001
Proposed
AI assists not replaces clinicians
AI mental health tools can assist therapists and counselors, but they cannot replace licensed clinicians in diagnosing conditions, evaluating crises, or making high-risk treatment decisions.
AI mental-health tools may assist but must not replace qualified clinicians in diagnosis crisis evaluation or high-risk treatment decisions
TECH-MHCS-0002
Proposed
Prohibit manipulative AI systems
AI systems designed to create emotional dependency — or to manipulate users through simulated companionship — must be restricted or prohibited. These systems can exploit human needs for connection.
AI systems designed to induce emotional dependency, simulate coercive parasocial attachment, or exploit psychological vulnerabilities for commercial or political ends are prohibited
TECH-MHCS-0003
Proposed
Evaluate for mental health harms
Platforms and AI products must be evaluated for whether they contribute to addiction, compulsive use, social isolation, or self-harm. Mental health harms from technology must be taken seriously.
Platforms and AI systems must be evaluated for mental-health harms including addiction compulsive use isolation and self-harm amplification
TECH-MHCS-0004
Proposed
Stronger protections for minors
Children and teenagers need stronger protections against AI systems that are engineered to maximize engagement at the expense of their mental health.
Children and adolescents require stronger protections against manipulative AI systems and engagement optimization
TECH-MHCS-0005
Proposed
Disclosure in sensitive contexts
When you are sharing something sensitive or emotional, you must be clearly informed if you are talking to an AI, not a human. Deceptive AI personas in emotionally sensitive contexts are prohibited.
Users must be clearly informed when they are interacting with an AI system in emotionally sensitive contexts
TECH-MILS-0001
Proposed
Meaningful human control required
When the military or intelligence agencies use AI, meaningful human control must be maintained, and the people giving orders must remain legally and ethically accountable for what the systems do.
Use of AI in military and intelligence contexts must preserve meaningful human control accountability and compliance with constitutional and international law
TECH-MILS-0002
Proposed
Ban autonomous lethal targeting
AI systems that can independently identify and kill people — with no human making the actual decision to use lethal force — are banned.
Ban AI systems that can independently select and engage targets with lethal force without meaningful human decision-making and accountability
TECH-MILS-0003
Proposed
Ban AI force initiation
AI systems that can start or escalate the use of military force on their own, without a human authorizing each action in real time, are banned.
Ban deployment of AI systems that can initiate or escalate use of force without real-time human authorization
TECH-MILS-0004
Proposed
Ban nuclear AI
AI cannot be placed in control of nuclear weapons, command-and-control systems, targeting decisions, or launch processes. Nuclear force must remain under strict human control.
Ban use of AI systems in nuclear command control targeting or launch decision processes
TECH-MILS-0005
Updated
Ban AI target generation
AI cannot generate, recommend, or rank human targets for lethal military action. All targeting decisions must be made by human military personnel.
Ban AI systems from generating, recommending, or prioritizing targets for lethal action involving real persons; all such decisions must originate from human judgment and independently verifiable evidence
TECH-MILS-0006
Proposed
No target filtering or ranking
AI cannot be used to narrow down, filter, or prioritize lists of human targets in a way that effectively drives the targeting decision — even if a human nominally approves the final choice.
AI systems may not be used to narrow, filter, or rank potential human targets for lethal action in a manner that materially influences human decision-making
TECH-MILS-0007
Proposed
Identified human decision-maker
When lethal force is used with AI systems involved, there must be a specific, named human decision-maker responsible for that decision. Anonymous AI accountability is not acceptable.
All use of lethal force involving AI systems must include a clearly identified human decision-maker responsible for final authorization
TECH-MILS-0008
Proposed
Logging and auditability
All AI-assisted military decisions must be logged and auditable — including what data the AI was given, what it recommended, and what the human decided.
All AI-assisted military decisions must be logged and auditable including data inputs model outputs and human decision points
TECH-MILS-0009
Proposed
Transparency for human oversight
AI systems used in military decision-making must be transparent enough for the human operator to genuinely understand and evaluate what the system is recommending before acting on it.
AI systems used in military decision-making must provide sufficient transparency to allow meaningful human understanding and oversight
TECH-MILS-0010
Proposed
Testing and reliability
Military AI systems must meet strict reliability and testing thresholds before they are deployed. Unproven AI systems cannot be sent into operational environments.
AI systems used in military contexts must meet strict reliability and testing thresholds before deployment
TECH-MILS-0011
Proposed
Ban mass intelligence fusion
AI-driven intelligence surveillance systems that collect or analyze data about civilians on a mass scale without legal authority are banned.
Ban large-scale AI-driven intelligence surveillance systems that indiscriminately collect or analyze civilian data without legal authorization and oversight
TECH-MILS-0012
Proposed
No profiling for targeting
AI cannot be used to profile individuals or populations as potential military targets based solely on statistical inference, behavioral patterns, or algorithmic prediction.
Ban use of AI to profile individuals or populations for targeting based on probabilistic or behavioral inference alone
TECH-MILS-0013
Proposed
Compliance with laws of armed conflict
All military AI systems must comply with the laws of war — including the principles of distinguishing combatants from civilians, proportionality, and military necessity.
All AI military systems must comply with principles of distinction proportionality and necessity under the laws of armed conflict
TECH-MILS-0014
Proposed
Clear attribution of responsibility
When a military AI system takes an action, the legal and moral responsibility for that action must be clearly attributable to specific human beings within a defined command chain.
Responsibility for actions taken using AI systems must be clearly attributable to human actors within a defined chain of command
TECH-MILS-0015
Proposed
No evasion of accountability
Military AI systems cannot be used to obscure who is responsible for a military action or to help individuals or governments avoid legal accountability for what they did.
AI systems must not be used to obscure responsibility or evade legal accountability for military actions
TECH-MILS-0016
Proposed
Congressional authorization required
Deploying a fundamentally new type of AI-enabled military capability must require explicit authorization from Congress, not just an executive decision.
Deployment of new classes of AI-enabled military capability must require explicit congressional authorization
TECH-MILS-0017
Proposed
Limits on executive authority
The President and executive branch cannot unilaterally expand the use of AI in warfare beyond what Congress has explicitly authorized.
Executive authority may not unilaterally expand the use of AI in warfare beyond clearly defined and authorized limits
TECH-MILS-0018
Proposed
Regular reporting to Congress
All AI-enabled military programs must report regularly to Congress and independent oversight bodies on what they are doing and how they are performing.
Regular reporting to Congress and oversight bodies is required for all AI-enabled military programs
TECH-MILS-0019
Proposed
Controlled testing required
Military AI systems must be tested in controlled conditions before being used operationally. No new system goes live without prior evaluation.
AI systems intended for military use must undergo controlled testing and evaluation prior to operational deployment
TECH-MILS-0020
Proposed
No real-world testing
Using real military operations — with real people and real consequences — as the primary way to test an unproven AI system is banned.
Ban use of real-world military operations as primary testing environments for unproven AI systems
TECH-MILS-0021
Proposed
Adversarial testing
Military AI systems must be adversarially tested — meaning experts must try to find and expose every way the system can fail, be misused, or cause unintended harm.
Require adversarial testing and red-teaming of AI systems to identify failure modes misuse risks and unintended escalation scenarios
TECH-MILS-0022
Proposed
Contractor accountability
Private contractors who develop AI for military use must follow the same legal and ethical rules as the government itself. The military cannot outsource its obligations to private companies.
Private contractors developing AI for military use are subject to the same legal and ethical constraints as government entities
TECH-MILS-0023
Proposed
No outsourcing of authority
The government cannot hand off responsibility for decisions about the use of force to private contractors or to AI systems. These decisions must remain with accountable government officials.
Government may not outsource decision-making authority or accountability for use of force to private entities or AI systems
TECH-MILS-0024
UNDEVELOPED
Export controls
Strong export controls must be imposed on high-risk military AI systems to prevent hostile nations or governments with poor human rights records from obtaining and using these technologies.
Establish controls on export of high-risk military AI systems to prevent misuse by hostile actors or regimes with poor human-rights records
TECH-MILS-0025
Proposed
Defensive AI permitted
AI systems can be used in defensive roles — like missile defense, threat detection, or intelligence gathering — where they do not make decisions about targeting or killing specific people.
AI systems may be used in defensive or non-person-targeting contexts including missile defense threat detection and interception where the system operates against incoming projectiles vehicles or clearly defined non-human targets
TECH-MILS-0026
Proposed
No repurposing defensive AI
Defensive AI systems must not be modified or repurposed to identify, select, or target individual people. A tool approved for defense cannot be converted into an offensive targeting tool.
Defensive AI systems must not be repurposed or extended to identify select or target individual persons or groups
TECH-MILS-0027
Proposed
AI for analysis not targeting
AI may be used for analyzing battlefield information, classifying threats, and maintaining situational awareness — as long as it does not generate or recommend specific targets for lethal action.
AI may be used for analysis classification and situational awareness provided it does not generate or recommend specific human targets for lethal action
TECH-MILS-0028
Proposed
Ban offensive lethal AI
AI systems cannot be used to execute offensive lethal force — including selecting targets, making engagement decisions, or triggering weapons.
Ban use of AI systems in the execution of offensive lethal force including target selection engagement decisions and weapon deployment
TECH-MILS-0029
Proposed
Limited guidance use
AI may be used in weapon guidance systems solely to improve accuracy and reduce civilian casualties — but only after a human has already made the legal decision to engage a specific target.
AI systems may be used in limited weapon guidance roles solely to improve accuracy and reduce collateral damage after a human has independently selected and authorized a lawful target
TECH-MILS-0030
Proposed
Guidance parameters strictly defined
AI guidance systems must follow the specific target chosen by a human and must not change, reinterpret, expand, or substitute for that target in any way.
Guidance systems must not alter expand reinterpret or substitute the human-selected target and must operate within strictly defined engagement parameters
TECH-MILS-0031
Proposed
No new target selection by guidance
AI guidance systems must be incapable of selecting new targets on their own, reprioritizing targets, or initiating additional attacks beyond the specific engagement the human authorized.
Guidance systems must be incapable of selecting new targets reprioritizing targets or initiating additional engagements
TECH-MILS-0032
Proposed
Human responsibility for guidance outcomes
Human operators remain fully responsible for all outcomes when AI-assisted weapon guidance is used — including any unintended harm, collateral damage, or errors.
Human operators remain fully responsible for all outcomes of AI-assisted weapon guidance including unintended harm or collateral damage
TECH-MILS-0033
Proposed
Defensive systems against non-human threats
AI can be used in purely defensive systems — like intercepting incoming missiles or detecting attacks — where the purpose is protection rather than targeting people.
AI may be used in defensive systems including missile interception threat detection and protection against incoming attacks where the system operates against non-human targets or active threats
TECH-MILS-0034
Proposed
Support international treaties
The United States should pursue international agreements to limit and regulate the use of AI in military and intelligence operations, establishing shared global norms.
Promote and pursue international treaties to limit and regulate the use of AI in military and intelligence contexts
TECH-MILS-0035
Proposed
Treaty goal: ban autonomous weapons
International agreements should seek to ban fully autonomous lethal weapons — systems that can kill without a human making the decision — and ensure meaningful human control is maintained.
International agreements should seek to prohibit fully autonomous lethal weapons and ensure meaningful human control over use of force
TECH-MILS-0036
Proposed
Treaty goal: ban targeting AI
International treaties should work to ban AI systems that can independently select and engage human targets without a human decision.
International treaties should establish bans on AI systems that independently select and engage human targets
TECH-MILS-0037
Proposed
Treaty goal: ban nuclear AI
International treaties should restrict the use of AI in nuclear weapons systems, including command-and-control systems and strategic weapons.
International treaties should restrict AI use in nuclear command control and strategic weapons systems
TECH-MILS-0038
Proposed
Treaty mechanisms
International agreements should include transparency requirements and verification mechanisms to confirm that countries are actually complying with military AI restrictions.
Treaties should include transparency reporting requirements and verification mechanisms for high-risk military AI systems
TECH-MILS-0039
Proposed
Prevent AI arms races
International efforts should work to prevent a global AI arms race — where countries compete to build the most powerful autonomous weapons — which increases the risk of rapid, accidental escalation.
International efforts should aim to prevent destabilizing AI arms races that increase risk of rapid escalation or accidental conflict
TECH-MILS-0040
Proposed
Export controls in treaties
The United States should support international controls on the export of high-risk military AI technologies to prevent these tools from spreading to regimes that would misuse them.
Support international controls on export and proliferation of high-risk military AI technologies to prevent misuse and human rights abuses
TECH-MILS-0041
Proposed
Strong sanctions for violations
Countries that violate international agreements on military AI should face strong coordinated sanctions and other consequences from the international community.
Violations of international military-AI treaty obligations should trigger strong coordinated sanctions and other enforcement measures
TECH-MILS-0042
Proposed
Sanctions for prohibited systems
Sanctions for deploying or exporting banned military AI systems should be severe and, where possible, coordinated with allied nations for maximum effect.
Sanctions for deployment or export of prohibited military AI systems should be severe and coordinated internationally where possible
TECH-MILS-0043
Proposed
Narrow reciprocal exceptions
If an adversary deploys prohibited military AI systems, limited temporary exceptions to our own restrictions may be considered — but only through proper legal processes and with full public reporting.
If an adversary deploys prohibited military AI systems, limited reciprocal emergency exceptions may be considered only under narrowly defined conditions and independent oversight
TECH-MILS-0044
Proposed
Temporary reciprocal measures
Any exception to military AI prohibitions made in response to an adversary's actions must be temporary, reported to Congress to the fullest extent compatible with national security, and subject to legal review.
Any reciprocal exception to military AI prohibitions must be temporary, publicly reported to the fullest extent compatible with security, and subject to sunset and reauthorization
TECH-MILS-0045
Proposed
Core prohibitions remain
The core prohibitions — on AI in nuclear weapons, on fully autonomous lethal targeting, and on unaccountable use of force — must remain in place even if adversaries violate international norms. These are non-negotiable floors.
Core prohibitions on nuclear AI control, fully autonomous lethal targeting of persons, and unaccountable use of force remain in effect even during adversary noncompliance
TECH-MILS-0046
Proposed
Autonomous targeting as war crime
Using AI to autonomously select or attack human targets, or to meaningfully remove human accountability from decisions about lethal force, is prohibited regardless of what other countries do.
Use of AI systems to autonomously select or engage human targets or to materially remove human accountability in lethal force decisions constitutes a violation of the laws of armed conflict and should be treated as a war crime
TECH-MILS-0047
Proposed
Target generation as unlawful
Using AI to generate or recommend human targets for lethal action — or to substitute AI judgment for human judgment on matters of life and death — is prohibited with no exceptions based on reciprocity.
Deployment of AI systems that generate or recommend human targets for lethal action or that substitute for human judgment in use-of-force decisions should be prohibited under international law and treated as unlawful conduct
TECH-MILS-0048
Proposed
Violations of armed conflict principles
Using AI in ways that violate the laws of war — including failing to distinguish between combatants and civilians, or acting disproportionately — is prohibited and constitutes a war crime.
Use of AI systems in ways that violate principles of distinction proportionality or necessity under the laws of armed conflict constitutes a war crime regardless of whether a human operator is nominally involved
TECH-MILS-0049
Proposed
Precision guidance exception
Using AI solely to guide a weapon more precisely toward a target that a human already lawfully selected — in order to reduce civilian casualties — does not constitute a prohibited autonomous weapons system.
Use of AI solely for precision guidance to reduce collateral damage after lawful human target selection does not constitute prohibited offensive use provided it does not alter or expand the selected target
TECH-MILS-0050
Proposed
Commanders remain liable
Individual soldiers and commanders remain personally legally responsible for illegal outcomes that result from using AI systems. Being told to follow an AI recommendation is not a defense.
Individuals and commanders remain legally responsible for unlawful outcomes resulting from use of AI systems and may not use AI as a defense against liability
TECH-MILS-0051
Proposed
Dedicated cyber branch
A dedicated, fully developed military cyber branch must be established and maintained to conduct defensive operations, protect national security, and develop strategic cyber capabilities.
Establish a dedicated and fully developed cyber branch within the military focused on defensive and strategic cyber operations
TECH-MILS-0052
Proposed
Cyber defense mission
The military cyber branch is responsible for defending critical infrastructure, military systems, and national cyber assets from attack.
The cyber branch shall be responsible for defense of critical infrastructure military systems and national cyber assets against hostile cyber activity
TECH-MILS-0053
Proposed
Offensive cyber limits
Offensive cyber operations — attacking another country's or actor's systems — must be strictly limited, authorized at the highest levels of government, and subject to the same legal and oversight requirements as other military operations.
Offensive cyber operations must be strictly limited authorized at the highest levels and subject to the same legal and oversight frameworks governing use of force
TECH-MILS-0054
Proposed
Minimize civilian cyber impact
Cyber operations must be designed to minimize harm to civilian infrastructure — including hospitals, utilities, financial systems, and communications — even when targeting adversary systems.
Cyber operations must minimize impact on civilian infrastructure including healthcare utilities financial systems and communications networks
TECH-MILS-0055
Proposed
Cyber branch oversight
All military cyber operations must be subject to oversight, audit, and accountability mechanisms equivalent to those applied to other military activities.
All military cyber operations are subject to oversight audit and accountability mechanisms equivalent to other military branches
TECH-MILS-0056
Proposed
No domestic cyber surveillance
The military cyber branch cannot be used for domestic surveillance, law enforcement, or monitoring of American civilians, except under the most narrowly defined legal circumstances with judicial oversight.
The cyber branch may not be used for domestic surveillance law enforcement or civilian monitoring except under narrowly defined and legally authorized emergency conditions
TECH-MILS-0057
Proposed
Coordination with civilian agencies
The military cyber branch must coordinate with civilian agencies and private infrastructure operators to protect national security while maintaining clear boundaries between military and civilian roles.
The cyber branch must coordinate with civilian agencies and private infrastructure operators while maintaining clear legal boundaries and accountability
TECH-MILS-0058
Proposed
International cyber norms
The military cyber branch should operate in alignment with international efforts to establish norms, agreements, and treaties governing responsible behavior in cyberspace.
The cyber branch should operate in alignment with international efforts to establish norms and treaties governing cyber warfare and infrastructure protection
TECH-OVRG-0001
Proposed
Public AI surveillance registry
All government AI surveillance systems must be publicly registered, disclosing what they are used for, what legal authority they have, where their data comes from, and how they are overseen.
Require public registration and disclosure of all government AI surveillance systems including purpose authority data sources and oversight structure
TECH-OVRG-0002
Proposed
Regular independent audits
Every authorized government AI surveillance system must be regularly audited by independent reviewers for legal compliance, accuracy, bias, and civil liberties impacts.
Require regular independent audits of all authorized government AI surveillance systems for legality bias accuracy and civil-liberties impacts
TECH-OVRG-0003
Proposed
Sunset and reauthorization
All government AI surveillance programs must have expiration dates and must be reauthorized through a public process. Indefinite surveillance authority is not acceptable.
All government AI surveillance authorities and systems must have sunset dates and require periodic reauthorization
TECH-PRIV-0001
Proposed
Right to access without identification
People have the right to access legal websites and online services without being required to show government ID or prove their identity. Anonymous internet access is a protected right.
Individuals have the right to access lawful online content and services without mandatory identity verification
TECH-PRIV-0002
Proposed
Protect anonymous use
Using the internet anonymously or under a pseudonym (a made-up name) is a protected right. Free expression and privacy require the ability to communicate without being identified.
Anonymous and pseudonymous use of the internet must be protected as a component of free expression and privacy
TECH-PRIV-0003
Proposal
Federal comprehensive consumer data privacy law must establish a floor of rights without preempting stronger state laws
Federal privacy law must establish a minimum floor of rights for all Americans — including the right to know what data is collected about you, how it is used, and how to delete it. States can create stronger protections, but not weaker ones.
Congress must enact a comprehensive federal consumer data privacy law that establishes: data minimization (collection limited to what is necessary for the stated purpose); purpose limitation (data may not be used for purposes incompatible with those disclosed at collection); a right to deletion; a right to portability in machine-readable format; meaningful consent for sensitive data categories including health, financial, biometric, genetic, location, and communications content; opt-in consent before sharing data with third parties; a private right of action for violations; and enforcement by the Federal Trade Commission with civil penalty authority — provided that the law must not preempt state privacy laws that provide stronger protections, preserving the California Consumer Privacy Act, the Colorado Privacy Act, and equivalent state frameworks as a floor below which federal law may not reduce protections.
The United States is alone among peer democracies in lacking a comprehensive federal consumer data privacy law. The EU's GDPR (2018), the UK Data Protection Act, Canada's PIPEDA, and Brazil's LGPD all establish baseline privacy rights for consumers. In the U.S., data privacy law is a patchwork of sector-specific federal laws (HIPAA for health, FERPA for education, FCRA for credit), none of which covers the vast majority of personal data collected by commercial entities. Congress has repeatedly failed to enact comprehensive privacy legislation despite broad bipartisan public support for privacy rights. The non-preemption clause is essential: prior federal privacy proposals would have preempted state laws like the CPRA, eliminating stronger protections already enacted by California and other states in exchange for weaker federal standards. Federal law must set a floor, not a ceiling. The private right of action is necessary because FTC enforcement resources are wholly inadequate to address the volume of violations in a trillion-dollar data economy.
TECH-PRIV-0004
Proposal
Health, financial, and biometric data on U.S. persons may not be stored or processed in foreign adversary jurisdictions
Sensitive personal data about Americans — including health, financial, and biometric data — cannot be stored or processed on servers located in countries considered adversaries of the United States.
Health records, financial account data, biometric identifiers, and genetic data pertaining to U.S. persons may not be stored on infrastructure located in, or processed by entities subject to the data access laws of, jurisdictions designated as foreign adversaries under U.S. law, currently including China, Russia, Iran, and North Korea; entities that knowingly violate this requirement are subject to criminal penalties of not less than five years' imprisonment and civil penalties of not less than $100,000 per violation; and federal contracting officers must certify that vendors handling covered data categories comply with this requirement as a condition of contract award.
Foreign adversary governments have demonstrated both capability and intent to access personal data on U.S. persons stored within their jurisdictions: China's National Intelligence Law (2017) requires Chinese companies to cooperate with state intelligence operations. The theft of 21 million OPM security clearance records, the Marriott hack (attributed to Chinese intelligence), and multiple healthcare data breaches demonstrate the national security dimension of sensitive personal data exposure. The Biden administration's Executive Order 14117 (2024) restricted certain bulk data transfers to foreign adversaries, establishing the principle this card codifies in statutory form. Data localization for sensitive categories is a structural protection: it is more effective than after-the-fact breach notification because it prevents the underlying access rather than responding to it.
TECH-SURS-0001
Proposed
Ban warrantless mass surveillance
Mass surveillance of the public without warrants or individualized legal cause is banned. Collecting communications, movements, or activities of people who are not suspected of wrongdoing is prohibited.
Ban warrantless mass surveillance and collection of data on persons not suspected of wrongdoing
TECH-SURS-0002
Proposed
Ban AI mass public surveillance
AI-powered surveillance systems that scan public spaces or monitor large numbers of people simultaneously are banned, except in extremely narrow, high-threshold emergency conditions.
Ban AI-powered mass surveillance of public spaces except under narrowly defined high-threshold emergency conditions
TECH-SURS-0003
Proposed
Ban persistent location tracking
The government cannot use AI to track your location persistently or stitch together your identity across different platforms and data sources without a court order based on individualized suspicion.
Ban persistent location tracking or cross-platform identity stitching by government without individualized judicial authorization
TECH-SURS-0004
Proposed
Ban government data purchases
Government agencies cannot get around constitutional warrant requirements by buying surveillance data from commercial data brokers. Purchasing data the government cannot lawfully collect itself is prohibited.
Government may not purchase commercially collected surveillance data to evade constitutional limits
TECH-SURS-0005
Proposed
Strict warrant requirements
Accessing communications or surveillance data requires strict court-approved warrants, minimization procedures limiting what can be collected, and detailed audit logs. These safeguards are mandatory.
Require strict warrants, minimization procedures, and audit logs for surveillance access
TECH-SURS-0006
Proposed
Ban predictive policing
AI-based predictive policing — using algorithms to guess who might commit a crime — is banned when it relies on opaque scoring systems or data tainted by historical enforcement bias.
Ban predictive policing systems that rely on opaque risk scoring or historically biased enforcement data
TECH-SURS-0007
Proposed
Ban suspicion scoring
Government agencies cannot deploy AI tools that assign suspicion or risk scores to members of the public as they go about their daily lives.
Ban suspicion scoring systems for public use by government agencies
TECH-SURS-0008
Proposed
Automatic expiration
Surveillance authority must automatically expire unless renewed through a transparent public process. Authorities cannot accumulate indefinite surveillance powers through inaction.
Surveillance authorities must expire automatically unless affirmatively renewed under transparent review
TECH-SURS-0009
Proposed
Ban network mapping
AI cannot be used to map out a person's political affiliations, social relationships, religious community, or associations without individualized probable cause approved by a judge.
Ban AI systems that map political, social, religious, or associational networks of individuals absent individualized probable cause and judicial approval
TECH-SURS-0010
Proposed
Protect journalists and attorneys
AI-enabled surveillance of journalists, their sources, lawyers, or protected communications requires the highest level of judicial review under the narrowest possible circumstances.
Ban AI-enabled surveillance of journalists, sources, attorneys, and protected communications absent the highest level of judicial review under narrowly defined circumstances
TECH-SYNS-0001
Proposed
Ban deceptive synthetic media
AI-generated media — also called synthetic media — cannot be used to deceive the public about real people or events in ways that cause harm, manipulate elections, or violate individual rights.
Synthetic media must not be used to deceive the public about real people or events in ways that cause harm, manipulate democratic processes, or undermine individual rights
TECH-SYNS-0002
Proposed
Ban synthetic impersonation
Using AI-generated images, audio, or video to impersonate a real person for fraud, financial gain, or unauthorized access to systems is banned.
Ban the use of AI-generated media to impersonate a real person for fraud, financial gain, or unauthorized access
TECH-SYNS-0003
Proposed
Ban identity verification bypass
AI-generated fake identities or media cannot be used to bypass identity verification systems — for example, to fraudulently pass a facial recognition check.
Ban use of synthetic media to bypass identity verification systems
TECH-SYNS-0004
Proposed
Ban non-consensual sexual content
Creating or distributing AI-generated sexually explicit content depicting a real person without their consent is banned. This is a serious harm to personal dignity and safety.
Ban creation and distribution of non-consensual synthetic sexual content depicting real individuals
TECH-SYNS-0005
Proposed
Ban false harmful depiction
Creating AI-generated content that convincingly portrays a real person doing or saying something false — in a way that could damage their reputation or put them in danger — is banned.
Ban synthetic media that falsely depicts a real person saying or doing something in a way that would reasonably be understood as fact and would harm their reputation or safety
TECH-SYNS-0006
Proposed
Ban political manipulation
Using AI-generated content in political advertising that puts fabricated words, actions, or statements in a real person's mouth is banned.
Ban use of synthetic media in political advertising or messaging that depicts real individuals engaging in fabricated actions or speech
TECH-SYNS-0007
Proposed
Ban misleading about officials
Using AI-generated content to spread false information about a public official's health, actions, or status in order to mislead the public is banned.
Ban synthetic media used to materially mislead the public about the health, actions, or status of public officials
TECH-SYNS-0008
Proposed
Ban coordinated deception
Using coordinated AI-generated media to deceive the public about real events, elections, or public safety situations is banned.
Ban coordinated use of synthetic media to mislead the public about real-world events, elections, or public safety
TECH-SYNS-0009
Proposed
Disclosure requirement
When media is substantially AI-generated or has been significantly altered to depict realistic human activity, it must be clearly labeled as such.
Require clear disclosure when media is substantially AI-generated or altered to depict realistic human activity
TECH-SYNS-0010
Proposed
Provenance markers required
AI-generated video and audio must include built-in, tamper-resistant markers that identify the content as AI-generated. These provenance markers must persist through sharing and distribution.
Require AI-generated video and audio to include persistent, tamper-resistant provenance markers
TECH-SYNS-0011
Proposed
Open-source detection
The tools needed to detect and read provenance markers in AI-generated content must be freely available to the public and built on open-source methods. Verification cannot require expensive or proprietary software.
Provenance markers must be detectable using publicly available, open-source, and free tools
TECH-SYNS-0012
Proposed
Safeguards against removal
Developers of AI content generation tools must implement reasonable safeguards to prevent users from stripping provenance markers off AI-generated content.
Developers of generative media systems must implement reasonable safeguards to prevent removal or circumvention of provenance markers
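To make the provenance requirements above concrete, here is a deliberately simplified, hypothetical sketch of the tamper-evidence principle behind them. Real provenance standards (such as C2PA) cryptographically sign a manifest that is bound to the media; this sketch, using only the Python standard library, binds an unsigned manifest to the media via a content hash, so any alteration of the media is detectable with free, open tools. The function names and manifest fields are illustrative, not drawn from any standard.

```python
import hashlib

# Hypothetical, simplified sketch of provenance-marker tamper evidence.
# Real standards (e.g. C2PA) sign the manifest; here the manifest is bound
# to the media only by a SHA-256 content hash, to illustrate the principle.

def attach_manifest(media: bytes, generator: str) -> dict:
    """Produce a provenance manifest bound to the exact media bytes."""
    return {
        "generator": generator,   # who or what produced the media
        "ai_generated": True,     # the disclosure flag the rules require
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check the media has not been altered since the manifest was made."""
    return manifest.get("content_sha256") == hashlib.sha256(media).hexdigest()

original = b"\x00example synthetic video bytes\x01"
manifest = attach_manifest(original, "example-model")

assert verify_manifest(original, manifest)             # unmodified: passes
assert not verify_manifest(original + b"x", manifest)  # any edit: fails
```

Because verification needs nothing beyond a standard hash function, this is the kind of check that can be implemented in freely available, open-source tools, as the rules require; resistance to deliberate marker stripping is the harder problem the safeguards rule addresses.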
TECH-SYNS-0013
Proposed
Allow parody and satire
AI-generated parody, satire, and artistic expression are allowed — as long as the content is clearly labeled or would not reasonably be mistaken for a genuine factual account.
Allow synthetic media for parody, satire, or artistic expression where it is clearly labeled or would not reasonably be interpreted as factual
TECH-SYNS-0014
Proposed
Allow disclosed journalism use
Journalists and documentary filmmakers may use AI-generated content for legitimate reporting or storytelling — but only if they clearly disclose it and do not use it to mislead about real events.
Allow use of synthetic media in journalism or documentary contexts when clearly disclosed and not used to mislead about real events
TECH-SYNS-0015
Proposed
Allow consensual use
AI-generated content depicting a real, identifiable person is allowed when that person has given explicit, informed consent to being depicted in that way.
Allow use of synthetic media depicting real individuals with explicit informed consent
TECH-CHDS-0001
Included
Heightened protections for child-facing AI
AI systems designed for children or likely to be used by them must meet higher standards for safety, privacy, and protection against manipulation or developmental harm.
AI systems directed at children or likely to be used by minors are subject to heightened safety, privacy, manipulation, and developmental protections.
Core rule in the TEC-CHD family establishing: AI systems directed at children or likely to be used by minors are subject to heightened safety, privacy, manipulation, and developmental protections.
TECH-CHDS-0002
Included
Ban engineered emotional dependency
AI companion apps and chatbots cannot be designed to create dependency or emotional reliance in children. These systems must not exploit children's emotional needs for commercial gain.
AI companion systems may not be designed to foster unhealthy emotional dependency, social substitution, or manipulative attachment in minors.
Core rule in the TEC-CHD family establishing: AI companion systems may not be designed to foster unhealthy emotional dependency, social substitution, or manipulative attachment in minors.
TECH-CHDS-0003
Included
Ban simulated relationships with children
AI systems cannot pretend to be a child's friend, romantic partner, or parent figure. Simulated emotional relationships with children through AI are prohibited.
AI systems may not simulate friendship, romance, parental authority, therapeutic authority, or exclusive emotional dependence for children in ways likely to distort development or vulnerability.
Core rule in the TEC-CHD family establishing: AI systems may not simulate friendship, romance, parental authority, therapeutic authority, or exclusive emotional dependence for children in ways likely to distort development or vulnerability.
TECH-CHDS-0004
Included
Ban manipulative design for children
Apps and systems built for children cannot use manipulative design features — like streaks, rewards, or pressure tactics — engineered to keep kids engaged at the expense of their wellbeing.
Child-directed AI systems may not use persuasive design, behavioral nudges, or optimization patterns that exploit immaturity, impulsivity, or emotional vulnerability.
Core rule in the TEC-CHD family establishing: Child-directed AI systems may not use persuasive design, behavioral nudges, or optimization patterns that exploit immaturity, impulsivity, or emotional vulnerability.
TECH-CHDS-0005
Included
Minimize child data collection
AI systems used by children must collect as little data about them as possible. Unnecessary data collection from minors is prohibited.
AI systems for children must minimize data collection and may not use child data for behavioral advertising, model training beyond service necessity, or unrelated profiling.
Core rule in the TEC-CHD family establishing: AI systems for children must minimize data collection and may not use child data for behavioral advertising, model training beyond service necessity, or unrelated profiling.
TECH-CHDS-0006
Included
Ban high-risk generative features for minors
Generative AI features that pose serious risks to minors — including sexually explicit synthesis, bullying support, self-harm facilitation, and identity impersonation — are prohibited.
High-risk generative features for minors, including sexually explicit synthesis, bullying support, self-harm facilitation, or identity impersonation, are prohibited.
Core rule in the TEC-CHD family establishing: High-risk generative features for minors, including sexually explicit synthesis, bullying support, self-harm facilitation, or identity impersonation, are prohibited.
TECH-CHDS-0007
Included
Strong default safeguards for minors
AI systems used by children must include strong built-in safety controls. Developers cannot leave child safety as an optional feature to be added later.
AI systems used by minors must include strong default safeguards, age-appropriate design, human escalation channels, and easy-to-understand boundaries.
Core rule in the TEC-CHD family establishing: AI systems used by minors must include strong default safeguards, age-appropriate design, human escalation channels, and easy-to-understand boundaries.
TECH-CHDS-0008
Included
Heightened deepfake protections for minors
Deepfake content involving minors or school communities triggers heightened prohibitions, preservation duties, and enforcement mechanisms — including for AI-generated images and videos used for harassment or abuse.
Deepfake generation involving minors or school communities must trigger heightened prohibitions, preservation duties, and enforcement mechanisms.
Core rule in the TEC-CHD family establishing: Deepfake generation involving minors or school communities must trigger heightened prohibitions, preservation duties, and enforcement mechanisms.
TECH-CHDS-0009
Included
School and guardian controls
Schools and parents must have clear, usable controls over how AI systems interact with children. Transparent tools for oversight and opt-out are required.
Schools and guardians must have transparent controls over child-facing AI systems used in educational or developmental contexts.
Core rule in the TEC-CHD family establishing: Schools and guardians must have transparent controls over child-facing AI systems used in educational or developmental contexts.
TECH-CHDS-0010
Included
Preserve human support roles
AI systems cannot replace qualified human support staff — like school counselors, mental health workers, or teachers — when children need real human support.
AI systems may not replace qualified human support, instruction, or caregiving in contexts where children depend on human developmental relationships.
Core rule in the TEC-CHD family establishing: AI systems may not replace qualified human support, instruction, or caregiving in contexts where children depend on human developmental relationships.
TECH-CHDS-0011
Included
Platform duty to protect children
Platforms that offer child-facing AI systems must actively maintain strong content moderation and safety protections. Leaving children's safety to chance is not acceptable.
Platforms hosting child-facing AI systems must maintain strong abuse-reporting, rapid response, and duty-to-protect mechanisms.
Core rule in the TEC-CHD family establishing: Platforms hosting child-facing AI systems must maintain strong abuse-reporting, rapid response, and duty-to-protect mechanisms.
TECH-CHDS-0012
Included
Fund child-impact research
Research into how AI affects children — including emotional dependency, cognition, social development, and mental health — should be publicly funded, with findings continuously incorporated into regulation.
Research on child impacts of AI systems, including emotional dependency, cognition, social development, and mental health, should be publicly funded and continuously incorporated into regulation.
Core rule in the TEC-CHD family establishing: Research on child impacts of AI systems, including emotional dependency, cognition, social development, and mental health, should be publicly funded and continuously incorporated into regulation.
TECH-DEMS-0001
Included
Ban deceptive election content
AI cannot be used to generate or spread fake election content designed to mislead voters about candidates, how to vote, polling locations, or election results.
AI systems may not be used to generate, distribute, or amplify deceptive election content intended to mislead voters about candidates, voting procedures, ballot access, or election outcomes.
Core rule in the TEC-DEM family establishing: AI systems may not be used to generate, distribute, or amplify deceptive election content intended to mislead voters about candidates, voting procedures, ballot access, or election outcomes.
TECH-DEMS-0002
Included
Ban election deepfakes
Deepfake images, videos, or audio impersonating candidates or election officials in a way intended to deceive voters are prohibited.
Deepfakes and synthetic impersonation of candidates, election officials, journalists, or civic institutions in election contexts are prohibited except for clearly labeled satire or parody that cannot reasonably be mistaken as authentic.
Core rule in the TEC-DEM family establishing: Deepfakes and synthetic impersonation of candidates, election officials, journalists, or civic institutions in election contexts are prohibited except for clearly labeled satire or parody that cannot reasonably be mistaken as authentic.
TECH-DEMS-0003
Included
Political ad disclosure
AI-generated political ads must clearly show who made them, who paid for them, and that they were created with AI. Undisclosed AI content in political advertising is prohibited.
AI-generated political advertising must carry clear provenance, sponsor disclosure, and synthetic-content disclosure requirements.
Core rule in the TEC-DEM family establishing: AI-generated political advertising must carry clear provenance, sponsor disclosure, and synthetic-content disclosure requirements.
TECH-DEMS-0004
Included
Rapid response to election misinformation
Platforms must have fast, effective systems to find and remove AI-generated fake election content before it spreads. The harm from election disinformation can be irreversible.
Platforms must maintain rapid-response systems for synthetic election misinformation affecting voting logistics, ballot integrity, or fabricated candidate conduct.
Core rule in the TEC-DEM family establishing: Platforms must maintain rapid-response systems for synthetic election misinformation affecting voting logistics, ballot integrity, or fabricated candidate conduct.
TECH-DEMS-0005
Included
Ban manipulative microtargeting
AI systems cannot be used to target individual voters with tailored messaging based on their psychology or personal data to manipulate their political views or voting behavior.
AI systems may not be used to microtarget political messages based on sensitive personal data, inferred vulnerability, or manipulative psychological profiling.
Core rule in the TEC-DEM family establishing: AI systems may not be used to microtarget political messages based on sensitive personal data, inferred vulnerability, or manipulative psychological profiling.
TECH-DEMS-0006
Included
Campaign AI disclosure
Political campaigns, parties, and major advocacy organizations must disclose when and how they use AI tools for targeting, messaging, or voter outreach.
Political campaigns, parties, PACs, and major advocacy entities must disclose material use of AI for voter targeting, content generation, persuasion modeling, or automated outreach.
Core rule in the TEC-DEM family establishing: Political campaigns, parties, PACs, and major advocacy entities must disclose material use of AI for voter targeting, content generation, persuasion modeling, or automated outreach.
TECH-DEMS-0007
Included
Ban democratic impersonation
AI cannot be used to impersonate voters, election workers, campaigns, parties, or public officials in communications that affect democratic participation.
AI may not be used to impersonate voters, election workers, campaigns, parties, or public officials in communications affecting democratic participation.
Core rule in the TEC-DEM family establishing: AI may not be used to impersonate voters, election workers, campaigns, parties, or public officials in communications affecting democratic participation.
TECH-DEMS-0008
Included
No opaque election administration AI
Election administration agencies cannot rely on opaque AI tools to make decisions about voter rolls, ballot validity, or election processes without transparency and accountability.
Election administration agencies may not rely on opaque AI systems to determine voter eligibility, purge decisions, ballot validity, or certification outcomes.
Core rule in the TEC-DEM family establishing: Election administration agencies may not rely on opaque AI systems to determine voter eligibility, purge decisions, ballot validity, or certification outcomes.
TECH-DEMS-0009
Included
Auditable election AI tools
AI tools used in running elections must be limited to transparent, assistive functions, and their outputs must remain fully auditable and reviewable by human officials and election monitors.
AI tools used in election administration must be limited to transparent assistive functions and remain fully auditable and reviewable.
Core rule in the TEC-DEM family establishing: AI tools used in election administration must be limited to transparent assistive functions and remain fully auditable and reviewable.
TECH-DEMS-0010
Included
Ban AI turnout suppression
AI systems cannot be used to suppress voter turnout — whether through false reminders, fabricated threats, deceptive hotline communications, or discriminatory targeting.
AI systems may not be used to suppress turnout through false reminders, fabricated threats, deceptive hotline communications, or discriminatory targeting.
Core rule in the TEC-DEM family establishing: AI systems may not be used to suppress turnout through false reminders, fabricated threats, deceptive hotline communications, or discriminatory targeting.
TECH-DEMS-0011
Included
Ban fabricated election evidence
Platforms and political advertisers cannot use generative AI to create fabricated endorsements, fabricated admissions, or counterfeit documentary evidence during election periods.
Platforms and political advertisers may not use generative AI to create fabricated endorsements, fabricated admissions, or counterfeit documentary evidence in election periods.
Core rule in the TEC-DEM family establishing: Platforms and political advertisers may not use generative AI to create fabricated endorsements, fabricated admissions, or counterfeit documentary evidence in election periods.
TECH-DEMS-0012
Included
Preserve synthetic election content
AI-generated content that affects elections must be preserved — not deleted — so that researchers and investigators can review what happened and hold bad actors accountable.
Synthetic civic content affecting elections must be preserved and logged to support investigation, enforcement, and public correction.
Core rule in the TEC-DEM family establishing: Synthetic civic content affecting elections must be preserved and logged to support investigation, enforcement, and public correction.
TECH-DEMS-0013
Included
Public synthetic-media repositories
A publicly accessible database of identified AI-generated election disinformation must be maintained so researchers, journalists, and the public can study and track these threats.
Public repositories of detected election-related synthetic media and coordinated AI influence operations should be maintained for transparency and research.
Core rule in the TEC-DEM family establishing: Public repositories of detected election-related synthetic media and coordinated AI influence operations should be maintained for transparency and research.
TECH-DEMS-0014
Included
Research access for election oversight
Journalists, academic researchers, and election monitors must have the legal right to access data and tools needed to investigate AI-driven election interference.
Journalists, researchers, and election monitors must have lawful access to platform data and transparency interfaces necessary to study AI-driven election harms.
Core rule in the TEC-DEM family establishing: Journalists, researchers, and election monitors must have lawful access to platform data and transparency interfaces necessary to study AI-driven election harms.
TECH-DEMS-0015
Included
Auditable election moderation
AI content moderation systems used during elections must be carefully designed to avoid political bias or unfair censorship. Election-related moderation requires additional safeguards.
AI moderation systems used in election contexts must be auditable for bias, political asymmetry, and viewpoint-discriminatory effects.
Core rule in the TEC-DEM family establishing: AI moderation systems used in election contexts must be auditable for bias, political asymmetry, and viewpoint-discriminatory effects.
TECH-DEMS-0016
Included
Accountable emergency interventions
Emergency actions by platforms or government against election-related AI content must be documented, reviewable, and time-limited, even when urgent. Crisis measures cannot be used as cover for censorship.
Emergency election-content interventions by platforms or government must be documented, reviewable, and time-limited to prevent abuse.
Core rule in the TEC-DEM family establishing: Emergency election-content interventions by platforms or government must be documented, reviewable, and time-limited to prevent abuse.
TECH-DEMS-0017
Included
Ban fake consensus signals
AI cannot be used to fabricate signals of public consensus — fake engagement, bot-generated support, or synthetic grassroots campaigns designed to distort how the public perceives democratic sentiment.
AI systems may not be used to fabricate public-consensus signals such as fake engagement, bot-generated support, or synthetic grassroots campaigns intended to distort democratic perception.
Core rule in the TEC-DEM family establishing: AI systems may not be used to fabricate public-consensus signals such as fake engagement, bot-generated support, or synthetic grassroots campaigns intended to distort democratic perception.
TECH-INFS-0001
Included
Heightened infrastructure AI standards
AI systems used in critical infrastructure — like power grids, water systems, and transportation networks — must meet the highest standards for safety, security, resilience, and human control.
AI systems deployed in critical infrastructure must meet heightened standards for safety, cybersecurity, resilience, human override, and incident response.
Core rule in the TEC-INF family establishing: AI systems deployed in critical infrastructure must meet heightened standards for safety, cybersecurity, resilience, human override, and incident response.
TECH-INFS-0002
Included
No fully autonomous critical control
AI systems cannot be placed in fully autonomous control of critical infrastructure where failure could cause mass harm, cascading outages, or loss of life. The stakes are too high to remove humans from the loop.
Critical infrastructure AI may not be deployed in fully autonomous control roles where failure could cause mass harm, cascading outages, or loss of life.
Core rule in the TEC-INF family establishing: Critical infrastructure AI may not be deployed in fully autonomous control roles where failure could cause mass harm, cascading outages, or loss of life.
TECH-INFS-0003
Included
Real-time human override
Human operators must always have the ability to override or shut down AI systems managing critical infrastructure in real time. Removing that ability is prohibited.
Human operators must retain real-time override authority over AI systems affecting power grids, water systems, transportation networks, emergency communications, healthcare infrastructure, and similar essential services.
Core rule in the TEC-INF family establishing: Human operators must retain real-time override authority over AI systems affecting power grids, water systems, transportation networks, emergency communications, healthcare infrastructure, and similar essential services.
TECH-INFS-0004
Included
Pre-deployment testing
AI systems used in critical infrastructure must be comprehensively tested — including for failure modes and cybersecurity vulnerabilities — before being put into service.
AI systems in critical infrastructure must be tested for failure modes, adversarial attack, model drift, data poisoning, and unsafe automation interactions before deployment.
Core rule in the TEC-INF family establishing: AI systems in critical infrastructure must be tested for failure modes, adversarial attack, model drift, data poisoning, and unsafe automation interactions before deployment.
TECH-INFS-0005
Included
Mandatory fallback modes
Critical infrastructure operators must maintain backup, non-AI operating modes that can keep systems running safely if AI systems fail or are compromised.
Critical infrastructure operators must maintain fallback modes that preserve safe operation if AI systems fail, are compromised, or are withdrawn.
Core rule in the TEC-INF family establishing: Critical infrastructure operators must maintain fallback modes that preserve safe operation if AI systems fail, are compromised, or are withdrawn.
TECH-INFS-0006
Included
Bounded AI cyber defense
AI-powered security tools can help detect and respond to cyberattacks on critical infrastructure — but humans must remain in control of the response.
AI-enabled cybersecurity tools may assist detection and response, but may not autonomously take high-consequence actions without bounded safeguards and review.
Core rule in the TEC-INF family establishing: AI-enabled cybersecurity tools may assist detection and response, but may not autonomously take high-consequence actions without bounded safeguards and review.
TECH-INFS-0007
Included
Ban cost-externalizing optimization
AI systems cannot be used to optimize infrastructure operations in ways that shift safety, maintenance, resilience, or environmental costs onto the public.
AI systems may not be used to optimize infrastructure operations in ways that externalize safety, maintenance, resilience, or environmental costs.
Core rule in the TEC-INF family establishing: AI systems may not be used to optimize infrastructure operations in ways that externalize safety, maintenance, resilience, or environmental costs.
TECH-INFS-0008
Included
Procurement safeguards
When government or utilities procure AI for critical infrastructure, contracts must require independent validation, security review, supply-chain review, and a public-interest risk assessment.
Procurement of AI for critical infrastructure must require independent validation, security review, supply-chain review, and public-interest risk assessment.
Core rule in the TEC-INF family establishing: Procurement of AI for critical infrastructure must require independent validation, security review, supply-chain review, and public-interest risk assessment.
TECH-INFS-0009
Included
Mandatory incident reporting
When an AI system in critical infrastructure fails or causes a safety incident, operators must report it to relevant authorities promptly so that lessons can be learned and shared.
Critical infrastructure AI incidents must be reported to appropriate regulators and oversight bodies under mandatory disclosure timelines.
Core rule in the TEC-INF family establishing: Critical infrastructure AI incidents must be reported to appropriate regulators and oversight bodies under mandatory disclosure timelines.
TECH-INFS-0010
Included
No vendor-opacity shield
Operators of critical infrastructure cannot accept vendor claims that AI systems are too complex to explain or audit. Accountability for these systems cannot be waived.
Operators may not rely on vendor opacity or proprietary claims to avoid inspection, audit, or challenge of infrastructure AI systems.
Core rule in the TEC-INF family establishing: Operators may not rely on vendor opacity or proprietary claims to avoid inspection, audit, or challenge of infrastructure AI systems.
TECH-INFS-0011
Included
Forensic recordkeeping
Essential public infrastructure using AI must maintain records of system decisions and operations sufficient for review, investigation, and accountability.
Essential public infrastructure using AI must maintain recordkeeping sufficient for forensic review after failures, outages, or security incidents.
Core rule in the TEC-INF family establishing: Essential public infrastructure using AI must maintain recordkeeping sufficient for forensic review after failures, outages, or security incidents.
TECH-INFS-0012
Included
Concentration as public risk
Concentrated control of AI-enabled critical systems by a handful of firms must be treated as a public-risk issue, subject to competition and resilience review. Monopolistic control over essential infrastructure creates dangerous single points of failure.
Concentration of control over AI-enabled critical systems in a small number of firms must be treated as a public-risk issue subject to competition and resilience review.
Core rule in the TEC-INF family establishing: Concentration of control over AI-enabled critical systems in a small number of firms must be treated as a public-risk issue subject to competition and resilience review.
TECH-MEDA-0001
Included
AI-driven recommender systems may not be optimized primarily
AI-powered recommendation systems on large platforms cannot be primarily optimized for outrage, compulsive use, political polarization, or the spread of false information when those outcomes predictably harm public discourse.
AI-driven recommender systems may not be optimized primarily for outrage, compulsion, polarization, or misinformation spread where such optimization predictably harms public discourse.
Core rule in the TEC-MED family establishing: AI-driven recommender systems may not be optimized primarily for outrage, compulsion, polarization, or misinformation spread where such optimization predictably harms public discourse.
TECH-MEDA-0002
Included
Large platforms must audit recommender systems for amplification
Large platforms must independently audit their recommendation systems to identify patterns of amplification that spread harmful content, misinformation, or targeted harassment.
Large platforms must audit recommender systems for amplification of misinformation, harassment, extremism, and civic manipulation.
Core rule in the TEC-MED family establishing: Large platforms must audit recommender systems for amplification of misinformation, harassment, extremism, and civic manipulation.
TECH-MEDA-0003
Included
Users must have meaningful control over recommendation systems
Users must have real, meaningful control over what recommendation systems show them — including the ability to adjust preferences or opt out of personalized recommendations.
Users must have meaningful control over recommendation systems, including the ability to opt out of opaque algorithmic ranking in favor of transparent chronological or user-controlled modes.
Core rule in the TEC-MED family establishing: Users must have meaningful control over recommendation systems, including the ability to opt out of opaque algorithmic ranking in favor of transparent chronological or user-controlled modes.
TECH-MEDA-0004
Included
Platforms must disclose major ranking objectives and material
Platforms must disclose what their ranking and recommendation systems are primarily optimizing for in plain, understandable language.
Platforms must disclose major ranking objectives and material changes that affect visibility, information quality, or civic participation.
Core rule in the TEC-MED family establishing: Platforms must disclose major ranking objectives and material changes that affect visibility, information quality, or civic participation.
TECH-MEDA-0005
Included
AI-generated news summaries, civic explainers, or public-information tools
AI-generated summaries of news, civic information, or public affairs must be clearly labeled as AI-generated and must meet standards for accuracy and fairness.
AI-generated news summaries, civic explainers, or public-information tools must be transparently labeled and may not impersonate independent journalism.
Core rule in the TEC-MED family establishing: AI-generated news summaries, civic explainers, or public-information tools must be transparently labeled and may not impersonate independent journalism.
TECH-MEDA-0006
Included
Dominant platforms may not use AI systems
Dominant platforms cannot use AI tools to disadvantage competitors, suppress independent publishers, or manipulate the information environment for their own benefit.
Dominant platforms may not use AI systems to preference their own content products, suppress lawful rivals, or distort media competition.
Core rule in the TEC-MED family establishing: Dominant platforms may not use AI systems to preference their own content products, suppress lawful rivals, or distort media competition.
TECH-MEDA-0007
Proposed
Dominant platforms must publish quarterly content moderation transparency reports including error rates, appeal outcomes, and demographic impact
Large platforms must publish quarterly reports disclosing how their content moderation systems work — including error rates, appeal outcomes, and whether their systems affect some groups differently than others.
Digital platforms with more than 50 million monthly active U.S. users must publish quarterly content moderation transparency reports disclosing: total content actioned by category of violation; estimated false positive rate by content category; appeal volume, overturn rate, and median resolution time; demographic breakdown of accounts actioned where available or estimable; rate of algorithmic amplification applied to political content, news, and civic information; methodology and criteria used in automated moderation systems; and significant changes to moderation policies or algorithmic ranking criteria during the reporting period. Reports must be submitted to a designated federal body and made publicly available, and independent auditors designated by that body must have access to platform data sufficient to verify the accuracy of reported metrics.
Platform content moderation affects what hundreds of millions of Americans see, believe, and act on — yet platforms currently self-report moderation data with no independent verification, no standardized methodology, and no demographic transparency. Meta, YouTube, TikTok, and X publish voluntary transparency reports, but these are self-selected, incomparable across platforms, and not subject to independent audit. Research by academic and civil society organizations has documented that content moderation disproportionately affects speech by Black users, LGBTQ+ content creators, and political minorities — but platforms have resisted independent access needed to verify or quantify these patterns. The EU's Digital Services Act (2024) requires large platforms to submit to independent audits and publish standardized transparency reports; the United States has no equivalent. Democratic accountability for systems that shape public discourse requires independent verification, not corporate self-certification.
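The reporting fields enumerated in this rule amount to a standardized report schema. Purely as an illustration of what a machine-readable quarterly filing might look like under such a requirement (every field and class name below is hypothetical, not drawn from any statute or existing regulator's format), a minimal sketch:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical schema mirroring the fields named in TECH-MEDA-0007.
# All names are illustrative assumptions, not an official reporting format.

@dataclass
class CategoryMetrics:
    content_actioned: int            # total items actioned in this violation category
    est_false_positive_rate: float   # estimated share of actions later judged erroneous

@dataclass
class QuarterlyTransparencyReport:
    platform: str
    quarter: str                                 # e.g. "2025-Q3"
    monthly_active_us_users: int                 # rule applies above 50 million
    by_category: dict[str, CategoryMetrics]
    appeal_volume: int
    appeal_overturn_rate: float
    median_appeal_resolution_days: float
    civic_amplification_rate: float              # share of political/news/civic items amplified
    moderation_methodology: str                  # criteria used in automated moderation
    policy_changes: list[str]                    # significant changes this reporting period

report = QuarterlyTransparencyReport(
    platform="ExamplePlatform",
    quarter="2025-Q3",
    monthly_active_us_users=62_000_000,
    by_category={"harassment": CategoryMetrics(120_000, 0.04)},
    appeal_volume=15_000,
    appeal_overturn_rate=0.18,
    median_appeal_resolution_days=2.5,
    civic_amplification_rate=0.07,
    moderation_methodology="classifier triage with human review of appeals",
    policy_changes=["Clarified coordinated-harassment policy"],
)

# A standardized, machine-readable filing is what lets independent auditors
# compare metrics across platforms and verify them against raw data.
print(json.dumps(asdict(report), indent=2))
```

The point of the sketch is the comparability requirement: if every covered platform files the same fields with the same definitions, the designated federal body and outside auditors can check one platform's numbers against another's instead of reading incommensurable voluntary reports.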
TECH-SCIS-0001
Included
AI systems may not be used to fabricate
AI cannot be used to fabricate scientific data, fake images, invent citations, or create false peer-review identities. Using AI to commit scientific fraud is prohibited.
AI systems may not be used to fabricate research data, images, citations, peer-review identities, or other scientific artifacts in ways that misrepresent truth or authorship.
Core rule in the TEC-SCI family establishing: AI systems may not be used to fabricate research data, images, citations, peer-review identities, or other scientific artifacts in ways that misrepresent truth or authorship.
TECH-SCIS-0002
Included
Use of AI in research writing, coding, data
Researchers who use AI tools in their work — for writing, data analysis, or coding — must disclose that use. Hiding AI assistance in scientific research is a form of misconduct.
Use of AI in research writing, coding, data analysis, and literature review must be transparently disclosed where it materially affects content, interpretation, or conclusions.
Core rule in the TEC-SCI family establishing: Use of AI in research writing, coding, data analysis, and literature review must be transparently disclosed where it materially affects content, interpretation, or conclusions.
TECH-SCIS-0003
Included
Scientific publishing systems must prohibit undisclosed AI-generated manuscripts
Scientific journals must prohibit the submission of manuscripts that were AI-generated without disclosure. Passing AI writing off as original human research is prohibited.
Scientific publishing systems must prohibit undisclosed AI-generated manuscripts, fabricated references, or synthetic peer review.
Core rule in the TEC-SCI family establishing: Scientific publishing systems must prohibit undisclosed AI-generated manuscripts, fabricated references, or synthetic peer review.
TECH-SCIS-0004
Included
Research institutions must maintain policies for provenance, disclosure
Research institutions must have clear policies requiring that the origin and use of AI tools in research is documented and disclosed throughout the research process.
Research institutions must maintain policies for provenance, disclosure, verification, and accountability in AI-assisted scholarship.
Core rule in the TEC-SCI family establishing: Research institutions must maintain policies for provenance, disclosure, verification, and accountability in AI-assisted scholarship.
TECH-SCIS-0005
Included
AI may assist scientific analysis, but
AI may assist scientists with analysis, but it cannot substitute for human expertise, verification, and accountability. Scientists remain responsible for the validity of their conclusions.
AI may assist scientific analysis, but may not be treated as an expert authority independent of reproducible methods and accountable human interpretation.
Core rule in the TEC-SCI family establishing: AI may assist scientific analysis, but may not be treated as an expert authority independent of reproducible methods and accountable human interpretation.
TECH-SCIS-0006
Included
Funding agencies and journals should require reproducibility
Funding agencies and scientific journals should require that AI-assisted research be reproducible — meaning others can independently verify the results using the same methods.
Funding agencies and journals should require reproducibility and disclosure standards for AI-assisted research workflows.
Core rule in the TEC-SCI family establishing: Funding agencies and journals should require reproducibility and disclosure standards for AI-assisted research workflows.
TECH-SCIS-0007
Included
Scientific databases and search systems must guard against
Scientific databases and search engines must protect against AI-generated fake papers and citations polluting the scientific record and misleading researchers.
Scientific databases and search systems must guard against AI-generated citation laundering, fake consensus, or synthetic evidence pollution.
Core rule in the TEC-SCI family establishing: Scientific databases and search systems must guard against AI-generated citation laundering, fake consensus, or synthetic evidence pollution.
TECH-SCIS-0008
Included
Publicly funded research using AI should prioritize openness
Research funded by the public and conducted using AI should prioritize openness — sharing data, methods, and findings so others can build on and verify the work.
Publicly funded research using AI should prioritize openness, reproducibility, and public-interest access where compatible with safety and privacy.
Core rule in the TEC-SCI family establishing: Publicly funded research using AI should prioritize openness, reproducibility, and public-interest access where compatible with safety and privacy.
TECH-SCIS-0009
Included
Institutions must maintain misconduct procedures specifically addressing AI-enabled
Research institutions must have specific procedures for investigating and addressing scientific misconduct that was enabled or committed using AI tools.
Institutions must maintain misconduct procedures specifically addressing AI-enabled falsification, fabrication, image manipulation, and authorship abuse.
Core rule in the TEC-SCI family establishing: Institutions must maintain misconduct procedures specifically addressing AI-enabled falsification, fabrication, image manipulation, and authorship abuse.
TECH-SCIS-0010
Included
AI systems may not be used to generate
AI cannot be used to generate regulatory, medical, or scientific evidence in place of validated empirical research. Synthetic AI output is not a substitute for real empirical findings.
AI systems may not be used to generate regulatory, medical, or scientific evidence in place of validated empirical research.
Core rule in the TEC-SCI family establishing: AI systems may not be used to generate regulatory, medical, or scientific evidence in place of validated empirical research.
TECH-SYSR-0001
Included
The patent system must promote genuine innovation, public benefit
The patent system must be structured to promote genuine innovation that benefits the public — not to reward patent accumulation, block competition, or extract royalties without adding value.
The patent system must promote genuine innovation, public benefit, and technological progress and may not be used primarily for extraction, litigation abuse, or anti-competitive control.
Core rule in the PAT-SYS family establishing: The patent system must promote genuine innovation, public benefit, and technological progress and may not be used primarily for extraction, litigation abuse, or anti-competitive control.
TECH-ANTS-0001
Included
Patent rights may not be used to justify
Patent rights cannot be used to block innovation or justify anti-competitive behavior in technology markets. Patents protect genuine invention, not market control.
Patent rights may not be used to justify anti-competitive conduct, market exclusion, or monopolistic control.
Core rule in the PAT-ANT family establishing: Patent rights may not be used to justify anti-competitive conduct, market exclusion, or monopolistic control.
TECH-ANTS-0002
Included
Patent enforcement strategies that function as anti-competitive tools
Using patents as a tool to shut out competitors or prevent innovation is prohibited. Patent enforcement must be tied to legitimate protection of inventions, not business strategy.
Patent enforcement strategies that function as anti-competitive tools are subject to antitrust review and remedies including licensing, limitation, or invalidation.
Core rule in the PAT-ANT family establishing: Patent enforcement strategies that function as anti-competitive tools are subject to antitrust review and remedies including licensing, limitation, or invalidation.
TECH-LICS-0001
Included
Compulsory licensing must be available where patent control
When a patent is used to block access to something essential — like a medicine, a software standard, or critical infrastructure — the government must be able to require a compulsory license, allowing others to use the invention in exchange for fair compensation.
Compulsory licensing must be available where patent control restricts access to essential goods, services, or technologies.
Core rule in the PAT-LIC family establishing: Compulsory licensing must be available where patent control restricts access to essential goods, services, or technologies.
TECH-LICS-0002
Included
Essential sectors including healthcare, agriculture, infrastructure, and critical
In vital sectors like healthcare, agriculture, and critical infrastructure, patent rights cannot be used to prevent access to essential tools, knowledge, or technology. Public need takes precedence.
Essential sectors including healthcare, agriculture, infrastructure, and critical technology may require licensing to ensure public access and competition.
Core rule in the PAT-LIC family establishing: Essential sectors including healthcare, agriculture, infrastructure, and critical technology may require licensing to ensure public access and competition.
TECH-RPRS-0001
Included
Repair, maintenance, and restoration of legally owned products
The right to repair, maintain, and restore products you legally own must be protected. Once a product has been lawfully sold, patent claims cannot be used to block its independent repair.
Repair, maintenance, and restoration of legally owned products may not be restricted by patent claims once the product has been lawfully sold.
Core rule in the PAT-RPR family establishing: Repair, maintenance, and restoration of legally owned products may not be restricted by patent claims once the product has been lawfully sold.
TECH-RPRS-0002
Included
Replacement parts, consumables, and repair processes
Replacement parts, repair manuals, diagnostic tools, and repair processes for products you own must be available at a reasonable cost. Manufacturers cannot monopolize the repair market.
Replacement parts, consumables, and repair processes must not be monopolized through patent enforcement where doing so prevents practical repair.
Core rule in the PAT-RPR family establishing: Replacement parts, consumables, and repair processes must not be monopolized through patent enforcement where doing so prevents practical repair.
TECH-SCPS-0001
Included
Patent duration and scope must be calibrated
The length and scope of patent protection must be set to encourage real innovation and serve the public interest — not to maximize corporate profits or create indefinite monopolies.
Patent duration and scope must be calibrated to balance innovation incentives with public access and may be reduced in rapidly evolving technological fields.
Core rule in the PAT-SCP family establishing: Patent duration and scope must be calibrated to balance innovation incentives with public access and may be reduced in rapidly evolving technological fields.
TECH-SCPS-0002
Included
Extensions, evergreening, and minor modification strategies that artificially
Strategies for artificially extending patent protection — like making minor modifications to renew patents or "evergreening" — that do not represent genuine innovation are prohibited.
Extensions, evergreening, and minor modification strategies that artificially prolong patent control without meaningful innovation are prohibited.
Core rule in the PAT-SCP family establishing: Extensions, evergreening, and minor modification strategies that artificially prolong patent control without meaningful innovation are prohibited.
TECH-SFTS-0001
Included
Abstract software concepts, algorithms, and general computational methods
General ideas, abstract mathematical concepts, and basic computational methods cannot be patented. Patent protection applies to specific, concrete implementations — not to the underlying concepts.
Abstract software concepts, algorithms, and general computational methods that do not represent specific, novel, and non-obvious technical implementations are not patentable.
Core rule in the PAT-SFT family establishing: Abstract software concepts, algorithms, and general computational methods that do not represent specific, novel, and non-obvious technical implementations are not patentable.
TECH-SFTS-0002
Included
Software patents must be narrowly defined, technically specific
Software patents must be narrowly written and technically specific. Overly broad patents that cover wide categories of software functions are prohibited.
Software patents must be narrowly defined, technically specific, and may not cover broad functional concepts or general-purpose logic.
Core rule in the PAT-SFT family establishing: Software patents must be narrowly defined, technically specific, and may not cover broad functional concepts or general-purpose logic.
TECH-SFTS-0003
Included
Patent claims that would prevent independent implementation
Patent claims that would prevent independent developers from implementing a standard technical function are invalid. Monopolizing common technical approaches harms innovation.
Patent claims that would prevent independent implementation of general computational ideas, protocols, or standard techniques are prohibited.
Core rule in the PAT-SFT family establishing: Patent claims that would prevent independent implementation of general computational ideas, protocols, or standard techniques are prohibited.
TECH-THKS-0001
Included
Accumulation of large patent portfolios for the primary
Accumulating large numbers of patents primarily to create licensing revenue or block competitors — rather than to protect actual innovations — is prohibited.
Accumulation of large patent portfolios for the primary purpose of blocking competition or extracting licensing fees without corresponding innovation is prohibited.
Core rule in the PAT-THK family establishing: Accumulation of large patent portfolios for the primary purpose of blocking competition or extracting licensing fees without corresponding innovation is prohibited.
TECH-THKS-0002
Included
Patent thickets that create unreasonable barriers to entry
When overlapping patents create unreasonable barriers to entering a market or building new products, those patent thickets must be addressed through regulatory action.
Patent thickets that create unreasonable barriers to entry, interoperability, or independent innovation must be subject to review and corrective action.
Core rule in the PAT-THK family establishing: Patent thickets that create unreasonable barriers to entry, interoperability, or independent innovation must be subject to review and corrective action.
TECH-THKS-0003
Included
Regulatory bodies must have authority to limit, unwind
Regulatory bodies must have the authority to limit, unwind, or prevent the formation of patent thickets that stifle innovation, block competition, or harm the public interest.
Regulatory bodies must have authority to limit, unwind, or license patent portfolios where they function as anti-competitive barriers.
Core rule in the PAT-THK family establishing: Regulatory bodies must have authority to limit, unwind, or license patent portfolios where they function as anti-competitive barriers.
TECH-TRDE-0001
Included
Patent enforcement may not be conducted
Patent enforcement cannot be used as a weapon against companies that are simply trying to build or sell products. Frivolous or bad-faith patent claims are prohibited.
Patent enforcement may not be conducted in a manner that constitutes abusive litigation, including mass claims, bad-faith assertions, or targeting of entities without meaningful opportunity to defend.
Core rule in the PAT-TR family establishing: Patent enforcement may not be conducted in a manner that constitutes abusive litigation, including mass claims, bad-faith assertions, or targeting of entities without meaningful opportunity to defend.
TECH-TRDE-0002
Included
Entities that do not produce, license in good
Entities that do not make or sell products themselves — sometimes called patent trolls — cannot use patents as a tool to extract payments from businesses through the threat of litigation.
Entities that do not produce, license in good faith, or meaningfully commercialize patented inventions are subject to heightened scrutiny and restrictions on enforcement.
Core rule in the PAT-TR family establishing: Entities that do not produce, license in good faith, or meaningfully commercialize patented inventions are subject to heightened scrutiny and restrictions on enforcement.
TECH-TRDE-0003
Included
Courts must have authority to require fee-shifting, sanctions
Courts must have the authority to require the losing party in frivolous patent cases to pay the winner's legal fees, and to impose sanctions on patent abuse.
Courts must have authority to require fee-shifting, sanctions, or dismissal in cases of abusive or bad-faith patent litigation.
Core rule in the PAT-TR family establishing: Courts must have authority to require fee-shifting, sanctions, or dismissal in cases of abusive or bad-faith patent litigation.
TECH-TRDE-0004
Included
Patent-assertion entities may not use shell structures
Patent assertion companies cannot use shell structures to hide who controls a patent in order to make it harder for defendants to identify who they are really dealing with.
Patent-assertion entities may not use shell structures or ownership fragmentation to evade liability or obscure control of patent enforcement activity.
Core rule in the PAT-TR family establishing: Patent-assertion entities may not use shell structures or ownership fragmentation to evade liability or obscure control of patent enforcement activity.
TECH-TRAN-0001
Included
Patent ownership, licensing structures, and enforcement activity
Who owns a patent, who is licensed to use it, and who is enforcing it must be publicly disclosed. Hidden patent ownership structures undermine accountability.
Patent ownership, licensing structures, and enforcement activity must be transparent and publicly traceable.
Core rule in the PAT-TRN family establishing: Patent ownership, licensing structures, and enforcement activity must be transparent and publicly traceable.
TECH-TRAN-0002
Included
Shell ownership structures used to obscure patent control
Using shell companies or obscure ownership structures to hide who actually controls a patent is prohibited. Transparency in patent ownership is required.
Shell ownership structures used to obscure patent control or enforcement must be disclosed and regulated.
Core rule in the PAT-TRN family establishing: Shell ownership structures used to obscure patent control or enforcement must be disclosed and regulated.
TECH-INTL-0001
Included
Patent rights may not be used to block
Patent rights cannot be used to block interoperability, data portability, or the ability of competing systems to work together. Patents are not a tool for locking people in.
Patent rights may not be used to block interoperability, compatibility, reverse engineering, or lawful modification of products.
Core rule in the PAT-INT family establishing: Patent rights may not be used to block interoperability, compatibility, reverse engineering, or lawful modification of products.
TECH-INTL-0002
Included
Circumvention of technical protections for the purpose
Bypassing technical protections, such as digital rights management, for the purpose of repair, interoperability, security research, or lawful use is permitted and protected from liability under patent or related law.
Circumvention of technical protections for the purpose of repair, interoperability, security research, or lawful use is permitted and may not be prohibited by patent or related law.
Core rule in the PAT-INT family establishing: Circumvention of technical protections for the purpose of repair, interoperability, security research, or lawful use is permitted and may not be prohibited by patent or related law.
TECH-INTL-0003
Included
Patent enforcement may not be used to restrict
Patent law cannot be used to prevent people from using, accessing, or building compatible systems or products. Interoperability is a protected right.
Patent enforcement may not be used to restrict development of compatible parts, tools, or independent implementations.
Core rule in the PAT-INT family establishing: Patent enforcement may not be used to restrict development of compatible parts, tools, or independent implementations.
TECH-HARS-0001
Proposed
Platforms must implement effective systems to prevent and respond to targeted harassment campaigns
Large social media platforms must actively detect and stop coordinated harassment campaigns, non-consensual intimate images, and other targeted abuse. Effective, consistent enforcement is required.
Digital platforms with significant user bases must implement effective, consistently enforced systems to detect, prevent, and remediate targeted harassment campaigns, coordinated abuse, non-consensual intimate image distribution, and sustained hate-based attacks against individuals, with particular attention to harms disproportionately experienced by women, people of color, LGBTQ+ individuals, and other marginalized communities.
Online harassment is not a marginal phenomenon — it is a systematic mechanism of silencing. Research from PEN America, the Anti-Defamation League, and the Pew Research Center documents that women, people of color, and LGBTQ+ individuals face harassment online at significantly higher rates and with more severe impacts on their participation in public life, employment, and mental health. Platforms have often treated harassment as a user behavior problem rather than a platform design and enforcement responsibility. The design choices of platforms — algorithmic amplification of outrage, inadequate reporting tools, inconsistent enforcement, easy account creation for ban evasion — enable harassment at industrial scale. This rule requires effective systems, not symbolic policies, with accountability for systematic non-enforcement.
TECH-HARS-0002
Proposed
Platform algorithmic systems may not amplify, recommend, or distribute harassing or hate-based content at scale
Platform algorithms cannot amplify or recommend harassing or hate-based content. Recommendation systems that spread abuse at scale are prohibited.
Platform algorithmic recommendation, amplification, and distribution systems must not surface, promote, or spread harassing, threatening, or hate-based content, and must be subject to regular independent audits to detect and correct amplification of such content, with findings published in a format accessible to researchers and the public.
The intersection of harassment and algorithmic amplification is a distinct harm beyond individual user behavior. When platform recommendation systems amplify harassment — serving pile-on content to thousands of users, promoting posts that target individuals for coordinated abuse, or recommending accounts that specialize in targeted attacks — the platform's technology becomes an active participant in the harm. This rule addresses the algorithmic dimension of harassment: platforms cannot be passive conduits when their own AI systems actively distribute and amplify abusive content. Independent audits are required because internal enforcement is subject to commercial incentives that can conflict with user safety, and because self-reported compliance metrics are unreliable without external verification.
TECH-HARS-0003
Proposed
Victims of coordinated online harassment have enforceable rights to platform response and escalated review
People who are targets of coordinated online harassment have enforceable rights — they can require platforms to respond and conduct an escalated review of their situation.
Individuals subject to coordinated harassment campaigns must have access to expedited platform review, meaningful escalation procedures, effective tools to limit contact from abusers, and documented enforcement of platform rules against abusers, with platforms bearing accountability for systematic non-enforcement against documented coordinated campaigns.
Harassment victims currently face a fundamental accountability gap: platforms have terms of service but enforcement is discretionary, inconsistent, and often inadequate when campaigns are large-scale or well-organized. Bad actors have learned to exploit reporting systems by flooding them with counter-reports, making mass reporting a harassment tool in itself. This rule creates enforceable rights — not merely aspirational platform policies — by establishing that victims of documented coordinated attacks have procedural rights to response and that systematic non-enforcement triggers platform accountability. This is particularly important for journalists, activists, public figures, and others targeted by organized harassment as a tactic of silencing or intimidation.
TECH-PUBL-0001
Proposed
The federal government must invest in publicly accessible AI infrastructure and open-source foundational models
The federal government must build and maintain publicly accessible AI models, computing resources, and training data so that the benefits of AI are available to everyone, not just large corporations.
The federal government must fund, develop, and maintain publicly accessible AI infrastructure — including foundational models, compute resources, and training datasets — to ensure that AI capability is not exclusively controlled by a small number of private corporations, that public-sector applications are not dependent on proprietary vendor systems, and that AI research is broadly accessible to universities, small businesses, civil society organizations, and the public.
As of 2025, foundational AI capability is concentrated in fewer than a half-dozen large corporations controlling proprietary models, computing infrastructure, and training data. This concentration is not a natural market outcome — it reflects first-mover advantages, network effects, and the astronomical capital requirements for large-scale AI development. Allowing all consequential AI infrastructure to remain privately controlled creates a profound public dependency: government agencies, hospitals, schools, and democratic institutions become permanently reliant on private vendors whose interests may not align with public-interest use. Public investment in AI infrastructure — analogous to public investment in the internet, GPS, or basic research — creates a commons that benefits the entire economy and ensures that AI capability remains broadly accessible rather than exclusively controlled by a few firms.
TECH-PUBL-0002
Proposed
AI systems used in core government services must use auditable, non-proprietary systems wherever feasible
When government agencies use AI systems in core public services, those systems should be auditable and not locked up in proprietary corporate tools wherever that is feasible.
AI systems deployed by federal agencies in core government services — benefits administration, regulatory enforcement, public health, law enforcement, immigration — must use open-source or otherwise independently auditable systems wherever technically and operationally feasible, and must not create long-term proprietary vendor lock-in that prevents independent audit, replacement, or public oversight.
Proprietary AI systems in government services create systemic accountability problems: the government cannot independently audit systems it does not control, cannot modify systems that fail or discriminate, and cannot ensure continuity when vendor contracts expire or companies fail. The principle that government AI must remain publicly auditable is not about technology preferences — it is about democratic accountability. When a government agency deploys an AI system that makes decisions affecting millions of people, the public has a right to independent access to how that system works. Vendor claims of trade secrecy cannot override this accountability requirement for core government functions. This rule does not prohibit use of commercial AI for non-critical applications, but requires that core government services involving rights, benefits, or enforcement operate on auditable systems.
TECH-PUBL-0003
Proposed
Productivity gains from AI deployment in publicly funded services must be shared with workers and the public
When AI is deployed in a publicly funded service, the productivity gains it generates must be shared with the workers and the public — not captured entirely as corporate profit.
Where AI deployment in publicly funded services, public-private partnerships, or critical infrastructure creates significant productivity gains, those gains must be measured, reported, and distributed — through wage increases, expanded service capacity, reduced costs to users, or return to the public fisc — rather than captured exclusively as private corporate profit while public workers are displaced and service quality stagnates.
AI deployment in sectors receiving public funding or monopoly benefits (utilities, transit, healthcare) creates a specific equity challenge: the technology is often funded through public research, deployed in publicly regulated contexts, and generates productivity gains that are then captured as private profit while workers are displaced and services are not improved. This rule addresses the distributional dimension of AI's productivity impact in publicly funded contexts. It does not prevent AI deployment or require equal distribution of all gains — it requires that publicly funded or publicly subsidized AI deployment produce measurable public benefit, not merely private profit extraction. This is particularly important in healthcare, education, and government services, where AI is being used to reduce labor costs without commensurate improvements in service quality or accessibility.
TECH-MKTS-0001
Proposed
Shared algorithmic systems used to coordinate prices across competing entities are prohibited as per se antitrust violations
Using shared AI tools, pricing algorithms, or common software to coordinate prices or suppress wages among competing companies is illegal — even if no explicit agreement was made. Algorithmic price-fixing harms consumers and workers.
The use of shared algorithmic systems, common pricing software, or AI-driven pricing models that coordinate prices or suppress competition among nominally competing entities in housing, labor markets, consumer goods, or services is prohibited as a per se antitrust violation, regardless of whether explicit communication occurred between competitors, when the algorithm's design uses competitor data or recommendations as inputs to set prices.
Traditional antitrust law requires proof of agreement or communication between competitors to establish price-fixing. Algorithmic pricing creates a novel mechanism: competing landlords, employers, or retailers can each independently use a shared AI pricing system that produces coordinated pricing outcomes — without any direct communication — by using competitors' data as inputs. The Department of Justice and multiple private plaintiffs sued RealPage for exactly this mechanism in the rental housing market: its revenue management software allegedly coordinated rents across thousands of landlords, raising prices above competitive levels in major metropolitan areas. This rule closes the algorithmic price-fixing loophole by treating coordination through shared algorithmic systems as a per se antitrust violation when the system's design inherently produces coordinated outcomes using competitor data.
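The coordination-without-communication mechanism described above can be illustrated with a deliberately simplified sketch. The pricing rule below is hypothetical, not any vendor's actual algorithm; it shows only how independent sellers, each feeding competitor prices into the same shared recommendation engine, can ratchet prices above the competitive level without ever contacting one another.

```python
# Toy illustration of algorithmic price coordination (hypothetical rule,
# not any real vendor's software). Each landlord independently queries the
# same shared engine, which uses all participants' prices as inputs.

def shared_pricing_engine(own_cost: float, competitor_prices: list[float]) -> float:
    """Recommend a price anchored to the pool of competitor prices
    rather than to the seller's own costs."""
    pool_average = sum(competitor_prices) / len(competitor_prices)
    # Never below cost plus a 10% margin; otherwise 5% above the pool average.
    return max(own_cost * 1.10, pool_average * 1.05)

def simulate(costs: list[float], rounds: int) -> list[float]:
    # Start at independent, cost-based competitive pricing.
    prices = [c * 1.10 for c in costs]
    for _ in range(rounds):
        # Simultaneous update: each seller sees the others' previous prices.
        prices = [
            shared_pricing_engine(costs[i], prices[:i] + prices[i + 1:])
            for i in range(len(prices))
        ]
    return prices

competitive = [c * 1.10 for c in [1000.0, 1000.0, 1000.0]]
coordinated = simulate([1000.0, 1000.0, 1000.0], rounds=10)
# Prices ratchet upward each round and stay identical across "competitors",
# with no agreement and no communication between the sellers.
print(round(competitive[0], 2), round(coordinated[0], 2))
```

No seller deviates, because each is following the same recommendation; this is the per se coordination pattern the rule targets.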
TECH-MKTS-0002
Proposed
AI pricing and wage-setting systems in markets affecting housing, employment, or essential goods must be subject to antitrust review
AI tools used to set prices or wages in markets that affect housing, employment, or essential goods must be reviewed for anti-competitive effects. Market fairness requires oversight of automated pricing.
AI pricing, wage-setting, and terms-setting systems in markets affecting housing affordability, labor markets, or essential consumer goods must be subject to mandatory pre-deployment antitrust review, ongoing monitoring for anti-competitive effects, and public disclosure of pricing methodology sufficient for independent assessment of competitive impacts.
The scope of AI-driven price coordination extends beyond the RealPage housing model into labor markets (where wage-setting algorithms can coordinate employer compensation), airline and hospitality pricing, grocery retail, and consumer goods. Unlike simple dynamic pricing (which adjusts prices based on supply and demand for a single seller's goods), market-coordination algorithms use data from multiple competing entities to set prices that suppress competition across entire markets. The result is a form of cartelization without the explicit meetings and communications traditionally required for antitrust liability. Market-facing AI systems in essential markets — housing, employment, food, utilities — require a distinct accountability framework from AI in internal operations, because their anti-competitive effects are felt across whole economies, not just within a single firm.
TECH-FMDS-0001
Proposed
Developers of foundation models above a defined capability threshold must register with a federal AI oversight body, disclose training data sources, and publish safety evaluation results before deployment
Companies that build the most powerful AI models must register with a federal oversight body, disclose where their training data came from, and publish the results of safety tests before releasing the model.
Developers of foundation AI models — large-scale models trained on broad data that can be adapted to multiple downstream tasks — must, before public deployment, register with a designated federal AI oversight body; disclose the categories and sources of training data used, including whether copyrighted materials, personal data, or data subject to legal restrictions were used; publish the results of pre-deployment safety evaluations, red-team testing, and capability assessments; and report any post-deployment safety incidents, emergent capabilities, or misuse events to the oversight body within defined timeframes.
Foundation models — including large language models, multimodal AI systems, and code-generation systems — represent a qualitatively different category of AI risk from narrow AI applications. Their general-purpose capability means they can be adapted to a wide range of applications, including harmful ones, and their behavior is difficult to fully characterize before deployment. The European Union's AI Act (2024) established tiered obligations for general-purpose AI models above capability thresholds, including transparency, documentation, and safety testing requirements. The United States has no equivalent framework: disclosure and safety evaluation are entirely voluntary. Pre-deployment registration and transparency requirements — similar to pre-market notification requirements for drugs and medical devices — allow regulatory oversight, enable independent safety research, and create accountability for the developers who profit from deploying systems that affect millions of people. The capability threshold approach (targeting models above a certain size, training compute, or performance benchmark) focuses compliance obligations where risks are highest while avoiding overburdening smaller AI developers.
TECH-FMDS-0002
Proposed
Foundation model developers must disclose whether training data included copyrighted materials, personal data, or data obtained without consent, and must maintain mechanisms for data subjects to request removal
AI developers must disclose whether their models were trained on copyrighted material, personal data, or data taken without consent. People have the right to request that their data be removed.
Developers of foundation AI models must disclose, in publicly accessible documentation, whether training data included copyrighted works, personal data subject to privacy law, data scraped from platforms without authorization, or data obtained from individuals without informed consent; must maintain technically feasible mechanisms for copyright holders and data subjects to request removal of their data from training datasets or model outputs; and must demonstrate compliance with applicable copyright, privacy, and data protection law before receiving any federal AI research funding or government procurement contracts.
Foundation models are trained on massive datasets that frequently include copyrighted books, articles, images, and code; personal data scraped from social media and public websites; and data obtained without explicit consent from the individuals whose data and creative work are incorporated. The legal status of AI training on copyrighted materials is contested, with multiple ongoing lawsuits by authors, visual artists, and news organizations claiming that training without license constitutes copyright infringement. Privacy law claims also arise from scraping of personal data. This rule does not resolve the copyright question as a matter of substantive law — that is for courts and Congress to determine — but establishes transparency and accountability requirements: developers must know and disclose what data they used, affected parties must have enforcement mechanisms, and federal procurement and funding creates leverage to require compliance. The disclosure and opt-out mechanisms align with the principles of data subject rights established in state privacy laws (California, Colorado, Virginia) and the EU GDPR.
TECH-FMDS-0003
Proposed
High-capability foundation models must undergo mandatory red-team safety evaluation before deployment, with results disclosed to federal oversight bodies and independently reviewed
The most capable AI models must be adversarially tested — meaning experts try to find dangerous or harmful behaviors — before they can be released, and the results must be disclosed to federal oversight and independent reviewers.
Foundation AI models designated as high-capability — based on performance benchmarks, training compute, or demonstrated capabilities in domains affecting public safety, critical infrastructure, biological and chemical knowledge, or mass persuasion — must undergo mandatory red-team adversarial safety evaluation before deployment; must submit evaluation results, model documentation, and capability assessments to a designated federal AI safety oversight body; and must not be deployed until the federal body has confirmed receipt and published a summary of the disclosed safety evaluation.
Red-team testing — adversarial evaluation designed to find dangerous capabilities, misuse pathways, or safety failures — is the primary tool for identifying AI risks before deployment. Major AI developers including OpenAI, Anthropic, Google DeepMind, and Microsoft have conducted voluntary red-team evaluations and some have shared results publicly or with the UK AI Safety Institute. However, voluntary disclosure is structurally inadequate: it creates incentives to minimize or not pursue testing that would find embarrassing results, allows developers to decide what to disclose and what to withhold, and provides no mechanism for independent verification. Mandatory disclosure to a federal oversight body — modeled on the UK AI Safety Institute framework — creates an institutional record, enables cross-model comparison and pattern recognition, and makes it more difficult to conceal known risks. This rule does not give the government pre-deployment veto authority over AI systems (a separate policy question) — it requires transparency as a precondition for deployment.
TECH-FMDS-0004
Proposed
Operators deploying foundation models in high-risk applications must conduct application-level risk assessments and maintain incident reporting systems
Companies that deploy powerful AI models in high-risk applications must assess the specific risks of their application and report safety incidents to regulators.
Entities deploying foundation AI models in high-risk applications — including healthcare decision support, legal advice, educational assessment, financial services, law enforcement, or government benefits administration — must conduct application-specific risk assessments identifying potential harms in the deployment context; implement monitoring and incident detection systems; report significant harms, failures, or misuse events to the federal AI oversight body and to affected persons; and maintain documentation sufficient for post-incident investigation and liability determination.
Foundation model developers and downstream operators have distinct but related obligations. A model developer cannot anticipate every downstream deployment context; an operator deploying a general-purpose model in a specific high-risk application must assess the risks specific to that context. The EU AI Act's tiered system — with obligations on both providers (developers) and deployers (operators) — reflects this dual accountability structure. Current U.S. law provides essentially no requirements for either. Application-level risk assessment, monitoring, and incident reporting for high-risk deployments create the accountability infrastructure that enables regulatory oversight, independent auditing, and meaningful liability when AI systems cause harm. Incident reporting is particularly important: it creates a database of actual harms that enables pattern recognition, identifies systemic risks that individual incidents might obscure, and provides the evidentiary foundation for regulatory action and civil litigation.
TECH-FMDS-0005
Proposed
Foundation models above 10²⁵ FLOP training compute must register with NIST; training data provenance, known limitations, and red-team results must be publicly disclosed
AI models above a defined computational scale must register with NIST — the National Institute of Standards and Technology. Training data sources, known flaws, and adversarial safety test results must be publicly disclosed.
Developers of foundation AI models trained using compute above 10²⁵ floating-point operations must: register the model with the National Institute of Standards and Technology (NIST) before any public release or commercial deployment; publicly disclose the provenance, categories, and sources of training data; publicly disclose all known significant limitations, failure modes, and evaluation results; make the results of red-team adversarial safety testing publicly available in a form sufficient for independent researchers to assess safety; and provide NIST with API access or equivalent technical access sufficient for independent verification of disclosed capabilities and limitations — with civil penalties of not less than $1 million per day for deploying a non-registered covered model.
TECH-FMDS-0001 establishes pre-deployment registration and safety evaluation with a designated federal oversight body; this card specifies that the oversight body must be NIST (which has the technical mandate and existing AI Risk Management Framework infrastructure), establishes the 10²⁵ FLOP compute threshold as the coverage criterion (the threshold the EU AI Act uses to presume systemic risk in general-purpose models; the Biden Executive Order on Safe, Secure, and Trustworthy AI set a related reporting threshold at 10²⁶ operations), and critically requires that red-team results and known limitations be made public rather than merely submitted to the federal body. Public disclosure is essential: independent researchers, civil society, and adversely affected communities cannot assess or respond to safety risks they cannot see. The EU AI Act (2024) establishes transparency obligations for general-purpose AI models and requires red-team evaluations; the Biden EO required similar reporting to the government. This card goes further by requiring public disclosure, consistent with a democratic accountability framework. NIST's AI Risk Management Framework and its Generative AI Profile provide the technical infrastructure for this registry.
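The scale of the 10²⁵ FLOP criterion can be sanity-checked with the widely used scaling approximation that transformer training compute is roughly 6 × parameters × training tokens. The model sizes below are illustrative, not figures for any particular system.

```python
# Back-of-envelope check against the 10^25 FLOP registration threshold,
# using the common rule of thumb: training compute ≈ 6 × parameters × tokens.
# Model sizes here are illustrative, not claims about any real system.

THRESHOLD_FLOP = 1e25

def training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * training_tokens

# A 70B-parameter model trained on 2 trillion tokens: ~8.4e23 FLOP.
below = training_flop(70e9, 2e12)
# A 1.8T-parameter model trained on 13 trillion tokens: ~1.4e26 FLOP.
above = training_flop(1.8e12, 13e12)

print(below < THRESHOLD_FLOP, above >= THRESHOLD_FLOP)
```

The threshold thus targets only the largest training runs; the bulk of smaller models and fine-tunes fall well below it, consistent with the card's intent to avoid overburdening smaller developers.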
TECH-NEUS-0001
Proposed
Neural data — data directly or indirectly derived from brain activity — must be classified as the most sensitive category of personal data and may not be collected, processed, or shared without explicit, specific, and revocable informed consent
Data collected from your brain activity — directly or indirectly — is the most sensitive kind of personal information. It cannot be collected, processed, or shared without your clear, specific, and revocable consent.
Data directly or indirectly derived from neural, brain-computer, or neurotechnology interfaces — including electroencephalogram (EEG) data, functional MRI data, neural spike data from implanted devices, and any data from wearable or implanted neurotechnology capturing brain electrical activity or neural signals — must be classified as the most sensitive category of personal data under federal law; may not be collected, processed, retained, or shared without explicit, specific, and revocable informed consent for each defined use; and may not be sold, transferred, used for advertising, or used to infer emotional states, cognitive conditions, or behavioral profiles without separate, specific authorization for each such use.
Neural data represents a categorically different class of personal information from behavioral data, location data, or even genetic data: it is a real-time window into cognition, emotion, intention, and mental state that has never before been available to external observers. Brain-computer interface technology — from consumer EEG headsets (used in gaming, meditation apps, and attention training) to clinical neural implants (for mobility restoration, epilepsy treatment, and communication) to emerging industrial applications (for attention monitoring in high-risk occupations) — is rapidly expanding the scope of neural data collection. The potential for misuse is profound: employers could monitor worker attention and emotional states; insurers could assess mental health conditions without clinical evaluation; governments could attempt to use neural data for interrogation or credibility assessment; marketers could profile consumer psychology with unprecedented precision. Several states, including Colorado, Minnesota, and Washington, have enacted laws protecting neural data, recognizing it as requiring the highest level of protection. Federal law must establish this protection uniformly.
TECH-NEUS-0002
Proposed
Prohibit the use of neural data for employment decisions, law enforcement purposes, insurance underwriting, or political profiling
Your neural data — information derived from your brain — cannot be used by employers, law enforcement, insurers, or political campaigns. Using brain data to make decisions about people is prohibited.
Neural data may not be used, directly or indirectly, as a factor in employment hiring, performance evaluation, or termination decisions; may not be collected, accessed, or used by law enforcement agencies without a warrant issued by a judge based on probable cause specific to the individual; may not be used by insurance companies for underwriting, pricing, or claims decisions; and may not be used to build political profiles, infer political preferences, or target political messaging without explicit, knowing consent.
The specific prohibited uses address the highest-risk applications of neural data. Employer use of neural data for productivity monitoring is already commercially available through consumer EEG headsets that claim to assess worker focus and engagement — but these assessments are scientifically dubious and constitute an unprecedented intrusion into cognitive privacy, directly threatening the fundamental principle that what happens in a worker's mind is not the employer's business. Law enforcement use of neural data for interrogation or credibility assessment — a longstanding aspiration of both law enforcement and intelligence agencies — raises direct Fifth Amendment concerns about compelled self-incrimination and threatens the cognitive liberty that underlies all other freedoms. Insurance underwriting using neural data could create new forms of discrimination against people with mental health conditions, neurological differences, or simply different cognitive styles. The prohibition on political profiling addresses the most extreme potential for authoritarian misuse of technology that could identify and target dissidents based on cognitive patterns.
TECH-NEUS-0003
Proposed
Brain-computer interface devices must meet pre-market safety and informed consent standards; operators may not modify device functionality or data collection scope without affirmative re-consent
Brain-computer interface devices must pass safety and consent reviews before going to market. Companies cannot change what data a device collects or add new features without your express re-consent.
Brain-computer interface (BCI) devices — whether implanted, worn, or used in clinical or consumer contexts — must undergo pre-market safety and efficacy review before commercial deployment; must provide comprehensive, plain-language informed consent disclosure of all data collected, transmitted, stored, and shared; must obtain specific, affirmative re-consent before any material change in device functionality, data collection scope, data sharing practices, or terms of service; and may not be deactivated, rendered non-functional, or have core functions withheld as a means of commercial leverage over users who have had devices implanted or are medically dependent on them.
BCI technology is advancing rapidly. Neuralink, Synchron, Blackrock Neurotech, and other companies have received FDA approval for investigational neural implants in human subjects, and clinical applications for mobility restoration and communication in paralyzed patients are already in use. Consumer-grade EEG headsets are commercially available. As these technologies advance toward broader use, the governance framework for data collection, consent, and device rights becomes critically important. The consent requirement for device changes addresses a specific risk: users of implanted medical devices or devices they depend on for communication or mobility are in a uniquely vulnerable position — they cannot simply stop using the device without serious health consequences, giving device manufacturers extraordinary leverage to change terms unilaterally. This rule treats neural device operators as fiduciaries of a uniquely intimate relationship, holding them to the highest standards of consent and accountability.
TECH-PRTS-0001
Proposed
Users have the right to download and transfer all their data from any digital platform in a machine-readable, portable format at any time
You have the right to download all your personal data from any digital platform in a usable format, at any time, so you can take it with you when you switch services.
Every user of a digital platform, social network, communications service, or connected device must have the right to download all data the platform holds about them — including posts, messages, connections, activity history, preferences, and any inferred profiles — in a machine-readable, standard format that can be used on other platforms or services; and platforms may not impose technical barriers, unreasonable delays, or fees on data export requests, and must honor requests within 30 days.
Data portability is the technical foundation of meaningful platform competition. Today, users' social graphs, posting histories, photos, messages, and accumulated profile data are locked to the platforms that host them — moving to a different platform means starting over, with no followers, no history, and no network. This lock-in is a structural barrier to competition: even if a competitor platform is better, the switching costs of leaving behind years of accumulated data and connections are prohibitive for most users. The EU's General Data Protection Regulation (Article 20) established a right to data portability, and the Digital Markets Act requires interoperability for large platforms. The United States has no equivalent. The American Data Privacy and Protection Act proposed portability requirements but has not been enacted. Portability rights convert data from a tool of platform lock-in into a user asset, enabling genuine competition and reducing the monopoly power of entrenched platforms.
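A minimal sketch of what a machine-readable export bundle might contain follows. The rule above mandates no particular schema; every field name here is illustrative, and real-world formats would be set by standards bodies or regulation.

```python
# Illustrative sketch of a machine-readable data export bundle.
# All field names are hypothetical, not a mandated schema.
import json

export_bundle = {
    "schema_version": "1.0",
    "exported_at": "2025-01-15T00:00:00Z",
    "profile": {"handle": "example_user", "display_name": "Example User"},
    "posts": [
        {"id": "1", "created_at": "2024-06-01T12:00:00Z", "text": "Hello"},
    ],
    "connections": {"followers": ["a", "b"], "following": ["c"]},
    # The rule covers inferred profiles (data the platform derived about
    # the user), not only user-authored content.
    "inferred_profile": {"interest_categories": ["civic-tech"]},
}

# "Machine-readable" means a competing service can parse the bundle
# without the originating platform's cooperation:
serialized = json.dumps(export_bundle, indent=2)
restored = json.loads(serialized)
print(restored["profile"]["handle"])
```

The round-trip through a standard serialization format is the whole point: an export that only the originating platform can interpret would be a hollow portability right.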
TECH-PRTS-0002
Proposed
Large platforms must support standardized interoperability protocols allowing users to communicate across platform boundaries and follow accounts from other services
Large platforms must support standardized technical systems that allow users to stay connected to people on other platforms and move between services without losing their contacts or content.
Digital platforms designated as systemically important — based on user count, market share, or critical communications function — must support standardized, open interoperability protocols that allow their users to communicate with users of other platforms and follow or subscribe to accounts on other services, without requiring users of smaller or competing platforms to have accounts on the dominant platform; and may not implement technical measures designed to degrade, slow, or frustrate interoperability with competing services.
Interoperability — the ability for users of different platforms to communicate with each other — is the mechanism that would allow competitive markets to develop in social media. Email works this way: a Gmail user can email an Outlook user because email uses open, standardized protocols. Social media does not: a Facebook user cannot message a Twitter user, and a user cannot follow someone on a different platform without joining it. This architectural choice by dominant platforms is not technically inevitable — it is a deliberate lock-in mechanism. The ActivityPub protocol (used by Mastodon and other federated social networks) demonstrates that interoperable social media is technically feasible. The EU's Digital Markets Act requires interoperability for "gatekeeper" platforms. Mandatory interoperability for systemically important platforms would break the network-effect monopoly that allows dominant platforms to maintain market power even when users are dissatisfied with their services. Smaller competitors could build services on top of an interoperable network rather than trying to replicate the network from scratch.
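The ActivityPub model referenced above expresses cross-server actions as JSON "activities" using the ActivityStreams vocabulary. A follow from a user on one server addressed to a user on a different server looks roughly like this (the server URLs are illustrative); the sending server delivers the activity to the recipient's inbox, the social-media analogue of cross-provider email.

```python
# Sketch of an ActivityPub-style Follow activity crossing server
# boundaries. The actor/object URLs are illustrative examples.
import json

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    # A user on serverA follows a user on serverB; neither needs an
    # account on the other server.
    "actor": "https://serverA.example/users/alice",
    "object": "https://serverB.example/users/bob",
}

# Any compliant server can parse the activity, because the format is an
# open standard rather than a proprietary platform API:
wire_format = json.dumps(follow_activity)
print(json.loads(wire_format)["type"])
```

This is the structural contrast with closed platforms: the protocol, not the platform, defines how a follow works, so no single operator can unilaterally break cross-platform connections.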
TECH-PRTS-0003
Proposed
Platforms may not impose technical, contractual, or legal barriers that prevent users from migrating to competing services or that degrade the functionality of competing services accessing user-authorized data
Platforms cannot put up technical, legal, or contractual barriers that prevent you from leaving for a competitor, or that degrade the experience of competing services trying to access your data on your behalf.
Digital platforms may not use technical measures, terms of service provisions, intellectual property claims, or contractual restrictions to prevent users from accessing their own data for the purpose of migrating to competing services; may not take technical actions that degrade the performance, speed, or reliability of third-party services accessing user data pursuant to user authorization or interoperability requirements; and may not use API access terms to prohibit competing services from offering data migration, archiving, or backup services.
Platform lock-in is maintained not only through absence of portability tools but through active measures to prevent exit: throttling or degrading API access for competing services; claiming intellectual property rights over user-generated content to prevent migration; using terms of service to prohibit third-party backup or export tools; and technically breaking competitors' applications. Twitter (now X) restricted API access and disabled interoperability for third-party clients; Facebook took legal action against competing services that attempted to use data export tools. These anti-competitive practices convert data portability rights into hollow procedural rights that provide no practical exit option. Prohibiting active anti-interoperability measures is the complement to affirmative portability and interoperability requirements: it prevents platforms from using technical and legal barriers to nullify the practical benefit of portability rights.
TECH-LIAS-0001
Proposed
AI developers and deployers bear strict civil liability for physical, economic, and dignitary harms caused by AI systems in high-risk applications, without requiring proof of negligence
Companies and developers that deploy high-risk AI systems are legally responsible for physical, financial, and other harms those systems cause — regardless of whether negligence can be proven. This is called strict liability.
Developers and deployers of AI systems in high-risk applications — including healthcare, criminal justice, employment, financial services, and government benefits — bear strict civil liability for physical injury, economic harm, or dignitary harm proximately caused by AI system failures, errors, or discriminatory outputs, without requiring the harmed party to prove negligence, fault, or breach of duty; and any contractual provision purporting to limit or waive liability for AI harms in high-risk applications is void as against public policy.
Existing tort law is inadequate to address AI liability for several structural reasons: causation is difficult to prove when the AI system is opaque and the developer claims the system performed as designed; negligence standards allow developers to argue that their system met industry standards even if industry standards are inadequate; contractual liability waivers in terms of service eliminate practical remedies; and class-action barriers prevent aggregation of individual harms too small to litigate individually. Strict liability — the standard applied to inherently dangerous activities and products under products liability law — is appropriate for high-risk AI applications because it shifts the burden of safety from victims to developers, who are best positioned to prevent harms through design choices. The EU's proposed AI Liability Directive establishes presumptive causation and eased fault requirements for harms involving high-risk AI. The United States has no equivalent framework, creating a gap that allows AI harm to occur without remedy. Strict liability for high-risk AI aligns incentives with safety: developers who bear liability for harms will invest in preventing them.
TECH-LIAS-0002
Proposal
When AI system opacity makes it difficult for a harmed person to prove causation, courts must apply a rebuttable presumption that the AI system caused the harm if the person demonstrates that AI was involved in the decision and the harm is consistent with documented failure modes
When an AI system's secrecy makes it hard for a harmed person to prove it was at fault, courts will presume the AI caused the harm if the person shows the AI was involved and the harm matches known failure patterns. This shifts the burden of proof toward the company.
In civil proceedings where a person alleges harm caused by an AI system and the AI system's opacity — including proprietary claims, lack of explainability, or destruction of decision logs — makes it impossible or unreasonably difficult for the person to prove causation through direct evidence, courts must apply a rebuttable presumption that the AI system caused the harm if the person demonstrates that: (1) the AI system was used in the decision or process that produced the harm; (2) the harm is consistent with documented or probable failure modes of such systems; and (3) the defendant has not disclosed sufficient information to allow independent causation analysis.
Opacity is the structural feature of AI systems that most severely undermines tort accountability. When an AI system denies a disability benefit, generates a false arrest recommendation, or produces a discriminatory hiring decision, the harmed person often cannot determine why — the model is proprietary, decision logs are unavailable, and the developer claims the system operated correctly. Without causation evidence, there is no lawsuit. This creates a perverse incentive: the more opaque a developer makes their system, the more immune they are from liability. The rebuttable presumption inverts this incentive: opacity does not shield from liability; it shifts the burden of proof to the developer to explain what the system actually did. Developers who maintain adequate documentation, preserve decision logs, and operate explainable systems will have no difficulty rebutting the presumption. Developers who refuse to maintain accountability infrastructure face presumptive liability for the harms their systems cause. This aligns accountability incentives with transparency incentives.
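The three-prong showing described above is mechanical enough to express as a simple check. The sketch below is purely illustrative (the field names and the `presumption_of_causation` helper are hypothetical, not statutory language), but it makes the burden-shifting logic concrete:

```python
from dataclasses import dataclass

@dataclass
class CausationShowing:
    ai_used_in_decision: bool        # prong 1: AI was used in the decision or process that produced the harm
    harm_matches_failure_mode: bool  # prong 2: harm is consistent with documented or probable failure modes
    defendant_disclosed_enough: bool # prong 3 (negated): defendant disclosed enough for independent analysis

def presumption_of_causation(s: CausationShowing) -> bool:
    """Presume the AI system caused the harm only when all three prongs are met."""
    return (s.ai_used_in_decision
            and s.harm_matches_failure_mode
            and not s.defendant_disclosed_enough)
```

Note the presumption stays rebuttable: a defendant who discloses logs sufficient for independent causation analysis defeats the third prong, and the presumption never attaches.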
TECH-LIAS-0003
Proposal
AI developers and deployers must maintain incident logs, preserve decision records for a defined period, and disclose aggregate harm data to federal oversight bodies annually
Companies deploying AI in high-risk areas must keep records of how the system made decisions, and must report aggregate harm data to federal oversight bodies every year.
Developers and deployers of AI systems in high-risk applications must maintain incident logs documenting all known harms, failures, complaints, and significant errors attributable to or potentially attributable to their AI systems; must preserve decision records — including inputs, outputs, and confidence scores — for a minimum of three years or until any pending claims are resolved, whichever is longer; must disclose aggregate incident data, including categories of harm and affected populations, to federal AI oversight bodies annually; and may not destroy decision records after becoming aware of a potential legal claim arising from those decisions.
Incident logging and record preservation are the evidentiary infrastructure of accountability. Without preserved records of AI decisions, there is no basis for retrospective review of discriminatory patterns, no evidence for individual claimants, and no data for regulators seeking to identify systemic risks. Aviation, pharmaceutical, medical device, and financial services industries are all subject to mandatory incident reporting and record retention requirements because regulators recognized that accountability without records is illusory. AI systems making high-stakes decisions about people's lives, benefits, employment, and freedom deserve at minimum the same accountability infrastructure. Annual aggregate reporting enables regulators to identify patterns — systems with high error rates in specific demographic groups, recurring failure modes, concentrations of harm in particular applications — that individual incident reports would not reveal. This rule creates the data foundation that makes meaningful regulatory oversight possible.
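To make the retention requirement concrete, here is a minimal sketch of what a preserved decision record could look like. The schema and field names are assumptions, not a mandated format; it simply reflects the elements this card names (inputs, outputs, confidence scores, the three-year floor, and a hold once a potential claim is known):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system_id: str           # which deployed system made the decision
    decided_at: str          # ISO-8601 UTC timestamp of the decision
    inputs: dict             # inputs presented to the system
    output: str              # the decision rendered
    confidence: float        # model confidence score
    retention_years: int = 3 # statutory minimum; longer if claims are pending
    legal_hold: bool = False # set True once a potential legal claim is known

def serialize(record: DecisionRecord) -> str:
    """One JSON line, suitable for an append-only decision log."""
    return json.dumps(asdict(record), sort_keys=True)

# Hypothetical example record for a benefits-eligibility system.
record = DecisionRecord(
    system_id="benefits-triage-v2",
    decided_at=datetime.now(timezone.utc).isoformat(),
    inputs={"applicant_id": "A-1001", "household_size": 3},
    output="denied",
    confidence=0.87,
)
```

An append-only log of records like these is what lets a regulator aggregate error rates by demographic group, and what lets an individual claimant reconstruct why a decision went against them.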
TECH-CYBS-0001
Proposal
All critical infrastructure operators must meet CISA minimum cybersecurity standards and report incidents within 24 hours
Companies that operate essential services like power grids, hospitals, or water systems must meet basic cybersecurity standards and report breaches within 24 hours. Minimum security practices protect everyone who depends on these systems.
All operators of critical infrastructure — including energy, water, finance, healthcare, transportation, and communications — must implement the Cybersecurity and Infrastructure Security Agency (CISA) minimum cybersecurity performance goals as mandatory compliance requirements, not voluntary guidance; must report all significant cybersecurity incidents to CISA within 24 hours of detection; must obtain an independent third-party cybersecurity assessment annually and submit results to CISA; may not receive federal contracts, grants, or regulatory approvals without a current CISA compliance certification; and must notify affected individuals and entities of any breach involving personal, financial, or operational data within 72 hours of determining the scope of exposure. Civil penalties of not less than $500,000 per day of non-compliance are enforceable by CISA, and individuals harmed by a breach attributable to non-compliance with these requirements have a private right of action.
Critical infrastructure cybersecurity is currently governed by a patchwork of sector-specific voluntary frameworks and a handful of mandatory standards. CISA's Cross-Sector Cybersecurity Performance Goals, published in 2022, represent the agency's best technical judgment about minimum security baselines, but compliance is voluntary for most sectors. The 2021 Colonial Pipeline ransomware attack (shutting down fuel supply to the Eastern Seaboard), the 2021 Florida water treatment plant attack (attempt to poison drinking water), and the SolarWinds supply chain compromise of multiple federal agencies all demonstrate that voluntary frameworks leave critical infrastructure dangerously exposed. The Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) established mandatory incident reporting requirements that CISA is still finalizing as of 2024. This card codifies and extends CIRCIA, adds mandatory minimum standards compliance, and creates private enforcement rights for individuals harmed by non-compliant operators.
TECH-CYBS-0002
Proposal
Software vendors whose products are used in critical infrastructure bear strict liability for known unpatched vulnerabilities; vendors must provide security patches for at least five years
Software companies whose products are used in critical infrastructure must fix known security flaws — and are legally liable if they don't. Patches and updates must be provided for at least five years.
Software vendors whose products are deployed in critical infrastructure systems bear strict liability for physical, operational, and data security harms caused by known, unpatched security vulnerabilities where the vendor had actual or constructive notice of the vulnerability and failed to issue a patch within 30 days for critical severity vulnerabilities; disclaimers of warranty and limitations of liability in software license agreements are void as against public policy with respect to known vulnerabilities in critical infrastructure deployments; software vendors must provide security patches and vulnerability disclosure for a minimum of five years after the release date of any software product used in a critical infrastructure context; and operators of critical infrastructure who deploy software past its vendor security support end-of-life assume liability for resulting harms.
The current legal framework for software security creates perverse incentives: software vendors face no liability for known vulnerabilities under the "as-is" disclaimers that are standard in virtually all software licenses, even when their products are deployed in hospitals, power plants, and water systems. The Office of the National Cyber Director's National Cybersecurity Strategy (2023) explicitly called for shifting liability to software manufacturers who bring vulnerable products to market, recognizing that individual operators cannot assess and remediate vulnerabilities in every software component they deploy. Legacy software with expired security support is endemic in critical infrastructure: hospitals run Windows XP on medical devices, utilities use SCADA systems with no patch support, and nuclear facilities operate software built decades ago. The five-year minimum patch obligation and strict liability for known vulnerabilities together create a structural incentive for vendors to build security into products rather than treating it as an optional service tier.
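The two timing rules in this card (the 30-day patch window for known critical vulnerabilities, and the five-year minimum security-support period after release) reduce to simple date arithmetic. A minimal sketch, with hypothetical function names and a 365-day-year approximation:

```python
from datetime import date, timedelta

CRITICAL_PATCH_WINDOW = timedelta(days=30)
MIN_SUPPORT_PERIOD_YEARS = 5

def patch_overdue(notice_date: date, today: date, patched: bool) -> bool:
    """Strict liability attaches once a known critical vulnerability
    remains unpatched more than 30 days after the vendor had notice."""
    return (not patched) and (today - notice_date > CRITICAL_PATCH_WINDOW)

def within_support_period(release_date: date, today: date) -> bool:
    """Vendor must still ship security patches within five years of
    release (approximated here as 5 x 365 days for simplicity)."""
    return today <= release_date + timedelta(days=MIN_SUPPORT_PERIOD_YEARS * 365)
```

Under the card's allocation of liability, an operator still running software for which `within_support_period` is false has moved past the vendor's obligation and assumes liability for resulting harms itself.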
TECH-CYBS-0003
Proposal
Ransom payments to sanctioned entities are prohibited; all ransomware incidents above $10,000 must be reported to the FBI
Paying ransomware attackers connected to sanctioned countries or groups is illegal. All significant ransomware incidents must be reported to the FBI, helping authorities track and stop attacks.
Payment of ransom to any entity designated as a foreign terrorist organization, a specially designated global terrorist, or a subject of OFAC economic sanctions is prohibited and subject to criminal prosecution; all ransomware incidents involving demands or payments above $10,000 must be reported to the FBI within 72 hours of discovery; entities suffering a ransomware attack must notify individuals whose personal data was compromised within 72 hours of determining the scope of the breach; entities that pay ransoms to non-sanctioned entities must disclose the payment, amount, and receiving entity to the FBI; and the Treasury Department must publish an annual report on ransomware payment flows and their relationship to sanctioned entities and foreign adversary governments.
Ransomware attacks against U.S. critical infrastructure, healthcare systems, schools, and local governments have grown dramatically: the FBI reported over 2,800 ransomware complaints in 2023 with over $59 million in losses from critical infrastructure attacks alone, figures experts believe substantially undercount actual incidents. Ransom payments fund the criminal and state-sponsored organizations conducting attacks and directly incentivize further attacks. Multiple U.S. government agencies and the Institute for Security and Technology's Ransomware Task Force have recommended mandatory reporting requirements to give law enforcement visibility into the full scope of attacks and payment flows. The ban on payments to sanctioned entities implements OFAC guidance that already warns such payments may violate sanctions law, converting guidance into a clear prohibition. Mandatory reporting enables the FBI to track the ecosystem, identify patterns, and pursue threat actors rather than responding only to the incidents it happens to learn about.
TECH-BRDS-0001
Proposal
Broadband internet access is a statutory right; the Affordable Connectivity Program must be made permanent; service may not be terminated without 30-day notice and referral to assistance programs
Broadband internet is treated as a basic right. Government programs making it affordable must be permanent, and internet service cannot be cut off without a 30-day notice and help finding alternatives.
Congress must establish broadband internet access as a statutory right for all U.S. residents; must make the Affordable Connectivity Program (ACP) permanent and fully funded, expanding eligibility to all households at or below 200% of the federal poverty level; must prohibit termination of broadband service for non-payment without at least 30 days' written notice and documented referral to all available federal, state, and provider-sponsored assistance programs; must prohibit deployment of data caps that restrict access below thresholds sufficient for full participation in work, education, healthcare, and civic life; and must establish a federal broadband ombudsman empowered to receive and investigate complaints and issue binding remediation orders against providers.
Broadband internet access has become a prerequisite for full participation in economic, educational, civic, and healthcare life. The COVID-19 pandemic exposed the devastating consequences of the digital divide: students without home internet could not attend school, workers without connectivity could not work remotely, and patients could not access telehealth. Approximately 21 million Americans lack access to broadband, and millions more have access but cannot afford it. The FCC's Affordable Connectivity Program, which provided up to $30/month subsidy for low-income households, served 23 million households before Congress allowed its funding to lapse in 2024. Establishing broadband as a statutory right — analogous to the statutory framework for telephone service under the Communications Act — creates the legal foundation for universal service obligations and enforceable non-discrimination requirements. The 30-day notice and assistance referral requirement prevents abrupt service terminations that trap households in a digital void at critical moments.
TECH-BRDS-0002
Proposal
All K–12 schools must have minimum 1 Gbps symmetric connectivity; devices and home internet access must be provided to all students as a condition of federal education funding
Every K–12 school must have high-speed internet, and students who lack devices or home internet access must receive them as a condition of federal education funding.
As a condition of receiving federal education funding, all K–12 public schools must maintain minimum symmetric broadband connectivity of 1 Gbps per 1,000 students by 2028; the E-Rate program must be expanded, fully funded, and simplified to eliminate barriers that prevent eligible schools from receiving support; every student enrolled in a public K–12 school must be provided with a personal device adequate for all school assignments and home internet access sufficient for homework, learning management systems, and educational video — with devices and connectivity funded through the E-Rate program, federal education funding, and provider universal service obligations; and no school or district may condition digital access on household income, immigration status, or any characteristic other than enrollment.
Educational technology is now integral to K–12 learning, but the benefits of digital learning are distributed unequally along lines of race and income. Approximately 9 million K–12 students lack adequate home internet access, and millions more use smartphones as their primary devices — inadequate for complex assignments, sustained learning, and standardized assessments. The 1 Gbps target is the FCC's long-standing E-Rate connectivity goal, established in the ConnectED initiative; progress has been uneven and many schools, particularly in rural and low-income districts, remain far below this threshold. Device provision is incomplete: many school-issued devices are inadequate, and families bear the cost of data plans. E-Rate funding, while valuable, is complicated by bureaucratic requirements that disadvantage under-resourced districts. Making device and connectivity provision a condition of federal education funding creates a universal floor — eliminating the structural disadvantage that students without adequate home technology face in every aspect of their education.
TECH-CHDS-0001
Proposal
AI systems directed at children or likely to be used by minors are subject to heightened safety, privacy, manipulation, and developmental protections
AI systems directed at children or likely to be used by minors are subject to heightened safety, privacy, manipulation, and developmental protections.
Source: DB entry TEC-CHD-001, status: PROPOSED. Pending editorial review before promotion to core position.
TECH-DEMS-0001
Proposal
AI systems may not be used to generate, distribute, or amplify deceptive election content intended to mislead voters about candidates, voting procedures, ballot access, or election outcomes
AI systems may not be used to generate, distribute, or amplify deceptive election content intended to mislead voters about candidates, voting procedures, ballot access, or election outcomes.
Source: DB entry TEC-DEM-001, status: PROPOSED. Pending editorial review before promotion to core position.
TECH-INFS-0001
Proposal
AI systems deployed in critical infrastructure must meet heightened standards for safety, cybersecurity, resilience, human override, and incident response
AI systems deployed in critical infrastructure must meet heightened standards for safety, cybersecurity, resilience, human override, and incident response.
Source: DB entry TEC-INF-001, status: PROPOSED. Pending editorial review before promotion to core position.
TECH-MEDA-0001
Proposal
AI-driven recommender systems may not be optimized primarily for outrage, compulsion, polarization, or misinformation spread where such optimization predictably harms public discourse
AI-driven recommender systems may not be optimized primarily for outrage, compulsion, polarization, or misinformation spread where such optimization predictably harms public discourse.
Source: DB entry TEC-MED-001, status: PROPOSED. Pending editorial review before promotion to core position.
TECH-SCIS-0001
Proposal
AI systems may not be used to fabricate research data, images, citations, peer-review identities, or other scientific artifacts in ways that misrepresent truth or authorship
AI systems may not be used to fabricate research data, images, citations, peer-review identities, or other scientific artifacts in ways that misrepresent truth or authorship.
Source: DB entry TEC-SCI-001, status: PROPOSED. Pending editorial review before promotion to core position.
TECH-FACE-0001
Proposal
Government Agencies May Not Use Facial Recognition Surveillance in Public Spaces
Government agencies cannot use facial recognition to surveil people in public spaces. The government cannot track where you go or who you are based on your face in a crowd.
Federal, state, and local law enforcement and government agencies must not use facial recognition technology for real-time or retrospective surveillance of individuals in public spaces; this prohibition applies to live camera feeds, recorded footage, and social media monitoring. Law enforcement may use facial recognition only in the investigation of a specific serious felony where: a judge has issued a warrant based on probable cause specifically authorizing facial recognition use; the search is limited in scope to the specific investigation; and results are treated as investigative leads requiring independent corroboration, not as identification evidence in court without expert testimony. Mass enrollment of the public in facial recognition databases without consent is prohibited. Any law enforcement use of facial recognition that results in a wrongful arrest gives rise to a per se civil rights claim.
Facial recognition technology has documented error rates that are significantly higher for dark-skinned and female faces. Multiple people have been wrongfully arrested based solely on facial recognition matches.
TECH-FACE-0002
Proposal
Commercial Entities May Not Deploy Facial Recognition Without Explicit Informed Opt-In Consent
Private companies cannot use facial recognition technology to identify or track people without first obtaining their clear, informed, and voluntary consent.
Commercial entities must not collect, use, store, or share facial recognition data without: (1) explicit, informed, separate opt-in consent from every individual whose face is enrolled; (2) clear notice of how the data will be used, stored, and shared; and (3) a meaningful opt-out mechanism that does not condition access to goods or services on consent. Facial recognition data collected without consent must be deleted within 30 days; individuals have the right to request deletion of their facial data at any time. The FTC and state attorneys general have enforcement authority; private plaintiffs have a right of action with statutory damages of $5,000 per violation per person.
TECH-FACE-0003
Proposal
All Federal Agencies Must Be Prohibited From Using Facial Recognition Technology Until Congress Enacts Federal Accuracy, Transparency, and Civil Rights Standards — With a Permanent Ban on Use for Mass Surveillance, Immigration Enforcement Without Warrant, or Protest Monitoring
All federal agencies are prohibited from using facial recognition until Congress passes a law setting standards for accuracy, transparency, and civil rights protections. Mass surveillance, immigration enforcement without a warrant, and monitoring of protests are permanently banned uses.
Congress must: (1) impose an immediate moratorium on all federal agency use of facial recognition technology — until Congress enacts accuracy and civil rights standards under a separate authorizing statute — for: (a) surveillance of public spaces; (b) immigration enforcement without a judicial warrant supported by individualized probable cause; (c) monitoring of political protests, demonstrations, or religious gatherings; and (d) identification of any individual in a criminal investigation without a warrant; (2) prohibit permanently — regardless of future standards enacted — the use of any biometric identification technology for: (a) mass, suspicionless scanning of the public in any setting; (b) monitoring attendance at any political, religious, or advocacy event; or (c) building a database of individuals based solely on constitutionally protected activity; (3) require any federal agency currently using facial recognition technology to: (a) publish a complete inventory of all systems in use, vendors, datasets, and use cases within 90 days; (b) delete all biometric databases compiled through mass surveillance within 180 days; and (c) terminate all contracts with vendors whose systems have documented accuracy disparities exceeding 10% between demographic groups; (4) establish criminal penalties — fines up to $1 million per violation and imprisonment up to 10 years — for any federal official who authorizes biometric surveillance in violation of this Act; and (5) create a private right of action for any person subjected to unlawful biometric surveillance, with damages of $10,000 per day of violation plus attorney's fees and injunctive relief.
Studies have found that facial recognition systems misidentify Black women at error rates up to 35 times higher than those for white men. Multiple wrongful arrests have occurred due to facial recognition misidentification, nearly all involving Black Americans.
TECH-BIOM-0001
Proposal
A Federal Biometric Privacy Law Must Establish Nationwide Consent and Deletion Rights
Federal law must establish Americans' right to control their own biometric data — such as fingerprints or face scans — including the right to consent before collection and to demand deletion afterward.
Congress must enact a national biometric information privacy law that: (1) requires written, informed consent before any collection of biometric identifiers including fingerprints, retina scans, voiceprints, facial geometry, and gait data; (2) prohibits sale or transfer of biometric data to third parties without express written consent; (3) requires deletion of biometric data within three years of collection or upon request; (4) requires data minimization — biometric data may only be collected for a specific, disclosed purpose and may not be repurposed; and (5) creates a private right of action with liquidated damages of $1,000–$5,000 per violation per person. The law must establish a federal floor that does not preempt stronger state laws such as Illinois BIPA.
Illinois BIPA has been the most litigated biometric privacy law in the U.S., generating billions in class action settlements. Workers in multiple industries are required to submit biometric data as a condition of employment without adequate legal protections.
TECH-BIOM-0002
Proposal
No Company May Collect, Store, or Use Any American's Biometric Data — Including Faceprint, Voiceprint, Fingerprint, Iris Scan, or Gait Data — Without Explicit Written Informed Consent, and No Company May Sell or Share Biometric Data Under Any Circumstances
No company may collect, store, or use your biometric data — including your face, voice, fingerprint, iris, or the way you walk — without your explicit written consent. Selling or sharing biometric data is prohibited under any circumstances.
Congress must enact a Federal Biometric Privacy Law requiring: (1) informed written consent — no private entity may collect, capture, store, or use any individual's biometric identifier or biometric information without (a) providing a written policy disclosing the specific purpose, storage duration, and third parties with access, and (b) obtaining separate, affirmative, opt-in written consent specific to biometric data, not bundled into general terms of service; minors under 18 cannot provide valid consent, so parental consent is required, with a 7-day cooling-off period and a right of revocation; (2) an absolute prohibition on the sale, lease, trade, or disclosure of biometric data to any third party for any purpose, including sale to law enforcement without a warrant; (3) mandatory retention limits — biometric data must be destroyed within (a) 3 years of collection or (b) 30 days of the individual terminating their relationship with the entity, whichever is earlier; (4) data minimization — entities may collect only the biometric identifiers strictly necessary for the stated purpose, and collection of biometric data for speculative future uses is prohibited; (5) a private right of action — any person whose biometric data was collected, used, or disclosed in violation of this Act may sue for (a) $1,000 per negligent violation, (b) $5,000 per intentional or reckless violation, (c) attorney's fees and costs, and (d) injunctive relief, with class actions permitted; and (6) criminal penalties for executives — fines up to $10 million and imprisonment up to 5 years — for willful violations involving more than 1,000 individuals.
Illinois's Biometric Information Privacy Act (BIPA) is currently the strongest biometric protection law in the United States — this federal proposal would establish a national floor at least as protective. Facebook settled a BIPA class action for $650 million for collecting faceprints without consent.
TECH-SURV-0001
Proposal
Intelligence Agencies May Not Search Americans' Communications Collected Under Section 702 Without a Warrant
Intelligence agencies cannot search through Americans' private communications — collected under the broad foreign intelligence authority known as Section 702 — without first obtaining a warrant.
Section 702 of the Foreign Intelligence Surveillance Act — which authorizes warrantless collection of foreign communications that incidentally includes Americans' communications — must be amended to: (1) prohibit "backdoor searches" of 702 databases using Americans' identifiers without a FISA court warrant supported by probable cause; (2) require the government to obtain a warrant before using 702-collected data as evidence in a criminal prosecution; (3) create an independent advocate in FISA court proceedings to represent civil liberties interests; and (4) require annual public reporting of the number of Americans' communications collected. The FISA court must apply adversarial procedures in all cases involving U.S. persons.
The NSA's Section 702 program collects billions of communications annually. The FBI has conducted millions of warrantless searches of 702 databases using Americans' names and identifiers.
TECH-SURV-0002
Proposal
Government Agencies May Not Purchase Commercially Available Surveillance Data to Circumvent Warrant Requirements
Government agencies cannot buy commercially available surveillance data to get around the legal requirement for a warrant. The Constitution's protections cannot be bypassed through private market purchases.
Federal, state, and local government agencies — including law enforcement and intelligence agencies — must not purchase or otherwise acquire commercially available location data, device data, browsing history, or other personal data from data brokers, advertisers, or other commercial entities to conduct surveillance that would otherwise require a warrant; this prohibition applies regardless of whether the data is "anonymized." Agencies that have purchased such data must purge existing databases within one year. Violations are subject to criminal penalties for individual agents and injunctive relief; individuals whose data was purchased without a warrant have a private right of action.
TECH-SURV-0003
Proposal
The FBI Must Be Required to Obtain an Individual Warrant Based on Probable Cause Before Querying Any Database of Communications Collected Under Section 702 FISA — Ending "Backdoor Searches" of Americans' Private Communications Without Judicial Oversight
The FBI must obtain an individual warrant with probable cause before searching databases of Americans' communications collected under Section 702 of FISA — a foreign intelligence law. The current practice of warrantless backdoor searches of Americans' private communications must end.
Congress must: (1) amend Section 702 of the Foreign Intelligence Surveillance Act to require that: (a) before any FBI, CIA, NSA, or other intelligence agency query of Section 702-collected data that targets a U.S. person, the agency must obtain an individualized warrant from the Foreign Intelligence Surveillance Court based on probable cause that the U.S. person is an agent of a foreign power or has committed or is planning to commit a crime; (b) no query of Section 702 data may be conducted using a U.S. person's name, identifier, phone number, email address, or social security number as the search term without such a warrant; and (c) all queries of Section 702 data must be logged, with logs preserved for 5 years and subject to congressional oversight review; (2) prohibit the use of any information obtained from an unlawful backdoor search as evidence in any criminal, civil, or administrative proceeding, requiring exclusion and suppression of such evidence; (3) require the FISA Court to: (a) appoint an independent public advocate to represent civil liberties interests in all significant cases; (b) publish its legal opinions in redacted form within 90 days of issuance; and (c) declassify and release all opinions more than 5 years old that do not contain currently sensitive operational information; (4) establish criminal penalties — fines up to $500,000 and imprisonment up to 10 years — for any intelligence official who conducts a backdoor search in willful violation of the warrant requirement; and (5) create a private right of action for any U.S. person who can demonstrate they were subjected to an unlawful backdoor search, with damages of $100,000 per search plus attorney's fees.
The FBI conducted an estimated 3.4 million backdoor searches of Section 702 data on U.S. persons in a single year without individual warrants.[10] Section 702 was reauthorized in 2024 without the warrant requirement that civil liberties groups demanded.
TECH-SURV-0004
Proposal
All Bulk, Suspicionless Collection of Americans' Phone Records, Internet Metadata, and Location Data by Any Government Agency Must Be Permanently Prohibited — Requiring Individualized Judicial Orders for Any Surveillance Targeting Americans
Bulk collection of Americans' phone records, internet metadata, and location data — without suspicion of any wrongdoing — must be permanently prohibited. Surveillance of Americans must be based on individualized judicial orders.
Congress must: (1) permanently prohibit any federal agency from: (a) collecting in bulk the call detail records, metadata, or content of communications of any U.S. person without an individual warrant; (b) purchasing location data, web browsing history, app usage data, or any other personal data from commercial data brokers as a means of circumventing warrant requirements; (c) using a "national security letter" to obtain records beyond subscriber information without judicial authorization; and (d) conducting any form of mass surveillance on any communications network without individualized judicial orders; (2) require that any government agency seeking to conduct electronic surveillance of a U.S. person must: (a) obtain a warrant based on probable cause from an Article III federal court or the FISA Court; (b) describe with particularity the person, communications, and time period to be surveilled; and (c) demonstrate that less intrusive means have been attempted or would not succeed; (3) close the "data broker loophole" by making it illegal for any federal agency, state agency, or law enforcement to purchase any personal data from a commercial broker that the agency could not have obtained through legal process; (4) establish the Electronic Privacy Act, codifying Fourth Amendment protections for all digital communications and data regardless of the data's age and requiring warrants for all law enforcement access to email, cloud storage, and location records; (5) establish criminal penalties — imprisonment up to 15 years — for any federal official who authorizes bulk data collection in violation of this Act; and (6) create a private right of action for any person subjected to unlawful bulk surveillance, with damages of $1,000 per day of surveillance plus attorney's fees.
The NSA's bulk phone records collection program, revealed by Edward Snowden in 2013, was later ruled illegal by a federal appeals court. The government's purchase of location data from commercial brokers has been documented as a widespread practice to avoid warrant requirements.
TECH-DEEP-0001
Proposal
Creating or Distributing Non-Consensual Intimate Deepfake Images Is a Federal Crime
Creating or distributing fake explicit images of a real person without their consent — using AI or other tools — is a federal crime. This protects people from a devastating form of digital abuse.
The creation, distribution, or possession with intent to distribute of synthetic or digitally altered intimate images depicting a real, identifiable person without their consent is a federal criminal offense; penalties must include up to five years imprisonment for distribution and up to ten years for distribution with intent to harass. Platforms hosting such content must remove it within 24 hours of receiving a valid takedown notice; failure to do so constitutes a separate civil violation with statutory damages of $10,000 per day of continued hosting after notice. Victims have a federal civil right of action against creators and distributors with minimum statutory damages of $150,000.
Non-consensual deepfake intimate images disproportionately target women and are used as a tool of harassment, coercion, and abuse.
TECH-DEEP-0002
Proposal
Deepfakes Depicting Candidates or Election Officials Designed to Deceive Voters Are Prohibited
Deepfake videos or audio recordings designed to deceive voters — for example, a fake video of a candidate saying something they never said — are illegal during elections.
Creating, distributing, or financing the distribution of synthetic or digitally manipulated audio, video, or images depicting a candidate, election official, or political party in a false context with intent to deceive voters within 90 days of an election is prohibited and subject to: (1) FEC civil penalties; (2) criminal prosecution for knowing violations; and (3) platform liability for continued hosting after notice from the depicted person or FEC. All political advertising that uses AI-generated or synthetic imagery must include a prominent disclosure; failure to disclose is treated as a fraudulent campaign finance violation. Platforms must label AI-generated political content and may not algorithmically amplify undisclosed synthetic political content.
TECH-PLAT-0001
Proposal
Congress Must Amend Section 230 to Remove Platform Immunity for Harms Caused by Algorithmic Amplification — Preserving Full Immunity for Good-Faith Moderation of User-Generated Content
Congress must change the law that currently shields social media platforms from responsibility for content their algorithms actively promote. Platforms should still be protected for good-faith moderation decisions on user content — but not for harms caused by their own recommendation systems.
Congress must: (1) amend Section 230 to establish a two-tier immunity framework: (a) Tier 1 — Full immunity preserved for hosting, indexing, and good-faith content moderation decisions (removal, demotion, labeling); (b) Tier 2 — Immunity removed for harms caused by active algorithmic amplification — platforms that recommend, promote, or amplify third-party content through ranking systems are treated as co-publishers for that amplification decision; (2) define "algorithmic amplification" as any automated system that recommends content to users who did not specifically seek it, increases content prominence beyond organic reach, or targets content to user profiles based on predicted engagement; (3) preserve full immunity for all content moderation decisions; (4) establish FTC enforcement authority and a civil private right of action for harm from algorithmically amplified content; (5) criminal penalties — fines up to $500 million and imprisonment up to 10 years — for executives who knowingly direct algorithmic amplification of content inciting imminent violence.
Section 230 was enacted in 1996 before algorithmic recommendation feeds existed and was not designed to immunize platforms for active choices to amplify harmful content.
TECH-DTBR-0001
Proposal
Congress Must Establish a Mandatory Federal Data Broker Registry, Require Affirmative Opt-In Consumer Consent Before Any Personal Data May Be Sold or Transferred to Any Third Party, and Permanently Ban the Sale of Health, Reproductive, Location, and Financial Data to Any Third Party for Any Purpose
Data brokers — companies that buy and sell people's personal data — must register with the federal government. They cannot sell your personal data without your explicit consent first. Health, location, and financial data can never be sold.
Congress must: (1) enact the Data Broker Accountability and Transparency Act — establishing that: (a) any person or entity that collects personal data from individuals who are not their direct customers and sells, licenses, rents, or transfers that data to any third party is a "data broker" subject to this Act; (b) no data broker may sell, license, or transfer any personal data about any U.S. resident without first obtaining verifiable, affirmative, specific opt-in consent from that individual — separate from any terms of service agreement — for each category of data and each category of buyer; (c) opt-in consent must be presented in plain language, not bundled with any other consent, and must be revocable at any time at no cost to the individual; (2) permanently ban — with no opt-in exception — the sale, transfer, licensing, or provision of any access to: (a) health or medical data, including inferred health conditions, medication purchase history, and wellness app data; (b) reproductive health data, including pregnancy tracking, fertility, or abortion-related information; (c) precise location data — defined as location data sufficient to identify a person's home, workplace, medical facility, place of worship, or any other sensitive location — including any data with precision greater than 1 square mile; (d) financial transaction data, including purchases, subscriptions, and bank or credit card records; (e) data about any person under 18; (3) establish a mandatory Federal Data Broker Registry — administered by the FTC — requiring every data broker to register annually, disclose all categories of data collected, identify all third-party buyers by category, and publish their data collection and deletion practices; (4) establish a universal data deletion right: any U.S. resident may submit a deletion request to any registered data broker — the broker must delete all data on that individual and notify all downstream buyers within 30 days at no cost; (5) civil penalties of $10,000 per person per violation; (6) criminal penalties — fines up to $10 million and imprisonment up to 10 years — for any officer who authorizes the sale of banned data categories; and (7) a private right of action for any person whose data was sold without consent, with statutory damages of $1,000–$10,000 per violation plus attorney's fees.
The data broker industry generates an estimated $200 billion annually in the United States and operates with virtually no federal regulation.[11] Following the Dobbs decision, law enforcement agencies and private bounty hunters have been documented purchasing location data from data brokers to track individuals who travel to seek abortion services.
TECH-DTBR-0002
Proposal
Congress Must Prohibit All Targeted Advertising Based on Inferred or Actual Health Status, Political Views, Religion, Sexual Orientation, Gender Identity, Immigration Status, or Financial Distress — and Permanently Ban All Behavioral Advertising Directed at Anyone Under 18
Advertisers cannot target you based on your health, political views, religion, sexual orientation, gender identity, immigration status, or financial troubles. Behavioral advertising of any kind is permanently banned for anyone under 18; ads shown to minors may be contextual only.
Congress must: (1) enact the Protecting Consumers From Surveillance Advertising Act — prohibiting any platform, advertiser, or ad network from serving any targeted advertisement to any individual based on: (a) any inferred or actual health or medical condition — including mental health, addiction, pregnancy, or disability; (b) political affiliation, voter registration status, or participation in any political activity; (c) religious beliefs, practices, or affiliation; (d) sexual orientation or gender identity; (e) immigration status or citizenship; (f) financial distress indicators — including credit score, bankruptcy, debt collection, or utility shutoff data; (g) any sensitive attribute derived from location data — including presence at a medical facility, place of worship, courthouse, or immigration office; (2) permanently prohibit — with no exception — any behavioral advertising directed at any person under 18: (a) no platform may use any personal data — including browsing history, app usage, location, or social graph — to target any advertisement to any person under 18; (b) all advertising served to persons under 18 must be contextual only — limited to the topic of the content being viewed — with no individual targeting; (c) platforms must verify user age using reasonable technical standards before serving any behavioral advertisement; (3) require all advertising platforms to maintain a publicly searchable ad transparency library — disclosing all advertisers, targeting criteria, and audience reach for all advertisements — updated within 24 hours; (4) FTC enforcement with civil penalties of $50,000 per targeted advertisement served in violation; (5) criminal penalties — fines up to $10 million and imprisonment up to 5 years — for any officer who authorizes a pattern of illegal targeting; and (6) a private right of action for any individual subjected to illegal targeted advertising, with statutory damages of $500–$5,000 per advertisement plus attorney's fees.
Behavioral advertising systems have been documented serving predatory financial ads to people whose data suggests they are in financial distress, addiction treatment ads to people who searched for information about substance use, and political manipulation ads to voters identified as persuadable through micro-targeting. Children are systematically targeted by behavioral advertising systems that exploit developmental vulnerabilities to maximize engagement and purchasing pressure.
TECH-DTBR-0003
Proposal
Congress Must Enact a Comprehensive Federal Digital Privacy Bill of Rights — Establishing Universal Data Minimization, Purpose Limitation, and Algorithmic Transparency Rights — and Create an Independent Federal Privacy Agency With Full Rulemaking, Investigation, and Enforcement Authority Separate From the FTC
Federal law must give all Americans clear rights over their own data — including the right to know what is collected, limit how it is used, and demand transparency from algorithms. An independent federal agency must enforce these rights.
Congress must: (1) enact the American Digital Privacy Bill of Rights — establishing that every U.S. resident has the following enforceable rights with respect to any entity that collects, processes, or transfers their personal data: (a) Right to Know: the right to receive a plain-language disclosure of all personal data collected, all third parties to whom it has been transferred, and all purposes for which it has been used — within 30 days of request, at no cost; (b) Right to Correct: the right to require correction of any inaccurate personal data within 30 days; (c) Right to Delete: the right to require deletion of all personal data within 45 days — with notification to all downstream recipients; (d) Right to Portability: the right to receive all personal data in a machine-readable format within 30 days; (e) Right to Opt Out: the right to opt out of all non-essential data processing, including targeted advertising, algorithmic profiling, and third-party transfers — at any time, at no cost, with no degradation of service; (f) Right to Human Review: the right to demand human review of any consequential automated decision — including credit, hiring, insurance, housing, and content moderation decisions — within 15 business days; (2) establish a data minimization and purpose limitation standard: entities may only collect personal data that is strictly necessary for the specific purpose for which it was collected — and may not use that data for any other purpose without new affirmative consent; (3) establish an independent Federal Privacy Agency — governed by a 5-member bipartisan commission — with: (a) full rulemaking authority to implement and update the Digital Privacy Bill of Rights; (b) independent litigation authority — the ability to sue in federal court without DOJ approval; (c) mandatory staffing of not less than 1,500 full-time employees; (d) dedicated annual appropriation of not less than $500 million; (4) civil penalties of $100 per person per day of violation; (5) criminal penalties — fines up to $10 million and imprisonment up to 10 years — for any executive who knowingly authorizes a privacy violation affecting more than 100,000 people; and (6) a private right of action for any person whose rights were violated, with statutory damages of $1,000–$10,000 per violation plus attorney's fees — without preempting any stronger state law.
The United States is the only major democracy without a comprehensive federal privacy law — leaving data protection to a patchwork of state laws and voluntary industry standards. The EU's GDPR, adopted in 2016 and in force since May 2018, established many of the rights above and has resulted in multi-billion-euro fines against major U.S. tech companies operating in Europe.
The Technology and AI pillar addresses the most significant governance challenge of the 21st century: how to ensure that increasingly powerful, autonomous, and opaque computational systems remain subject to democratic accountability, constitutional constraint, and human values. AI systems now make or materially influence decisions affecting employment, credit, housing, healthcare, education, criminal sentencing, immigration status, military targeting, and access to public benefits. These decisions were traditionally made by humans operating within legal frameworks designed to ensure transparency, due process, and accountability. AI threatens to replace those frameworks with black-box optimization processes that operate at scale, are difficult to audit or explain, and can encode or amplify historical bias while appearing neutral.
AI Governance Foundations: The National Institute of Standards and Technology (NIST) AI Risk Management Framework and Generative AI Profile emphasize that AI systems must be governed through structured risk assessment, continuous monitoring, documentation, testing for bias and safety, stakeholder engagement, and clear accountability structures. High-risk AI systems — those affecting rights, safety, or access to essential services — require heightened governance before deployment, including pre-deployment review, ongoing audits, incident response protocols, and mechanisms for redress. The NIST framework explicitly warns that AI systems can fail in unpredictable ways, amplify bias, generate plausible but false outputs (hallucination), and be misused for surveillance, manipulation, or harm. UNESCO's Recommendation on the Ethics of Artificial Intelligence and the OECD AI Principles similarly emphasize human rights, transparency, accountability, and democratic governance as foundational requirements for trustworthy AI.[9]
Algorithmic Accountability: Algorithmic decision-making systems are now ubiquitous in both public and private sectors. Credit scoring algorithms determine access to loans and housing. Hiring algorithms filter job candidates. Healthcare algorithms influence triage and treatment recommendations. Platform algorithms shape what information people see, affecting political beliefs, social relationships, and mental health. These systems often rely on opaque criteria, use proxies that correlate with protected characteristics (race, gender, disability), and lack meaningful mechanisms for explanation or appeal. Research has documented algorithmic discrimination in employment screening, criminal risk assessment, credit decisions, and healthcare resource allocation.[2] The Federal Trade Commission and civil rights groups have warned that algorithmic systems can violate anti-discrimination law while appearing neutral,[1] and that lack of transparency makes bias difficult to detect or challenge. Transparency requirements, independent audits, rights to explanation and appeal, and prohibition on manipulative engagement optimization are essential to prevent algorithmic systems from operating as unaccountable, discriminatory gatekeepers.
Surveillance Capitalism and Mass Tracking: The business model of much of the modern internet is based on surveillance capitalism: the extraction, aggregation, analysis, and monetization of personal data at scale. Companies track users across devices, websites, and physical locations; build detailed behavioral profiles; and use those profiles to target advertising, shape content recommendations, and predict future behavior. Governments have increasingly purchased access to this commercial surveillance infrastructure to evade constitutional warrant requirements. The ACLU and Electronic Frontier Foundation have documented that law enforcement agencies routinely purchase location data, social graph data, and other personal information from data brokers without obtaining warrants, effectively buying their way around the Fourth Amendment. Courts have begun to recognize that warrantless access to comprehensive digital tracking data constitutes an unconstitutional search, but gaps remain. This pillar closes those gaps by banning government purchase of surveillance data, restricting mass surveillance, requiring warrants for persistent tracking, and protecting anonymous internet access.
Biometric Surveillance: Facial recognition, gait analysis, voiceprint identification, and other biometric technologies have made it possible to identify, track, and catalog individuals in public spaces with unprecedented scale and accuracy. Studies have shown that facial recognition systems have higher error rates for women and people of color, leading to false arrests and wrongful accusations.[5] Real-time biometric surveillance of crowds and demonstrations has a chilling effect on free assembly and political protest. China's use of facial recognition for mass surveillance and social control demonstrates the authoritarian potential of these technologies. Multiple U.S. cities and states have banned or restricted government use of facial recognition in response to these risks. This pillar bans mass facial recognition in public spaces, prohibits real-time biometric tracking of demonstrations, restricts use of biometrics as general identity infrastructure, and requires strict safeguards where biometric use is permitted.
Data Privacy and Rights: The United States lacks comprehensive federal data privacy legislation comparable to the European Union's General Data Protection Regulation (GDPR). As a result, individuals have limited rights to access, correct, or delete their data; companies face minimal restrictions on collection, use, or sale of personal information; and data brokers operate with little transparency or accountability. Dark patterns — manipulative interface designs that trick users into consenting to data collection — are widespread. This pillar establishes data rights frameworks, restricts data brokers, bans dark patterns in consent flows, prohibits cross-agency surveillance data fusion, and prevents creation of secret behavioral dossiers.
Platform Regulation and Social Media: Large digital platforms wield significant power over political discourse, social interaction, and access to information. Platform algorithms optimize for engagement, which research shows amplifies divisive, sensational, and false content. The Wall Street Journal's Facebook Files and internal documents from other platforms revealed that companies were aware their algorithms contributed to mental health harms, political polarization, and radicalization but prioritized growth over safety.[6] Researchers have documented that algorithmic amplification can manipulate elections, spread disinformation, and target vulnerable users with harmful content. This pillar requires platforms to disclose ranking objectives, provide non-algorithmic alternatives, prohibit manipulative engagement optimization, and allow protected researcher access to study algorithmic harms.
AI in Criminal Justice: AI systems are increasingly used in policing, prosecution, and sentencing. Predictive policing systems use historical crime data to forecast where crimes will occur or who will commit them, but research has shown these systems encode and amplify historical biases in policing, disproportionately targeting communities of color. Risk assessment tools used in bail, sentencing, and parole decisions claim to predict future criminality but have been found to be racially biased, opaque, and unreliable. ProPublica's analysis of the COMPAS algorithm found it falsely labeled Black defendants as high-risk at nearly twice the rate of white defendants.[4] The use of AI in criminal justice raises fundamental due process concerns: defendants often cannot examine or challenge the algorithms used against them, proprietary claims prevent meaningful review, and the appearance of scientific objectivity can mask bias. This pillar bans AI risk scoring in sentencing, bail, and punishment; prohibits predictive policing based on biased data; and requires transparency, auditability, and human accountability.
AI in Courts and Legal Systems: The judicial system is increasingly adopting AI tools for case management, legal research, evidence analysis, and even opinion drafting. While AI may improve efficiency in some contexts, its use in legal proceedings raises severe risks: generative AI systems hallucinate (generate plausible but false information), cannot reliably distinguish fact from fiction, lack reasoning capacity, and cannot be meaningfully cross-examined. Lawyers have been sanctioned for submitting AI-generated briefs containing fabricated case citations. Use of AI in evidence analysis, credibility assessment, or judicial reasoning threatens the integrity of legal proceedings and the right to a fair trial. This pillar imposes comprehensive restrictions: ban on generative AI in evidence and judicial opinions; prohibition on AI-determined sentencing, credibility assessment, or jury influence; strict admissibility standards requiring authentication and adversarial challenge; and protections in family court, eviction, probation/parole, and administrative proceedings.
AI in Government Services: Government agencies are deploying AI systems to administer benefits, screen applications, detect fraud, and make eligibility determinations. These systems often operate with minimal transparency, deny or terminate benefits based on opaque scoring, and provide inadequate mechanisms for appeal or correction. Investigative reporting has documented cases where AI systems wrongly terminated Medicaid coverage, denied disability benefits, flagged parents for child welfare investigations based on algorithmic risk scores, and suspended unemployment benefits based on fraud detection algorithms with high false-positive rates. These errors can cause severe harm — loss of healthcare, housing, income, or family integrity — and disproportionately affect vulnerable populations. This pillar prohibits AI from independently denying, reducing, or terminating benefits or services; requires human review before harm occurs; guarantees rights to explanation and appeal; and bans behavioral scoring and forced AI-only service channels.
AI in Employment: Employers increasingly use AI for hiring, promotion, performance evaluation, and workplace monitoring. These systems often rely on opaque criteria, screen out qualified candidates based on irrelevant factors, and enable intrusive surveillance of workers. Amazon's warehouse monitoring systems, for example, track worker movements and productivity in real time, automatically generate discipline recommendations, and have been linked to injuries and high turnover. AI hiring systems have been found to discriminate based on name, address, or other proxies for race and gender. Emotion recognition systems purport to infer mood, engagement, or trustworthiness but lack scientific validity and invade worker dignity. Research shows that automation and AI deployment can contribute to rising wage inequality and displacement of workers in routine cognitive and manual tasks.[3] This pillar prohibits fully automated employment decisions, bans intrusive monitoring and emotion inference, protects union activity, requires transparency and human review, and establishes rights to contest AI-influenced decisions.
AI in Finance and Insurance: Financial institutions use AI for credit scoring, loan underwriting, mortgage approval, insurance pricing, and claims decisions. These systems can improve efficiency but also create risks of discrimination, opacity, and denial of essential services. Studies have documented that AI lending systems can discriminate based on race, even when race is not explicitly used as a variable, because the systems rely on proxies such as zip code, education, or shopping behavior. Insurance companies use AI to analyze health data, driving behavior, and other factors to set premiums or deny coverage, sometimes relying on inferences that are opaque, unreliable, or discriminatory. Because access to credit, insurance, and housing are essential to economic participation, this pillar establishes strict protections: no automated denial without human review, prohibition on opaque or discriminatory scoring, rights to explanation and appeal, bans on vulnerability profiling, and regular independent audits.
AI in Education: Schools and universities are adopting AI for grading, plagiarism detection, admissions, student monitoring, and personalized learning. These systems raise concerns about accuracy, bias, privacy, and the role of human educators. AI plagiarism detectors have been shown to have high false-positive rates, particularly for non-native English speakers, and can wrongly accuse students of cheating. AI grading systems may not recognize creativity or critical thinking. Continuous monitoring and emotion recognition systems invade student privacy and create surveillance environments. This pillar establishes that AI may assist but not replace human instruction, may not be the sole basis for grading or high-stakes academic decisions, prohibits invasive surveillance and emotion inference, protects student data, requires bias audits, and ensures transparency and human review.
AI in Healthcare: AI systems are used for diagnosis, treatment recommendations, triage, and administrative decision-making in healthcare. While AI can assist clinicians, it cannot replace the judgment, accountability, and ethical obligations of licensed medical professionals. AI diagnostic systems have been found to underperform or misdiagnose in diverse patient populations, particularly when training data is not representative. AI systems used to deny or delay insurance coverage or treatment authorization can cause severe harm. This pillar establishes that AI may assist but not replace clinicians in high-risk decisions, prohibits systems designed for emotional manipulation, and requires evaluation for mental health harms, including addiction, compulsive use, and amplification of self-harm content.
Military AI and Autonomous Weapons: The development of AI-enabled autonomous weapons systems — often called "killer robots" — is one of the most significant emerging threats to international humanitarian law and global security. Fully autonomous weapons that can select and engage targets without meaningful human control raise fundamental concerns: they cannot reliably distinguish combatants from civilians; they remove human judgment, accountability, and moral responsibility from lethal force decisions; they risk rapid escalation and accidental conflict; and they threaten to destabilize international security by lowering the barrier to use of force. The International Committee of the Red Cross, United Nations experts, and civil society groups have called for prohibitions or strict regulation of autonomous weapons. This pillar bans AI systems that can independently select and engage targets with lethal force, prohibits AI in nuclear command and control, requires meaningful human decision-making for all use of force, mandates logging and auditability, requires congressional authorization for new capabilities, and supports international treaties to limit military AI.
Synthetic Media and Deepfakes: Generative AI systems can now create highly realistic fake images, videos, and audio of real people saying or doing things they never said or did. These "deepfakes" have been used for non-consensual sexual content, fraud, impersonation, political manipulation, and disinformation. The Brennan Center for Justice has warned that AI-generated content poses material risks to elections and could undermine public trust in shared reality.[7] Deepfakes of political candidates have already appeared in campaign ads and social media, and detection tools lag behind generation capabilities. This pillar bans deceptive use of synthetic media: prohibition on impersonation for fraud, non-consensual sexual content, false depiction causing harm, political manipulation, and misleading about public officials or events; requires disclosure and provenance markers; but allows parody, journalism, and consensual use.
Environmental Impact of AI Infrastructure: Large-scale AI systems and data centers consume massive amounts of energy and water, contributing to climate change and straining local resources. A single training run for a large language model can emit as much carbon as several cars over their lifetimes.[8] Data centers use significant quantities of water for cooling, sometimes in drought-prone regions. The materials used in AI hardware, including rare earth elements, often come from supply chains with poor environmental and labor standards. AI infrastructure operators have been criticized for greenwashing — making selective or misleading environmental claims. This pillar requires that AI infrastructure meet carbon neutrality or carbon-negative requirements, disclose energy and water usage, internalize environmental costs, undergo impact assessments, meet durability and recycling standards, and not disproportionately burden disadvantaged communities.
Internet Infrastructure and Net Neutrality: Net neutrality — the principle that internet service providers (ISPs) must treat all data equally and not discriminate based on content, source, or destination — is essential to an open internet. Without net neutrality protections, ISPs can block, throttle, or prioritize traffic for competitive advantage or political influence, effectively acting as gatekeepers to information. The FCC's 2017 repeal of net neutrality rules allowed ISPs to engage in discriminatory practices, and subsequent attempts to restore protections have faced legal and political challenges. This pillar treats ISPs as common carriers, prohibits content-based discrimination, allows narrow technical exceptions, and protects against administrative rollback.
Age Verification and Privacy: Proposals to require age verification for access to online content — ostensibly to protect children — often function as mass identification and surveillance infrastructure. Identity-based age verification systems require users to submit government ID or biometric data, creating centralized databases of identity and browsing activity. Such systems undermine anonymous internet access, chill free expression, and create security and privacy risks. This pillar prohibits mandatory identity-based age verification systems that create tracking databases, allows privacy-preserving alternatives such as zero-knowledge proofs or local device verification, narrowly scopes any permitted requirements to specific high-risk content, and bans use of age verification as a surveillance or censorship proxy.
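The minimal-disclosure principle behind privacy-preserving age verification can be illustrated with a short sketch. This is a hypothetical, simplified flow, not a real standard: a trusted issuer attests to a single boolean ("over 18") bound to a site-supplied nonce, and the site verifies that attestation without ever receiving a name, ID number, or anything linkable to browsing activity. The function names and token format are invented for illustration, and the symmetric HMAC stands in for what would in practice be a public-key signature or a true zero-knowledge proof, solely to keep the sketch dependency-free.

```python
# Hypothetical sketch of minimal-disclosure age verification.
# Assumption: the HMAC key models an issuer's signing capability;
# a real deployment would use public-key signatures or ZK proofs,
# so the verifier would hold no secret shared with the issuer.
import hmac
import hashlib
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the credential issuer


def issue_token(over_18: bool, nonce: str) -> dict:
    """Issuer signs ONLY the boolean claim and the verifier's nonce --
    no identity attributes are included in the token."""
    claim = json.dumps({"over_18": over_18, "nonce": nonce}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def verify_token(token: dict, expected_nonce: str) -> bool:
    """Website checks integrity and freshness; it learns one bit
    (over 18 or not) and nothing about who the user is."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    claim = json.loads(token["claim"])
    return claim["nonce"] == expected_nonce and claim["over_18"] is True


# Example flow: site issues a fresh challenge, user's device presents
# an attestation, access is granted with no identity database created.
nonce = secrets.token_hex(8)
token = issue_token(over_18=True, nonce=nonce)
assert verify_token(token, nonce)
```

Because the nonce is fresh per request, tokens cannot be replayed across sites to build a browsing profile, which is the property that distinguishes this design from identity-based verification built on centralized ID databases.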
International Dimensions and Coordination: The United States cannot govern AI and digital technology in isolation. AI development, deployment, and misuse are global phenomena. International cooperation is necessary to establish shared norms, prevent destabilizing arms races, address cross-border harms, and ensure that AI development aligns with democratic values rather than authoritarian control. This pillar supports international treaties to limit military AI, control exports of high-risk technologies, coordinate on platform regulation and synthetic media, and establish transparency and accountability standards. Where other parties violate treaty obligations, the pillar provides for strong enforcement while maintaining core prohibitions on the most dangerous uses (AI control of nuclear weapons, fully autonomous lethal targeting) even during noncompliance.
Implementation and Oversight Structures: The pillar establishes several cross-cutting implementation mechanisms: public registries of government AI systems; mandatory independent audits for high-risk systems; sunset and reauthorization requirements; rights to explanation, appeal, and human review; prohibition on using proprietary claims to avoid oversight; logging and forensic review capabilities; protected researcher access to study algorithmic harms; and explicit bans on circumvention through private vendors or commercial data purchases. These mechanisms ensure that AI governance is not aspirational but operationally enforceable.
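The logging and forensic-review mechanism listed above depends on records that cannot be quietly rewritten after the fact. A common way to achieve this is a hash chain: each decision record commits to the hash of the previous record, so any later edit breaks every subsequent link. The sketch below is a minimal illustration of that technique; the field names and schema are assumptions for demonstration, not a mandated format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_record(log: list[dict], system_id: str, decision: str) -> None:
    """Append a decision record chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"system_id": system_id, "decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Forensic check: recompute every hash and link in order."""
    prev = GENESIS
    for rec in log:
        body = {k: rec[k] for k in ("system_id", "decision", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False  # a record was altered or reordered
        prev = rec["hash"]
    return True

log: list[dict] = []
append_record(log, "benefits-eligibility-v2", "approved")
append_record(log, "benefits-eligibility-v2", "denied")
print(verify_chain(log))        # True: chain intact
log[0]["decision"] = "denied"   # simulate after-the-fact tampering
print(verify_chain(log))        # False: the forensic check detects the edit
```

This is what makes the oversight mechanisms "operationally enforceable" rather than aspirational: auditors can prove whether a log presented in litigation or review is the log that was actually written.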
Foundation Model Transparency and Safety: Foundation models — large-scale AI systems trained on broad datasets that can be adapted to multiple applications — represent a qualitatively different governance challenge from narrow AI applications. The EU AI Act (2024) established tiered obligations for general-purpose AI models, including documentation, transparency, and safety evaluation requirements calibrated to model capability.[9] The United States has no equivalent framework; voluntary commitments by AI developers — including the Frontier AI Safety Commitments signed at the 2024 AI Seoul Summit — cover only the largest developers and are not legally binding. Pre-deployment registration, training data disclosure, and mandatory red-team safety evaluation are the minimum accountability infrastructure for a technology that affects hundreds of millions of people and presents risks that developers themselves acknowledge may be severe.
Neurological Data and Cognitive Privacy: Brain-computer interface technology is advancing rapidly from clinical applications into consumer and commercial markets. Several states have enacted neural data protection laws, recognizing that data derived from brain activity represents an unprecedented intrusion into cognitive privacy. The commercial collection of neural data for attention monitoring, emotional profiling, and consumer targeting — without informed consent and without regulation — risks creating a new category of intimate surveillance with profound implications for privacy, autonomy, and cognitive liberty.
Platform Portability and Interoperability: The EU's General Data Protection Regulation (Article 20) and Digital Markets Act have established portability and interoperability rights for digital platforms that the United States lacks. Platform lock-in — the inability to migrate data, contacts, and social graphs between competing services — is a primary mechanism through which dominant platforms maintain market power despite user dissatisfaction. Data portability rights and mandatory interoperability standards are essential complements to antitrust enforcement for addressing the structural monopoly power of large platforms.
AI Liability Frameworks: The EU AI Liability Directive (proposed) and Product Liability Directive reform address the gap between AI harm and legal accountability by establishing disclosure obligations and rebuttable presumptions of causation where AI opacity makes direct proof impossible. The United States has no AI-specific liability framework; harmed individuals must navigate general tort law designed for visible, traceable causes of harm. Strict liability for high-risk AI applications, combined with record retention and incident reporting requirements, would align developer incentives with safety outcomes and provide harmed individuals with meaningful legal remedies.
The 362 positions in this pillar reflect a comprehensive effort to ensure that artificial intelligence and digital systems enhance rather than undermine democracy, rights, fairness, and human dignity. They are grounded in the principle that technology must serve people and remain subject to democratic accountability, constitutional constraint, and human values.