Foundation II: Clean Democracy

Information & Media

★ ★ ★

To protect the flow of truthful information from power to the public while preventing coordinated manipulation, foreign interference, and the weaponization of information systems against democratic self-governance.

57 Total Positions: 0 Active · 0 Partial · 57 Proposed
Development Status
🟡 Active — under expansion
Press freedom and journalist protection rules are present; substantial gaps in media ownership limits, platform accountability, net neutrality, and public media funding are addressed in this audit cycle.
⚠ Content Gap: Media ownership concentration rules need detailed breakup and divestiture mechanisms
⚠ Content Gap: Platform liability framework and disinformation research citations needed

Purpose

To protect the flow of truthful information from power to the public while preventing coordinated manipulation, foreign interference, and the weaponization of information systems against democratic self-governance.

Core Principle

Free press and open information flow are essential to democracy, but they must be protected from systematic manipulation, foreign influence operations, and corporate or state weaponization.

The Problem It Solves

### Press Freedom Under Threat

Journalists face retaliation, surveillance, and legal harassment for reporting on matters of public interest. Source confidentiality is routinely compromised through government subpoenas and surveillance tools. National security classification systems are misused to conceal wrongdoing rather than protect legitimate secrets. Independent and digital journalists often lack the institutional protections afforded to legacy media organizations.

The result is a chilling effect on investigative journalism precisely when democratic accountability depends on it most. Whistleblowers who expose corruption face prosecution while the corruption itself goes unpunished. Government can compel disclosure of sources, communications, and unpublished materials without meaningful judicial oversight. The modern press operates in a surveillance environment where digital communications can be intercepted, analyzed, and used against journalists and their sources.

Key Reform Areas

### Press and Journalist Protections

- Federal shield law protecting source confidentiality
- Prohibition on retaliation, surveillance, or prosecution for public-interest reporting
- Mandatory independent judicial review before compelled disclosure
- Limits on use of subpoenas, warrants, or surveillance tools against journalists
- Inclusion of independent and digital journalists in all protections
- Transparency requirements when government targets journalists or news organizations
- Prohibition on misuse of classification or national security systems to conceal wrongdoing

Design Logic — How These Positions Work Together

Press Freedom Architecture

This pillar establishes constitutional-level protections for journalists, sources, and the information flow between power and public. It operates on an activity-based definition of journalism (what you do, not who you are) to include independent and digital journalists. Protections include source confidentiality, limits on compelled disclosure, prohibitions on retaliation and surveillance, and mandatory judicial review before any government action targeting journalists or their communications.

The framework prevents classification and national security systems from being used to conceal illegality or wrongdoing. It requires transparency when government actions target journalists. It protects whistleblowers who provide information to the press about corruption or misconduct.

These protections are structural, not preferential. They exist because information flow from power to public is a system function necessary for democratic accountability. Without protected channels for revealing wrongdoing, corruption becomes self-concealing.

Full Policy Platform

Every rule in this pillar, organized by policy area. Active rules are current platform commitments. Partial rules are in development. Proposed rules are planned for future inclusion.

MED-PRS Press Freedom and Journalist Protection 0/9 active
MDIA-PRSS-0001 Proposed

This policy requires the federal government to protect journalists engaged in lawful news reporting and information gathering, so reporters can do their work without fear of government interference.

Protect Journalists

Protect journalists engaged in lawful newsgathering and reporting.

Establishes constitutional-level protection for journalistic activity. Protection is activity-based (what you do) rather than identity-based (who you are), ensuring coverage of independent and digital journalists.

MDIA-PRSS-0002 Proposed

This policy protects journalists' ability to keep their sources secret, ensuring that people who provide information to reporters cannot be forced into the open and will not face retaliation for speaking to the press.

Protect Sources

Protect confidential sources.

Federal shield law protection. Journalists cannot be compelled to reveal sources except under narrowly defined, high-threshold judicial standards. Includes anonymous sources and whistleblowers who provide information to the press.

MDIA-PRSS-0003 Proposed

This policy limits the circumstances under which the government or courts can force journalists to hand over notes, communications, or source information, preserving the confidentiality essential to investigative reporting.

Limit Compelled Disclosure

Limit compelled disclosure.

Government may not compel disclosure of sources, unpublished materials, or journalistic communications except under strict judicial review with high evidentiary threshold. Requires showing that information is essential, unavailable through other means, and that disclosure serves a compelling government interest that outweighs press freedom.

MDIA-PRSS-0004 Proposed

This policy prohibits retaliation against journalists for their reporting or professional activities, protecting reporters from being punished by governments, employers, or other parties because of their work.

Ban Retaliation Against Journalists

Ban retaliation against journalists.

Prohibits retaliation, prosecution, or adverse action against journalists for publishing information of public interest, except in clearly defined and constitutionally limited cases (e.g., direct incitement to imminent violence, genuine national security harm with judicial review). Prevents weaponization of legal system against the press.

MDIA-PRSS-0005 Proposed

This policy prevents the government from misusing national security and secrecy laws to silence or punish journalists, ensuring these laws are not turned into tools to suppress the free press.

Prevent Misuse of Secrecy Laws

Prevent misuse of secrecy laws.

Prohibits use of classification, national security designations, or secrecy systems to conceal wrongdoing, corruption, illegality, or abuse of power. Classification authority cannot be used to avoid accountability or prevent oversight. Requires that secrecy serve legitimate security purposes, not institutional self-protection.

MDIA-PRSS-0006 Proposed

This policy requires that before any journalist is forced to disclose source information, a judge must independently review the government's request and determine it is legally justified, preventing compelled disclosure without judicial oversight.

Judicial Review Before Disclosure

Judicial review before disclosure.

Mandatory independent judicial review before any government action to compel disclosure of journalistic materials, communications, or sources. Review must be by a court independent of the investigating or prosecuting authority. Requires adversarial process with representation for press interests.

MDIA-PRSS-0007 Proposed

This policy limits the government's ability to surveil journalists' communications and activities, protecting the confidential relationships reporters depend on to gather news from sensitive sources.

Limit Surveillance of Journalists

Limit surveillance of journalists.

Strict limits on use of surveillance tools, metadata collection, subpoenas, or warrants targeting journalists or their communications. Requires heightened judicial scrutiny, showing of compelling need, and exhaustion of alternative investigative methods. Prohibits backdoor access through third-party service providers without judicial oversight.

MDIA-PRSS-0008 Proposed

This policy ensures that press freedom protections extend to freelancers and independent journalists — not just staff reporters at major outlets — so all people engaged in journalism receive the same legal safeguards.

Include Independent Journalists

Include independent journalists.

Explicitly extends all press freedom protections to independent journalists, digital media creators, freelance reporters, and non-traditional news organizations. Closes the legacy-media loophole where protections apply only to established institutional press. Activity-based standard: if you engage in journalism, you receive journalism protections.

MDIA-PRSS-0009 Proposed

This policy requires the government to disclose when it has monitored or targeted journalists as part of any investigation, so the press and the public can know when press freedom is being threatened.

Transparency on Targeting Journalists

Transparency on targeting journalists.

Requires reporting and transparency when government actions target journalists, news organizations, or journalistic communications. Annual public reporting (with appropriate security redactions) on number and type of actions taken. Creates oversight mechanism and accountability for press-freedom impacts.

MED-OWN Media Ownership and Consolidation 0/8 active
MDIA-OWNS-0001 Proposed

This policy sets firm ownership limits in local news markets: no single company may own more than one daily newspaper, one TV station, and one radio station in the same area. The FCC must enforce these limits based on community diversity rather than financial arguments, and cannot grant waivers based solely on hardship claims.

Establish and enforce media ownership limits to prevent monopoly control of local news markets

No single entity may own more than one daily newspaper, one television broadcast station, and one radio station in the same local market; the FCC must apply and enforce structural media ownership limits based on principles of market diversity, local competition, and community voice rather than spectrum efficiency or commercial considerations; waivers may not be granted based solely on financial arguments.

The collapse of local news ownership diversity — driven by private equity acquisitions, chain consolidation, and regulatory weakening — has eliminated thousands of local news outlets and produced news deserts across the country. Restoring ownership limits is a structural prerequisite for restoring local press accountability.

MDIA-OWNS-0002 Proposed

This policy prohibits private equity funds, hedge funds, and similar investment firms — whose business model centers on cutting costs to maximize returns — from buying controlling stakes in local news outlets. Existing acquisitions that have already gutted local newsrooms must face a divestiture review, because journalism must serve communities, not investors.

Prohibit private equity and hedge fund acquisition of local news outlets

Financial entities whose business model is extractive cost-cutting — including private equity funds, hedge funds, and similar investors with no demonstrated commitment to journalism — may not acquire majority stakes in local news outlets; existing acquisitions that have led to documented closure or severe gutting of newsroom capacity must be subject to divestiture review; journalism assets must be treated as community resources, not investment vehicles.

Private equity acquisition of local newspapers has consistently resulted in severe staff reductions, conversion to low-quality content, and eventual closure. This pattern destroys community information infrastructure for short-term financial gain. It is a public harm that warrants structural prohibition rather than case-by-case review.

MDIA-OWNS-0003 Proposed

This policy requires every media outlet — print, broadcast, digital, and online — to publicly disclose who ultimately owns it, who funds it, and any financial ties to political or advocacy groups, in a searchable, machine-readable format maintained by the FCC. Undisclosed foreign ownership and funding from political entities are banned.

Require disclosure of media ownership, beneficial ownership, and funding sources

All media entities — print, broadcast, digital, and online — must publicly disclose their ultimate ownership, beneficial ownership chain, major funding sources, and financial relationships with political or advocacy organizations; disclosure must be in a publicly accessible, machine-readable format maintained by the FCC; undisclosed foreign ownership and undisclosed funding from political entities are prohibited.

Opacity in media ownership enables foreign influence operations, undisclosed political propaganda, and astroturfed "news" organizations. Disclosure requirements are the minimum precondition for informed public assessment of the credibility and interests of news sources.
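
For concreteness, a filing of the kind this rule requires could be published as a simple machine-readable record. The sketch below, in Python, is illustrative only: the field names, the outlet, and the owners are hypothetical, and the actual schema would be defined by FCC rulemaking, not this platform.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class OwnershipDisclosure:
    """One machine-readable ownership filing for a media entity (illustrative)."""
    outlet_name: str
    outlet_type: str                    # "print" | "broadcast" | "digital"
    ultimate_owner: str                 # natural person(s) at the top of the chain
    beneficial_ownership_chain: list[str] = field(default_factory=list)
    major_funding_sources: list[str] = field(default_factory=list)
    political_financial_ties: list[str] = field(default_factory=list)
    foreign_ownership_pct: float = 0.0  # undisclosed foreign ownership is prohibited

record = OwnershipDisclosure(
    outlet_name="Example County Gazette",        # hypothetical outlet
    outlet_type="print",
    ultimate_owner="Jane Doe",
    beneficial_ownership_chain=["Gazette Holdings LLC", "Doe Family Trust"],
    major_funding_sources=["subscriptions", "local advertising"],
)
print(json.dumps(asdict(record), indent=2))      # publicly accessible, machine-readable
```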

MDIA-OWNS-0004 Proposed

This policy creates a publicly funded but independently governed journalism fund that provides grants and low-interest loans to nonprofit, employee-owned, and community-owned news outlets. The fund's board must be fully independent, and no commercial media company, political party, advocacy organization, or political candidate may receive or control these funds.

Create a public interest journalism fund to support local and investigative reporting

A federally chartered but independently governed public interest journalism fund must provide grants and low-interest loans to nonprofit, employee-owned, and community-owned news outlets; funding must be insulated from political interference through a multi-stakeholder independent board; no recipient organization may be owned or controlled by a commercial media company, political party, advocacy organization, or political candidate or their affiliates.

Public funding for journalism is essential to maintaining public interest coverage in markets that cannot support advertising-based news businesses. Independence from government editorial control can be achieved through structural design — as with public broadcasting in countries with more robust press freedom than the U.S.

MDIA-OWNS-0005 Proposal
🔵 Proposal — Under Review

This policy requires Congress to write firm media ownership caps into federal law, preventing any company from owning both a newspaper and a broadcast station in the same market, reaching more than 25% of the national TV audience, or owning more than three broadcast stations in a single market. These caps cannot be waived by the FCC, and companies that violate them must sell off stations within 24 months.

Enact statutory media ownership caps superseding FCC v. Prometheus Radio Project deregulation

Congress must enact statutory media ownership caps preventing any single entity from: (a) owning both a daily newspaper and any broadcast station in the same market; (b) reaching more than 25% of the national television audience through owned-and-operated broadcast stations; (c) owning more than three broadcast stations (no more than two in the same service — AM, FM, or TV) in any single market; waivers may not be granted based on financial hardship arguments; violations trigger mandatory divestiture with a 24-month compliance window; these caps must be codified in statute and are not subject to FCC waiver or rulemaking revision.

In FCC v. Prometheus Radio Project, 592 U.S. 414 (2021)[4], the Supreme Court upheld the FCC's 2017 relaxation of newspaper-broadcast cross-ownership rules, approving the agency's procedural compliance while not mandating further deregulation. The holding leaves ownership limits entirely at the FCC's discretion — a structural vulnerability to administrative capture. Statutory ownership caps enacted by Congress cannot be undone by administrative action and are not subject to notice-and-comment reversal. These limits were in effect for decades; their removal has accelerated local news consolidation and news desert formation. A 25% national audience reach cap reflects the pre-2004 FCC standard. Reference: FCC v. Prometheus Radio Project, 592 U.S. 414 (2021); Napoli, P. M., Stonbely, S., McCollough, K., & Renninger, B. (2017). Local television news and the public interest. The International Journal of Press/Politics, 22(3), 338–363.
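
The three caps are mechanical enough to express as a compliance check. A minimal sketch: the function below mirrors thresholds (a) through (c) from the provision, but the data model and function names are assumptions for illustration, not an existing FCC tool.

```python
def violations(holdings, national_reach_pct):
    """Return the list of cap violations for one owner.

    holdings: mapping of market name -> list of properties, each a dict
              with a "kind" of "newspaper", "AM", "FM", or "TV".
    national_reach_pct: share of the national TV audience reached by the
              owner's owned-and-operated stations.
    """
    found = []
    if national_reach_pct > 25.0:                      # cap (b)
        found.append("national TV audience reach exceeds 25%")
    for market, props in holdings.items():
        kinds = [p["kind"] for p in props]
        broadcast = [k for k in kinds if k in ("AM", "FM", "TV")]
        if "newspaper" in kinds and broadcast:         # cap (a)
            found.append(f"{market}: newspaper/broadcast cross-ownership")
        if len(broadcast) > 3:                         # cap (c), total stations
            found.append(f"{market}: more than 3 broadcast stations")
        for svc in ("AM", "FM", "TV"):                 # cap (c), per-service limit
            if broadcast.count(svc) > 2:
                found.append(f"{market}: more than 2 {svc} stations")
    return found

print(violations(
    {"Springfield": [{"kind": "newspaper"}, {"kind": "TV"}, {"kind": "FM"}]},
    national_reach_pct=12.0,
))
# -> ['Springfield: newspaper/broadcast cross-ownership']
```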

MDIA-OWNS-0006 Proposal
🔵 Proposal — Under Review

This policy requires local TV stations to display an on-screen disclosure — for the full duration of any segment — when that content was written or editorially directed by a national corporate owner or outside entity rather than local journalists. News-sharing agreements between co-owned stations must be filed with the FCC within 30 days of signing, and violations carry fines of $50,000 per occurrence.

Require on-screen disclosure of nationally produced "must-run" and "news-sharing" content presented as local journalism

When a local broadcast station airs content written, produced, or editorially mandated by an entity other than the local newsroom — including national corporate owners, syndication partners, or political operations — the station must display a visible on-screen disclosure identifying the content's originating organization for the full duration of the segment; "news-sharing" agreements between co-owned or affiliated stations in different markets must be publicly filed with the FCC within 30 days of execution; broadcasters may not present nationally produced content as original local journalism without disclosure; violations are subject to per-occurrence fines of $50,000 and license renewal conditions; the FCC must issue a rulemaking implementing this requirement within one year of enactment.

Sinclair Broadcast Group, which as of 2023 owned or operated more than 180 local television stations, required its stations to air nationally produced "must-run" segments — including politically charged editorial commentary and nationally coordinated news scripts — indistinguishable in format from local journalism. A 2018 compilation of Sinclair-owned local anchors reading identical scripts on "fake news" attracted widespread coverage. Existing FCC sponsorship identification rules (47 C.F.R. § 73.1212) require disclosure of sponsored content but have not been applied to must-run editorial mandates from corporate owners. This provision closes that gap. Reference: Chavez, C. (2018, March 7). Sinclair's forced news script outrages journalists. CNN Business; FCC Sponsorship Identification Rules, 47 C.F.R. § 73.1212.

MDIA-OWNS-0007 Proposal
🔵 Proposal — Under Review

This policy requires that when identical editorial content is broadcast simultaneously on two or more stations — such as when a corporate owner sends the same script to multiple local stations at once — viewers must be told the content did not originate in the local newsroom. Content sent to three or more stations for coordinated broadcast must be filed with the FCC as sponsored content within 72 hours, with fines of $100,000 per station per violation.

Require sponsored-content disclosure for simultaneous broadcast of identical editorial content across multiple stations

When identical or substantially identical editorial content is aired simultaneously or near-simultaneously across two or more broadcast stations under common ownership, shared service agreement, or third-party content distribution arrangement, that content must be disclosed to viewers as originating outside the local newsroom; content distributed to three or more stations for coordinated broadcast must be filed with the FCC as sponsored content within 72 hours of broadcast; the FCC must issue a rulemaking clarifying that coordinated content distribution constitutes sponsored programming subject to 47 U.S.C. § 317; violations carry per-instance fines of $100,000 per broadcast station with escalating license renewal conditions; Congress must codify this requirement by statute if the FCC fails to issue a final rule within 18 months of enactment.

The FCC's existing sponsorship identification requirements under 47 U.S.C. § 317 and 47 C.F.R. § 73.1212[5] require disclosure of any broadcast for which consideration has been received. When a national content operation directs stations to air coordinated content — through network affiliation, corporate mandate, or shared content arrangements — the existing statutory framework applies but has not been consistently enforced. The 2018 Sinclair synchronized-script episode, in which dozens of local anchors read identical text warning against "fake news," demonstrated how broadcast infrastructure can be used to distribute national political messaging under the credibility of local news. The anti-synchronization rule requires both disclosure and enforcement. Reference: 47 U.S.C. § 317; FCC Sponsorship Identification Rules, 47 C.F.R. § 73.1212.
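
Detecting the synchronized-broadcast pattern this rule targets is tractable in practice. A minimal sketch, assuming a regulator or researcher has transcripts and air times: normalize each script, hash it, and flag any hash aired by several stations within a short window. The method, thresholds, and station call signs are illustrative assumptions, not anything the provision mandates.

```python
import hashlib
from collections import defaultdict

def flag_coordinated(broadcasts, min_stations=3, window_hours=24):
    """broadcasts: list of (station_id, air_hour, transcript) tuples.
    Returns transcript hashes aired by >= min_stations within the window."""
    by_hash = defaultdict(list)
    for station, hour, text in broadcasts:
        norm = " ".join(text.lower().split())              # normalize case/whitespace
        digest = hashlib.sha256(norm.encode()).hexdigest()
        by_hash[digest].append((station, hour))
    flagged = {}
    for digest, airs in by_hash.items():
        stations = {s for s, _ in airs}
        hours = [h for _, h in airs]
        if len(stations) >= min_stations and max(hours) - min(hours) <= window_hours:
            flagged[digest] = sorted(stations)
    return flagged

airs = [("WAAA", 5, "This is extremely dangerous to our democracy."),
        ("WBBB", 6, "This is extremely dangerous to our democracy."),
        ("WCCC", 7, "this is extremely  dangerous to our democracy.")]
print(flag_coordinated(airs))   # one hash, three stations, 2-hour spread -> flagged
```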

MDIA-OWNS-0008 Proposal
🔵 Proposal — Under Review

This policy requires any investment firm — including private equity funds and hedge funds — that buys a local news outlet to maintain staffing at acquisition levels for at least five years and keep local coverage at no less than 80% of its pre-acquisition level. Existing employees must be given the right to buy the outlet themselves before it is sold or closed, and firms that previously cut newsroom staffing by more than 40% at any news property are presumptively banned from acquiring additional outlets.

Impose staffing maintenance requirements and employee right of first refusal on financial-entity acquisition of local news outlets

Any financial entity — including private equity funds, hedge funds, real estate investment trusts, and similar investment vehicles — that acquires a controlling interest in a local news outlet must: (a) maintain newsroom staffing at acquisition-level for a minimum of five years; (b) maintain local coverage at no less than 80% of the pre-acquisition level, measured by staff-hours devoted to coverage of the geographic community served; (c) submit to FCC public interest review before acquisition is consummated; (d) grant existing newsroom employees a right of first refusal to form an employee ownership trust at independently appraised fair market value before any decision to close or sell the outlet; financial entities that have reduced newsroom staffing by more than 40% in any acquired news property within the prior ten years are presumptively disqualified from acquiring additional news outlets absent demonstrated remediation.

Since 2005, roughly 2,900 local U.S. newspapers have closed — more than one-third of the total — leaving tens of millions of Americans in counties with no local newspaper of record.[6] Private equity and hedge fund acquisitions have accelerated this process: Alden Global Capital, as of 2023 one of the largest U.S. newspaper owners, reduced newsroom staffing by an average of 40–70% in acquired properties compared to pre-acquisition levels. The employee right-of-first-refusal provision enables employee-ownership transitions, a model that has preserved local journalism in documented cases. Reference: Abernathy, P. M. (2023). The news desert crisis. UNC Hussman School of Journalism and Media. https://www.usnewsdeserts.com; Navarrette, M., & Rau, N. (2022, November). How private equity gutted local news. ProPublica.
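
The numeric tests in clauses (a) and (b), and the disqualification trigger, reduce to simple comparisons. A minimal sketch, assuming local coverage is measured in weekly staff-hours as the policy text suggests; all inputs below are hypothetical.

```python
def acquisition_compliant(staff_at_acquisition, staff_now,
                          local_hours_before, local_hours_now,
                          years_since_acquisition):
    within_window = years_since_acquisition < 5                 # 5-year maintenance period
    staffing_ok = staff_now >= staff_at_acquisition             # (a) no net staff reduction
    coverage_ok = local_hours_now >= 0.80 * local_hours_before  # (b) 80% coverage floor
    return (not within_window) or (staffing_ok and coverage_ok)

def presumptively_disqualified(prior_staff_cuts_pct):
    # A firm that cut any acquired newsroom by more than 40% in the
    # prior decade is presumptively barred from further acquisitions.
    return max(prior_staff_cuts_pct, default=0) > 40

print(acquisition_compliant(40, 38, 1000, 850, years_since_acquisition=2))  # False
print(presumptively_disqualified([15, 55]))                                  # True
```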

MED-NET Net Neutrality and Open Internet 0/3 active
MDIA-NETS-0001 Proposed

This policy requires internet service providers (ISPs) to treat all legal online traffic equally — they cannot slow down, block, or charge more for certain websites or services. Net neutrality would be written into federal statute rather than just agency rules, making it far harder for future administrations to roll it back.

Restore and permanently codify net neutrality in federal law

Internet service providers must treat all legal internet traffic equally without throttling, blocking, prioritizing, or charging differential rates based on content, source, destination, or service type; net neutrality must be codified in federal statute rather than FCC regulation, making it resistant to administrative reversal; ISPs may not establish paid "fast lanes" that advantage their own content or the content of business partners over competitors.

Net neutrality is a structural prerequisite for a free and open internet that serves as a public communications medium rather than a vehicle for ISP market power. Its repeal in 2017 gave ISPs legal permission to throttle and block content; restoring it by statute ends the cycle of regulatory reversal that has characterized the issue since 2015.

MDIA-NETS-0002 Proposed

This policy classifies broadband internet as a public utility — like telephone service — and subjects it to federal regulation to prevent monopoly abuse. In markets where one or two companies control internet access, this policy requires structural fixes such as sharing network infrastructure with competitors, and it prohibits states from blocking local governments from building their own public broadband networks.

Treat broadband internet access as a public utility

Broadband internet access must be classified and regulated as a common carrier telecommunications service subject to Title II of the Communications Act; monopoly and duopoly broadband markets must be subject to structural remedies including unbundling of the local loop, open access requirements, and municipal broadband authority; no state law may preempt a local government's ability to provide public broadband service to its residents.

In most U.S. markets, broadband competition is a fiction — most households have access to one or two providers. Without common carrier regulation, these monopolists can extract monopoly rents, restrict competing services, and impose arbitrary terms. Public utility status restores the regulatory framework appropriate to this market structure.

MDIA-NETS-0003 Proposed

This policy guarantees that every U.S. household has access to affordable, reliable broadband internet, treating it as essential infrastructure equivalent to electricity or water. Federal programs must fund deployment in rural and tribal areas, and subsidy programs must ensure that low-income households can afford to connect.

Guarantee universal affordable broadband access as essential public infrastructure

Every household in the United States must have access to affordable, reliable broadband internet service; federal policy must ensure deployment in rural and tribal areas and must include subsidy programs sufficient to ensure affordability for low-income households; no household may be treated as commercially unserved without a public alternative available; broadband is essential infrastructure equivalent to electricity, water, and telephone service.

Broadband access is as essential to modern economic participation, healthcare access, education, and civic engagement as telephone service was in the 20th century. The market has failed to provide universal service; public policy must guarantee it, as it has for other essential utilities.

MED-PLT Platform Accountability and Content Governance 0/7 active
MDIA-PLTS-0001 Proposed

This policy requires large digital platforms to publish clear, consistently applied content moderation rules and give users written explanations when their posts are removed or accounts suspended, along with a real appeals process. Platforms must also publish regular public reports showing how many moderation decisions they make, what categories they fall into, and whether enforcement falls harder on some communities than others.

Require transparency in platform content moderation policies and decisions

Digital platforms with significant user reach must publish clear, specific, and consistently applied content moderation policies; must provide users with written reasons for content removal or account suspension and a meaningful appeals process; and must publish regular transparency reports including the volume, category, and appeal outcome of moderation decisions, disaggregated in ways that enable analysis of disparate impact.

Opaque content moderation operates as informal censorship without due process. Consistency requirements and transparency reporting enable both users and researchers to identify systematic bias, disproportionate enforcement, and gaps in enforcement of stated policies. This is a procedural accountability requirement, not a content mandate.

MDIA-PLTS-0002 Proposed

This policy requires platforms that use algorithms to recommend or rank content to share how those systems work — including their goals, training data, and known harms — with regulators and independent researchers. Platforms cannot use trade-secret claims to block audits conducted in the public interest, including audits of election integrity, public health, or civil rights impacts.

Require platforms to publish and allow auditing of algorithmic amplification systems

Platforms that use algorithmic ranking, recommendation, or amplification systems must disclose to regulators and qualified researchers the design, training data, objective functions, and known failure modes of these systems; independent audits of algorithmic amplification for discriminatory impact, political bias, and manipulation of user behavior must be permitted; platforms may not claim trade secret privilege to evade audits that serve a public interest in election integrity, public health, or civil rights.

Algorithmic amplification systems shape what information billions of people see. The opacity of these systems — combined with their documented tendency to amplify outrage, misinformation, and extremism because such content drives engagement — poses direct threats to democratic deliberation and public health. Audit access is the minimum regulatory requirement for accountability.

MDIA-PLTS-0003 Proposed

This policy prohibits platforms from using manipulative design tactics — such as confusing privacy settings, addictive engagement features, autoplay without real user control, and difficult account-deletion flows — to steer user behavior against users' own interests. Platforms must offer genuine privacy-protective defaults and give users the option to see a simple chronological feed without algorithmic sorting.

Prohibit deceptive design practices that manipulate user behavior on information platforms

Platforms may not use deceptive design — including dark patterns that undermine informed consent, variable-reward engagement loops designed to maximize addictive use, autoplay systems without meaningful user control, and systems designed to make informed account deletion unreasonably difficult — to manipulate user behavior in ways that harm user wellbeing or undermine informed decision-making; platforms must provide genuine privacy-protective defaults and enable users to access chronological feeds without algorithmic curation.

Digital platforms systematically exploit psychological vulnerabilities through design choices optimized for engagement metrics rather than user wellbeing. These design choices — not just content — shape information consumption. Prohibiting the worst manipulative design practices protects user autonomy without requiring government to adjudicate content.

MDIA-PLTS-0004 Proposed

This policy requires platforms to place clear, persistent labels on AI-generated text, images, audio, and video so users always know whether content is real or computer-made. Deepfakes and synthetic media designed to deceive viewers about their source are prohibited when deception is the purpose, and platforms must deploy detection tools and cannot help creators conceal AI-generated content.

Require labeling of AI-generated content and synthetic media

Platforms must label AI-generated text, images, audio, and video with clear, persistent disclosures that are visible to users without requiring additional effort; AI-generated content designed to deceive viewers about its source or nature — deepfakes, voice cloning, synthetic media attributing statements to real people — must be prohibited when used in contexts where deception is the purpose; platforms must deploy reasonable detection tools and may not provide technical means to circumvent labeling requirements.

AI-generated synthetic media poses immediate threats to electoral integrity, public discourse, and individual reputation through the creation of convincing false content attributed to real people. Labeling requirements and deceptive-use prohibitions address this without restricting legitimate creative uses of AI-generated content.
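
A persistent, machine-readable label of the kind this rule contemplates could be as simple as a small metadata record bound to the content's hash. The sketch below is illustrative: the field names and the generator are assumptions, and existing provenance efforts such as C2PA define their own richer schemas.

```python
import json
from datetime import datetime, timezone

def make_ai_label(generator, content_sha256, media_type):
    """Build a minimal synthetic-media label (illustrative field names)."""
    return {
        "disclosure": "AI-generated content",    # user-visible disclosure text
        "media_type": media_type,                # "text" | "image" | "audio" | "video"
        "generator": generator,                  # tool that produced the content
        "content_sha256": content_sha256,        # binds the label to one asset
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

# "9f2c..." stands in for a real content hash; the tool name is hypothetical.
label = make_ai_label("example-image-model", "9f2c...", "image")
print(json.dumps(label, indent=2))
```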

MDIA-PLTS-0005 Proposal
🔵 Proposal — Under Review

This policy requires Congress to pass an Algorithmic Accountability Act making large platforms (over 50 million monthly U.S. users) conduct and publicly release annual audits of how their recommendation systems affect political polarization, misinformation, mental health, and civil rights. It also creates a federally chartered research access program, requires platforms to offer a chronological feed as the default option, and prohibits platforms from algorithmically amplifying content their own systems have already flagged as misinformation or coordinated fake behavior.

Require annual algorithmic impact assessments, researcher data access, and prohibition on platform-amplification of platform-flagged misinformation

Congress must enact an Algorithmic Accountability Act requiring: (a) platforms with more than 50 million monthly active U.S. users to conduct and publicly disclose annual algorithmic impact assessments examining the effects of recommendation and amplification systems on political polarization, misinformation spread, mental health outcomes, and civil rights; (b) the creation of a federally chartered Digital Research Access Program granting qualified independent researchers secure access to platform data under non-disclosure agreements, administered by a neutral body insulated from platform control over research findings; (c) platforms to offer users, as a default option, a chronological or user-curated feed free from algorithmic amplification; (d) prohibition on algorithmic amplification of content that the platform's own internal content classification systems flag with high confidence as misinformation, health disinformation, or coordinated inauthentic behavior — the platform's own flagging determination triggers the amplification prohibition, without requiring government determination of what constitutes truth.

In 2021, the Wall Street Journal published the "Facebook Files" — internal research documents showing that Facebook's own researchers had determined that the platform's algorithmic amplification of divisive content increased political polarization, and that Instagram's recommendation algorithms caused significant mental health harm to teenage girls, particularly around body image. These findings were suppressed internally and not disclosed to regulators or the public. Frances Haugen's congressional testimony on October 5, 2021, placed this evidence in the public record. The "own-flag" amplification prohibition is specifically designed to avoid government-mandated content moderation: it requires platforms to act on their own findings, not on government determinations of truth or falsity. Reference: Horwitz, J., & Seetharaman, D. (2020, May 26). Facebook executives shut down efforts to make the site less divisive. The Wall Street Journal; Haugen, F. (2021, October 5). Testimony before the Senate Commerce Committee, Subcommittee on Consumer Protection, Product Safety, and Data Security. U.S. Senate.
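
The own-flag mechanism in clause (d) is worth making concrete, since it is the part of the design that avoids government content determinations. A minimal sketch: amplification is gated on the platform's own classifier output. The labels, the confidence threshold, and the data model are all assumptions for illustration.

```python
FLAGGED_LABELS = {"misinformation", "health_disinformation",
                  "coordinated_inauthentic_behavior"}
CONFIDENCE_THRESHOLD = 0.90   # "high confidence" per the provision; exact value assumed

def may_amplify(item):
    """item: dict carrying the platform's own classification output, e.g.
    {"id": "...", "flags": {"misinformation": 0.97}}."""
    for label, confidence in item.get("flags", {}).items():
        if label in FLAGGED_LABELS and confidence >= CONFIDENCE_THRESHOLD:
            return False    # content may still be hosted and found, just not amplified
    return True

print(may_amplify({"id": "a1", "flags": {"misinformation": 0.97}}))  # False
print(may_amplify({"id": "a2", "flags": {"misinformation": 0.40}}))  # True
```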

MDIA-PLTS-0006 Proposal
🔵 Proposal — Under Review

This policy requires Congress to ban manipulative user-interface tricks — called dark patterns — on any platform, app, or subscription service with more than one million U.S. users. Banned practices include guilt-trip cancellation prompts, confusing unsubscribe flows, hidden billing charges, and consent dialogs designed to steer users away from privacy-protective choices. The FTC may fine violators up to $50,000 per affected user per violation, and harmed users can sue for $500 per incident plus attorney fees.

Ban deceptive dark patterns on platforms, apps, and subscription services — FTC enforcement and private right of action

Congress must enact a Dark Patterns Prohibition Act that: (a) defines and bans deceptive user-interface patterns — including confirmshaming, roach-motel account-deletion flows, hidden unsubscribe mechanisms, forced continuity billing without clear advance notice, and consent-harvesting dialogs designed to obscure the privacy-invasive choice — on any platform, app, or subscription service with more than one million U.S. users; (b) directs the FTC to issue a rulemaking specifying prohibited pattern categories within one year of enactment and update the list by rulemaking as design evasion techniques evolve; (c) authorizes FTC civil penalties of up to $50,000 per violation per affected user; (d) creates a private right of action for any user harmed by a prohibited dark pattern, with statutory damages of $500 per occurrence plus attorneys' fees and costs; (e) applies without exception to platforms, mobile applications, subscription services, and e-commerce sites.

Dark patterns — deceptive interface designs that exploit cognitive biases to manipulate user choices — are pervasive across platforms, apps, and subscription services. The FTC's 2022 report Bringing Dark Patterns to Light documented their widespread use to harvest personal data, prevent account cancellation, and drive addictive engagement. The EU's Digital Services Act (2022), Art. 25, and Digital Markets Act include dark pattern prohibitions as baseline requirements. A private right of action is structurally necessary because FTC enforcement capacity is insufficient to address the full scope of violations; individual user harm requires individual standing. Reference: Federal Trade Commission. (2022). Bringing dark patterns to light. https://www.ftc.gov/reports/dark-patterns; Digital Services Act, Regulation (EU) 2022/2065, Art. 25.

MDIA-PLTS-0007 Proposal
🔵 Proposal — Under Review

This policy requires large platforms (over 50 million monthly U.S. users) to offer a fully chronological feed as the default — algorithmic ranking available only if users actively choose it — and to turn off autoplay on all video by default, resetting that setting each new session. Infinite scroll is banned for users under 18, parents must receive tools to set daily limits and review content categories for users under 16, and violations carry fines of up to $100,000 per day per prohibited design feature.

Mandate chronological feed as default; prohibit autoplay without session opt-in; ban infinite scroll for users under 18; require parental oversight tools

Platforms with more than 50 million monthly active U.S. users must: (a) offer a fully chronological feed as the system default for all users — algorithmic ranking is available only to users who affirmatively opt in per account and is clearly labeled as non-chronological; (b) disable autoplay on all video content by default — autoplay may be enabled only by explicit user opt-in that resets at the start of each new session; (c) disable infinite-scroll interfaces for users under 18, replacing them with paginated interfaces requiring affirmative user action to load additional content; (d) provide mandatory parental oversight tools allowing parents or guardians of users under 16 to set daily time limits, review categories of content accessed, and receive weekly usage summaries — without access to private message content; (e) prominently disclose to users when content is algorithmically curated rather than chronological; (f) the FTC may enforce violations with civil penalties of up to $100,000 per day per prohibited design feature; platforms are additionally liable through a private right of action by users or parents for violations affecting users under 18, with $1,000 statutory damages per violation plus attorneys' fees.

Internal Facebook research disclosed by Frances Haugen in 2021 showed that Instagram's recommendation algorithms caused significant mental health harm to teenage girls — including heightened risk for eating disorders and suicidal ideation — and that the company knew and did not act.[10] Infinite scroll eliminates natural stopping points in content consumption and is a documented addictive design feature that exploits adolescent impulse-control vulnerabilities. Autoplay removes the friction that enables intentional content choices. Chronological feeds disrupt the outrage-amplification dynamic documented in platform recommendation research. The parental oversight tool requirement does not compromise user privacy for message content while providing families actionable usage data. Reference: Haugen, F. (2021, October 5). Testimony before the Senate Commerce Committee, Subcommittee on Consumer Protection, Product Safety, and Data Security. U.S. Senate; Harris, T., & Raskin, A. (2020). The social dilemma. Center for Humane Technology.
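
Clauses (a) through (c) amount to a defaults table keyed on age and per-session opt-ins. A minimal sketch of how a platform might resolve those defaults; the data model is assumed, while the session-reset autoplay behavior and the under-18 pagination rule follow the provision text.

```python
def resolve_feed_settings(age, opted_into_ranking, autoplay_opt_in_this_session):
    return {
        # (a) chronological is the system default; ranking requires affirmative opt-in
        "feed_order": "algorithmic" if opted_into_ranking else "chronological",
        # (b) autoplay is off unless the user opted in during *this* session
        "autoplay": bool(autoplay_opt_in_this_session),
        # (c) no infinite scroll for minors; paginated loading instead
        "pagination": "paged" if age < 18 else "infinite_scroll_allowed",
    }

print(resolve_feed_settings(age=16, opted_into_ranking=False,
                            autoplay_opt_in_this_session=False))
# {'feed_order': 'chronological', 'autoplay': False, 'pagination': 'paged'}
```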

MED-PUB Public Media and Civic Information 0/3 active
MDIA-PUBL-0001 Proposed

This policy guarantees multi-year federal funding for public media — including CPB, NPR, PBS, and their local affiliates — through dedicated funding streams that cannot be held hostage in annual budget battles. Funding must be sufficient to support national, regional, and local public media service, and Congress is prohibited from using funding threats to influence what public broadcasters report.

Guarantee and expand public media funding insulated from political interference

Federal funding for public media — including the Corporation for Public Broadcasting, NPR, PBS, and their affiliates — must be guaranteed through multi-year appropriations or a dedicated funding stream independent of annual appropriations cycles; funding levels must be sufficient to maintain comprehensive national, regional, and local public media service; Congress may not use funding threats as leverage over editorial content, and the Corporation for Public Broadcasting board must remain structurally insulated from political appointment influence.

Public media in the United States is chronically underfunded relative to peer democracies, making it uniquely vulnerable to political pressure through appropriations threats. Multi-year guaranteed funding removes this vulnerability and enables the long-term programming and journalism commitments that define quality public media.

MDIA-PUBL-0002 Proposed

This policy requires all government information — data, reports, hearings, legislation, court records, and other public records — to be published in free, machine-readable formats with no paywalls or fees, because information produced with public money belongs to the public. The Federal Register, Congressional Record, and the federal court records system (PACER) must all be freely accessible, and the government cannot give private companies exclusive control over public information.

Require public agencies to make government information accessible without paywalls or proprietary restrictions

All government data, reports, hearings, legislation, court records, and other public records must be published in machine-readable, freely accessible formats without paywall, fee, or proprietary software requirement; government information produced with public funds belongs to the public; federal agencies may not privatize access to public information through exclusive licensing arrangements with commercial vendors; the Federal Register, Congressional Record, and PACER federal court system must be freely accessible.

Charging fees for access to public records — including federal court documents behind PACER's per-page fees — imposes barriers to civic participation, press accountability, and research that fall hardest on individual citizens, small news outlets, and independent researchers. Public information is a public good.

MDIA-PUBL-0003 Proposal
🔵 Proposal — Under Review

This policy requires Congress to increase Corporation for Public Broadcasting (CPB) funding to at least $10 per person in the U.S. — roughly $3.3 billion per year — phased in over five years, secured in a dedicated trust fund with a five-year advance appropriation that cannot be cut in annual budget fights. It also prohibits any government official from pressuring CPB's grant or editorial decisions, and gives any CPB-funded outlet the legal right to sue if that independence is violated.

Fund CPB through a dedicated statutory trust fund and prohibit executive branch interference in CPB grant decisions

Congress must: (a) increase annual CPB funding to no less than $10 per capita (approximately $3.3 billion at 2024 population levels), phased in over five years; (b) fund CPB through a dedicated trust fund with a five-year forward appropriation, making CPB's budget immune from annual appropriations cycles and executive impoundment; (c) prohibit by statute any executive branch official from directing, conditioning, or threatening CPB board appointments, grant decisions, or editorial decisions as a condition of funding or cooperation; (d) establish that the Corporation for Public Broadcasting Act's existing editorial independence provisions (47 U.S.C. § 396(g)(1)(A)) are enforceable by private right of action by any CPB grantee whose editorial independence has been violated; (e) authorize CPB to maintain a reserve fund equal to two years of operating expenses as a buffer against political defunding.

The Corporation for Public Broadcasting receives approximately $535 million annually — roughly $1.60 per American — compared to approximately $85 per capita for the BBC in the United Kingdom, $27 per capita for the CBC in Canada, and $100 or more per capita in most comparable democracies. President Trump's 2025 executive order directing CPB's Board to cease funding to NPR and PBS affiliates illustrated the structural vulnerability created by CPB's dependence on annual appropriations. A statutory trust fund with multi-year forward appropriations eliminates this leverage point. The private right of action extends to CPB grantees whose editorial independence is threatened, creating enforceable standing without requiring government enforcement. Reference: Corporation for Public Broadcasting Act, 47 U.S.C. § 396; Legg, J. (2020). Public media funding across democracies. USC Annenberg Center.
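
The funding arithmetic is straightforward. The sketch below works through the phase-in, assuming a linear ramp (the provision fixes the endpoint, not the ramp shape) and a rough 2024 population figure.

```python
US_POPULATION = 335_000_000     # rough 2024 figure
CURRENT_ANNUAL = 535_000_000    # ~$535M today, about $1.60 per capita
TARGET_PER_CAPITA = 10.00       # the provision's floor

target_annual = TARGET_PER_CAPITA * US_POPULATION      # ~$3.35B per year
step = (target_annual - CURRENT_ANNUAL) / 5            # assumed linear five-year ramp
for year in range(1, 6):
    funding = CURRENT_ANNUAL + step * year
    print(f"year {year}: ${funding / 1e9:.2f}B "
          f"(${funding / US_POPULATION:.2f} per capita)")
# year 5 lands at $3.35B, i.e. $10.00 per capita
```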

MDIA-CHDS Child Digital Safety 0/2 active

MDIA-CHDS-0001

This policy prohibits platforms from showing behaviorally targeted ads — ads based on browsing history, interest profiles, or personal data — to users under 18. Platforms used by minors must also follow age-appropriate design rules, including privacy-protective settings on by default and a ban on manipulative techniques designed to maximize the time children spend on the platform.

Prohibit behavioral advertising targeting users under 18; require age-appropriate design standards on platforms used by minors

MDIA-CHDS-0002

This policy prohibits platforms, apps, and digital services from selling or commercially sharing any personal data about users under 18 to anyone, including advertisers and data brokers. It also guarantees that any user under 18 has the unconditional right to permanently delete their account and all associated personal data — without needing a parent's or guardian's permission.

Prohibit sale of personal data about users under 18; guarantee minors an unconditional right to complete account deletion without parental consent

MDIA-DATA Data Privacy and Minimization 0/3 active

MDIA-DATA-0001

This policy bans platforms from targeting ads at users based on sensitive personal information — such as health conditions, religion, political views, sexual orientation, immigration status, or financial difficulty — unless the user has specifically and separately opted in for each category. It also prohibits all behavioral advertising targeting anyone under 18.

Ban targeted advertising based on sensitive personal categories without explicit opt-in; prohibit all behavioral targeting of users under 18

MDIA-DATA-0002

This policy requires platforms and apps to collect only the personal data actually needed to provide their service, and prohibits using data collected for one purpose for a different purpose — such as advertising or AI training — without fresh user consent. Any user may demand complete deletion of their data within 30 days, and users may sue for at least $1,000 per violation if a platform ignores their data rights.

Require platform data minimization; prohibit secondary data use without consent; mandate 30-day deletion compliance; private right of action

MDIA-DATA-0003

This policy expands existing federal children's privacy law (COPPA) — which currently protects children under 13 — to cover all users under 16, applying the same parental consent and data minimization requirements to that broader age group. Platforms must verify users' ages without collecting biometric data like face scans or retaining government-issued IDs, and violations carry civil penalties of up to $100,000 per day.

Expand COPPA protections to age 16; require age verification without biometric data collection or government ID retention

MDIA-DISS Disinformation and Synthetic Political Media 0/6 active

MDIA-DISS-0001

This policy requires real-time public disclosure of all online political advertising — including who paid for it, how much was spent, who it targeted, and how many people saw it — in a publicly accessible federal database. Foreign nationals and foreign-controlled entities are completely banned from buying U.S. political advertising, and violations are a federal felony punishable by up to 10 years in prison.

Require comprehensive real-time disclosure of all online political advertising; impose criminal penalties for foreign political advertising
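
One row of the disclosure database this rule requires would need to carry at least the four disclosed elements: funder, spend, targeting, and impressions. The sketch below is illustrative; every field name and value is hypothetical, and the real schema would be set by the administering agency.

```python
import json

ad_disclosure = {
    "ad_id": "2026-000001",                    # hypothetical identifier
    "payer": "Example PAC",                    # who paid for the ad
    "payer_foreign_national": False,           # foreign purchase is a federal felony
    "spend_usd": 12_500.00,                    # how much was spent
    "targeting_criteria": ["state:OH", "age:45-65"],
    "impressions": 880_000,                    # how many people saw it
    "first_shown": "2026-09-01T14:00:00Z",
    "platform": "example-platform",
}
print(json.dumps(ad_disclosure, indent=2))     # published in real time as the ad runs
```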

MDIA-DISS-0002

This policy requires any AI-generated or digitally manipulated media depicting a real person in a political context — such as a deepfake video of a candidate — to carry a clear, persistent label identifying it as synthetic. Distributing such content without the required label is a federal civil violation subject to fines up to $150,000 per incident, and any person depicted in unlabeled political synthetic media can sue for at least $10,000 per incident.

Mandate labeling of synthetic political media (deepfakes); create federal violation for unlabeled distribution; private right of action for depicted persons

MDIA-DISS-0003

This policy requires that any news article, political content, or advertisement created primarily by AI must display an 'AI-Generated Content' disclosure at the point of distribution — on the page, in social media previews, and in email and push notifications. Large platforms must build systems to detect and label AI-generated content, and news outlets that publish AI content without required disclosure face FTC fines of up to $50,000 per article.

Require labeling of AI-generated news, political content, and advertising at point of distribution; mandate platform detection and labeling at scale

MDIA-DISS-0004

This policy mandates that political advertisers publicly disclose in real time the identity of the funder, total spending, targeting criteria, and number of impressions for every online political ad, published in a federally maintained searchable database accessible to any member of the public. All foreign entities are banned from purchasing U.S. political advertising, and violations constitute a federal felony carrying up to 10 years in prison, because voters deserve to know who is trying to influence them.

Require comprehensive real-time disclosure of all online political advertising; impose criminal penalties for foreign political advertising

MDIA-DISS-0005

This policy requires synthetic political media — AI-generated or digitally altered audio, video, or images of real people in political contexts — to carry a persistent, machine-readable label stating that the content is synthetic, immediately visible to viewers. Any person depicted in unlabeled political deepfake content without their consent can sue for actual damages plus at least $10,000 per incident, and platforms that knowingly amplify unlabeled deepfakes after notice are jointly liable.

Mandate labeling of synthetic political media (deepfakes); create federal violation for unlabeled distribution; private right of action for depicted persons

MDIA-DISS-0006

This policy requires publishers and large platforms to label all AI-generated news, political commentary, and advertising with a clear 'AI-Generated Content' disclosure, including machine-readable provenance data embedded in the content. Platforms with more than 10 million monthly U.S. users must deploy detection systems with annual third-party accuracy audits, and political advertising made with AI tools must additionally identify both the AI tool used and the responsible human entity.

Require labeling of AI-generated news, political content, and advertising at point of distribution; mandate platform detection and labeling at scale

MED-LNJ Local News and Journalism Sustainability 0/5 active

MDIA-LNJS-0001

This policy requires large broadband providers — those with more than 100,000 subscribers — to contribute 1% of their annual broadband revenues to an independently governed Local Journalism Sustainability Fund. The fund distributes grants exclusively to local and regional news organizations based on journalistic output and community accountability coverage, with no government officials on the board and no national outlets or politically controlled media eligible.

Establish a Local Journalism Sustainability Fund financed by a 1% contribution on large broadband provider revenues (MED-LNJ-001)

MDIA-LNJS-0002

This policy gives local news organizations a refundable federal tax credit equal to 50% of wages paid to journalists who primarily cover local government, courts, schools, and public safety, capped at $25,000 per journalist per year. To qualify, an outlet must get at least 75% of its content from coverage of communities under 500,000 people, and organizations controlled by private equity, hedge funds, or national media chains are not eligible.

Provide a refundable 50% tax credit on wages of local-coverage journalists, capped at $25,000 per journalist per year (MED-LNJ-002)
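
The credit formula reduces to a per-journalist cap. A worked example, with made-up wage figures:

```python
def local_journalism_credit(wages_by_journalist, rate=0.50, cap=25_000):
    """Refundable credit: 50% of qualifying wages, capped per journalist per year."""
    return sum(min(rate * w, cap) for w in wages_by_journalist)

# Three qualifying local-beat reporters (hypothetical salaries):
print(local_journalism_credit([48_000, 62_000, 110_000]))
# 24,000 + 25,000 (capped from 31,000) + 25,000 (capped from 55,000) = 74,000
```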

MDIA-LNJS-0003

This policy creates a refundable federal tax credit equal to 50% of a journalist's annual salary — capped at $50,000 per journalist per year — for employers who hire full-time reporters to cover local government and community affairs in designated news deserts. A news desert is a county with fewer than one local journalist per 10,000 residents or no local newspaper of record, and private equity and large chain-controlled organizations cannot claim the credit.

Create a refundable 50% salary tax credit, capped at $50,000 per journalist per year, for hiring reporters in designated news deserts (MED-LNJ-003)

MDIA-LNJS-0004

This policy establishes a federal Local News Fellowship program — modeled on AmeriCorps VISTA — that places journalists in news-desert communities for two-year terms, with the federal government paying 60% of each fellow's salary. No federal employee or agency may influence fellows' editorial content, the program must grow to 10,000 positions annually within five years, and placement must prioritize rural communities, tribal lands, and communities of color with no local accountability journalism.

MED-LNJ-004

MDIA-LNJS-0005

This policy creates a dedicated 501(c)(n) tax-exempt category specifically for public interest journalism organizations, with a streamlined IRS approval process and clear eligibility rules separate from the generic nonprofit category. It also clarifies that reporting by nonprofit newsrooms on political candidates and legislative proceedings is protected journalistic activity — not political campaign activity — so newsrooms cannot lose their tax-exempt status for covering politics.

MED-LNJ-005

MDIA-SECT

MDIA-SECT-0001

This policy removes the legal shield (Section 230 immunity) that protects platforms from lawsuits when they accept payment to promote or amplify content that violates their own published rules — a platform that profits from distributing prohibited content cannot claim immunity for that choice. It also removes immunity for platforms that fail to respond within 48 hours to documented targeted harassment campaigns after the victim provides written notice with evidence.

Remove Section 230 immunity for paid promotion of policy-violating content; create private right of action for targeted harassment campaigns platforms fail to address after notice

MDIA-SECT-0002

This policy removes Section 230 immunity for platforms that actively use their recommendation algorithms to amplify content that caused measurable harm, when the platform had prior notice the content type was dangerous. Prior notice can include internal research, past court findings, or published third-party research, but this rule applies only to content the platform affirmatively promoted — not content that spread organically.

MED-S230-002

MDIA-SECT-0003

This policy preserves Section 230 liability immunity for platforms that simply store and display content in chronological or user-controlled order, while removing immunity for algorithmic amplification of content the platform already knew was harmful. Platforms must annually disclose how their recommendation systems work and cannot use trade-secret claims to shield those systems from public interest audits.

MED-S230-003

MDIA-SECT-0004

This policy removes Section 230 immunity for platforms that profit from promoting policy-violating content through paid amplification, and creates a private right of action for harassment victims whose documented complaints go unaddressed by the platform. Victims can sue for $5,000 in statutory damages per incident plus attorney fees and injunctive relief when platforms fail to act within 48 hours of notice.

Remove Section 230 immunity for paid promotion of policy-violating content; create private right of action for targeted harassment campaigns platforms fail to address after notice

Proposed Extensions

The following rules address gaps identified in this pillar's adversarial audit and are under review for inclusion in the next policy cycle.

MED-S230 Section 230 Conditional Liability Reform 0/4 proposed
MED-S230-001
🔵 Proposal — Under Review

This policy conditions Section 230 immunity — the federal law that shields platforms from lawsuits over user content — on large platforms (over 10 million monthly U.S. users) meeting minimum accountability standards. To keep their immunity, these platforms must publish clear moderation policies, give users written reasons for content removal with a real appeals process, and publish quarterly transparency reports on moderation volume, categories, and outcomes.

Reform Section 230 immunity to condition protection on platform compliance with transparency and due process standards
Section 230 of the Communications Decency Act (47 U.S.C. § 230) provides broad liability immunity to online platforms for third-party content; this immunity must be conditioned, for platforms with more than 10 million monthly active U.S. users, on meeting minimum accountability standards including: (1) publishing clear, specific, and consistently applied content moderation policies; (2) providing users written reasons for content removal and a meaningful appeals process; (3) publishing quarterly transparency reports on moderation volume, category, and appeal outcomes; platforms that fail to meet these conditions are not entitled to Section 230 immunity for claims arising from moderation decisions.
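
The conditional structure here is mechanical enough to sanity-check in code: for a large platform, immunity against moderation-related claims survives only if all three accountability conditions hold. A minimal sketch in Python, with hypothetical names and the 10-million-user threshold from the text above:

```python
# Hypothetical sketch of the MED-S230-001 conditionality test; the
# structure is illustrative, not statutory language.
from dataclasses import dataclass

LARGE_PLATFORM_THRESHOLD = 10_000_000  # monthly active U.S. users

@dataclass
class Platform:
    monthly_us_users: int
    publishes_moderation_policies: bool   # condition (1)
    written_reasons_and_appeals: bool     # condition (2)
    quarterly_transparency_reports: bool  # condition (3)

def retains_immunity_for_moderation_claims(p: Platform) -> bool:
    """Small platforms keep immunity unconditionally; large platforms
    keep it only if all three accountability conditions are met."""
    if p.monthly_us_users <= LARGE_PLATFORM_THRESHOLD:
        return True
    return (p.publishes_moderation_policies
            and p.written_reasons_and_appeals
            and p.quarterly_transparency_reports)
```
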
MED-S230-002
🔵 Proposal — Under Review

This policy removes Section 230 immunity when a platform has affirmatively amplified content through its recommendation algorithm that caused measurable harm, and the platform had prior notice that content of that type was dangerous. The prior-notice standard includes internal research, past court findings, regulatory determinations, or credible published research — but this rule applies only to content the platform actively promoted, not content that spread on its own.

Remove Section 230 immunity for algorithmically amplified content that causes documented harm
Section 230 immunity must not apply to content that a platform has affirmatively amplified through its algorithmic recommendation system when that amplification caused measurable harm and the platform had prior notice that the content category was harmful; the standard for "prior notice" includes: prior litigation, internal research findings, regulatory findings, or credible third-party research showing that algorithmic amplification of the specific content type has caused harm; this provision does not apply to organic distribution of content and is limited to affirmative algorithmic promotion.
MED-S230-003
🔵 Proposal — Under Review

This policy preserves Section 230 immunity for platforms that store and display content in chronological or user-controlled order, while removing it for algorithmic amplification of content the platform knew was harmful. Platforms that use recommendation systems must annually disclose how those systems work and share any internal research about their effects on users — they cannot invoke trade-secret protections to block public interest oversight of their amplification systems.

Preserve Section 230 immunity for content hosting while removing it for affirmative algorithmic amplification that causes documented harm
Congress must amend 47 U.S.C. § 230 to: (a) preserve immunity for platforms that store and transmit user-generated content in reverse-chronological order or in user-controlled presentation modes; (b) condition immunity for algorithmic amplification — defined as any system that affirmatively promotes, recommends, or boosts content to users who did not specifically request it — on annual disclosure of the amplification system's objective functions, training signals, and any internal research findings regarding the system's effects on user well-being, political polarization, or information quality; (c) remove immunity for algorithmic amplification of content that the platform had prior notice caused or was likely to cause measurable harm, where "prior notice" includes the platform's own internal research, prior litigation findings, regulatory findings, or published peer-reviewed research; platforms may not claim trade secret protection to shield algorithmic amplification systems from the disclosure requirements of this section. Reference: 47 U.S.C. § 230; Gonzalez v. Google LLC, 598 U.S. 617 (2023)[9]; Klonick, K. (2018). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131(6), 1598–1670.[11]
MED-S230-004
🔵 Proposal — Under Review

This policy removes Section 230 liability immunity for any platform that accepts payment to promote content that violates its own terms of service, because a platform that profits from distributing content it has already identified as prohibited cannot claim immunity for that decision. It also removes immunity for platforms that fail to respond within 48 hours to documented targeted harassment campaigns, and gives victims the right to sue for $5,000 in statutory damages per incident plus attorney fees.

Remove Section 230 immunity for paid promotion of policy-violating content; create private right of action for targeted harassment campaigns platforms fail to address after notice

Congress must amend 47 U.S.C. § 230 to: (a) remove liability immunity for any platform that accepts payment to promote, boost, or amplify content that violates the platform's own published terms of service — a platform that profits from disseminating content it has identified as prohibited may not claim immunity for that distribution; (b) remove immunity for platforms that fail to take reasonable action in response to documented targeted harassment campaigns — defined as coordinated conduct by three or more accounts directing repeated abusive contact at a specific individual — after the target provides written notice with evidence of the coordination; platforms must initiate meaningful review within 48 hours of notice and provide a written response to the complainant within five business days; (c) create a private right of action for victims of targeted harassment campaigns who have submitted required notice and been denied meaningful platform response, with statutory damages of $5,000 per incident plus attorneys' fees and injunctive relief; (d) create a private right of action for users whose content is suppressed by a platform that has accepted payment to promote the competing policy-violating content, with injunctive relief and actual damages. Reference: 47 U.S.C. § 230; Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345.

Section 230 creates an asymmetry: platforms can profit from hosting and amplifying content that violates their own stated rules without accountability. When a platform accepts payment to boost policy-violating content it has already identified as prohibited, it has made an affirmative choice to profit from that content — immunity for this conduct is indefensible. The harassment provisions address a documented failure mode: coordinated brigading, dogpiling, and sustained targeted harassment campaigns drive individuals — disproportionately women and marginalized users — off public platforms. The notice-and-respond mechanism creates enforceable accountability without requiring platforms to preemptively screen all content. Criminal enforcement by the DOJ for pattern harassment facilitated by platforms is available under existing statutes; this provision creates the civil enforcement backstop.

MED-DIS Foreign Disinformation and Information Operations 0/6 proposed
MED-DIS-001
🔵 Proposal — Under Review

This policy requires digital platforms to label confirmed state-sponsored foreign disinformation content within 24 hours of formal government notification, and to reduce algorithmic amplification of labeled content. The requirement applies to content already published — platforms must report the reach of that content to the government but may not remove it without government agreement, except to enforce their own terms of service against illegal content.

Mandate platforms label confirmed state-sponsored foreign disinformation within 24 hours of government notification
When CISA or an equivalent federal authority formally notifies a digital platform that a piece of content or an account has been identified as part of a confirmed state-sponsored foreign disinformation operation, the platform must: (1) apply a clear, visible label within 24 hours; (2) reduce algorithmic amplification of labeled content; and (3) disclose to CISA the reach of the content prior to notification; platforms may not remove labeled content without government agreement except to comply with their own terms of service for illegal content; the labeling requirement applies to content already published and does not require platforms to preemptively screen content.
MED-DIS-002
🔵 Proposal — Under Review

This policy requires large digital platforms (over 10 million monthly U.S. users) to cooperate in good faith with CISA — the federal Cybersecurity and Infrastructure Security Agency — during the 90 days before through 30 days after any federal election. Platforms must designate a contact person for election-related information sharing, respond within 24 hours to CISA notices about confirmed foreign influence operations, and share aggregate data on foreign activity — but the government may not direct content moderation decisions.

Require platform cooperation with CISA on election-period disinformation operations
Digital platforms with more than 10 million monthly active U.S. users must cooperate with CISA in good faith during federal election periods (90 days before any federal election through 30 days after certification) by: (1) designating a senior point of contact for election-related information sharing; (2) responding within 24 hours to CISA notifications about confirmed foreign influence operations targeting the election; (3) providing CISA with aggregate data on identified foreign influence activity affecting the election; cooperation requirements must not be construed to authorize government direction of content moderation decisions or platform editorial choices.
MED-DIS-003
🔵 Proposal — Under Review

This policy prohibits digital platforms from showing political ads targeted specifically based on a person's race, ethnicity, religion, national origin, gender, sexual orientation, or disability status. All political advertising must be archived in a publicly accessible, real-time database that records the advertiser, amount spent, targeting criteria used, and demographic breakdown of who actually saw it.

Prohibit micro-targeted political advertising based on protected characteristics
Digital platforms may not serve political advertisements targeted on the basis of race, ethnicity, religion, national origin, gender, sexual orientation, disability, or inferred characteristics derived from these protected categories; platforms must publish a real-time, publicly accessible archive of all political advertisements, including the targeting criteria used, the spending amount, the geographic reach, and the demographic breakdown of actual impressions; "political advertisements" includes paid content that promotes or opposes a candidate, party, ballot measure, or political position on a contested public policy issue.
MED-DIS-004
🔵 Proposal — Under Review

This policy requires comprehensive real-time public disclosure of all online political advertising in a federally maintained database, including the legal identity of the funder, total spending, targeting parameters, and number of impressions — with requirements applying to all platforms regardless of size. Foreign nationals, foreign governments, and U.S. entities acting as pass-throughs for foreign funds are banned from buying political advertising, with violations punishable as federal felonies by up to 10 years in prison.

Require comprehensive real-time disclosure of all online political advertising; impose criminal penalties for foreign political advertising

Congress must enact an Online Political Ads Transparency Act requiring: (a) any platform, publisher, or distributor of online political advertising — defined as paid content that promotes or opposes a federal candidate, political party, ballot measure, or political position on a contested public policy issue — to disclose in real time to a publicly accessible federal database: the legal identity of the funder, total amount spent, targeting parameters used (including age ranges, geographic areas, behavioral categories, and lookalike audience specifications), number of impressions, and the full text or URL of the ad; (b) disclosure requirements apply to all platforms regardless of size, revenue, or user count; (c) any foreign national, foreign government, foreign entity, or U.S. entity acting as a conduit for foreign funds is prohibited from purchasing or facilitating political advertising — violation is a federal felony punishable by up to 10 years' imprisonment and a fine of twice the advertising value; (d) platforms must implement Know Your Customer identity-verification procedures for political advertisers before publication; (e) the FEC must administer the real-time public database with a public API. Reference: Honest Ads Act, S. 1356, 117th Cong. (2021); 52 U.S.C. § 30121 (prohibition on foreign national contributions).
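
The disclosure obligation in item (a) amounts to a fixed record shape that every ad would contribute to the public database. A minimal sketch of what such a record might hold, with hypothetical field names:

```python
# Hypothetical record shape for the real-time political ad database
# described above; field names are illustrative, not an FEC schema.
from dataclasses import dataclass

@dataclass
class PoliticalAdDisclosure:
    funder_legal_identity: str   # legal identity of the funder
    total_spend_usd: float       # total amount spent
    targeting_parameters: dict   # age ranges, geography, behavioral categories
    impressions: int             # number of impressions served
    ad_text_or_url: str          # full text or URL of the ad
    kyc_verified: bool           # item (d): identity verified before publication

def may_publish(ad: PoliticalAdDisclosure) -> bool:
    """Per item (d), Know Your Customer verification must precede publication."""
    return ad.kyc_verified
```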

The Internet Research Agency's use of social media advertising in the 2016 presidential election — documented by the Senate Intelligence Committee — demonstrated that foreign entities could purchase targeted political advertising under false identities on U.S. platforms with no disclosure. Unlike broadcast and print political advertising, which has been subject to disclosure requirements under the Federal Election Campaign Act since 1972, online political advertising operates with minimal transparency. The Honest Ads Act has been introduced in multiple Congresses but not enacted. Criminal penalties for foreign political advertising are necessary because the conduct constitutes election interference, not merely a campaign finance violation.

MED-DIS-005
🔵 Proposal — Under Review

This policy requires all synthetic political media — AI-generated or digitally manipulated audio, video, or images of real people in political contexts — to carry a clear, persistent, machine-readable label stating the content is synthetic, visible to users without any extra effort. Distribution of unlabeled political synthetic media is a federal civil violation with penalties up to $150,000 per incident; any depicted person can sue for actual damages plus at least $10,000 per incident, and platforms that knowingly amplify unlabeled content after notice are jointly liable.

Mandate labeling of synthetic political media (deepfakes); create federal violation for unlabeled distribution; private right of action for depicted persons

Congress must enact a Deepfake Accountability Act providing: (a) any synthetic media — audio, video, or images in which a real person's likeness, voice, or words have been materially altered or fabricated using AI or other digital means — depicting a real person in a political context must bear a clear, persistent, machine-readable label stating that it is synthetic or AI-generated, visible to the user without additional action; (b) "political context" includes: depictions of political candidates or elected officials, content about elections or ballot measures, content about legislative or executive proceedings, and content designed to influence a voter's choice; (c) distribution of political synthetic media without required labeling is a federal civil violation subject to penalties of up to $150,000 per incident plus injunctive relief; (d) any real person depicted in unlabeled political synthetic media without their consent has a private right of action for actual damages plus $10,000 statutory damages per incident, attorneys' fees, and injunctive relief; (e) an exception applies to satire where the satirical intent is evident from context AND the label "SATIRE — AI-GENERATED" is prominently displayed — satire without the label is not protected; (f) platforms that knowingly host or algorithmically amplify unlabeled political synthetic media after notice are jointly liable. Reference: DEFIANCE Act of 2024, Pub. L. No. 118-103; Chesney, R., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.
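
The labeling and liability rules reduce to a small decision table. The sketch below encodes the compliance test and the damages arithmetic; the dollar figures come from the text above, everything else is hypothetical:

```python
# Illustrative reading of the MED-DIS-005 compliance and damages rules.
CIVIL_PENALTY_CAP = 150_000       # per incident, item (c)
STATUTORY_DAMAGES_FLOOR = 10_000  # per incident, item (d)

def is_compliant(machine_readable_label: bool,
                 visible_without_action: bool,
                 satirical_intent_evident: bool,
                 satire_label_displayed: bool) -> bool:
    """Labeled content complies; satire complies only when intent is
    evident from context AND the satire label is displayed, item (e)."""
    labeled = machine_readable_label and visible_without_action
    satire_exception = satirical_intent_evident and satire_label_displayed
    return labeled or satire_exception

def federal_penalty(per_incident_assessment: float) -> float:
    """Item (c): federal civil penalty, capped at $150,000 per incident."""
    return min(per_incident_assessment, CIVIL_PENALTY_CAP)

def private_recovery(actual_damages: float, incidents: int) -> float:
    """Item (d): actual damages plus at least $10,000 per incident."""
    return actual_damages + STATUTORY_DAMAGES_FLOOR * incidents
```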

Deepfake audio and video — AI-generated synthetic media depicting real people saying or doing things they never said or did — present immediate threats to electoral integrity. A deepfake of a candidate conceding, committing a crime, or making an inflammatory statement can be generated in minutes and distributed to millions before any fact-check can reach the same audience. As of 2024, at least 14 states have enacted deepfake legislation of varying scope; a federal floor ensures that labeling rules and penalties remain uniform when content is distributed across state lines. The satire exception is narrowly drawn: satirical intent must be both evident from context and affirmatively labeled — satirical framing alone does not exempt synthetic media from disclosure requirements.

MED-DIS-006
🔵 Proposal — Under Review

This policy requires all AI-generated news, political commentary, and advertising to carry a prominent 'AI-Generated Content' disclosure at the point of distribution, including standardized machine-readable provenance metadata in addition to a human-visible label. Publishers and platforms with more than 10 million monthly U.S. users must deploy content detection systems with annual third-party accuracy audits, and political advertising created with AI tools must additionally identify both the AI tool and the responsible human entity.

Require labeling of AI-generated news, political content, and advertising at point of distribution; mandate platform detection and labeling at scale

Congress must enact AI Content Provenance requirements providing: (a) any news article, political commentary, editorial, or advertisement generated entirely or primarily by an AI system must bear a clear, prominent disclosure stating "AI-Generated Content" at the point of distribution — on the article page, in social media preview cards, and in any email or push-notification delivery; (b) disclosure must be machine-readable using standardized provenance metadata (including C2PA content credentials or equivalent open standard) in addition to human-visible disclosure; (c) publishers and platforms with more than 10 million monthly active U.S. users must implement detection and labeling systems for AI-generated content published on their platforms, with annual accuracy audits by independent third parties; (d) the FTC may establish minimum detection accuracy standards by rulemaking; (e) the disclosure requirement does not prohibit AI-generated content — it requires transparency about source; (f) news organizations that publish AI-generated content without required disclosure are subject to FTC civil penalties of up to $50,000 per article; (g) political advertising created using AI generation must carry additional disclosure identifying both the AI tool and the human entity responsible for the content. Reference: Content Authenticity Initiative. (2021). Content credentials specification. Adobe, Inc.; Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023), § 4.5.
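
The machine-readable half of the disclosure can be pictured as a manifest bound to a hash of the content, so the label travels with the file. This is a loose, simplified approximation of the idea behind C2PA content credentials, not the actual specification:

```python
# Hypothetical, simplified provenance record; the real C2PA standard
# uses signed binary manifests, not this JSON shape.
import hashlib
import json

def provenance_record(content: bytes, generator_tool: str,
                      responsible_entity: str) -> str:
    """Bind the 'AI-Generated Content' disclosure to the content via a
    hash; tool and responsible entity are the extra fields item (g)
    requires for political advertising."""
    return json.dumps({
        "disclosure": "AI-Generated Content",
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator_tool": generator_tool,
        "responsible_entity": responsible_entity,
    })

# Usage: any later edit to the article body breaks the hash binding.
record = provenance_record(b"article body", "hypothetical-model", "Example Media LLC")
```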

AI-generated news content is being distributed at scale by content farms, partisan networks, and foreign information operations. Research by NewsGuard documented hundreds of AI-generated news sites producing low-quality or false content without disclosure as of 2023. Without labeling requirements, readers cannot distinguish AI-generated content from original journalism, undermining media literacy and the economic sustainability of original reporting. The C2PA (Coalition for Content Provenance and Authenticity) standard — backed by Adobe, Microsoft, Google, and the BBC — provides an existing technical framework for machine-readable provenance metadata that this requirement mandates at platform scale.

MED-OWN Media Ownership Structural Protections 0/3 proposed
MED-OWN-001
🔵 Proposal — Under Review

This policy prohibits private equity funds, hedge funds, and other financial-return-driven investment vehicles from acquiring any controlling interest in a local U.S. newspaper; any such acquisition is legally void, with mandatory divestiture enforceable by the DOJ and FTC. Employee ownership trusts and nonprofit entities that buy a newspaper to keep it out of the hands of disqualified financial buyers qualify for a 30% tax credit on the purchase price, treating local newspapers as community resources rather than investment vehicles.

Ban private equity acquisition of local newspapers; require DOJ review for chain acquisitions; incentivize ESOP and nonprofit conversions with tax credits

Congress must enact a Local Newspaper Preservation Act providing: (a) private equity funds, hedge funds, real estate investment trusts, and similar financial-return-driven investment vehicles are prohibited from acquiring a controlling interest in any newspaper that primarily covers a U.S. geographic community — any such acquisition is void ab initio and subject to mandatory divestiture enforceable by the DOJ and FTC; (b) any acquisition of a newspaper with average paid circulation under 100,000 by an entity that already owns or controls five or more other newspapers requires: (i) prior notification to the DOJ Antitrust Division; (ii) a 90-day review period; (iii) an affirmative public interest determination that the acquisition will not reduce coverage of the community served — the acquiring entity bears the burden of demonstrating no public interest harm; (c) tax credits equal to 30% of the purchase price are available to employee ownership trusts (ESOPs) and nonprofit entities that acquire a newspaper that would otherwise be sold to a disqualified financial entity or face closure; (d) nonprofit news organizations that acquire a local newspaper are eligible for 501(c)(3) status without political activity restrictions on original reporting; (e) the FTC must issue rules defining "financial-return-driven investment vehicle" within one year of enactment; the DOJ and FTC jointly have criminal referral authority for willful acquisitions by prohibited entities. Reference: Abernathy, P. M. (2023). The news desert crisis. UNC Hussman School of Journalism and Media. https://www.usnewsdeserts.com; 26 U.S.C. § 1042 (employee stock ownership plan tax treatment).

Since 2005, private equity and hedge fund acquisitions of local newspapers have been the primary structural driver of newsroom collapse in the United States. Alden Global Capital reduced average newsroom staffing by 50–70% in acquired properties within five years of acquisition. The DOJ review threshold — entities owning 5+ papers acquiring papers under 100K circulation — targets the consolidation pattern most associated with newsroom gutting. ESOP and nonprofit conversions are proven models: the Philadelphia Inquirer, converted to nonprofit ownership, has expanded its newsroom while formerly PE-owned papers in comparable markets have eliminated coverage. The acquisition ban is a structural remedy, not a content regulation — it targets ownership structures incompatible with sustainable local journalism, not editorial choices.

MED-OWN-002
🔵 Proposal — Under Review

This policy writes the editorial independence of NPR, PBS, and their local member stations into federal law, making it a federal violation for any government official to direct, condition, or threaten the editorial content, programming decisions, or grant-making of the Corporation for Public Broadcasting or its grantees. It also establishes a statutory minimum CPB funding floor of $5 billion per year — indexed to inflation — secured in a dedicated five-year advance trust fund that cannot be cut in annual budget fights or blocked by executive action.

Codify PBS and NPR structural independence; require bipartisan staggered CPB board appointments; establish $5B statutory funding floor

Congress must amend the Corporation for Public Broadcasting Act (47 U.S.C. § 396)[8] to: (a) codify by statute the editorial independence of NPR, PBS, and their member stations, providing that no federal official may direct, condition, threaten, or attempt to influence the editorial content, programming decisions, or grant-making of CPB or its grantees; violations are enforceable by private right of action by any CPB grantee, public radio or television station, or journalism organization whose editorial independence has been threatened or violated; (b) require CPB board appointments on a bipartisan basis — no more than five of the ten board members may be from the same political party — with staggered six-year terms, ensuring no single president can replace a majority of the board within a single term; (c) establish a statutory minimum CPB funding floor of $5 billion per year, indexed to inflation, phased in over five years; (d) fund CPB through a dedicated trust fund with a five-year advance appropriation, making the funding immune to annual appropriations riders and executive impoundment; (e) prohibit CPB from conditioning grants on content or editorial balance requirements beyond the existing statutory obligation of objectivity and balance in 47 U.S.C. § 396(g)(1)(A). Reference: Corporation for Public Broadcasting Act, 47 U.S.C. § 396; Executive actions affecting CPB funding (2025).
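
The board-structure safeguard in item (b) reduces to two checkable invariants. A minimal sketch, under the illustrative assumption that "staggered" means no more than two of the ten seats expire in any one year:

```python
# Hypothetical check of the MED-OWN-002 board invariants.
from collections import Counter

def board_is_compliant(member_parties: list[str],
                       term_end_years: list[int]) -> bool:
    """No party may hold more than five of the ten seats, and term
    expirations must be spread out rather than clustered."""
    if len(member_parties) != 10 or len(term_end_years) != 10:
        return False
    if max(Counter(member_parties).values()) > 5:
        return False  # more than five seats held by one party
    # Staggering, under the assumed reading stated above.
    return max(Counter(term_end_years).values()) <= 2
```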

The United States spends approximately $1.60 per capita annually on public media — compared to approximately $85 per capita for the BBC (United Kingdom) and $27 per capita for the CBC (Canada). Public media in peer democracies is insulated from annual political pressure through multi-year funding commitments and board structures that prevent any single administration from capturing the organization. The 2025 executive action directing CPB to cease funding NPR and PBS affiliates illustrated the structural vulnerability of annual appropriations and a presidentially appointed board. Staggered, bipartisan board appointments mirror the structural protections used for the Federal Reserve, the FTC, and the SEC. The private right of action for editorial independence violations extends enforcement beyond DOJ-initiated action.

MED-OWN-003
🔵 Proposal — Under Review

This policy makes the federal program identifying news deserts — communities that have lost local news coverage — permanent and funds it at no less than $25 million per year, with annual public reporting broken down by rural status, income level, and race. It also directs the FCC to require any federally funded broadband provider to make local news websites and apps accessible to subscribers at no additional charge, treating access to local accountability journalism as a public interest obligation tied to federal broadband funding.

Make NTIA news desert mapping program permanent; treat rural broadband access to local news as a federally enforceable public interest obligation

Congress must: (a) make the NTIA's news desert identification and mapping program permanent and fully funded at no less than $25 million per year, with annual public reporting on counties classified as news deserts or partial-coverage communities, disaggregated by rural/urban status, median household income, and racial demographics; (b) direct the FCC to treat broadband access to local news content as a public interest obligation in all rural broadband grant conditions — any broadband infrastructure funded through BEAD, E-RATE, ReConnect, or successor programs must include enforceable conditions that local news content is accessible to subscribers at no additional charge; (c) require the FCC to include local news ecosystem health — defined as the presence of at least one local news organization providing regular coverage of local government and courts — as a weighted factor in broadband service area priority scoring for federal infrastructure grants; (d) establish a Public Information Infrastructure Fund at $200 million per year to support cooperative technical infrastructure — shared publishing platforms, distribution tools, and content management systems — for local news organizations in the same market; (e) require any federally funded broadband provider to zero-rate data charges for access to local news websites and apps operated by organizations qualifying under the Local Journalism Sustainability Act. Reference: Infrastructure Investment and Jobs Act (2021), Broadband Equity, Access, and Deployment Program, 47 U.S.C. § 1702; Abernathy, P. M. (2023). The news desert crisis. UNC Hussman School of Journalism and Media. https://www.usnewsdeserts.com.

News deserts are unevenly distributed — rural communities, low-income communities, and communities of color face the most acute absence of local accountability journalism. The NTIA mapping program provides the data infrastructure for targeted intervention but is subject to annual appropriations. Making it permanent ensures continuity of the evidence base across administrations. The broadband-access-to-news provision treats local journalism as public infrastructure: federally subsidized broadband providers are appropriate vehicles for ensuring that residents have cost-free access to the local news serving their community — a function analogous to the public interest obligations already imposed on broadcast licensees.

MED-LNJ Local Journalism Sustainability 0/5 proposed
MED-LNJ-001
🔵 Proposal — Under Review

This policy requires large broadband providers — those with more than 100,000 subscribers — to contribute 1% of their annual broadband service revenues to a Local Journalism Sustainability Fund administered by an independent, non-governmental board. Grants go only to local and regional news organizations based on journalistic output, local journalist employment, and community accountability coverage — with national outlets, cable news, and politically controlled media ineligible.

Create a Local Journalism Sustainability Fund supported by broadband provider contributions
ISPs and broadband providers with more than 100,000 subscribers must contribute 1% of annual broadband service revenues to a Local Journalism Sustainability Fund administered by an independent, non-governmental board; the Fund must distribute grants exclusively to local and regional news organizations covering defined geographic communities, excluding national outlets, cable news organizations, and outlets controlled by political parties or candidates; grants must be awarded based on journalistic output, employment of local journalists, and demonstrated community accountability coverage, not on content.
MED-LNJ-002
🔵 Proposal — Under Review

This policy allows qualifying local news organizations to claim a refundable federal payroll tax credit equal to 50% of wages paid to journalists who primarily cover local government, courts, schools, and public safety, up to $25,000 per journalist per year. To qualify, an outlet must get at least 75% of its content from coverage of defined geographic communities under 500,000 people, and organizations controlled by private equity, hedge funds, or national media chains are ineligible.

Provide payroll tax credits to local news organizations that employ journalists covering local government and courts
Local news organizations — defined as outlets that derive at least 75% of content from coverage of defined geographic communities with populations under 500,000 — may claim a refundable payroll tax credit equal to 50% of wages paid to journalists primarily covering local government, courts, schools, and public safety; the credit is capped at $25,000 per journalist per year; eligible organizations include for-profit, nonprofit, cooperative, and employee-owned structures; organizations controlled by private equity funds, hedge funds, or national media chains are ineligible.
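
The credit arithmetic is simple enough to verify with a worked example; the function name and structure below are hypothetical:

```python
# Worked example of the MED-LNJ-002 credit: 50% of qualifying wages,
# capped at $25,000 per journalist per year.
CREDIT_RATE = 0.50
PER_JOURNALIST_CAP = 25_000

def payroll_credit(qualifying_wages: list[float]) -> float:
    return sum(min(CREDIT_RATE * w, PER_JOURNALIST_CAP) for w in qualifying_wages)

# Three reporters at $60k, $40k, and $30k yield credits of $25k (capped),
# $20k, and $15k: $60,000 total.
assert payroll_credit([60_000, 40_000, 30_000]) == 60_000
```
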
MED-LNJ-003
🔵 Proposal — Under Review

This policy creates a refundable federal tax credit equal to 50% of a journalist's annual salary — capped at $50,000 per journalist per year — for employers who hire full-time reporters to cover local government and community affairs in designated news deserts. A news desert is a county with fewer than one local journalist per 10,000 residents or no local newspaper of record, and organizations controlled by private equity, hedge funds, or large media chains cannot claim the credit.

Establish a Local Journalism Investment Tax Credit for employers hiring reporters in designated news deserts
Employers who hire full-time journalists employed primarily in reporting on local government, courts, schools, and community affairs in counties classified as "news deserts" — defined as counties with fewer than one local journalist per 10,000 residents or no local newspaper of record — are entitled to a refundable federal tax credit equal to 50% of the journalist's annual salary for each of the first five years of employment, capped at $50,000 per journalist per year; the credit applies to for-profit news organizations, nonprofit news organizations, employee-owned cooperatives, and public benefit corporations; organizations controlled by private equity funds, hedge funds, national media chains reaching more than 5 million weekly readers, or entities with documented patterns of newsroom gutting are ineligible; "news desert" classification is determined annually by the FCC using Census data and newsroom employment data; the Local Journalism Sustainability Act (H.R. 3940, 117th Congress)[7] established the legislative precedent for this approach. Reference: Local Journalism Sustainability Act, H.R. 3940, 117th Cong. (2021); Abernathy, P. M. (2023). The news desert crisis. UNC Hussman School of Journalism and Media. https://www.usnewsdeserts.com.
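
The news-desert test is a concrete ratio, so a short worked example may help; names are hypothetical:

```python
# Hypothetical implementation of the MED-LNJ-003 news-desert test:
# fewer than one local journalist per 10,000 residents, or no local
# newspaper of record.
def is_news_desert(local_journalists: int, population: int,
                   has_newspaper_of_record: bool) -> bool:
    per_10k = local_journalists / (population / 10_000)
    return per_10k < 1 or not has_newspaper_of_record

# A county of 85,000 residents with 6 local journalists and a paper of
# record: 6 / 8.5 ≈ 0.71 journalists per 10,000, so it qualifies.
assert is_news_desert(6, 85_000, True)
```
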
MED-LNJ-004
🔵 Proposal — Under Review

This policy establishes a federal Local News Fellowship program — modeled on the AmeriCorps VISTA service program — that places journalists in news-desert communities for two-year terms, with the federal government covering 60% of each fellow's salary and no federal employee permitted to influence fellows' editorial content. The program must reach 10,000 annual fellowship positions within five years and must prioritize rural areas, tribal lands, and communities of color with no local accountability journalism.

Create a federal Local News Fellowship program placing journalists in news-desert communities, modeled on AmeriCorps VISTA
Congress must authorize a federal Local News Fellowship program, modeled on the AmeriCorps VISTA program, placing recent journalism graduates and experienced journalists in qualifying local news organizations in news-desert communities for two-year terms; the federal government must pay 60% of each fellow's salary, with the host organization providing the remaining 40%; the program must be administered by a nonprofit intermediary with editorial independence — no federal employee or agency may have authority over the editorial content produced by program participants; Congress must authorize 5,000 fellowship positions annually within three years of enactment and 10,000 annually within five years; the program must prioritize placement in rural communities, tribal lands, and communities of color with documented absence of local accountability journalism; the editorial independence guarantee is structurally modeled on the independence protections in the Corporation for Public Broadcasting Act, 47 U.S.C. §§ 396–399b. Reference: Knight Foundation. (2022). Local news: What we know and what it means. https://knightfoundation.org; Corporation for Public Broadcasting Act, 47 U.S.C. §§ 396–399b.
MED-LNJ-005
🔵 Proposal — Under Review

This policy creates a new 501(c)(n) tax-exempt category specifically for public interest journalism organizations, with a streamlined IRS approval process and clearer eligibility rules than the generic 501(c)(3) nonprofit category. It also clarifies by statute that reporting on political candidates and legislative proceedings by nonprofit newsrooms is protected journalistic activity — not political campaign activity — so these organizations cannot lose their tax-exempt status for covering politics.

Create a dedicated 501(c)(n) public interest journalism tax category and clarify Johnson Amendment non-applicability to news reporting
Congress must: (a) create a new 501(c)(n) category specifically for public interest journalism organizations, with a streamlined IRS approval process and clear eligibility criteria — including requirements for editorial independence, local or national accountability journalism focus, and prohibition on political campaign activity — distinct from the generic 501(c)(3) category; (b) clarify by statute that reporting on candidates, legislation, and political events by nonprofit news organizations constitutes protected journalistic activity and is not "political campaign activity" triggering loss of tax-exempt status under the Johnson Amendment (26 U.S.C. § 501(c)(3)); (c) expand the charitable contribution deduction to include subscriptions and individual donations to qualifying local news organizations, up to $250 per year per individual; (d) allow communities to establish local news endowments as tax-exempt charitable funds with the same legal standing as community foundations. The Johnson Amendment ambiguity creates a chilling effect on nonprofit investigative journalism about politicians and campaigns; the proposed 501(c)(n) category addresses this by creating purpose-specific rules with clearer boundaries. Reference: Waldman, S. (2011). The information needs of communities. Federal Communications Commission; 26 U.S.C. § 501(c)(3).

MDIA-MDIA Media Independence and Local Journalism 0/4 proposed
MDIA-MDIA-0001
🔵 Proposal — Under Review

This policy requires Congress to create a Local Journalism Sustainability Tax Credit offering local news organizations up to $25,000 per full-time local journalist per year and offering individuals up to $2,500 per year for qualifying local news subscriptions. It also creates a $1 billion annual Public Interest Journalism Fund — run by an independent board with no government members — providing grants to nonprofit newsrooms, investigative journalism centers, and communities that have lost all local news coverage.

Congress Must Establish a Local Journalism Tax Credit and a Public Interest Journalism Fund to Reverse the Collapse of Local News

Congress must: (1) establish a Local Journalism Sustainability Tax Credit allowing local news organizations to claim a federal tax credit of: (a) up to $25,000 per year per journalist employed full-time covering local news — defined as news about specific local communities, government, or institutions; (b) up to $2,500 per year per local news subscriber for individuals subscribing to qualifying local news outlets; (2) create a $1 billion annual Public Interest Journalism Fund — administered by an independent board with no government officials as members — that provides grants to: (a) nonprofit local news organizations; (b) investigative journalism centers; (c) news deserts — communities that have lost all local news coverage — for startup news operations; (3) streamline 501(c)(3) status for nonprofit newsrooms — the IRS must approve applications from qualifying journalism nonprofits within 90 days; (4) prohibit any recipient of Public Interest Journalism Fund grants from: being acquired by a hedge fund or private equity firm, consolidating with a competitor in the same geographic market, or ceasing local news coverage for more than 6 months; and (5) require the DOJ Antitrust Division to review, and authorize it to block, any merger involving local news organizations that would reduce the number of independently owned news outlets in a market by more than 50%.

More than 2,500 local newspapers have closed in the United States since 2005, leaving approximately one-third of U.S. counties with no local newspaper. News deserts are associated with lower voter turnout, higher municipal borrowing costs, and increased government corruption.

MDIA-MDIA-0002
🔵 Proposal — Under Review

This policy requires the FCC to reinstate the ban on any entity owning both a daily newspaper and a broadcast TV or radio station in the same market — a rule eliminated in 2017 — and caps any single entity's national TV reach at 30% of U.S. households and 50% of any single market. License renewal reviews must weigh local news coverage hours, ownership diversity, and community engagement, and hedge funds and private equity firms may not hold more than 25% of broadcast licenses in any single market.

The FCC Must Reinstate and Strengthen Rules Prohibiting Newspaper-Broadcast Cross-Ownership and Concentration of Local Media Markets

Congress must direct the FCC to: (1) reinstate the newspaper-broadcast cross-ownership ban — prohibiting any entity from owning both a daily newspaper and a TV or radio station in the same market — which was eliminated in 2017; (2) establish a media market concentration cap: no single entity may own television or radio stations that together reach more than 30% of U.S. television households, or more than 50% of any single media market; (3) require FCC license renewal review to include a public interest standard that considers: local news coverage hours, ownership diversity (including race, gender, and small-business ownership), and community engagement; (4) restore the FCC's Diversity Index — a scoring system measuring the diversity of independent media voices in each market — and require the FCC to deny license renewals for dominant incumbents in markets scoring below the minimum threshold; (5) prohibit hedge funds and private equity firms from acquiring more than 25% of the outstanding licenses in any single media market; and (6) establish criminal liability for any entity that knowingly misrepresents local ownership or local content obligations to the FCC during license renewal proceedings.
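
The two concentration caps in item (2) compose into a single eligibility test; a minimal sketch with hypothetical inputs:

```python
# Hypothetical check of the MDIA-MDIA-0002 concentration caps: 30% of
# U.S. television households nationally, 50% of any single market.
NATIONAL_CAP = 0.30
SINGLE_MARKET_CAP = 0.50

def within_caps(national_reach_share: float,
                market_shares: dict[str, float]) -> bool:
    return (national_reach_share <= NATIONAL_CAP
            and all(s <= SINGLE_MARKET_CAP for s in market_shares.values()))

# Reaching 28% of U.S. households passes the national cap, but holding
# 55% of one market fails the per-market cap.
assert not within_caps(0.28, {"market_a": 0.55, "market_b": 0.20})
```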

The top 25 broadcasting companies now own over half of all TV stations in the U.S. The FCC's relaxation of media consolidation rules has accelerated the hollowing out of local newsrooms.

MDIA-MDIA-0003
🔵 Proposal — Under Review

This policy requires the FTC to mandate that all 'native advertising' — paid content designed to look like real editorial journalism — display an unavoidable 'PAID ADVERTISEMENT' label in a font no smaller than the surrounding text, placed at the very top before readers encounter any substantive content. Platforms with more than one million U.S. users must maintain a publicly searchable archive of all paid promotional content, and violations carry civil fines of $50,000 per violation per day, with consumers able to sue for $500 per deceptive advertisement.

The FTC Must Require Clear and Conspicuous Disclosure of All Paid Content and Ban Advertising Designed to Mimic the Format of Editorial Journalism

Congress must direct the FTC to: (1) require all "native advertising" — paid content designed to resemble editorial journalism, news reporting, or independent reviews — to display a clear, conspicuous, and unavoidable disclosure label reading "PAID ADVERTISEMENT" in a font size no smaller than the surrounding content, placed at the top of the content before the reader encounters any of the substantive material; (2) prohibit any publication, website, or digital platform from publishing native advertising that: (a) mimics the visual format, headline style, or byline conventions of the outlet's editorial content; (b) uses a journalist's name or likeness without explicit consent; or (c) makes health, safety, financial, or legal claims without substantiation meeting FTC advertising standards; (3) require digital platforms with more than 1 million U.S. users to maintain a publicly accessible, searchable database of all paid promotional content published on the platform, including the advertiser, amount paid, and targeting criteria; (4) impose civil penalties of $50,000 per violation per day for non-compliant native advertising, with treble damages for violations that cause consumer harm; and (5) establish a private right of action for consumers deceived by native advertising, with statutory damages of $500 per violation.

Native advertising revenue now exceeds traditional display advertising for many digital publishers. Studies show the majority of adults cannot reliably distinguish native advertising from editorial content.

MDIA-MDIA-0004
🔵 Proposal — Under Review

This policy requires foreign governments and their agents to register and disclose under the Foreign Agents Registration Act (FARA) — the federal law requiring disclosure of work done on behalf of foreign principals — any paid digital advertising, social media accounts, or algorithmic promotion targeting U.S. audiences on elections, domestic policy, or social issues. Platforms with more than 5 million U.S. users must label foreign-government-controlled content, archive all political advertising in real time, and ban political ad sales to foreign governments — with criminal liability for executives who knowingly allow such advertising without disclosure.

Foreign Governments and Their Agents May Not Use Social Media Algorithms or Paid Advertising to Influence U.S. Elections or Public Opinion Without Full FARA Disclosure

Congress must: (1) amend the Foreign Agents Registration Act (FARA) to require registration and disclosure by any foreign principal or their agent that: (a) pays for digital advertising targeted at U.S. audiences on any topic related to U.S. elections, domestic policy, or social issues; (b) operates social media accounts or websites that receive more than 10,000 unique U.S. visitors per month and are controlled, funded, or editorially directed by a foreign government; or (c) purchases algorithmic promotion — including boosted posts, sponsored recommendations, or paid search placement — targeting U.S. audiences for political or social content; (2) require all social media platforms with more than 5 million U.S. users to: (a) label all content originating from foreign-government-controlled accounts; (b) maintain and publish a real-time public archive of all political and issue advertisements, including buyer identity, amount paid, and targeting criteria; (c) prohibit the sale of political or social issue advertising to foreign governments or their agents; (3) direct the DOJ FARA Unit to audit social media platform compliance annually and publish findings; (4) impose criminal liability on platform executives who knowingly allow foreign-government-controlled advertising without disclosure; and (5) establish civil penalties of $100,000 per violation per day for FARA non-compliance by digital platforms.

The Senate Intelligence Committee found that Russian Internet Research Agency operations reached an estimated 126 million Americans on Facebook alone during the 2016 election cycle.

MED-DAT Data Privacy in the Media and Platform Context 0/3 proposed
MED-DAT-001
🔵 Proposal — Under Review

This policy bans any platform, publisher, data broker, or ad network from targeting users with ads based on sensitive personal data — including health conditions, mental health status, religion, political views, sexual orientation, immigration status, or financial distress — unless the user has separately and affirmatively opted in for each sensitive category. It also bans all behavioral advertising targeting any user under 18, and users who receive a prohibited sensitive-category ad can sue for $500 in statutory damages per incident plus attorney fees.

Ban targeted advertising based on sensitive personal categories without explicit opt-in; prohibit all behavioral targeting of users under 18

Congress must enact a Surveillance Advertising Prohibition Act providing: (a) no platform, publisher, data broker, ad network, or advertising intermediary may serve a targeted advertisement based on inferred or explicit data about a user's health conditions, mental health status, religion or religious practices, political views or party affiliation, sexual orientation or gender identity, immigration status, precise location, or financial distress indicators — unless the user has provided freely given, specific, unambiguous, affirmative opt-in consent for each sensitive category, separate from any general terms-of-service consent; (b) no behavioral advertising — defined as advertising targeted based on inferred user interests, browsing history, content consumption patterns, or behavioral profiles — may be served to any user the platform knows or has reason to know is under 18; contextual advertising based solely on the content of the page currently being viewed is permitted without opt-in; (c) the FTC may enforce violations with civil penalties of up to $100 per affected user per violation; (d) any user who receives an advertisement based on a prohibited sensitive category has a private right of action for $500 in statutory damages per incident plus attorneys' fees; (e) the prohibition applies equally to inferred sensitive category data — platforms may not circumvent this rule through probabilistic inference targeting without explicit consent. Reference: American Data Privacy and Protection Act, H.R. 8152, 117th Cong. (2022), § 202; GDPR, Regulation (EU) 2016/679, Art. 9 (special categories of personal data).
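
The serving rule in items (a) and (b) is effectively a two-stage gate. A minimal sketch, with hypothetical shorthand for the protected categories listed above:

```python
# Illustrative ad-serving gate for MED-DAT-001; category names are
# hypothetical shorthand, not statutory definitions.
SENSITIVE_CATEGORIES = {
    "health", "mental_health", "religion", "political_views",
    "sexual_orientation", "immigration_status", "precise_location",
    "financial_distress",
}

def may_serve_ad(targeting_categories: set[str],
                 per_category_opt_ins: set[str],
                 user_age: int,
                 is_behavioral: bool) -> bool:
    """Item (b): no behavioral ads to known minors. Item (a): every
    sensitive category used for targeting needs its own opt-in."""
    if is_behavioral and user_age < 18:
        return False
    return (targeting_categories & SENSITIVE_CATEGORIES) <= per_category_opt_ins

# Targeting on religion without a religion-specific opt-in is refused.
assert not may_serve_ad({"religion", "gardening"}, {"health"}, 34, False)
```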

Surveillance advertising — advertising targeted based on detailed behavioral profiles, inferred health conditions, political views, and precise location — causes documented specific harms: insurers and employers using health data inferences from ad profiles; political operatives targeting voters based on inferred religious and political profiles; domestic violence perpetrators using location targeting to surveil partners. The sensitive-categories approach follows GDPR Article 9's framework for special categories of data requiring heightened protection. The contextual advertising exception preserves a viable, privacy-respecting advertising model: advertisers may target based on what a user is currently reading or watching, not who they are across time and platforms.

MED-DAT-002
🔵 Proposal — Under Review

This policy requires platforms and digital services to collect and keep only the personal data that is actually necessary to run the service a user requested — collecting extra data for advertising, profiling, or future uses the user did not request is prohibited without explicit opt-in consent. Users can demand complete deletion of all their personal data within 30 days and export a machine-readable copy within 14 days, with a private right of action for at least $1,000 per violation when those rights are denied.

Require platform data minimization; prohibit secondary data use without consent; mandate 30-day deletion compliance; private right of action

Congress must enact platform data minimization requirements providing: (a) platforms, apps, and digital services may collect, retain, and process only personal data that is reasonably necessary to provide the specific service the user has requested — collection of data that is useful for advertising, profiling, or future services the user has not requested is prohibited absent explicit opt-in consent; (b) personal data collected for one stated purpose may not be used for a materially different secondary purpose — including sale to data brokers, use to train AI models, or use for behavioral advertising — without separate, specific, affirmative consent obtained at the time of the secondary use; (c) any user may request complete deletion of all personal data held by a platform — the platform must comply within 30 days; deletion must include data shared with third-party processors unless the platform can demonstrate that technical recall is impossible; (d) platforms must provide users a machine-readable export of all personal data held about them within 14 days of a valid portability request; (e) any user whose data minimization or deletion rights have been violated may bring a private right of action for actual damages or $1,000 statutory damages per violation, whichever is greater, plus attorneys' fees and injunctive relief; (f) the FTC must promulgate rules defining "reasonably necessary" by rulemaking within one year of enactment. Reference: American Data Privacy and Protection Act, H.R. 8152, 117th Cong. (2022); California Consumer Privacy Act, Cal. Civ. Code § 1798.100 et seq. (2018).
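
The rights in items (c) through (e) come with concrete clocks and a damages formula; a small sketch of the bookkeeping, with hypothetical names:

```python
# Hypothetical deadline and damages bookkeeping for MED-DAT-002.
from datetime import date, timedelta

DELETION_WINDOW = timedelta(days=30)  # item (c)
EXPORT_WINDOW = timedelta(days=14)    # item (d)

def deletion_deadline(request_date: date) -> date:
    return request_date + DELETION_WINDOW

def export_deadline(request_date: date) -> date:
    return request_date + EXPORT_WINDOW

def statutory_damages(actual: float, violations: int) -> float:
    """Item (e): the greater of actual damages or $1,000 per violation."""
    return max(actual, 1_000 * violations)

assert deletion_deadline(date(2025, 3, 1)) == date(2025, 3, 31)
```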

U.S. federal law contains no general right to data deletion or data minimization requirement applicable to commercial platforms. California's CCPA (2018) and CPRA (2020) established state-level deletion rights, but enforcement has been limited and the rights do not apply nationally. The EU GDPR data minimization principle (Art. 5(1)(c)) and right to erasure (Art. 17) provide the policy framework; the U.S. federal gap leaves hundreds of millions of Americans without equivalent protection. Data minimization is both a privacy protection and a security measure: data that platforms do not retain cannot be breached. The private right of action is essential because FTC enforcement capacity is structurally insufficient to address individual violations at platform scale.

MED-DAT-003 Proposal
🔵 Proposal — Under Review

This policy expands the Children's Online Privacy Protection Act (COPPA) — which currently covers children under 13 — to protect all users under 16, applying the same requirements for parental consent, data minimization, and deletion rights to that broader age group. Platforms must verify users' ages without collecting biometric data or retaining government-issued IDs; civil penalties increase to up to $100,000 per day per violation; and any minor or their parent can sue for at least $1,000 per violation.

Expand COPPA protections to age 16; require age verification without biometric data collection or government ID retention

Congress must amend the Children's Online Privacy Protection Act (15 U.S.C. § 6501 et seq.) to: (a) extend COPPA's protections — currently applicable to users under 13 — to cover all users under 16; all COPPA requirements regarding parental consent, data minimization, prohibition on conditioning service access on data collection, and deletion rights apply to users under 16; (b) require platforms with more than 500,000 U.S. users to implement age verification sufficient to identify users under 16, subject to the following constraints: (i) age verification may not require biometric data collection — no facial recognition, fingerprint, or voiceprint data may be collected for this purpose; (ii) age verification may not require a government-issued ID to be uploaded or retained by the platform — verification methods must be privacy-preserving and may include third-party age verification services that return only a binary "over 16 / under 16" signal without transmitting personal identification to the platform; (iii) if a platform cannot implement compliant age verification without biometric or government ID data, it must apply COPPA protections to all users; (c) platforms must not deny service to a minor solely because the minor declines to provide data prohibited under COPPA; (d) civil penalties for violations are increased to up to $100,000 per day per violation; (e) any minor — or parent or guardian of a minor — whose COPPA rights have been violated has a private right of action for actual damages or $1,000 statutory damages per violation, whichever is greater. Reference: Children's Online Privacy Protection Act, 15 U.S.C. § 6501; Kids Online Safety and Privacy Act, Pub. L. No. 118-186 (2024).

COPPA was enacted in 1998 and last updated in 2013 — before TikTok, Instagram Reels, YouTube Shorts, and the algorithmic recommendation systems that define contemporary adolescent social media. The age-13 threshold has no basis in contemporary developmental research: 13-to-15-year-olds face the same developmental vulnerabilities to addictive design and surveillance advertising as younger children. The EU's GDPR sets the data protection age at 16 (with member state discretion to lower it to 13) for this reason. The ban on biometric age verification reflects the documented privacy and security risks of collecting biometric data at scale and the discriminatory impacts of facial-recognition age-verification systems on users of color.
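
The binary-signal constraint in clause (b)(ii) can be pictured as an interface boundary. In this hypothetical TypeScript sketch (AgeVerifier and AgeSignal are illustrative names), the platform's code depends only on an over-16 boolean; the document or biometric evidence the third-party verifier inspects is never transmitted to the platform.

```typescript
// Hypothetical sketch; AgeVerifier and AgeSignal are illustrative names.

// The only datum the platform may receive under clause (b)(ii):
interface AgeSignal {
  over16: boolean;
  issuedAt: Date; // when the verifier attested to the signal
}

// The platform depends only on this interface; whatever evidence the
// third-party verifier inspects stays with the verifier.
interface AgeVerifier {
  verify(sessionToken: string): Promise<AgeSignal>;
}

async function gateAccount(
  verifier: AgeVerifier,
  sessionToken: string
): Promise<"standard" | "coppa-protected"> {
  const signal = await verifier.verify(sessionToken);
  // Clause (b)(iii): a platform that cannot obtain a compliant signal
  // would instead apply COPPA protections to all users.
  return signal.over16 ? "standard" : "coppa-protected";
}
```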

MED-CHD Children's Online Protection and Addictive Design 0/2 proposed
MED-CHD-001 Proposal
🔵 Proposal — Under Review

This policy prohibits platforms, apps, games, and digital services used by minors from showing behavioral advertising — ads based on inferred interests, browsing history, or behavioral profiles — to any user under 18. Platforms used by minors must also comply with an Age-Appropriate Design Code requiring privacy settings at the highest protection level by default, no commercial profiling of minors, no manipulative engagement techniques, and prominent tools for reporting harmful content — with FTC fines up to $100,000 per day per design feature violation.

Prohibit behavioral advertising targeting users under 18; require age-appropriate design standards on platforms used by minors

Congress must enact a Children's Online Design Protection Act providing: (a) no platform, app, game, or digital service knowingly used by users under 18 may serve behavioral advertising — advertising based on inferred interests, behavioral profiles, browsing history, or cross-site tracking — to any user it knows or has reason to know is under 18; contextual advertising based solely on current page content is permitted; (b) platforms knowingly used by minors must comply with an Age-Appropriate Design Code including: (i) privacy settings at the highest available protection level by default for known minor users; (ii) no use of personal data of minors to profile them for any commercial purpose; (iii) no nudging or persuasive techniques designed to encourage minors to extend time on platform, purchase digital items, or share data beyond what is necessary for the service; (iv) prominent tools enabling minors to report harmful content or contacts; (c) a "platform knowingly used by users under 18" is defined as any platform where users under 18 constitute at least 5% of monthly active U.S. users, or where content, features, or marketing materials would be reasonably expected to attract users under 18; (d) the FTC may enforce violations with civil penalties of up to $100,000 per day per design feature violation; (e) parents and guardians of minors harmed by violations have a private right of action for actual damages plus $2,000 statutory damages per violation plus attorneys' fees. Reference: California Age-Appropriate Design Code Act, Cal. Civ. Code § 1798.99.28 et seq. (2022); UK Children's Code (Age Appropriate Design Code), UK Information Commissioner's Office (2021).

Children under 18 are systematically targeted by behavioral advertising and addictive design features on platforms they use daily. The UK Age Appropriate Design Code (Children's Code), which took effect in 2021, established that platforms must apply child-protective design standards to services used by children — the default must be protective, not opt-in. California enacted its own version in 2022 (AB 2273). The behavioral advertising prohibition addresses the documented harms of targeted advertising to children: body-image harms, eating disorders, promotion of unhealthy products, and the exploitation of children's emotional states to serve manipulative ads. A federal standard prevents a patchwork of state laws and ensures national protection.
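
One way to read clauses (b)(i) and (c) is as protective defaults plus a bright-line coverage test. The TypeScript sketch below is illustrative only: the proposal does not enumerate specific settings, so every field name here is an assumption.

```typescript
// Hypothetical sketch; the proposal does not enumerate settings, so every
// field name here is an assumption.

interface PrivacySettings {
  profileVisibility: "private" | "friends" | "public";
  behavioralAds: boolean;
  locationSharing: boolean;
  autoplayNextVideo: boolean; // an engagement-extending nudge under (b)(iii)
}

// Clause (b)(i): most-protective defaults whenever the platform knows or
// has reason to know the user is under 18.
const MINOR_DEFAULTS: PrivacySettings = {
  profileVisibility: "private",
  behavioralAds: false,     // clause (a) prohibits these outright for minors
  locationSharing: false,
  autoplayNextVideo: false, // clause (b)(iii) bars time-extending nudges
};

// Clause (c)'s bright-line test: a platform is covered when users under 18
// are at least 5% of monthly active U.S. users.
function coveredPlatform(minorMAU: number, totalUsMAU: number): boolean {
  return totalUsMAU > 0 && minorMAU / totalUsMAU >= 0.05;
}
```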

MED-CHD-002 Proposal
🔵 Proposal — Under Review

This policy prohibits any platform, app, or digital service from selling, licensing, or commercially sharing any personal data about users under 18, and bans data brokers from purchasing or retaining any dataset that includes records identifiable as belonging to a person under 18. Any user under 18 has an unconditional right to permanently delete their account and all associated personal data without needing a parent's or guardian's permission, with the platform required to complete deletion within 30 days.

Prohibit sale of personal data about users under 18; guarantee minors an unconditional right to complete account deletion without parental consent

Congress must: (a) prohibit any platform, app, or digital service from selling, licensing, sharing for value, or commercially transferring in any form any personal data about a user that it knows or has reason to know is under 18 — to data brokers, advertisers, analytics providers, or any third party; (b) prohibit data brokers from purchasing, retaining, or selling any dataset that includes records identifiable as belonging to a person under 18; (c) any user under 18 must have an unconditional right to permanently delete their account and all associated personal data without requiring parental or guardian consent — the right to delete one's own data does not require a guardian's authorization; (d) platforms must provide a clear, single-step account deletion pathway for users under 18 completing full deletion within 30 days; (e) platforms may not retain any personal data about a deleted minor user for more than 90 days following deletion, except where retention is required by law; (f) the FTC may enforce violations with civil penalties of up to $50,000 per affected user per violation; (g) any minor — or parent or guardian of a minor — whose rights under this section have been violated has a private right of action for $2,500 statutory damages per violation plus attorneys' fees; (h) data brokers found to have purchased records identifiable as belonging to minors are subject to mandatory disgorgement of all revenue derived from those records. Reference: California Delete Act, Cal. Civ. Code § 1798.99.80 et seq. (2023); COPPA 2.0, S. 1628, 118th Cong. (2023).

Children's data is uniquely valuable to commercial data brokers — behavioral profiles built during adolescence persist into adulthood and capture formative patterns in health, relationships, finances, and beliefs. The prohibition on data sales to third parties closes the primary monetization pathway that creates financial incentives for surveillance of minors. The unconditional right to account deletion without parental consent addresses a specific documented barrier: platforms have conditioned minor users' account deletion on parental action, creating a chilling effect on minors' ability to exit platforms on their own terms. COPPA's existing deletion rights apply only to children under 13 and require parental action, not the minor's independent action.
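
The deletion lifecycle in clauses (c) through (e) reduces to two date checks: completion within 30 days of the minor's request, and no residual retention beyond 90 days after completion absent a legal hold. A hypothetical TypeScript sketch follows; DeletionRecord and both function names are illustrative.

```typescript
// Hypothetical sketch; DeletionRecord and both functions are illustrative.

interface DeletionRecord {
  requestedAt: Date;
  completedAt?: Date;
  legallyRequiredHolds: string[]; // e.g. citations to retention statutes
}

function daysBetween(a: Date, b: Date): number {
  return (b.getTime() - a.getTime()) / (1000 * 60 * 60 * 24);
}

// Clause (d): full deletion within 30 days of the minor's request.
function deletionOverdue(rec: DeletionRecord, now: Date): boolean {
  return !rec.completedAt && daysBetween(rec.requestedAt, now) > 30;
}

// Clause (e): no residual personal data more than 90 days after deletion,
// except where a specific legal hold requires retention.
function residualRetentionAllowed(rec: DeletionRecord, now: Date): boolean {
  if (!rec.completedAt) return false;
  return daysBetween(rec.completedAt, now) <= 90 || rec.legallyRequiredHolds.length > 0;
}
```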

Research & Context

The tension between press freedom and government secrecy has intensified in the digital age. Journalists' communications are now surveillable by default unless actively encrypted. Metadata analysis allows tracking of source contacts even without access to content. Digital subpoenas and cloud-stored communications make it easier for the government to access journalistic materials without the journalist's knowledge. National security classification has expanded to cover not just military secrets but policy debates, legal interpretations, and evidence of misconduct.

The collapse of local news ownership diversity — driven by private equity acquisitions, chain consolidation, and FCC deregulation — has eliminated thousands of local news outlets and produced news deserts across the country.[1] Between 2005 and 2022, the United States lost approximately one-third of its newspapers and two-thirds of its newspaper journalists. The communities hardest hit are rural areas, small cities, and communities of color — precisely the communities that most depend on local accountability journalism and have the fewest alternative information sources.

Algorithmic amplification on social media platforms compounds the information crisis. Recommendation algorithms optimized for engagement systematically amplify sensational, outrage-inducing, and emotionally charged content, including misinformation, because such content drives engagement metrics. Researchers have documented that false news spreads farther and faster than true news on social media.[3] These systems are not neutral information conduits; they are active shapers of public discourse, and their design choices have documented effects on political polarization, public health (vaccine hesitancy, COVID denial), and democratic participation.

The project's brainstorm branch documents emphasize the distinction between protecting press freedom and creating a government "truth police." The approach deliberately avoids content-based regulation or government determination of what information is "true." Instead, it protects the structural channels through which information flows from insiders to the public. This is a procedural protection, not a content judgment.

The framework explicitly rejects the "limit protections for for-profit news media" framing because it invites attacks on press freedom. Instead, it strengthens transparency and anti-corruption rules that apply to all media entities—requiring disclosure of funding sources, conflicts of interest, and paid political content—without creating differential protection tiers that government could exploit.[2]

References

  1. Abernathy, P. M. (2020). News deserts and ghost newspapers: Will local news survive? University of North Carolina Press. https://www.usnewsdeserts.com/
  2. Free Press. (2023). Media consolidation. https://www.freepress.net/issues/media-consolidation
  3. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
  4. FCC v. Prometheus Radio Project, 592 U.S. 414 (2021).
  5. FCC Sponsorship Identification Rules, 47 C.F.R. § 73.1212.
  6. Abernathy, P. M. (2023). The news desert crisis. UNC Hussman School of Journalism and Media. https://www.usnewsdeserts.com
  7. Local Journalism Sustainability Act, H.R. 3940, 117th Cong. (2021).
  8. Corporation for Public Broadcasting Act, 47 U.S.C. § 396.
  9. Gonzalez v. Google LLC, 598 U.S. 617 (2023).
  10. Haugen, F. (2021, October 5). Testimony before the Senate Commerce Committee, Subcommittee on Consumer Protection, Product Safety, and Data Security. U.S. Senate. https://www.commerce.senate.gov
  11. Klonick, K. (2018). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131(6), 1598–1670.