An AI Whistleblower Bill is Urgently Needed

Last year, thirteen brave AI whistleblowers issued a letter titled “A Right to Warn about Advanced Artificial Intelligence,” risking retaliation to highlight widespread concerns about internal safety and security protocols and about products, built shielded from proper oversight, being released to and unleashed upon the public. Legislation is needed to help workers responsibly report the development of high-risk systems, which currently occurs without appropriate transparency or oversight.
The concerns of these AI whistleblowers, combined with the documented attempts of AI companies to stifle whistleblowing, underscore the urgent need for Congress to pass a best-practices whistleblower bill that specifically covers AI-industry employees.
As in other powerful emerging sectors before it, insiders in the artificial intelligence industry will lack explicit reporting safeguards until legislation is passed. There is historical precedent for sector-based protections: Congress has enacted whistleblower protection laws covering employees across relevant industries, including nuclear energy in the 1978 Energy Reorganization Act, the federal workforce in the 1989 Whistleblower Protection Act, airlines under AIR21 in 2000, and Wall Street under Dodd-Frank in 2010. Such legislation helps ensure that workers in these fields are able to speak out on issues endangering the public. Because AI’s emergence in popular consciousness is recent (ChatGPT was initially released in November 2022), legislation lags behind technological advancement, and executives are urging legislators to employ “light touch” regulation. Employees of AI companies are left without any specialized whistleblower protections.
Why We Need an AI Whistleblower Bill
The whistleblowers’ letter cited public claims from leading scholars, advocates, experts, and AI companies themselves pointing to the significant potential harms of AI technology released into the market without proper safety protocols. These concerns included further entrenchment of existing inequalities, media manipulation and misinformation, and loss of control of autonomous AI systems. The companies have even published reports on their models’ concerning and risky behavior, yet continue to deploy their products for public, business, government, and military use. Specific warnings the group cited last year, made by companies, governments, and advocacy groups, include:
- Serious risk of misuse, drastic accidents, and societal disruption … we are going to operate as if these risks are existential (OpenAI)
- Toxicity, bias, unreliability, dishonesty (Anthropic)
- Offensive cyber operations, deceive people through dialogue, manipulate people into carrying out harmful actions, develop weapons (e.g. biological, chemical) (Google DeepMind)
- Exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security (US Government – White House)
- Further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security … [AI could be misused] to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons (UK Government – Department for Science, Innovation & Technology)
- From inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation (Statement on AI Harms and Policy (FAccT))
- Algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems (Encode Justice and the Future of Life Institute)
- Risk of extinction from AI…societal-scale risks such as pandemics and nuclear war (Statement on AI Risk (CAIS))
With guidance from the scientific community, policymakers, and the public, these risks can be adequately mitigated. However, AI companies have financial incentives to avoid effective oversight.
Also in 2024, whistleblowers brought to light broad confidentiality and non-disparagement agreements used to muzzle current and former employees and keep them from voicing their concerns. OpenAI whistleblowers filed a complaint with the SEC alleging that OpenAI’s employment agreements included:
- Non-disparagement clauses that failed to exempt disclosures of securities violations to the SEC;
- Requirements that employees obtain prior consent from the company before disclosing confidential information to federal authorities;
- Confidentiality requirements applied to agreements that themselves contain securities violations;
- Requirements that employees waive compensation that Congress intended to incentivize reporting and provide financial relief to whistleblowers.
While OpenAI claims to have addressed its non-disclosure agreements, the chilling effect of these threats lingers in company culture. It is highly concerning that OpenAI whistleblowers with inside knowledge of what oversight is needed have no explicit federal legal protections whatsoever. As it stands, a whistleblower working for a major AI company could be fired for raising concerns about issues such as avenues for misuse or internal and external security vulnerabilities.
Without information from whistleblowers, the U.S. government’s ability to police and regulate this newly developing technology is curtailed, heightening the technology’s risks to public health, safety, national security, and more. Insiders must be able to disclose potential violations safely, freely, and appropriately to law enforcement and regulatory authorities.
What an AI Whistleblower Bill Needs to Include
Legislation must send the message to the AI industry, and to the tech industry at large, that violations of employees’ right to report wrongdoing will not be tolerated. Potential whistleblowers at AI companies must have comprehensive avenues to report even potential violations, instances of misconduct, and safety issues occurring throughout the field. Effective whistleblower laws require that such complaints be welcomed and rewarded as a matter of law and policy, not discouraged by companies sending employees the direct or indirect speech-chilling messages that have contributed to so many catastrophes in the past.
It is critical that an AI whistleblower law be passed by the 119th Congress. Any such law must follow the solid precedents set by recent whistleblower legislation, which Congress has passed either unanimously or without controversy. The most recent private sector whistleblower law to incorporate the basic due process requirements necessary to protect whistleblowers is the Taxpayer First Act, 26 U.S.C. § 7623(d). That law includes the following basic procedures, all of which need to be incorporated into any AI whistleblower law:
- Due Process Protections: The right to file a retaliation case in federal court and to request a trial by jury.
- Protection Against Retaliation: Anti-retaliation language establishing that no employer, individual, or agent of an employer may fire, demote, blacklist, threaten, discriminate against, or harass an employee, former employee, applicant for employment, or contractor who has engaged in protected activities covered under the law, which would include providing truthful information to state or federal law enforcement or regulatory authorities.
- Appropriate Damages: A whistleblower who prevails in a retaliation case must be afforded a full “make whole” remedy, including (but not limited to) reinstatement and restoration of all of the privileges of his or her prior employment, back pay, front pay, compensation for lost benefits, compensatory damages, special damages, and all attorney fees, costs, and expert witness fees reasonably incurred. Some laws provide double back pay or punitive damages as well, and these remedies should also be considered. Moreover, a court must have explicit jurisdiction to afford all equitable relief, including preliminary relief.
- An Adequate Definition of a Protected Disclosure: Protected whistleblower disclosures should cover reports made both internally to corporations and externally to other appropriate authorities, including Congress and state or federal law enforcement or regulatory authorities. Covered disclosures should include reporting threats AI may pose to national security, public health and safety, and financial frauds.
- Anonymous and Confidential Reporting: The ability to report anonymously and confidentially to a company’s internal compliance program.
- Prohibition on Contractual Restrictions: A prohibition against contractual restrictions on the right to blow the whistle, including a bar on mandatory arbitration agreements that would restrict an employee from filing a complaint under the whistleblower law.
- No Preemption: No federal preemption of, or interference with, the right to file claims under other state or federal laws.
Modern anti-fraud and public safety laws uniformly include whistleblower protections similar to those outlined above, including the aforementioned Taxpayer First Act as well as the Food Safety Modernization Act, the Sarbanes-Oxley Act, the Anti-Money Laundering Act, and the National Transportation Security Act. Given the potential threats posed by AI, how companies have mishandled deployment, and the importance of ensuring that emerging AI technology is developed safely, there is an urgent need for insiders working in the AI sector to be properly protected when they lawfully report threats to the public interest.