As the capabilities, implementation, and impact of Artificial Intelligence (AI) continue to grow, there has been increased attention to the whistleblower protections afforded to employees of AI companies.
While there is no AI-specific whistleblower law, AI whistleblowers are protected and can blow the whistle on certain misconduct under corporate whistleblower programs such as the Securities and Exchange Commission (SEC) Whistleblower Program and Commodity Futures Trading Commission (CFTC) Whistleblower Program.
AI whistleblowers can file anonymously and confidentially with the SEC or CFTC and may be eligible for whistleblower awards. These agencies have broad jurisdiction: the SEC over publicly traded companies, and the CFTC over both publicly traded and privately owned entities that trade in commodities. Even if a company is not publicly traded, it may still be covered by the SEC program if it has filed a Form D, the notice used to report an exempt offering of securities to the SEC.
What AI Violations Can an AI Employee Blow the Whistle On?
Employees of AI companies may blow the whistle under the SEC Whistleblower Program on violations of securities laws.
Potential violations related to AI companies may include:
- Restrictive agreements (non-disclosure, non-disparagement)
- Evidence of a discrepancy between a company's public statements and its internal reality
- Underlying safety concerns that may be of interest to any federal, regulatory, or law enforcement agency, including but not limited to national security issues and safety protocols
- Failure to disclose evidence of major risks to potential investors, government agencies, and/or the general public
Can an AI Employee Blow the Whistle if they Signed a Non-Disclosure Agreement (NDA)?
Yes, you can blow the whistle even if you have signed an NDA which prohibits you from doing so.
Under SEC Rule 21F-17(a), non-disclosure agreements that prohibit individuals from reporting potential violations of the law to the SEC are not only unenforceable but also illegal. Both the SEC and the CFTC (which has a similar rule) have increased enforcement efforts around restrictive NDAs in recent months.
Can an AI Employee Blow the Whistle Anonymously?
Under both the SEC and CFTC Whistleblower Programs, individuals may blow the whistle anonymously if they are represented by an attorney. Their attorney can file the disclosure on their behalf and handle all communications with agency staff. However, in order to claim a whistleblower award, a whistleblower will eventually need to disclose their identity to the SEC or CFTC.
Both the SEC and CFTC Whistleblower Programs also provide robust confidentiality protections to whistleblowers. The Commissions will not publicly disclose any information that could reasonably be expected to reveal a whistleblower's identity.
For more information on blowing the whistle anonymously, read: How to File Claims Anonymously As An SEC Whistleblower
Can an AI Employee Qualify for a Whistleblower Award?
Under both the SEC and CFTC Whistleblower Programs, if a whistleblower voluntarily provides original information which leads to a successful enforcement action with at least $1 million in sanctions or penalties, then they are eligible to receive a monetary award of 10-30% of the monies collected in the case.
For more information read: Can I Qualify For A Whistleblower Reward?
What are the Risks of AI Technology?
The risks of AI are manifold and have been laid out by a number of sources, including both AI companies and governments.
Potential risks include:
- Serious risk of misuse, drastic accidents, and societal disruption … we are going to operate as if these risks are existential (OpenAI)
- Toxicity, bias, unreliability, dishonesty (Anthropic)
- Offensive cyber operations, deceive people through dialogue, manipulate people into carrying out harmful actions, develop weapons (e.g. biological, chemical) (Google DeepMind)
- Exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security (US Government – White House)
- Further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security … [AI could be misused] to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons (UK government – Department for Science, Innovation & Technology)
- From inaccurate or biased algorithms that deny life-saving healthcare, to language models exacerbating manipulation and misinformation (Statement on AI Harms and Policy (FAccT))
- Algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems (Encode Justice and the Future of Life Institute)
- Risk of extinction from AI…societal-scale risks such as pandemics and nuclear war (Statement on AI Risk (CAIS))
Does the U.S. Government Have an Official Policy to Address AI?
Yes. In October 2023, a White House Executive Order laid out a set of policies and principles that executive departments and agencies must adhere to and ensure adherence to. AI companies must conduct themselves in accordance with the Executive Order. Its guiding principles include:
- Artificial Intelligence must be safe and secure.
- Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
- The responsible development and use of AI require a commitment to supporting American workers.
- Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.
- The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
- Americans’ privacy and civil liberties must be protected as AI continues advancing.
- It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
- The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
With this Executive Order, the President directs actions to protect Americans from the potential risks of AI systems and ensure the responsible deployment of AI.
Our Firm’s Cases
$125 Million in Awards
We have successfully represented a number of SEC whistleblowers, preserving their anonymity and securing sizable whistleblower rewards. In one case, we helped our client receive one of the ten largest whistleblower awards ever granted by the SEC.