Deepfakes have changed how we perceive truth and accuracy on the internet.
What started as novelty filters and voice-changing apps has turned into a significant problem. Today, people can create incredibly believable videos or photos of other people saying or doing things they never did.
Are deepfakes really illegal though?
There isn’t a clear answer because the law in this area is still evolving. Existing privacy, harassment and other laws may apply, but several gray areas remain.
To fully explore the current status of deepfake laws and what the Take It Down Act means for victims, creators and platforms, let’s dive deeper.
Legal Status of Deepfakes at the Federal and State Levels
Key takeaways:
Deepfakes can be illegal depending on what’s in the video or photo, who made it, and why.
At the federal level, the United States has long had child pornography statutes and communications laws, and more recently added a civil remedy for “revenge porn.” Until recently, however, no federal law directly referenced “deepfakes” or digital forgeries.
With the Take It Down Act in 2025, that changed.
The Take It Down Act, which originated in the U.S. Senate Committee on Commerce, Science & Transportation, was signed into law in 2025. It established new federal criminal penalties for publishing non-consensual intimate imagery, whether the imagery is real or generated using artificial intelligence.
The law defines “digital forgery” as a realistic image or video of a person’s intimate areas or sexual activity that was created or altered using technology.
If someone shares or posts such content knowing the subject did not consent, and intends to cause harm, they can be charged with a federal offense punishable by fines and/or imprisonment.
State Laws Surrounding Deepfakes
Now here’s where it becomes confusing.
Every state has its own laws. Some have added deepfakes to their revenge porn or non-consensual intimate imagery statutes, while others have done nothing.
Examples of states with new deepfake laws:
Washington’s HB 1999 (2025), which prohibits sharing non-consensual sexual deepfakes; and
Texas, Virginia and California, which impose civil and criminal penalties for sharing non-consensual intimate imagery.
Other states have no laws specifically addressing deepfakes, so prosecutors must rely on older harassment, defamation or stalking statutes to prosecute a deepfake case.
This creates inconsistent outcomes for victims: some get the justice they deserve, while others are left without recourse.
Why Does It Matter?
Laws generally differentiate between adult and child imagery.
Child pornography laws, which carry felony penalties, apply to deepfakes of children.
Prosecuting adult deepfakes requires evidence of intent to harm or an invasion of privacy.
That is a high burden of proof. Furthermore, until the passage of the Take It Down Act, no single federal law applied uniformly throughout the country. The Act has now established a uniform national standard for non-consensual intimate imagery, regardless of whether it is real or AI-generated.
New Legislation Regarding Deepfakes
The main legislation is the Take It Down Act. It is the first federal law to directly address the distribution of deepfakes on the internet.
What Does the Take It Down Act Do?
The Take It Down Act makes it a federal crime to knowingly distribute or share non-consensual intimate images or deepfakes, including AI-generated sexually explicit images designed to depict an actual person.
The Act applies to:
Non-consensual deepfakes
Revenge porn
Publishing fabricated intimate images
AI-created explicit images
The Act does not stop with the creators of deepfakes.
Platforms and websites are also held accountable.
Under the Act, covered platforms must:
Establish a process for victims to file complaints about fabricated images;
Remove the images within 48 hours of receiving a legitimate complaint; and
Take reasonable actions to prohibit the redistribution of the images.
Additionally, the Federal Trade Commission (FTC) can penalize platforms that fail to comply with the Act, treating non-compliance as an unfair or deceptive business practice.
That is a significant departure from how platforms have operated until now: they will be required to proactively address complaints about fabricated images.
Civil Remedies and Criminal Charges
Victims can pursue civil remedies by filing suit against the perpetrator(s) for compensatory and punitive damages related to emotional distress.
Meanwhile, offenders can face federal criminal charges for distributing non-consensual intimate content.
For the first time, the Act treats AI-generated deepfake pornography similarly to “revenge porn,” or image-based sexual abuse.
While the Act specifically deals with sexual content, it also sets a precedent for future laws dealing with deepfakes (e.g., impersonation, fraud and misinformation).
Why This Law Matters
Prior to 2025, the U.S. had only a patchwork of state laws, riddled with federal gaps, to deal with non-consensual AI imagery.
The Take It Down Act now provides a unified national approach.
While the law is new, enforcement and interpretations of the law will develop over time.
Victim Assistance and Civil Remedies
While federal protection is available for victims, the path to relief is typically a long one.
Therefore, civil remedies and victim assistance programs are critical components of the response to deepfake-related harms.
What Victims Can Do
Victims can file a civil action against perpetrators or platforms. Typical claims and remedies include:
Intentional infliction of emotional distress
Invasion of privacy
Defamation
Compensatory and punitive damages for reputational harm
Victims can also bring a federal civil action against perpetrators for monetary damages or injunctive relief to compel the removal of the offending images.
Victims should seek the advice of a personal injury attorney or cyber-harassment attorney to assist with gathering evidence and pursuing compensation.
Useful Organizations and Resources
Several organizations provide direct assistance to victims:
Cyber Civil Rights Initiative (CCRI) — Provides victims with information on applicable laws, crisis counseling, and advocacy services.
National Center on Sexual Exploitation — Works on legislative reform and maintains a network of professionals that provide referrals for victims.
StopNCII.org — Enables victims to generate a digital fingerprint (a hash) of their intimate content on their own device, so participating platforms can detect and remove re-posted copies without the image itself ever being uploaded (a sketch of hash-based matching follows this list).
Internet Crime Complaint Center (IC3) — Lets victims file a formal complaint with the FBI about cyber-harassment and other crimes involving deepfakes.
Take It Down — A free removal tool operated by the National Center for Missing & Exploited Children (NCMEC) that shares its name with the Act; it uses hashed reports to help remove sexually explicit images taken of people when they were under 18.
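To make the fingerprinting mechanism concrete, here is a minimal sketch of hash-based matching in Python. It assumes a plain SHA-256 fingerprint and hypothetical filenames; StopNCII itself uses perceptual hashing, which also catches visually similar copies, but the privacy-preserving idea is the same: only the fingerprint is shared, never the image.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Hash the file in chunks; the image itself never leaves the device."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A platform can compare fingerprints of new uploads against a block list
# without ever receiving or storing the victim's original image.
blocked = {fingerprint("victim_image.jpg")}      # hypothetical filename
print(fingerprint("new_upload.jpg") in blocked)  # True only for a byte-identical copy
```

A cryptographic hash like SHA-256 only matches exact copies, which is why real systems favor perceptual hashes that survive re-encoding and minor edits.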
Why Civil Remedies Matter
While criminal convictions may serve to penalize the perpetrator, they do not necessarily aid the victim in healing.
Civil actions provide victims with potential financial damages, injunctive relief, and a sense of vindication. Additionally, they provide victims with a tangible means of showing that the online abuse they suffered has real-world consequences.
Support networks for victims fill the void between law enforcement and victim recovery.
Obstacles to Enforcement and Prosecution
So, if laws exist to combat deepfakes, why are enforcement and prosecution so difficult?
Because deepfakes are designed to deceive, and tracing who created or posted them can be nearly impossible.
Identifying the Perpetrator
Law enforcement agencies must identify both the creator of the deepfake and the individual(s) who posted it. Often, those responsible hide behind false identities, anonymizing tools or encryption.
Identifying the individuals responsible for creating and posting deepfakes drains law enforcement resources (personnel, equipment, etc.) and can take months or years of investigative work.
Additionally, even when suspects are identified, prosecutors must prove that:
The image is non-consensual;
The image was posted or distributed with the intent to harm; and
The victim sustained demonstrable (physical, psychological, reputational or economic) harm.
This is a heavy burden to meet.
Technological and Evidentiary Issues
Determining whether an image or video is a deepfake requires specialized software. AI-based detection tools can flag digital artifacts or inconsistencies in a deepfake, but they are not foolproof.
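As one concrete illustration of the kind of technique involved (not the specific tools any agency uses), error level analysis re-compresses a JPEG at a known quality and inspects the residual differences; regions edited after the original compression often stand out. A minimal sketch, assuming Pillow is installed and a hypothetical file suspect_frame.jpg:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save a JPEG at a known quality and return the pixel-wise difference."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)  # re-compress in memory
    buf.seek(0)
    resaved = Image.open(buf)
    # Bright regions recompress differently, a possible (not conclusive) sign of editing.
    return ImageChops.difference(original, resaved)

error_level_analysis("suspect_frame.jpg").save("ela_output.png")  # hypothetical file
```

Modern deepfake detection relies on trained models rather than a single heuristic like this, which is part of why the resulting evidence remains contestable in court.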
Criminal defense attorneys often contest the technical evidence presented by prosecutors, arguing that the “forgery” has not been conclusively proved.
Even when the evidence is strong, jurisdictions treat these cases very differently. In some, deepfake pornography is a misdemeanor; in others, it is a felony.
Due to this disparity, it is difficult to collect national data on the number of deepfake prosecutions.
Each conviction, however infrequent, builds precedent and helps courts understand how to handle deepfake evidence.
Prosecutorial Obstacles
Prosecutors are also generally not experts in investigating and prosecuting deepfake pornography cases, which compounds the evidentiary hurdles described above.
The Tension Between Protection and Censorship
The U.S. Congress and state legislatures continue to propose bills that would regulate what individuals can communicate via the internet. The perennial question is where to draw the line between protecting people from harm and censoring the public’s right to free speech.
This is exactly the tension at the heart of regulating deepfakes: AI-generated images or videos created to look and/or sound like a real person.
Free Speech Debates
Even though the First Amendment of the U.S. Constitution prohibits the government from limiting free speech, many cases have established that the government can restrict certain categories of speech, including obscenity, defamation and incitement to violence.
In Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002), the U.S. Supreme Court held that a congressional ban on virtual child pornography violated the First Amendment because it swept in all virtual depictions, regardless of whether the children depicted were actually minors. The Ashcroft decision continues to shape debates over regulating deepfakes.
Some critics of attempts to regulate non-consensual intimate deepfakes fear that such regulation could lead to broad censorship of lawful pornography, LGBTQ+ content and artistic expression.
Liability and Platforms
Under the Take It Down Act, online platforms are required to take down sexually explicit material within 48 hours after receiving a notice that the material depicts someone without their consent.
However, this requirement creates a risk that platforms will monitor users’ communications for sexually explicit content in order to comply, which critics argue would chill users’ free speech.
Digital rights organizations also contend that pressure to remove content rapidly can sweep up legitimate work by artists, comedians, writers and critics of the government who rely on satire and parody.
Platforms may also err on the side of caution to avoid being liable for failing to remove content quickly enough, and therefore remove more content than necessary. This can erode the public’s trust in online platforms, and reduce the amount of public discourse that occurs online.
Finding the Right Balance
Most people agree that protecting the victims of non-consensual intimate deepfakes is crucial; finding a way to do so without stifling free speech is the much harder challenge.
Lawmakers are working closely with free speech advocates, victim rights organizations, and organizations representing LGBTQ+ communities to define clear boundaries between unprotected harms and protected expressions.
Potential solutions include encryption safeguards, verification of consent before posting, and narrowly defined statutory terms to limit overreach.
What to Remember About Deepfakes and the Law
Remember these key points:
• Deepfakes can be unlawful, especially if they portray someone in a sexually explicit manner without their consent.
• The Take It Down Act provides a federal path to justice for victims of non-consensual intimate deepfakes and requires online platforms to take such content down.
• State laws vary significantly in which deepfakes they treat as unlawful and what the penalties are; as a result, victims face inconsistent protection depending on where the harm occurred.
• Civil lawsuits give deepfake victims the most direct means of obtaining remedies, including monetary damages and court-ordered removal of offending content.
• Prosecuting deepfake offenses is difficult because of perpetrator anonymity, technological limitations and a high burden of proof.
• The debate over free speech and the regulation of AI-generated content is ongoing; policymakers and regulators will likely spend years crafting rules that balance protecting victims with protecting free speech.
What the Changes Mean For Your Online Reputation
For anyone concerned about maintaining a positive online presence or reputation, this represents a significant development.
If you find out that there is a fake image or video of yourself circulating online:
• Document the fake content: save screenshots, links and timestamps for each item (a simple evidence-preservation sketch follows this list).
• Immediately report the fake content to the platform(s) where the content is located, using the reporting mechanisms provided by each platform.
• When requesting removal of the fake content, cite the Take It Down Act or the applicable state law.
• Contact a reputable reputation management firm or attorney specializing in AI-based defamation.
• File a complaint with the Internet Crime Complaint Center (IC3).
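For the documentation step above, here is an illustrative sketch of evidence preservation in Python: it saves a raw snapshot of a page or image along with the URL, a UTC timestamp and a SHA-256 hash of the fetched bytes, so you can later show what was online and when. The URL and filenames are hypothetical, and whether such records are admissible is a question for your attorney.

```python
import hashlib
import json
from datetime import datetime, timezone
from urllib.request import urlopen

def preserve(url: str, out_prefix: str) -> dict:
    """Save a snapshot of the content at `url` plus a metadata log."""
    data = urlopen(url).read()  # fetch the page or image bytes
    record = {
        "url": url,
        "retrieved_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # lets you verify the snapshot later
    }
    with open(out_prefix + ".bin", "wb") as f:   # raw snapshot
        f.write(data)
    with open(out_prefix + ".json", "w") as f:   # metadata log
        json.dump(record, f, indent=2)
    return record

# Usage (hypothetical URL): preserve("https://example.com/fake-post", "evidence_001")
```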
Protecting your online reputation is no longer solely about search engine optimization (SEO); it now means protecting your digital identity as well.
While AI-based deepfakes make fake content easier to create, the Take It Down Act and similar legislation represent a major step forward in providing recourse for victims.
Final Thoughts
Are deepfakes illegal?
Yes, they can be — and increasingly so.
The passage of the Take It Down Act was a pivotal moment for U.S. law as it relates to AI-generated sexually explicit content. Rather than treating such content as “creative expression” beyond regulatory reach, the Act recognizes that it can cause real harm to its subjects.
Although the Take It Down Act has been passed, enforcement will take time to mature. Victims of deepfakes need education, legal assistance and ready access to tools for removing offending content.
Balancing free speech and protection of victims from harm will likely remain one of the greatest digital rights challenges of the coming decade.
For now, the most effective defense against deepfakes is knowing your rights, having access to the tools needed to defend them, and acting quickly when your image, or someone else’s, is used without consent.