The FTC and AI Marketing Regulations
The ongoing trend in marketing is to leverage AI wherever possible, from video generation to e-commerce. As AI grows more powerful, so do the dangers associated with its use. Recently, the Federal Trade Commission has become more involved in policing uses of AI that can harm consumers. What do these new AI marketing regulations mean for marketers and other content creators who use AI to supplement their content and copy?
New AI Regulations and Inquiries
On February 9, 2024, the FTC released a data packet and press release showing that consumers reported losing more than $10 billion to fraud in 2023. Email has displaced text messaging as the most common method scammers use to reach their victims. Imposter scams, in which scammers pose as legitimate businesses or government entities, took the top spot; business imposters alone caused over $700 million in losses, up sharply from $438 million in 2021.
In response, the FTC is taking proactive steps to protect consumers and, hopefully, reduce fraud in 2024. At the forefront is the “largest-ever crackdown on illegal telemarketing,” but much of the FTC’s effort is focused on the soaring levels of imposter scams and on confronting new forms of AI-based fraud.
Concerns about the use of AI to deceive and defraud aren’t unique to the FTC. A growing movement of individuals, businesses, and other entities is taking steps to get ahead of harmful uses of AI. The recent “TikTok Timeout” by Universal Music Group is the most prominent example directly affecting consumers and their use of media. The FTC, however, is the major government entity dedicated to turning protections against harmful AI into enforceable rules rather than mere guidelines.
Beyond the use of AI to commit fraud, the FTC is also concentrating on the possibility of large AI monopolies formed by tech and development companies. On January 25, 2024, it launched an inquiry into companies with AI partnerships, including Alphabet, Inc. (Google’s parent company), Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc., giving the companies 45 days to respond. The inquiry is designed to determine whether certain dominant companies are attempting to distort or sway innovation in their favor and, more importantly in this author’s opinion, undermine fair competition.
The Issue
At the forefront of these efforts, the FTC is currently accepting public comment on an extension of the recently finalized Trade Regulation Rule on Impersonation of Government and Business. This extension concerns AI marketing regulations and explicitly bans:
(1) calling, messaging, or otherwise contacting a person or entity while posing as an individual or affiliate thereof, including by identifying an individual by name or by implication;
(2) sending physical mail through any carrier using addresses, identifying information, or insignia or likeness of an individual;
(3) creating a website or other electronic service or social media account impersonating the name, identifying information, or insignia or likeness of an individual;
(4) creating or spoofing an e-mail address using the name of an individual;
(5) placing advertisements, including dating profiles or personal advertisements, that pose as an individual or affiliate of an individual; and
(6) using an individual’s identifying information, including likeness or insignia, on a letterhead, website, e-mail, or other physical or digital place.
Also included in this revision is a provision that would make it unlawful for an AI company to offer a service that it knows is being used to harm consumers through impersonation, such as voice cloning software that is consistently and publicly used to create deepfakes. From FTC Chair Lina M. Khan: “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
These actions represent an extension of a long-running FTC campaign to introduce AI marketing regulations that limit the use of AI and AI-related tools to deceive consumers. In early 2023, the FTC blog published several articles on the use of the term “AI” in marketing and the use of AI in the workplace. Among the writers’ recommendations were warnings to producers to make sure that services offered under the AI umbrella were actually AI tools, and that producers weren’t lying about or exaggerating their capabilities. These warnings serve as an eerily accurate prelude to the actions currently being taken.
AI Regulations and Marketers
Even if AI can make it look like Taylor Swift attended your company party, using those videos or images to convince people they are real would violate the new FTC regulations. While few marketers would do this for actual marketing content, past trends have featured some version of this very action, mostly for comedic effect.
The current version of the FTC rules does not specify that the rules on likeness apply only when scammers use AI to commit fraud. They are quite clear that no one, whether a marketer chasing a trend or a scammer committing fraud, can use AI to deceive people into thinking someone is affiliated with something they aren’t.
The AI deepfake threat doesn’t come only from users; it comes from the AI platforms themselves as well. In its original campaign against improper use of AI in the consumer world, the FTC warned companies against allowing their products to create deepfakes. OpenAI has taken proactive steps to prevent its models from creating likenesses of celebrities, writing in the style of known authors, and other risky generations.

The FTC warns that “your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.” At the time of writing, it seemed impossible to get ChatGPT to generate an image of someone like Taylor Swift, though images of other celebrities were readily obtainable. ChatGPT did not, however, allow branded photo-realistic images.

The above AI-generated image of a celebrity holding a sign would be a clear violation of the new FTC AI guidelines were the sign properly branded.
Final Thoughts
As AI’s power and prevalence in the marketing world increase, marketers need to be increasingly aware of the limits on how they should use it. Above, ChatGPT could be persuaded to create images of a celebrity holding a branded sign, but the ability to generate does not translate into the right to use. Even for the purposes of this blog post, it would have been inappropriate to put an actual brand behind any of these celebrities.
As the FTC’s AI marketing regulations evolve and become more comprehensive, check back here for guidance on how to understand them from a marketer’s perspective. If you’re a business wondering how to incorporate this into your own marketing, check out our services page and let us do the legwork for you, or subscribe to our newsletter. At Online Optimism, we’ll harness the power of technology while preserving those personal touches. Here, we keep it human.