Meta Platforms Inc., previously known as Facebook Inc., has taken a definitive stand on the use of generative artificial intelligence (AI) in political advertising. Meta has decided to bar political campaigns and advertisers from using its generative AI ad tools, a move aimed at curbing the spread of misinformation during elections. This decision is a critical step in addressing the complexities that AI introduces into digital political discourse.
Meta’s policy adjustment was announced after concerns were raised about the potential for AI-generated content to exacerbate the spread of misinformation, particularly in the context of political campaigns. The policy update was disclosed through Meta’s help centre, where the company clarified its stance on the use of AI in advertising for sectors that are heavily regulated and sensitive to misinformation, such as politics, health, and finance.
The tech giant’s newly implemented guidelines state that ads in the categories of housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, and financial services are prohibited from using generative AI features. This initiative is part of Meta’s broader strategy to better understand the potential risks of AI-generated content and to establish robust safeguards, especially given the nuanced and influential nature of advertisements in regulated industries.
Meta’s move to restrict generative AI in ads aligns with actions taken by other major players in the digital advertising space. Alphabet Inc.’s Google, the world’s largest digital advertising platform, has also introduced its version of generative AI ad tools while simultaneously planning to block “political keywords” from being utilised as prompts in the creation of such ads. Google has further planned policy updates that mandate election-related ads to disclose the use of “synthetic content,” a term for media that may not authentically depict actual people or events.
The proactive steps taken by these tech behemoths reflect the industry’s recognition of generative AI’s power and the responsibility that comes with harnessing it. Political ads, with their capacity to influence voter perception and behaviour, depend heavily on the authenticity of their content. By restricting the use of generative AI tools in creating such ads, Meta aims to preserve the integrity of political advertising and prevent the erosion of trust in democratic processes.
Nick Clegg, Meta’s top policy executive, has highlighted the necessity of updating rules related to generative AI and political advertising. His comments underscore the need for a regulatory reevaluation in anticipation of future elections, including the U.S. presidential race in 2024. The focus is on ensuring that election-related content that circulates across different platforms does not become a vehicle for misinformation due to AI-generated distortions.
Beyond these restrictions, Meta is also working on marking AI-generated content with watermarks and has taken a firm stance against misleading AI-generated videos. The company’s independent Oversight Board is currently reviewing Meta’s approach to AI-generated content, including cases involving manipulated videos of public figures, to assess whether the existing policies adequately address the challenges posed by AI.
As Meta and other companies continue to unveil AI-driven advertising products and virtual assistants, the tech industry’s policy decisions, such as Meta’s move to limit generative AI tools for political ads, will likely serve as precedents. These decisions will shape the role of AI in advertising and, by extension, its impact on public discourse and democracy. The convergence of AI and advertising remains a contentious issue, with the potential to either enhance the marketplace of ideas or disrupt it with unprecedented forms of misinformation. The attention given to this issue by Meta and its peers marks a critical juncture in the responsible deployment of generative AI technologies.