Professor Nahal Kazemi’s op-ed is based on her research article that originally appeared in the Fordham Law Voting Rights and Democracy Project forum.
After an unprecedented month of upheaval in the presidential race, and with the election still more than three months away, we can expect online disinformation to again become a major concern. Much attention has been paid to the ability of new AI technologies to generate "deep fakes," or sophisticated, artificially generated video and audio that present entirely false information. We have also seen the widespread proliferation on social media of "cheap fakes," deceptively cropped and edited videos that paint a misleading picture.

In this rapidly changing landscape, many are asking whether the government should have any role in combating political disinformation online, and if so, what governmental action would be consistent with the First Amendment. Others are asking whether social media companies are improperly or unfairly silencing certain viewpoints. But we also need to ask what we, as citizens and media consumers, should demand of both the government and social media companies: to protect the integrity of our elections, to inform us about how our data are used to target content at us, and to respect our privacy as we seek information.

The Supreme Court heard a trio of important cases on the First Amendment and government limitations on speech this term: NRA v. Vullo, Murthy v. Missouri, and the NetChoice cases. In these cases, the Court reaffirmed important rights and limits on efforts to regulate speech. It held that the government cannot threaten regulatory action against third parties for their association with a speaker advancing an unpopular political viewpoint. The Court also overturned a lower court's decision prohibiting the federal government from communicating with social media platforms to encourage them to enforce their own content moderation policies against disinformation. Finally, in a decision sending the case back to the lower courts for further proceedings, the Court reaffirmed social media companies' right to moderate compiled content (such as newsfeeds) free from government attempts to make them "viewpoint neutral."

Both government and social media companies have a legitimate interest in combating disinformation (intentionally false information) and misinformation (information that is false, but that the speaker may genuinely believe to be true). While government cannot censor or punish political speech (even most false political speech), it can advance its own viewpoints and try to persuade others, so long as that effort does not become coercive. Social media platforms have their own speech rights to moderate content and to create communities consistent with their missions. They also have strong financial incentives to keep their platforms from becoming unpleasant, untrustworthy, or downright hateful environments, the better to attract and retain users and advertisers.

Social media companies are not merely deciding what can and cannot be posted on their platforms, however; they are also continuously gathering user data and making that information available to advertisers for precisely targeted ads, often without users' knowledge of how their information is gathered, stored, analyzed, or sold. These data create a potent tool for both political campaigns and those seeking to peddle disinformation. And unlike many other advanced countries, the United States lacks a comprehensive federal data privacy law to protect internet users from the misuse of their data, or to require transparency about how that data is used.

While many platforms are either downgrading political content in users' feeds (Facebook) or refusing to carry political advertisements outright (TikTok), others now permit a broader range of political content and advertising (X, formerly known as Twitter). Much of the paid political advertising you will see on these platforms will have been targeted at you, based on all sorts of data gathered about you from the internet, including your shopping and reading habits, location, age, occupation, gender, and sexual orientation. While some U.S. states allow users to opt out of having their data used this way, most do not.

Additionally, some of the so-called news you will see on these platforms will actually be part of foreign disinformation campaigns run by adversaries including Russia, China, Iran, and North Korea, geared toward amplifying divisions in American society, sowing distrust, and confusing voters. These adversaries use the same data gathered by social media companies to selectively target their audiences for maximum impact. State and federal governments need to continue monitoring these foreign disinformation campaigns to protect the integrity of our elections, and government and the private sector will need to be able to communicate with each other to combat these threats effectively.

What we as citizens and consumers of content should demand of all platforms is (1) greater transparency about who is paying for political advertising to ensure foreign adversaries are not illegally influencing our elections; (2) enhanced transparency on how advertisements (including political advertisements) are targeted to users; (3) the opportunity to opt out of having our data harvested for the purpose of targeting advertisements; (4) clear and transparent content moderation guidelines that are fairly enforced; and (5) explanations if and when our content is taken down for violating a platform’s terms of service.  We should also demand that Congress and the President enact legislation requiring platforms to increase transparency and give users more control over how their data is used.

These changes are essential not only to protecting our privacy, but to protecting our democracy as well.