Are We Entering a New Era of Social Media Regulation?


by Bloomberg Stocks

The violence at the U.S. Capitol — and the ensuing actions taken by social media platforms — suggests that we may be at a turning point in how business leaders and government bodies approach social media regulation. But what exactly will this look like, and how will platforms balance supporting free speech with getting a handle on the rampant misinformation, conspiracy theories, and promotion of fringe, extremist content that contributed so significantly to last week’s riots? The author argues that the key is to understand that there are fundamental structural differences between traditional media and social media, and to adapt approaches to regulation accordingly. The author goes on to suggest several areas of both self-regulation and legislative reform that we’re likely to see in the coming months in response to both recent events and ongoing concerns with how social media companies operate.

After years of controversy over President Trump’s use of social media to share misleading content and inflame his millions of followers, social media giants Facebook and Twitter finally took a clear stand last week, banning Trump from their platforms — Facebook indefinitely, and Twitter permanently. Could this indicate a turning point in how social media companies handle potentially harmful content shared on their platforms? And could it herald a new era of social media reforms, through both government policies and self-regulation?

For many, Facebook and Twitter’s bans were long-awaited. But the issue isn’t so cut and dried: many others have decried these decisions as infringements on free speech. To be clear, the First Amendment only protects individuals’ speech from U.S. governmental suppression — there is nothing illegal about a private firm censoring people on its platform. Still, even if it isn’t a legal issue with respect to the First Amendment, the question of when and how it’s appropriate for private companies to “de-platform” people — especially notable public figures like Trump — is far from obvious. Many Americans have suggested that, freedom of speech aside, these actions clearly illustrate what they see as mainstream media’s inherent bias against conservative voices. Even German Chancellor Angela Merkel has validated these concerns, with her spokesman noting that the “right to freedom of opinion is of fundamental importance,” and that as such, it is “problematic that the president’s accounts have been permanently suspended.”

At the same time, even those who feel the bans were appropriate acknowledge that simply banning a single account is hardly an adequate solution to address the deep-rooted issues that led to the events of January 6. No doubt, Trump’s violence-inciting posts were a significant factor, but social media platforms’ broader tendency to promote and amplify conspiracy theories, fringe groups, and other problematic content must also be addressed.

One of the key reasons that these issues are so difficult to untangle is that social media is fundamentally different from traditional media (that is, newspapers, radio, and broadcast networks) — and so traditional approaches to regulation have largely fallen short. There are a few key dimensions worth considering: First, traditional cable news (and to a lesser extent, other traditional news media) are defined by limited bandwidth. There are a limited number of news media networks, and a limited number of primetime windows and headline slots with which to influence as large an audience as possible. In contrast, social media platforms offer essentially infinite bandwidth, with millions of accounts that can each target much narrower audiences.

Second, traditional news content is produced with editorial oversight: A set of producers and the executives above them determine which personalities and viewpoints will be broadcast across their networks or given coveted publication space. This makes it easier for companies to supervise the content shared on their platforms, and easier for third parties to hold those companies accountable. This is in contrast to social media, in which platforms are merely conduits for user-generated content that’s subject to much less moderation.

Finally, in general, viewers and readers of traditional news media must proactively choose the content they consume — whether that’s a show they choose to watch or a column they choose to subscribe to. Social media users, on the other hand, have almost no control over the content they see. Instead, platforms use complex algorithms to serve content they think will keep users scrolling, often exposing them to more radical posts that they may never have sought out on their own.

Importantly, social media platforms and many traditional media companies are profit-driven — that is only natural, and it isn’t inherently problematic. But their strategies for maximizing profits are fundamentally different, so applying the same regulatory frameworks to both just doesn’t work. Specifically, while the traditional media business model can lead to significant polarization, limited bandwidth and editorial oversight generally incentivize these companies to reach broad(er) markets, keeping them from publishing extremely fringe content. The social media business model, however, relies on leveraging individual users’ data to push highly personalized content that maximizes scroll time, incentivizing more customized — and thus potentially more extremist — content. Politically polarized media isn’t a new issue, but the kind of hyper-individualized polarization made possible (and indeed, made inevitable) by current social media models poses a uniquely dangerous threat. And the violence at the Capitol last week graphically illustrated that danger.

A possible silver lining of those horrifying events, however, is that they highlighted the ongoing problem so clearly that they could serve as a real turning point in efforts toward a solution. Indeed, Facebook and Twitter’s unprecedented bans of President Trump suggest that a new era of social media regulation (enforced both externally and internally) may be close at hand. There are a few key areas where we can expect to see effective, systemic reform in the coming weeks and months:

First, the voluntary actions taken by Facebook and Twitter highlight the important role of self-regulation from within the industry. In addition to its Trump ban, Twitter has instituted a number of additional changes, including banning over 70,000 accounts associated with the QAnon conspiracy theory, while Facebook has begun banning posts with the phrase “stop the steal.” Other companies that have implemented content takedowns and internal reforms since last week include YouTube, which has removed what it describes as violence-inciting videos from Trump’s channel and instituted a one-week ban on new uploads to it; Snapchat, which locked Trump’s account; and Stripe, which stopped processing payments for Trump’s campaign website.

That said, the fact that mainstream social media platforms like Facebook and Twitter took so long to censor Trump has raised serious questions over whether the violence the world witnessed last Wednesday could have been avoided entirely had companies done more to protect against algorithmic political polarization in the first place. In fact, some have suggested that these companies’ recent actions are little more than self-preservation — an attempt to win over the incoming Democratic administration (which is likely to be tougher on social media regulation) rather than a true acknowledgement of the harm their platforms can cause. To elicit real change, it will be essential that business and government leaders not simply use this moment as a partisan opportunity to take down a single actor or further a single political cause, but instead enact reforms that address the root causes at play. To that end, self-regulation will be an important component of effective reform, but government support will almost certainly be needed as well.

More notably, the U.S. government is currently determining what exactly should happen to Section 230 of the Communications Decency Act, the federal law that shields internet companies from liability for user-generated content disseminated on their platforms. Recent events, as well as ongoing antitrust concerns, suggest that early in the Biden-Harris administration we can expect a robust examination of how the law could be adapted to better protect the public from harmful content. Congress might, for instance, require social media platforms to meet certain standards of transparency and data protection in order to qualify for Section 230 protection — in fact, bipartisan legislative reforms along these lines have already been introduced for consideration. Alternatively, Congress could propose carveouts from Section 230 liability protection, so that social media companies could be held liable for user-generated disinformation or hateful content. Such measures would be similar to the carveout approach already applied in the recent FOSTA-SESTA legislative package, which reduced protections for online platforms that enable sex trafficking.

In addition, now that Democrats have won the presidency and both Congressional chambers, we’re likely to see robust reforms across a slate of technology regulation areas, including privacy, market competition, and algorithmic transparency. While the Obama administration’s baseline privacy proposal stalled out in a gridlocked Congress, the Biden administration will have the support of a Democratic-majority House and Senate, likely enabling it to advance comprehensive privacy regulations. One potential piece of low-hanging fruit is the Honest Ads Act, the digital political ad transparency bill spearheaded by Senator Mark Warner but stymied by a Republican Congress. If the bill is reintroduced, the now-Democratic Congress will most likely support it, offering a quick win for privacy advocates.

With democracy at stake, how companies and regulators act today will determine the future of public discourse. Social media firms — and tech companies more broadly — now face a critical decision: Do they continue to engage all customers without limitation and risk stringent regulatory intervention (not to mention the moral hazard of enabling the proliferation of harmful content), or do they preemptively curb extremism through more aggressive self-moderation, such as the actions many took in the last week? There are no easy answers — but recent events have shown that, one way or another, the status quo cannot persist.
