Tech companies face a real threat of government regulation in reaction to the socially harmful content distributed on their platforms. To avoid it, tech companies should form an industry coalition and take proactive steps to self-regulate now. Our research into industries including television, movies, and video games suggests that effective self-regulation can benefit both society and companies, and keep government regulators at bay.
The world witnessed the worst example of the impact digital platforms can have on society with the debacle at the U.S. Capitol on January 6, 2021. Not only did supporters of Donald Trump try to disrupt the certification of the Electoral College votes, but this deplorable incident was, in large part, fomented over social media.
In the past, Twitter and Facebook were reluctant to censor posts about conspiracy theories and fake news. Digital platforms have also benefited from a 1996 law, Section 230 of the Communications Decency Act, which grants them immunity from liability for content posted by third parties. Nevertheless, prompted by false accusations of rigged elections and other fake news, the leading social media platforms recently began tagging some posts as unreliable or untrue and removing some videos. Following the January 6th insurrection attempt, Twitter and Facebook also banned Trump from using their platforms because the promotion of violence and criminal acts violates their terms of service. For similar reasons, Apple and Google removed the alternative Parler social media platform from their app stores, and Amazon stopped hosting the service.
How did we get into this mess?
Digital platforms can be highly profitable businesses that connect users and other market actors in ways not possible before the internet. When they are successful, they generate powerful feedback loops called network effects and then monetize them by selling advertisements. But what happened at the U.S. Capitol illustrates how digital platforms can be double-edged swords. Yes, they have generated trillions of dollars in wealth. But they have also enabled the distribution of fake news and fake products, manipulation of digital content for political purposes, and promotion of dangerous misinformation on elections, vaccines, and other public health matters.
The social dilemma is clear: Digital platforms can be used for evil as well as good.
What’s the solution? Should platform companies wait for governments to impose potentially intrusive controls and respond defensively? Or should they act pre-emptively?
Governments will inevitably get more engaged in oversight. However, we believe that platforms should become more aggressive about self-regulation now. To explore the feasibility of self-regulation, we researched its history before and after the widespread adoption of the internet. We found that companies often risk creating a “tragedy of the commons”: by putting their short-term, individual self-interest ahead of the good of the consuming public or the industry overall, they can, in the long term, destroy the environment that made them successful in the first place.
Before the internet era, several industries, including movies, video games, broadcast content, television advertising, and computerized airline reservation systems, faced similar issues and managed to self-regulate with some success. At the same time, these historical examples suggest that self-regulation worked best when there was a credible threat of government regulation. The bottom line: Self-regulation may be the key to avoiding a potential tragedy of the commons for digital platforms.
What is “self-regulation”? The term refers to steps that companies or industry associations take to preempt or supplement governmental rules and guidelines. For an individual company, self-regulation ranges from self-monitoring for regulatory violations to proactive “corporate social responsibility” (CSR) initiatives. Leaving it up to companies to monitor and restrain themselves can sometimes devolve into a regulatory “charade.” But that doesn’t need to be the case.
For many decades, companies producing movies, video games, and television shows and commercials have all faced questions about the appropriateness of their “content” in ways that resemble today’s social media debates. To keep regulators at bay, the movie and video game industries adopted self-imposed, self-monitored rating systems that remain in operation today. The broadcasting and advertising sectors of the 1950s and 1960s faced pushback on the appropriateness of advertisements, with issues resembling what we see today in online advertising. And the computerized airline reservation business, led by American Airlines’ Sabre system (launched in 1960), introduced self-preferencing in search results, a practice similar to complaints now made against Google and Amazon. Self-regulation in these cases often delivered effective and inexpensive guidelines for company operations as well as forestalled more intrusive government intervention.
History provides several lessons for today’s digital platforms.
First, our leading technology companies need to anticipate when government regulation is likely to become a key factor in their businesses. In movies, radio and television broadcasting, computerized airline reservations, and other new industries, a regulatory vacuum often existed in the early years. Then, after a kind of “wild west” period, governments stepped in to regulate or to pressure firms to curb abuses. To avoid problematic government regulation, platform companies need to introduce their own controls on behavior and usage before the government revokes all Section 230 protections, a move currently under debate in Congress. Technology that exploits big data, artificial intelligence, and machine learning, combined with some human editing, will increasingly give digital platforms the ability to curate what happens on their platforms. The real issue is the extent to which the big platforms have the will to self-regulate. The decisions by Facebook, Twitter, Amazon, Apple, and Google during the first week of January 2021 were steps in the right direction.
Second, we find that firms in new industries tend to eschew self-regulation when the perceived costs imply a significant reduction in revenues or profits. Managers rarely like industry regulations that appear “bad for business.” However, this strategy can be self-defeating: if bad behavior undermines consumer trust, digital platforms will not continue to thrive. Look closely at Section 230. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This provision gave online intermediaries broad immunity from liability for user-generated content posted on their sites. Executives and company lawyers should have felt comfortable making reasonable curation decisions under Section 230. Instead, they generally resisted, arguing that their legal and political positions would be more secure if they avoided potentially controversial curation. Internal debates, ranging from free speech versus censorship to how much curation a firm can perform before it crosses the line from platform to “publisher,” led most social media companies to resist aggressive curation until very recently. Yet Section 230 also includes a “good Samaritan” exception, which allows platforms to remove or moderate content deemed obscene or offensive, as long as they do so in good faith. Growing calls from both Democrats and Republicans to repeal Section 230 stem from accusations of bias (i.e., not acting in good faith) and from how little curation Twitter, Facebook/Instagram, and other platforms performed over the prior decade. More explicit and transparent self-regulation, like we observed after the U.S. Capitol debacle, might well produce a better outcome for social media platforms, at least compared with leaving their fate up to Congress.
Third, proactive self-regulation was often more successful when coalitions of firms in the same sector worked together. We saw this type of coalition activity in movie and video-game rating systems that limited violent, profane, or sexual content; in television advertising rules that curbed ads for unhealthy products like alcohol and tobacco; and in computerized airline reservation systems that gave equal treatment to all airlines rather than favoring the system owners. Similarly, social media companies have implemented codes of conduct on terrorist activity. Since individual firms may hesitate to self-regulate if doing so means incurring costs that their competitors do not, industry coalitions have the benefit of reducing free-riding. Now is the ideal time for more “coopetition,” where platforms cooperate as well as compete with their rivals.
Fourth, we found that firms or industry coalitions get serious about self-regulation primarily when they see a credible threat of government regulation, even if acting may hurt short-term sales and profits. This pattern occurred with cigarette and tobacco advertising, airline reservations, social media ads for terrorist recruitment, and pornographic material. That threat should be clear and obvious to digital platforms in 2021.
In sum, history suggests that modern digital platforms should not wait for governments to impose controls; they should act decisively and proactively now. While the costs of government action in the internet era have been modest so far, the regulatory environment is changing fast. Given the increasing likelihood of government action, the goal of self-regulation should be to avoid a tragedy of the commons, where a lack of trust destroys the environment that allowed digital platforms to thrive in the first place. Going forward, governments and digital platforms will also need to work together more closely. Since more government oversight of Twitter, Facebook, Google, Amazon, and other platforms seems inevitable, new institutional mechanisms for more participative forms of regulation may be critical to the platforms’ long-term survival and success.