The ubiquity of the internet today, particularly of social media networks and large search engines, has complicated the ways in which content is produced and received, deeply altering how society thinks about the rights to free speech, freedom of the press, and freedom of expression. Public speech can now spread worldwide while its authors remain anonymous. Nazi-related content liable to prosecution in Europe is safely hosted on US servers, where it is protected by the First Amendment. On top of this, our access to content is now thoroughly mediated by algorithms designed to maximize profit. This transformation in the production, distribution, and consumption of content has become the inexorable backdrop of contemporary debates over the basic right to freedom of speech.
As automation emerges as a force affecting all spheres of production, we are increasingly confronted with its ethical implications. This is especially true of the discussions spurred by the new digital contexts that shape public opinion. The automation of decision-making by digital platforms in sensitive areas such as editing, moderating, advertising, and circulating information is at the source of many controversies. When the ways in which information and opinion are produced and disseminated become open to manipulation, we are forced to confront the consequences: unregulated platforms that exploit the same capitalist logic that undermines society in so many other ways.
In this new piece, Frank Pasquale, affiliate fellow at Yale Law School’s Information Society Project and author of The Black Box Society: The Secret Algorithms That Control Money and Information, argues that powerful interest groups build their dominance with the help of a regulatory regime that permits secrecy and complexity. The study tackles events surrounding the recent US elections, as well as other cases in which online interventions (or the lack thereof) have allowed hateful ideologies to spread to the broader public. Presenting a series of legal and educational steps, the author shows how we may curtail the pathologies created by the contemporary automated public sphere.
Disrupting secretive business strategies, preventing the concentration of power and money in a few hands, and introducing more instances of human intervention are initiatives that focus on regulating the power held by big tech companies. However, other proposed measures raise crucial ethical questions about this regulatory apparatus. How can we ensure that regulation does not turn into censorship and suppression, and thereby into yet another tool for manipulating and controlling society?
Without fully endorsing these proposals, or necessarily opposing all such forms of regulation, we believe that as progressive actors, often on the losing end of digital control and/or harassment, we need to reconsider our strategies and methods in these new contexts. How can we rethink regulation so that it works fairly and transparently everywhere? Can we devise ways to regulate users, who play an important part in producing and distributing content, without resorting to brute censorship? Are these desirable, forward-looking options, or desperate reactions to the reality of today’s digital world? Ultimately, we need to ask: what is the role of the internet in society today, and how can we improve the digital environment for all of us?
Juliet Lu and Erik Myxter-Iino