Should robots control what we read?

By Matti Pohjonen | July 4, 2019 | AI, Digital cultures, Extreme speech, India, Research, Social media

For somebody who has been following digital politics globally for more than a decade now, it is sometimes uncanny how hateful, violent and misleading communication – or at least the public and political controversies and moral panics around it – now dominates the global political landscape. Digital media, it seems, is imagined mostly in terms of the dangers it poses: violent extremist propaganda run amok; democratic processes corrupted by disinformation and fake news; the social fallout of toxic hate speech.

But what is often forgotten in these debates is that all this hate online is also now big business. Technology companies are busy innovating with new technologies that could act as a definitive solution to the ill effects of contemporary digital communication. Facebook, Twitter and Google have experimented with using artificial intelligence to remove “bad content” before it becomes public, taking down hundreds of thousands of accounts and hundreds of millions of pieces of content with the help of new algorithmic systems. There is also now a booming global trade in such systems and services used to monitor, analyse and act upon online and social media data. Not surprisingly, the same technological solutions developed to identify terrorist content or hate speech can easily be tweaked to identify political dissent in countries where there are fewer safeguards against their misuse. As these algorithms become more sophisticated with rapid developments in machine learning and AI, their significance is only predicted to grow.
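To make concrete what this kind of filtering involves, here is a deliberately minimal sketch in Python (using scikit-learn; the training examples, the threshold and the publish function are all hypothetical, and bear no relation to any platform’s actual system): a classifier is trained on labelled examples, and every new post is then scored – and possibly withheld – before it ever becomes public.

```python
# A toy illustration of pre-publication content filtering: train a text
# classifier on labelled examples, then score each new post and silently
# withhold it if the score crosses a threshold. All data and names here
# are hypothetical; real systems use far larger models and datasets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled training data (hypothetical); platforms train on millions
# of moderator-labelled posts.
posts = [
    "have a great day everyone",
    "looking forward to the match tonight",
    "those people are vermin and deserve to suffer",
    "we should drive them all out of the country",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = "bad content"

# TF-IDF features plus logistic regression: a deliberately simple
# stand-in for the large neural models actually deployed.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

def publish(post: str, threshold: float = 0.5) -> bool:
    """Return True if the post is published, False if it is withheld.

    The decision happens before the post is ever public, and both the
    threshold and the training labels encode a policy choice.
    """
    p_bad = classifier.predict_proba([post])[0][1]
    return p_bad < threshold

for post in ["what a lovely morning", "they are vermin, drive them out"]:
    print(f"{publish(post)!s:>5}  {post}")
```

Note that nothing in the sketch fixes what counts as “bad content”: relabel the training examples from hate speech to political dissent and exactly the same machinery filters out dissent instead, which is the dual-use worry raised above.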

In response to the complex questions raised by this unholy trinity of toxic politics, freedom of speech and technological innovation globally, I wrote a short piece for the “Internet Speech: Perspectives on Regulation and Policy” workshop held around the Indian elections in 2019 (an event that I, unfortunately, could not attend myself). The piece explored some of the challenges raised by the growing use of AI for monitoring and removing digital content. I argued that:

Perhaps the question we should be asking, in addition to what types of expression and speech should be permitted into the sphere of legitimate political debate (and the problems of freedom of expression this raises), is how the technological solutions and business logics used by companies such as Facebook also factor into the creation of the problem of extreme speech globally. And what can be done about this?

… Social media companies have already been experimenting with using artificial intelligence (AI) to filter and remove “bad content” before it becomes public. As these algorithms become more sophisticated with breakneck developments in machine learning and AI, and as countries push through legislation to make automatic filtering of content a legal requirement, the significance of such algorithmic systems will only grow.

… From an international perspective, leaving decisions about what types of content should be allowed in public and political discussion to the proprietary AI algorithms of technology companies is something I do not feel very comfortable with, in India or elsewhere.

As this digital clamour around extreme speech and its regulation becomes an increasingly defining feature of global communication in the 21st century, these “algorithmic mediations of media” in crisis need to be made the focus of critical research.

You can read the full article here: 

Indeed, one of the key research questions I am tackling at the Centre for Global Media and Communication, SOAS, is how best to research what I foresee to be one of the most pressing questions for understanding global digital communication in the near future.

Meanwhile, many thanks to Scroll India, the Centre for Internet and Society and the Digital Dignity project for helping us get one step closer to this goal.
