Aurora Goodwin argues that we can protect ourselves from disinformation by questioning our own biases when facing divisive online material.
It started with the elections and shows no sign of slowing down; with each high-profile incident and contentious debate come fresh discussions of disinformation and nefarious online actors.
As Prof. Martin Innes recently pointed out, here in Wales we are not exempt from these problems. To prevent Welsh media and citizens from becoming increasingly exposed to disinformation, we first need to understand what disinformation is and the ways it can present itself online.
As part of my research into online identity performance, I found that disinformation accounts perform identity in ways that reinforce pre-existing stereotypes. In Wales, we should therefore be aware of online constructions that emphasise harmful generalisations of Welsh communities and identities.
Disinformation is the intentional spread of information known to be false. Online, the spread of disinformation can be rapid and far-reaching for various reasons, including a lack of fact-checking requirements and the tendency for us to favour information that fits our pre-existing worldview.
My research sample helps to illustrate the potential power and influence of disinformation accounts. Before they were removed from Twitter, some accounts had over 100,000 followers, were featured in the mainstream media as evidence of public opinion, and were engaged with by prominent figures such as the then US President Donald Trump and former US national security adviser Michael Flynn.
Identifying Disinformation
While the spread of disinformation is commonly associated with the online sphere, its impact is not restricted to the web. From the Welsh Government mistakenly sharing advice on Twitter that tenor choir singers are more likely to spread COVID than sopranos and altos, to unfounded Facebook rumours that primary school children are being forced to pray to Allah in Merthyr Tydfil, it appears that online and offline spheres should no longer be viewed as different worlds, but as extensions of one another. This idea is supported by a review of Merthyr Tydfil Borough Council, which found that important issues are being overshadowed by councillors’ participation in online arguments.
‘It appears we cannot judge a book by its cover when it comes to identifying potentially harmful accounts.’
My research focuses specifically on accounts identified as belonging to the Internet Research Agency, a Russian disinformation organisation indicted by the US Government for ‘engaging in operations to interfere with elections and political processes’, most notably the 2016 presidential election. The accounts largely posed as American citizens and ‘breaking news’ accounts. My research aims to investigate how these accounts managed to present themselves as ‘real’ Americans and news pages.
While the accounts were targeted at the US, the UK was not exempt from their interference efforts. In one case, several British newspapers used an Islamophobic tweet from a disinformation account to suggest that there were social divisions following the 2017 Westminster terror attack.
My research shows that the profiles tended to appropriate the norms that we associate with ordinary usage: accounts belonging to ‘people’ used selfies as profile pictures and human names as their handles and usernames, while ‘news’ accounts tended to be linked to specific locations and use logos as their profile pictures. In short, there were no accounts that looked classically ‘suspicious’: those with long strings of numbers after their handles and the default Twitter profile picture. On the surface, it appears we cannot judge a book by its cover when it comes to identifying potentially harmful accounts.
Having said this, a closer investigation of accounts posing as ‘people’ reveals that they tend to reflect very stereotypical ideas, often lacking the nuances of identity performance that we see in everyday contexts. For example, the data set included presentations of conservative Texan cowboys and self-described ‘woke af’ liberals. While we commonly accept that human beings cannot be put into boxes, these accounts tend to characterise people as static, separating them into neat categories based predominantly on their political affiliation and their race.
These findings are reinforced by research I am currently undertaking, focusing on how a smaller sample of popular Internet Research Agency accounts responded to the violence at the Charlottesville Unite the Right rally. So far, findings suggest that accounts engage with other users in ways that reinforce the personas performed in their profile pictures and usernames, constructing very clear in/out groups based on political affiliation. In relation to Charlottesville, debate centred strongly on who was to blame for the violence, with conservatives blaming liberals, and vice versa. These findings, when combined with previous research into the same disinformation organisation, suggest that the accounts operated by the Internet Research Agency became popular because of their ability to tap into popular American discourses and take polarised stances in response to unfolding issues.
While this sample was targeted specifically at the US, these findings can still help us to understand how disinformation threats may manifest within a Welsh context. Firstly, we should be aware of accounts that pose as ‘types’ of people who appear almost too stereotypical to be true, and be mindful of what the intentions of such accounts might be.
Given the propensity for previous campaigns to tap into divisive national discourses and stereotypes, we should also think about how issues in Wales might be politicised, and consider what stereotypical identity presentations of ‘one side’ or the other might look like.
Reducing the Risk
To further understand the intersections between identity, language, and disinformation, I hosted an interdisciplinary workshop in April this year, bringing together researchers from fields including psychology, politics, computer science, and linguistics. Despite the different research aims, methods, and findings, recurring issues continued to emerge throughout the presentations.
‘Users susceptible to falling for misinformation already possess invaluable scrutiny skills; users’ critical reflection and scepticism of mainstream media sources is exactly the kind of crucial inspection required when evaluating online sources.’
Firstly, it became apparent that there exists a tension between mainstream and non-mainstream media sources, particularly in relation to public trust. Where social media users may feel disillusioned with mainstream media and the possible agendas of different news broadcasters, citizen-generated news online can sometimes be viewed as a reliable substitute.
Education and the development of digital literacy can go some way towards reducing the risk of exposure to disinformation that comes with getting news online, but participants acknowledged that this can be time-consuming and that not everyone may be motivated to engage.
Although additional education on the topic is desirable, users susceptible to falling for misinformation already possess invaluable scrutiny skills; users’ critical reflection and scepticism of mainstream media sources is exactly the kind of crucial inspection required when evaluating online sources. In other words, when on social media, we should remember that accounts may have agendas just as other media outlets do.
Finally, participants acknowledged the need not to assume that disinformation can be ‘fixed’ with a one-size-fits-all approach. Instead, the exchange of ideas across disciplines and the use of mixed-method approaches should be encouraged where possible, allowing the challenges associated with different models to be balanced against one another.
While we may not be able to solve the issue of disinformation overnight, nor identify nefarious accounts at a glance, we can use our pre-existing critical skills to question not only the authenticity and accuracy of online sources, but also our own biases. Disinformation accounts thrive on their ability to exacerbate social tensions and maximise division. To prevent Wales from becoming increasingly exposed to this kind of division, we should remember that in so many cases there are more things that unite us than divide us, and we should question material that tries to convince us otherwise.
For information about the Disinformation, Language and Identity Workshop, and to access recorded presentations, please visit https://dliw2021.wixsite.com/website
All articles published on the welsh agenda are subject to IWA’s disclaimer.