Bushfires, Bots and the Spread of Disinformation


As fire continues to wreak havoc across large parts of the country, online Australia is battling another crisis: the waves of misinformation and disinformation spreading across social media. Much of the media reporting on this has referred to ‘bots and trolls’, citing a study by Queensland University of Technology researchers which found that about a third of the Twitter accounts tweeting about a particular bushfire-related hashtag showed signs of inauthentic activity.

We can’t fight disinformation with misinformation, however. It is important to be clear about what is, and what is not, happening.

There’s no indication as yet that Australia is the target of a coordinated disinformation ‘attack’. Instead, what we’re seeing online is a reflection of the changing information environment, in which high-profile national crises attract international attention and become fuel for a wide array of actors looking to promote their own narratives — including many who are prepared to use disinformation and inauthentic accounts.

As online discussion of the bushfire crisis becomes caught up in more and more of these tangled webs, from conspiracy theories to Islamophobia, more and more disinformation gets woven into the feeds of real users. Before long, it reaches the point where someone who starts off looking for information on #AustraliaFires winds up 10 minutes later reading about a UN conspiracy to take over the world.

The findings of the QUT study have been somewhat misconstrued in some of the media reporting (through no fault of the researchers themselves). There are a few factors to keep in mind.

First, a certain amount of inauthentic activity will be present on any high-profile hashtag. Twitter is full of bot accounts which are programmed to identify popular hashtags and use them to sell products or build an audience, regardless of what those hashtags are. Using a small sample size as the QUT study did (315 accounts) makes it difficult to determine how representative that sample is of the level of inauthentic activity on the hashtag as a whole.

Second, the QUT study relied on a tool called Bot or Not. This tool and others like it — which, as the name suggests, seek to automatically determine whether an account is a bot or not — are useful, but it’s important to understand the trade-offs they make when you’re interpreting the results. For example, one factor which many bot-detection tools look at is the age of the accounts, based on the assumption that newer accounts are more likely to be bots. That may in general be a reasonable assumption, but it doesn’t necessarily apply well in a case like the Australian bushfire crisis.

Many legitimate users may have recently joined Twitter specifically to get information about the fires. On the flipside, many bot accounts are bought and sold and repurposed, sometimes over several years (just search ‘buy aged Twitter accounts’ on Twitter for yourself to see how many are out there). Both of these things will affect the accuracy of a tool like Bot or Not. It’s not that we shouldn’t use tools which claim to detect bots automatically, but we do need to interpret their findings based on an informed appreciation of the factors which have gone into them.

Finally, there isn’t necessarily a link between bots and disinformation. Disinformation is often, and arguably most effectively, spread by real users from authentic accounts. Bots are sometimes used to share true, helpful information. During California’s wildfires in 2018, for example, researchers built a bot which would automatically generate and share satellite imagery time-lapses of fire locations to help affected communities.

There’s clearly a significant amount of disinformation and misleadingly framed discussion being spread on social media about the bushfires, particularly in relation to the role of arsonists in starting the fires.

However, the bulk of it doesn’t appear to be coming from bots, nor is it anything so straightforward as an attack. Instead, what appears to have happened is that Australia’s bushfire crisis — like other crises, including the burning of the Amazon rainforest in 2019 — has been sucked into multiple overlapping fringe right-wing and conspiracy narratives which are generating and amplifying disinformation in support of their own political and ideological positions.

For example, fringe right-wing websites and media figures based in the United States are energetically driving a narrative that the bushfires are the result of arson (which has been resoundingly rejected by Australian authorities) based on an ideological opposition to the consensus view on climate change. Their articles are amplified by pre-existing networks of both real users and inauthentic accounts on social media platforms including Twitter and Facebook.

QAnon conspiracy theorists have integrated the bushfires into their broader conspiracy that US President Donald Trump is waging a secret battle against a powerful cabal of elite cannibalistic paedophiles. Believers in the ‘Agenda 21/Agenda 2030’ conspiracy theory see it as proof of ‘weaponised weather control’ aimed at consolidating a United Nations–led global takeover. Islamophobes are blaming Muslim arsonists — and getting thousands of likes.

And that’s not even touching the issue of misleading information that’s been spread by some Australian mainstream media.

It’s not just the climate that has changed. The information ecosystem in which natural disasters play out, and which influences the attitudes and decisions the public makes about how to respond, is fundamentally different from what it was 50, 20 or even five years ago. Disinformation is now, sadly, a normal, predictable element of environmental catastrophes, particularly those large enough to capture international attention. Where once we had only a handful of Australian newspapers, now we have to worry about the kind of international fringe media outlets which think the US government is putting chemicals in the water to make frogs gay.

This problem is not going away. It will be with us for the rest of this crisis, and the next, and the next. Emergency services, government authorities and the media need to collaborate on strategies to identify and counter both mis- and disinformation spreading on social media. Mainstream media outlets also need to behave responsibly to ensure that their coverage — including their headlines — reflects the facts rather than optimising for clicks.

It would be easy to dismiss worry about online disinformation as insignificant in the face of the enormous scale of this crisis. That would be a mistake. Social media is a source of news for almost all Australians, and increasingly it is the main source of news for many. Responding to this crisis and all of the crises to come will require national cohesion, and a shared sense of what is true and what is just lies, smoke and mirrors.

Source: Australian Strategic Policy Institute
