New system uses AI to beat fake news

Macquarie University/The Lighthouse
A new system that uses AI to create personalised link recommendations diverting users away from fake news will be presented to thousands of researchers this week at The Web Conference, a prestigious international computing event held in France.

Social media platforms are known to be a hotbed of fake news and misinformation, with sometimes deadly consequences – such as vaccine hesitancy and mask rejection during the COVID-19 pandemic.

Fact v fiction: Fake news has had deadly consequences during the COVID-19 pandemic.

But a new artificial intelligence (AI) model and algorithm developed by a team at Macquarie University’s Smart System and Data Engineering Research Group could help to reduce the spread of fake news.

Led by Professor Yan Wang, the team has developed a highly accurate model to identify people’s news reading interests and to shift their choices towards verified ‘true’ news.

The model can be incorporated into an app or web software (including news or social media sites) and can reduce the spread of fake news by offering links to relevant ‘true’ news that aligns with the interests of each user.

Personalised recommendations

“When you read or watch news online, often news stories about similar events or topics are suggested for you using a recommendation model,” says Dr Shoujin Wang, a data scientist at Macquarie University’s School of Computing who plays a key role in the research.

He says the team’s new method – dubbed Rec4Mit (Recommendation for Mitigation) – not only improves the accuracy of recommendation models used by social media, entertainment and news sites, but can also filter out fake news from the content that pops up next on the feed.


“Our model looks at a user’s news reading history to identify the topic or event that the user is interested in, and then recommends ‘true’ news for that topic or event that best meets the user’s reading preferences in a responsible way,” he says.

“Interestingly, if a user has read some fake news on a topic, the model can recommend corresponding true news that will match their reading interests while also reducing their exposure to fake news.”

Current methods don’t stop fake news spreading

A June 2021 report on news and online disinformation from the Australian Communications and Media Authority (ACMA) found that 82 per cent of adults surveyed experienced misinformation about COVID-19 over an 18-month period.

More and more users read news recommended by social media platforms such as Twitter – and these recommendation models can influence and even change what users read, says Dr Wang.

Existing recommendation models use methods like content-based or collaborative filtering to match a user to the topics they are interested in, but they do not assess whether the suggested ‘next content’ links contain true or fake news.

“Most current methods to reduce fake news are based on information diffusion models on a whole social network rather than trying to stop individual users from reading and sharing fake news,” says Professor Yan Wang.

Professor Wang says these methods can be less effective because they focus on the whole network and don’t cater for the wide variety of behaviours and interests of individual users, or for the many thousands of topics that make up the news.

How the program spots fakes

Dr Wang says that true news and fake news about the same event often use different styles of content, which can confuse computer models into treating them as news about different events.


Next steps: Professor Yan Wang (pictured) says the team hopes to partner with large technology companies to bring its innovation to more people.

Macquarie University’s model ‘disentangles’ the information of each news item into two parts: the signs showing whether the news item is fake, and the event-specific information showing the topic or event the news story is about.

The model then looks for patterns in how users shift between different news pieces, to predict which news event the user may be interested in reading next.

Finally, by combining the user’s reading history with a veracity classifier, the model launches a “next-news predictor,” to recommend a piece of true news most likely to appeal to the user.
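The article describes Rec4Mit as a learned neural model, so the details are not given here; purely as an illustration of the pipeline described above — predict the event a user is likely to read about next from their reading history, then recommend a verified ‘true’ news item for that event — the following toy Python sketch uses hypothetical, hand-labelled data and a simple recency heuristic in place of the real predictor and veracity classifier.

```python
from collections import Counter

# Toy catalogue: each item carries an event label and a veracity flag.
# In Rec4Mit these are learned, disentangled representations; here they
# are hand-labelled purely for illustration.
CATALOGUE = [
    {"id": 1, "event": "vaccines", "true": True},
    {"id": 2, "event": "vaccines", "true": False},
    {"id": 3, "event": "masks", "true": True},
    {"id": 4, "event": "masks", "true": False},
    {"id": 5, "event": "elections", "true": True},
]

def predict_next_event(history):
    """Naive next-event predictor: weight the events in the user's
    reading history by recency and pick the heaviest one."""
    counts = Counter()
    for recency, item in enumerate(history, start=1):
        counts[item["event"]] += recency
    return counts.most_common(1)[0][0]

def recommend_true_news(history, catalogue=CATALOGUE):
    """Recommend a 'true' item for the predicted next event,
    skipping anything the user has already read."""
    event = predict_next_event(history)
    read_ids = {item["id"] for item in history}
    for item in catalogue:
        if item["event"] == event and item["true"] and item["id"] not in read_ids:
            return item
    return None

# A user who has mostly been reading (partly fake) vaccine stories is
# steered towards a verified vaccine story instead.
history = [CATALOGUE[1], CATALOGUE[3], CATALOGUE[1]]
rec = recommend_true_news(history)
print(rec)  # the 'true' vaccines item, id 1
```

In the real system both steps are learned jointly from data rather than hard-coded, but the division of labour is the same: one component decides *what* the user wants to read next, and another filters that choice down to verified news.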

The research team trained the model on FakeNewsNet, a widely used public dataset of fake news published on GitHub, which collects fake news from PolitiFact and GossipCop along with data such as news content, social context, user reading histories, and time and location information.


Step up: Dr Shoujin Wang (pictured) says the team’s new method improves the accuracy of recommendation models used by social media.

“Using external datasets of fake news means the model can be retrained on new information and will remain relevant as news styles change,” says Dr Wang.

“Our model out-performed nine different state-of-the-art news recommendation methods in its ability to both spot fakes and recommend personalised true news,” says Professor Wang.

He says that the team hopes to partner with large technology companies to bring this innovation to a wide range of audiences.

“Fake news comes with a range of social harms; we believe this system can actively reduce its impact,” Professor Wang says.

Dr Shoujin Wang and Professor Yan Wang will present their work virtually at ACM’s Web Conference 2022 on April 28.
