
TruthNest - Verifying Social Media Content

Dishonesty in politics is nothing new; but the manner in which some politicians now lie, and the havoc they may wreak by doing so, are worrying.
The post-truth world - Yes, I’d lie to you, The Economist, 10 September 2016

In previous years, one would have struggled to make a strong case for introducing content verification tools for social media. However, the impact of social media and the enormous publicity it can give to obviously fake claims (as seen in recent world events, such as the Brexit referendum and the US elections) led even Oxford Dictionaries to name 'post-truth' its 2016 international word of the year. Nowadays, there is a general acknowledgement that content verification tools are indeed necessary, especially for social media.

What happens is posted. But has what's posted really happened?

In our times we can safely claim that if something newsworthy happens, someone will post it online, usually within minutes, if not seconds, of it happening. The problem is that we cannot safely reverse this claim: information posted online about an alleged event does not necessarily mean that the event has really happened. This is the problem we are trying to alleviate with TruthNest.

Having worked for many years on the subject and collaborated with top research and media organisations, we have released a service that helps users decide whether a posted claim is to be trusted or not. The claim can be passed to the system either via a Tweet ID or textually, with a set of keywords and/or hashtags. In both cases, TruthNest analyses a particular Tweet containing the claim, calculating 15 different metrics across three categories (the three Cs, as we call them):

  • Contributor – This involves metrics relevant to the source of information, such as its history, its reputation, and its connections and interactions, along with other information that can assist in profiling any contributor of content.
  • Content – This includes analysis methods that can provide clues about the credibility of linked content (such as photos and linked web pages), indicating possible manipulations and fraudulent use.
  • Context – Under this dimension we investigate contextual relations (such as cross-checking and provenance) which strengthen or weaken the confidence built around the involved concepts.
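To make the two input modes and the three-Cs breakdown more concrete, here is a minimal Python sketch of how a client might submit a claim and print the returned metrics grouped by category. The endpoint URL, payload fields and response keys below are illustrative assumptions of ours, not TruthNest's actual API; for the real service, please refer to the TruthNest Web Site.

  import requests  # assumed HTTP client; any similar library would do

  # All names below (URL, payload fields, response keys) are hypothetical.
  TRUTHNEST_API = "https://api.truthnest.example/v1/analyze"  # placeholder endpoint

  def analyze_claim(tweet_id=None, keywords=None, hashtags=None, api_key="YOUR_API_KEY"):
      """Submit a claim by Tweet ID, or textually by keywords/hashtags (hypothetical)."""
      if tweet_id is not None:
          payload = {"tweet_id": tweet_id}
      else:
          payload = {"keywords": keywords or [], "hashtags": hashtags or []}
      response = requests.post(
          TRUTHNEST_API,
          json=payload,
          headers={"Authorization": f"Bearer {api_key}"},
          timeout=30,
      )
      response.raise_for_status()
      report = response.json()
      # Print the metrics grouped by the three Cs, assuming the response
      # nests them under "contributor", "content" and "context".
      for category in ("contributor", "content", "context"):
          for metric in report.get(category, []):
              print(f"[{category}] {metric.get('name')}: {metric.get('value')}")
      return report

  # Example usage (hypothetical IDs and terms):
  # analyze_claim(tweet_id="1234567890")
  # analyze_claim(keywords=["earthquake"], hashtags=["#breaking"])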

Every metric provides clues that users can use to decide for themselves whether to trust the content. This decision is always left to the users, as we believe that human judgement will always be needed, despite advances in data analysis. What TruthNest provides is instant access to a wealth of information that can guide and support the human decision.

If you wish to try the service, please visit the TruthNest Web Site and send us a message to arrange an online demo.

In the meantime, we continue working on a collaborative platform that will extend the functionalities of TruthNest, allowing many users to work together. This work is being done in collaboration with Deutsche Welle and with support from the Google Digital News Initiative fund. Stay tuned!