
Researchers work on AI algorithms to detect fake news


A new challenge to identify fake news will test the boundaries of AI technology and offer a proving ground for innovative approaches.

Fake news has dominated headlines for months, but now, a group of artificial intelligence researchers is trying to do something about it. So how far will they and their AI technology get in solving the problem?

In this edition of the Talking Data podcast, we discuss the answer to this question and more. It all starts with Carnegie Mellon University adjunct professor Dean Pomerleau, who recently started the Fake News Challenge. He's offering up to $2,000 to five research teams if they can come up with AI algorithms capable of accurately spotting examples of fake news.

Some of the big players, like Google and Facebook, are also working on the problem, developing AI algorithms in-house to try to clamp down on fraudulent stories.

But researchers are up against some major challenges. Unlike other areas where AI algorithms have been successful, such as image recognition or natural language processing, fake news offers fewer patterns to rely on. The very nature of the problem makes it resistant to pattern matching. Any AI tool developed will therefore need a component of judgment, something today's AI isn't quite capable of achieving.
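To see what "relying on patterns" means in practice, consider the kind of approach that works well elsewhere: a minimal text classifier. The corpus, labels and model below are purely illustrative, not anything the challenge teams are necessarily building.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article texts with "fake"/"real" labels.
texts = [
    "Scientists confirm miracle cure found in common household spice",
    "City council approves budget for new public library branch",
]
labels = ["fake", "real"]

# TF-IDF word and bigram counts feeding a linear classifier: the model
# learns which surface word patterns co-occur with each label, nothing more.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Council approves funds for library repairs"]))

A model like this can pick up stylistic tells that co-occur with its labels, but it has no way to check a claim against reality, which is exactly the judgment component missing from today's AI.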

Beyond the problem of fake news, the competition could be a good proving ground for more advanced forms of AI than we have today. That is, of course, if researchers are able to develop tools that perform significantly better than what exists now.

Listen to this podcast to learn more about some of the challenges and opportunities researchers will face as they look to tackle the problem of fake news, and how their successes or failures could say a lot about the state of AI today.

Next Steps

Industry stakeholders weigh in on future development of AI technology

Artificial intelligence is changing the face of technology as we know it

AI algorithms need to be a part of your enterprise's future

Join the conversation

5 comments


How effective do you think today's AI algorithms will be when taking on the problem of fake news?
Ed, I think one could easily create an algorithm that could help the community value all information, fake news included.
Some fake news is easy to spot.
1. Statements about the future that are worded as fact. Nobody knows the future. Statements of probability, from defects in a software application to who will win the election, worded as certainties are obviously fake. Politifact makes this mistake of assuming the future is certain when rating things as fact.

2. Statements about the present that are worded as fact when they are really probabilities based on sampling, or on a wild hunch. In the present, the same polling organization would report that Clinton was up by 10, then that she was down by 2. There was no way the electorate was making such wild swings.

3. Statements about the past. John Lewis says he attended every inauguration when that is a verifiable lie, as he boycotted the Bush inauguration. Many lies about the past are accepted as fact, even by history books. E.g., the Vietnam War was unpopular. But in '68 Humphrey easily beat McCarthy. Then the more anti-communist and personally unpopular Nixon easily beat Humphrey, with the even more anti-communist Wallace taking a big chunk. Same in '72. The hawk took 48 of 50 states; the dove only won two small states. The war was only unpopular with a few who knew how to get their protests in the media.

A big issue for algorithms is how to decide which "facts" to use to compare against the statement in question, as there is much fake history. It comes down to which "facts" the algorithm will cherry-pick.
Spintree: Excellent post. I specifically like your algorithm of looking at the facts and whether they are about the present, past, or future. One can easily determine the "tense" of a statement.
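A rough, purely illustrative sketch of that first pass, using spaCy part-of-speech tags (the modal-verb check is crude, but it shows the idea):

import spacy

nlp = spacy.load("en_core_web_sm")  # small English model

def statement_tense(text):
    tags = {token.tag_ for token in nlp(text)}
    if "MD" in tags:                 # modal verbs: will, would, shall ...
        return "future"
    if tags & {"VBD", "VBN"}:        # past tense / past participle
        return "past"
    return "present"

print(statement_tense("Clinton will win the election easily."))    # future
print(statement_tense("John Lewis attended every inauguration."))  # past
print(statement_tense("Clinton is up by 10 points."))              # present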

I have an idea that builds on yours. It answers your question about which facts to include. You divide the audience into groups and let them pick the facts they think are most important in leading to a specific conclusion.

For example, a conclusion might be "John Lewis is a liar because he made a verifiable lie." Then you list the facts that support that conclusion. Another conclusion might be "John Lewis is making an important point about Donald Trump," and then you list the facts that support that conclusion.

Then the community could look at both conclusions and decide which are reasonable conclusions.

One more point. All conclusions must point to a recommended action. So a conclusion of "John Lewis is a liar" may have a recommended action of "So what, everyone lies in a political context. Trump lies in a similar context." Or another action might be "See, all liberals are liars."

The community, as with conclusions, would then say which ones they believe.
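Roughly, the structure I'm picturing looks like this (every name below is made up, just a sketch):

from dataclasses import dataclass

@dataclass
class Conclusion:
    statement: str             # e.g. "John Lewis is a liar because ..."
    supporting_facts: list     # facts a group picked as most important
    recommended_actions: list  # every conclusion must point to an action
    votes_for: int = 0         # community members who find it reasonable
    votes_against: int = 0

conclusions = [
    Conclusion(
        statement="John Lewis is a liar because he made a verifiable lie.",
        supporting_facts=["He boycotted the Bush inauguration."],
        recommended_actions=["So what, everyone lies in a political context."],
    ),
    Conclusion(
        statement="John Lewis is making an important point about Donald Trump.",
        supporting_facts=[],
        recommended_actions=[],
    ),
]

# The community reviews each conclusion and votes on whether it is reasonable.
conclusions[0].votes_for += 1

Facts, conclusions and recommended actions stay separate, so the community can vote on each layer.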

Does this make sense?  
where news_source not in ('CNN','FOX','MSNBC')
