A new challenge to identify fake news will test the boundaries of AI and offer a proving ground for innovative approaches to the technology.
Fake news has dominated headlines for months, but now, a group of artificial intelligence researchers is trying to do something about it. So how far will they and their AI technology get in solving the problem?
In this edition of the Talking Data podcast, we discuss the answer to this question and more. It all starts with Carnegie Mellon University adjunct professor Dean Pomerleau, who recently started the Fake News Challenge. He's offering up to $2,000 to five research teams if they can come up with AI algorithms capable of accurately spotting examples of fake news.
Some of the big players, like Google and Facebook, are also working on the problem, developing AI algorithms in-house to try to clamp down on fraudulent stories.
But researchers are up against some major challenges. Unlike other areas where AI algorithms have been successful, such as image recognition or natural language processing, fake news offers fewer reliable patterns to draw on. The very nature of the problem makes it resistant to pattern recognition. Any AI tool developed will therefore need a component of judgment, something today's AI isn't quite capable of.
Beyond the problem of fake news, the competition could be a good proving ground for more advanced forms of AI. That is, of course, if researchers are able to develop tools that function significantly better than what exists today.
Listen to this podcast to learn more about some of the challenges and opportunities researchers will face as they look to tackle the problem of fake news, and how their successes or failures could say a lot about the state of AI today.