
        Fake News

        March 5, 2018 By Harbert Magazine

        Harbert College researcher says algorithms hold key to fighting fake news.

        The term has been invoked by everyone from the president to Pope Francis. You’ve seen examples of it on social media in the form of memes promoting conspiracy theories or stories shared from websites of dubious origins. We’re talking, of course, about “fake news.”

Once the province of Weekly World News and other supermarket tabloids (we miss Bat Boy), pseudo-news now resembles a pandemic, spread by bots, propagandists, and the gullible via Twitter, Facebook, and other social media platforms. Fake news may or may not sway elections, depending upon which reports you believe, but there’s no denying its ability to move markets and threaten brands large and small.

        Last year, cryptocurrency platform Ethereum lost $4 billion—20 percent of its market value—after social media users fanned a false rumor that one of its co-creators had died in a car crash. Similarly, Starbucks took quick action to shut down a false promotional campaign that emerged from 4chan’s online mischief-makers before it could cause any financial damage. These days, the potential exists for artificial intelligence to be fooled by artificial news as well. When a false story suggesting Google planned to buy Apple for $9 billion circulated on the Dow Jones Newswire, Apple’s stock price experienced a momentary surge thanks to trading controlled by computer algorithms that scan Twitter and news headlines for insights.
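To make that last mechanism concrete, the snippet below is a purely illustrative sketch of the kind of keyword-driven headline scan such trading algorithms perform; the watch list, keyword set, and scan_headline function are hypothetical and not drawn from any real trading system.

```python
# Purely illustrative: a toy headline scanner that flags watched companies
# whenever acquisition language appears in a headline. The watch list,
# keyword set, and function are hypothetical.

WATCHED_COMPANIES = {"apple": "AAPL", "google": "GOOGL"}
ACQUISITION_TERMS = {"acquire", "acquisition", "buyout", "buy", "merger"}

def scan_headline(headline: str) -> list[str]:
    """Return tickers a naive headline-driven strategy might act on."""
    words = set(headline.lower().split())
    if not words & ACQUISITION_TERMS:
        return []
    return [ticker for name, ticker in WATCHED_COMPANIES.items() if name in words]

# A fabricated headline triggers the same signal as a real one.
print(scan_headline("google planned to buy apple for $9 billion"))  # ['AAPL', 'GOOGL']
```

A scanner this crude has no way to ask whether the story is true, which is exactly how a planted headline can move a stock price before any human intervenes.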

        “Vigilance and speed of response are essential in fighting fake news,” Signal Media founder David Benigson says. But what if companies and news consumers didn’t have to be reactive combatants?

That’s where Harbert College business analytics associate professor Ashish Gupta enters the fray. Gupta and Harbert graduate students have been building a large repository of Twitter data, nearly 4 terabytes thus far, and mining that “treasure trove of data” to develop algorithms that flag falsities.
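The article does not detail how the repository is assembled, but a minimal sketch of the general approach, assuming the tweepy library, Twitter’s recent-search endpoint, and a valid bearer token (all assumptions here, not a description of Gupta’s actual pipeline), might look like this:

```python
# Minimal sketch: pull a batch of recent tweets matching a query and append
# them to a newline-delimited JSON file, the kind of raw store that can grow
# to terabytes over time. The query, filename, and credentials are placeholders.
import json

import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

response = client.search_recent_tweets(
    query="fake news -is:retweet lang:en",
    tweet_fields=["created_at", "lang"],
    max_results=100,
)

with open("tweets.jsonl", "a", encoding="utf-8") as out:
    for tweet in response.data or []:
        record = {
            "id": tweet.id,
            "created_at": str(tweet.created_at),
            "text": tweet.text,
        }
        out.write(json.dumps(record) + "\n")
```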

        “Fake news is a controversial topic,” Gupta says, “but it’s worthy of research and consideration.”

Gupta is interested in patterns rather than politics, and in using “deep learning” to unmask untruths. Gupta says API (application programming interface) tools help eliminate “noise” (advertisements and comments) from social media postings as a first step toward determining veracity. Analysis of the natural language that remains then identifies tell-tale rhetorical signs, such as the construction of the message and certain phrases, that can flag a tweet as a purveyor of fake news.
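Gupta’s models rely on deep learning over a large labeled corpus; as a much simpler stand-in, the sketch below uses a TF-IDF bag-of-words pipeline from scikit-learn and a tiny invented dataset to show, in miniature, how phrase-level features can be learned from labeled posts and used to score a new one.

```python
# Simplified stand-in for the kind of text classifier described in the
# article: learn which words and phrases are associated with flagged posts,
# then score new posts. The training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING: celebrity dies in secret crash, media silent!!!",           # flagged
    "You won't believe what they are hiding, share before it's deleted",   # flagged
    "City council approves new budget for road repairs next year",         # not flagged
    "Quarterly earnings report shows modest growth in retail sector",      # not flagged
]
labels = [1, 1, 0, 0]  # 1 = flagged as likely fake, 0 = not flagged

model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),  # word and phrase features
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["BREAKING: they are hiding the truth, share this now!!!"]))
```

In practice, the noise-removal step the article mentions, stripping advertisements and comments pulled in through platform APIs, would happen before any text reaches a model like this.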

        According to the Pew Research Center, 64 percent of US adults say fabricated news stories cause confusion about the basic facts of current events and issues. While 39 percent feel confident they can recognize fake news, 23 percent admit to having shared a false story.

Gupta’s work may yield two outcomes: giving companies a new way to protect their reputations and bottom lines, and producing better-informed social media users.