
Deep learning models hampered by black box functionality

A lack of transparency into how deep learning models work is keeping some businesses from embracing them fully, but there are ways around the interpretability problem.

Deep learning models have a potentially big problem -- a lack of interpretability -- that could keep some enterprises from getting much value from them.

One of the great things about deep learning is that users can essentially just feed data to a neural network, or some other type of learning model, and the model eventually delivers an answer or recommendation. The user doesn't have to understand how or why the model delivers its results; it just does.

But some enterprises are finding that the black box nature of some deep learning models -- where their functionality isn't seen or understood by the user -- isn't quite good enough when it comes to their most important business decisions.

"When you're in a black box, you don't know what's going to happen. You can't have that," said Peter Maynard, senior vice president of enterprise analytics at credit scoring company Equifax.

Regulations sideline deep learning models

The financial services industry has been slow to embrace deep learning for fear of this black box, Maynard said. Nobody doubts that advanced techniques, like neural networks, could help financial companies make better decisions. But regulations require Equifax to explain why people receive the credit scores that they do. Simply saying, "because a neural net said so," isn't good enough.

So Maynard and his team have developed a new type of neural network model that delivers reason codes along with its scores. It took some engineering to get it to work; the team couldn't simply pull some Python code off the internet. But the model is now at the point where it provides good results and reasons for its determinations.
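Equifax hasn't published the details of its approach, but the general idea of pairing a score with reason codes can be sketched in a few lines of Python. The example below is purely illustrative, not Equifax's method: it trains a small neural network on synthetic data and produces crude reason codes by occlusion, measuring how much the score drops when each feature is replaced by its population median. The feature names are hypothetical stand-ins for credit attributes.

# Minimal sketch of pairing a neural-network score with "reason codes".
# NOT Equifax's method -- just one common illustration: occlusion-style
# attribution, where each feature is swapped for its population median
# and the resulting change in score is reported.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical feature names standing in for credit attributes.
FEATURES = ["utilization", "payment_history", "account_age", "inquiries"]

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                      random_state=0).fit(X, y)
medians = np.median(X, axis=0)

def score_with_reasons(x, top_n=2):
    """Return the model's score plus the features that pushed it up most."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    impacts = {}
    for i, name in enumerate(FEATURES):
        x_masked = x.copy()
        x_masked[i] = medians[i]            # replace one feature with the median
        masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        impacts[name] = base - masked       # positive = feature raised the score
    reasons = sorted(impacts, key=impacts.get, reverse=True)[:top_n]
    return base, reasons

score, reasons = score_with_reasons(X[0])
print(f"score={score:.2f}, reason codes={reasons}")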

"If you're Amazon, you don't care why your model works," Maynard said. "They care that it works. In marketing and IoT [internet of things], they don't care about reasons. But for us, everything has to be transparent."

Regulatory issues are also keeping pharmaceutical company Biogen from using some of the more common, less interpretable deep learning methods. Decisions about how drugs are marketed and how drug trials are conducted all need to be auditable, according to U.S. Food and Drug Administration rules. This means black boxes won't work.

"We're not a Google, where you can take any kind of data set, put an algorithm on it and put it out in the world," said Adam Jenkins, data scientist at Biogen, in a presentation at the recent Open Data Science Conference in Boston. "For us, it really is life or death."

This doesn't mean deep learning is out of the question for Biogen; it just takes another layer of analysis. Jenkins said he and his co-workers will still use things like neural networks, and then they'll take the output and try to re-create it using more traditional analytic methods, like propensity modeling. Deep learning points them in the right direction, and traditional analyses serve as an interpretable proof.
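The article doesn't spell out the mechanics of that second pass, but one common way to implement it is surrogate modeling: fit a transparent, traditional model to the deep model's own predictions and report how faithfully it reproduces them. The sketch below assumes that reading and uses scikit-learn with synthetic data; it is an illustration, not Biogen's actual workflow.

# Minimal sketch of an "interpretable proof" step, assuming it resembles
# surrogate modeling: a traditional, transparent model (here, logistic
# regression) is fit to reproduce the neural network's output, and its
# coefficients serve as the auditable explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=6, random_state=1)
deep_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                           random_state=1).fit(X, y)

# Use the deep model's own predictions as the target for the surrogate.
deep_preds = deep_model.predict(X)
surrogate = LogisticRegression(max_iter=1000).fit(X, deep_preds)

# Fidelity: how often the interpretable model agrees with the deep one.
fidelity = np.mean(surrogate.predict(X) == deep_preds)
print(f"surrogate agrees with the neural net on {fidelity:.1%} of cases")
print("coefficients (the auditable part):", surrogate.coef_.round(2))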

Humans, deep learning models better together

But regulatory issues aren't the only reason the output of deep learning models should be interpretable. In a separate presentation at the conference, Google software engineer Jesse Johnson said models just work better when they're interpretable, particularly if they are intended to aid a person in making a decision.

"If deep learning is going to change the world, we need to get interpretability right," he said. "What's going to be important for adoption is not just how accurate it is, but how it interacts with humans."

As an example, he said that when someone tells a friend they like a certain movie and the friend recommends another one based on that, the person can ask why the friend made that recommendation. But we interact with deep learning algorithms that recommend movies all the time without ever knowing why they chose what they did, and many of us would like to know.

This is going to be even more important in the future, as deep learning algorithms encroach on even greater decision-making turf. They're already starting to show up on our streets in the form of self-driving cars, and Johnson pointed out that they are starting to show up in medical, legal and financial software.

To effectively use these models, people will need to trust them, and to trust them, they'll need to understand them.

"If deep learning models don't tell you about context, then they don't interact with your mental models," Johnson said. "Interpretability builds an interface between statistical models that allows them to interact with mental models. Now, instead of making a binary decision -- do we trust the deep learning model or do we trust ourselves? -- you can get something that's better than [what] the human or the computer could have done on their own."

Next Steps

Deep learning tools push advanced analytics even further ahead

Deep learning could help make artificial intelligence less artificial, more human

Big data has big role to play in pushing deep learning forward

This was last published in May 2017
