
Experts explain how to deploy deep learning in production

When deploying deep learning models into production, experts say it's important to take care of the basics, like model design and testing, to ensure optimal business impact.

Successfully applying deep learning models to the development of self-driving cars, computer vision and speech recognition has generated a lot of buzz, but when deploying deep learning in production environments, analytics basics still matter.

In a presentation at the Deep Learning Summit in Boston, Nicolas Koumchatzky, engineering manager at Twitter, said traditional analytics concerns like feature selection, model simplicity and A/B testing changes to models are crucial when deploying deep learning.

"These are not in the media, but they're actually the ones that generate a lot of value for Twitter," Koumchatzky said.

Deep learning central to Twitter experience

Deep learning is currently a central pillar of the Twitter experience. For most of the social network's history, users saw tweets in their timelines from people they follow in the chronological order in which they were posted. But about a year ago, Twitter changed to a ranked timeline. Similar to how Facebook operates its ranked news feed, this new approach assesses people's tweets for content and surfaces them in a user's timeline based on how strongly the algorithm thinks they will appeal to the user.
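In code form, the heart of such a ranked timeline is a scoring pass over candidate tweets. The sketch below is purely illustrative and assumes made-up feature names and a stand-in scorer; it is not Twitter's actual system.

```python
# Illustrative sketch of a ranked timeline: score each candidate tweet with a
# model and sort by predicted appeal. Feature names and the scoring function
# are assumptions for illustration, not Twitter's system.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tweet:
    tweet_id: str
    features: Dict[str, float]  # e.g. author affinity, recency (hypothetical)

def rank_timeline(tweets: List[Tweet],
                  score: Callable[[Dict[str, float]], float]) -> List[Tweet]:
    """Return tweets ordered by predicted appeal, highest first."""
    return sorted(tweets, key=lambda t: score(t.features), reverse=True)

# Stand-in scorer; in production this would be a trained deep model.
def dummy_score(features: Dict[str, float]) -> float:
    return 0.7 * features.get("author_affinity", 0.0) + 0.3 * features.get("recency", 0.0)

tweets = [
    Tweet("t1", {"author_affinity": 0.9, "recency": 0.2}),
    Tweet("t2", {"author_affinity": 0.3, "recency": 0.8}),
]
print([t.tweet_id for t in rank_timeline(tweets, dummy_score)])
```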

A lot of deep learning is involved in this process, from natural language processing to image recognition and description. "This is what gets a lot of attention in the media, like we broke through some barrier," Koumchatzky said.

But once the underlying models go into production, he said, it's critical that they function smoothly. Models need to be kept sparse to hold latency down, since they're surfacing large amounts of content in users' timelines in near real time, which means they have to return results within a few milliseconds. Testing is also crucial to avoid potentially detrimental changes to the user experience, he said.
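The article doesn't say which tooling Twitter uses to keep its models sparse, but as an illustration of the general idea, the following sketch prunes a small PyTorch model by weight magnitude and times a single inference against a millisecond-scale budget. The layer sizes and pruning amount are arbitrary.

```python
# Illustrative only: shrink a model via magnitude pruning so inference stays
# within a tight latency budget. Sizes and amounts are arbitrary assumptions.
import time
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

# Zero out the 80% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # make the sparsity permanent

# Check the latency budget (a few milliseconds per request).
x = torch.randn(1, 512)
with torch.no_grad():
    start = time.perf_counter()
    model(x)
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"inference took {elapsed_ms:.2f} ms")
```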

All about user experience

Deploying deep learning in production demands a sense of user experience, said David Murgatroyd, machine learning leader at Spotify.

"It's really important to find the product encapsulation that's going to resonate with people," he said in a presentation at the conference. "If you don't have that, your model is just going to sit on a shelf somewhere, and no one is going to use it."

He described how this played out at Spotify through its Discover Weekly feature, a recommendation engine that helps users find new music based on the kinds of artists they've listened to in the past. Murgatroyd said the deep learning algorithms that make connections between songs and artists have been around for a while and have functioned well.

But recommendations weren't used by as many listeners as the team thought they should be. So, they looked into the user interface in which recommendations were surfaced. They found it essentially compiled collections of album covers and left it up to the user to pick which one looked appealing. This left room for users' preconceptions and biases.

Now, the Discover Weekly feature generates a playlist for each user. All the user has to do is hit play.
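As a rough illustration of the kind of recommendation step behind such a playlist (not Spotify's actual implementation), the sketch below ranks songs against a listener's taste vector by cosine similarity over placeholder embeddings.

```python
# Hypothetical sketch: build a weekly playlist by ranking songs against a
# listener's taste vector with cosine similarity. The embeddings are random
# placeholders; this is not Spotify's implementation.
import numpy as np

rng = np.random.default_rng(0)
song_ids = [f"track_{i}" for i in range(1000)]
song_vecs = rng.normal(size=(1000, 64))   # learned song embeddings (stand-in)
taste_vec = rng.normal(size=64)           # listener's taste embedding (stand-in)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

scores = np.array([cosine(v, taste_vec) for v in song_vecs])
playlist = [song_ids[i] for i in np.argsort(scores)[::-1][:30]]  # top 30 tracks
print(playlist[:5])
```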

Leave room for intuition-based corrections

Another important consideration when deploying deep learning in production environments is programming in an override. Murgatroyd said models will often start to deliver recommendations that may look strange to humans, so leaving some room for intuition-based corrections can be useful.

Any manual control should be used sparingly, though, because changing outputs can throw off the model in the long run, Murgatroyd said. While these controls are important to have in place, "you should be sad that you had to use it," he said.
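One simple way to express such an override is a post-processing filter on the model's output. The rule below is a made-up example of an intuition-based correction, with every intervention logged so its long-run effect on the model can be reviewed.

```python
# Hypothetical sketch of a manual override: drop recommendations that a human
# reviewer has flagged before they reach users, and log each intervention.
# Overrides should be rare and every use should be reviewed.
import logging

logging.basicConfig(level=logging.INFO)

def apply_overrides(recommendations, blocked_ids):
    """Filter out human-flagged items; log each intervention."""
    kept = []
    for item in recommendations:
        if item["id"] in blocked_ids:
            logging.info("override applied: removed %s", item["id"])
        else:
            kept.append(item)
    return kept

recs = [{"id": "a1", "score": 0.9}, {"id": "b2", "score": 0.8}]
print(apply_overrides(recs, blocked_ids={"b2"}))
```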

Next Steps

What's the difference between deep learning and machine learning?

Feed your deep learning algorithms limitless amounts of data

Tips from Facebook on how to use deep learning algorithms


Join the conversation


How is your company putting deep learning into production?
Our team is trying to deploy deep learning models for computer vision on embedded systems like the Raspberry Pi. Performance is critical here; which programming language and frameworks/libraries should we use? Recent papers on deep learning for computer vision mostly use Caffe. What are Caffe's pros and cons in a computer vision context? Does a Python deep learning library have a big performance impact compared with a C++ library?
On the Raspberry Pi, you can use a standard MobileNet- or SqueezeNet-type deep learning architecture. Caffe's C++ implementation is fast, though.
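For a Raspberry Pi setup like the one described, one common pattern is to run a small Caffe model through OpenCV's dnn module from Python, which keeps the heavy computation in the underlying C++ code. The sketch below assumes placeholder SqueezeNet model files and typical SqueezeNet preprocessing; check your model's own requirements before relying on it.

```python
# Hedged sketch: classify an image with a SqueezeNet-style Caffe model via
# OpenCV's dnn module (the Python call dispatches into C++, so interpreter
# overhead is small). File names below are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("squeezenet.prototxt", "squeezenet.caffemodel")

image = cv2.imread("frame.jpg")
# 227x227 input and these mean values are typical for SqueezeNet; verify
# against the model's own preprocessing.
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(227, 227),
                             mean=(104, 117, 123))
net.setInput(blob)
preds = net.forward()
print("top class index:", int(np.argmax(preds)))
```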