February 2018, Vol. 6 No. 1

Big data throws big biases into machine learning data sets

Say you're training an image recognition system to identify U.S. presidents. The historical data shows only men in the role, so the algorithm concludes that only men can be presidents. It won't recognize a woman as president, even though that is a plausible outcome of a future election. This latent bias is one of the many types of bias that challenge data scientists today. If the machine learning data set used in an AI project isn't neutral -- and it's safe to say almost no data is -- the model's outputs can amplify whatever bias and discrimination that data contains.

Visual recognition technologies that label images require vast amounts of labeled data, much of which comes from the web. You can imagine the dangers in that -- and researchers at the University of Washington and the University of Virginia confirmed one poignant example of gender bias in a recent report. They found that when a visual semantic role labeling system sees a spatula, it labels the utensil as a cooking tool, but it's also likely to refer...
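To make the opening example concrete, here is a minimal sketch of how latent bias arises from skewed historical data. The toy data set and names are hypothetical, and the simple frequency count stands in for a real classifier:

```python
# A minimal sketch of latent bias: a frequency-based "model" trained on
# historical data in which every president is male assigns zero
# probability to a female president. Toy data; names are hypothetical.
from collections import Counter

# Toy historical training set: (gender, is_president) pairs.
training_data = [
    ("male", True), ("male", True), ("male", True),
    ("male", False), ("female", False), ("female", False),
]

# Estimate P(president | gender) by simple counting.
totals = Counter(gender for gender, _ in training_data)
presidents = Counter(gender for gender, is_pres in training_data if is_pres)

def p_president(gender: str) -> float:
    """Conditional frequency of the 'president' label given gender."""
    return presidents[gender] / totals[gender]

print(p_president("male"))    # 0.75 -- men can be presidents
print(p_president("female"))  # 0.0  -- the model has learned "only men"
```

A smoothed estimate would be nonzero but still tiny, which illustrates the broader point: latent bias reflects the data itself, so it usually can't be corrected by tweaking the estimator alone.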
