Outrage about Facebook’s psychological experiment is misplaced.
The past weekend saw another outpouring of outrage about Facebook’s abuse of personal data. An article in the scientific journal Proceedings of the National Academy of Sciences of the United States of America reported the results of a psychological experiment on the emotional impact of seeing emotionally charged content in social media. In brief, during one week in January 2012, Facebook deliberately manipulated the levels of positive or negative posts in the News Feeds of almost 700,000 users and measured the resulting emotional behavior of those same users by the level of positivity or negativity in their subsequent posts. Commentators object that the people involved were neither informed of the experiment nor asked for their consent. Facebook obviously disagrees.
Sorry, folks, Facebook is correct. Not only did the users consent, but they (and all other users of social media) willingly participate daily in the same type of experiment. The results of these experiments are never published in respectable journals; they are silently used to target advertising and drive marketing. Facebook has for many years been deciding which posts users see, based on relevance as determined by a proprietary algorithm. Advertisements are delivered in a similar fashion, also based on assumed relevance. The central questions are: Relevant to whom? Relevant on what basis? And how might advertisement relevance be related to the News Feed posts shown at the same time?
What this experiment has emphasized—I presume inadvertently—is that the algorithm(s) used to choose the posts and advertisements you see can be “tuned” in any manner the programmer desires, and you will be none the wiser. If your News Feed can drive your negativity through filtering of posts, is that an opportunity to advertise antidepressants? If your friends are equally positive about products X and Y, but the social media provider can earn more from ads for X, might that lead to the favoring of posts liking X?
So we return to the issue I raised only two weeks ago. Advertising-funded internet services such as social media and search both allow and invite manipulation of the data they gather for increased profit. If we agree that such services are socially desirable or by now necessary, can we afford to expose them to even the possibility of such manipulation?
My bottom line is that the free internet is an oxymoron. Your individual freedom will be severely constrained by your desire for free stuff.