Year: 2013

  • Bring on the Privacy-Arms Race!

    Yoav Simchoni

     

    Vertically integrated tech companies compete vigorously across multiple product lines to capture market share. A large part of this competition revolves around marketing and branding campaigns, through which companies try to win the loyalty of their target audience. These campaigns have traditionally involved bids to make products or services seem “cool” or useful, but recently the competition has fanned out into a new arena of battle – user privacy. Technology consumers are increasingly concerned with privacy issues as a result of the huge amount of information they share online. Just how much users care is unclear, but some companies have begun distinguishing themselves from competitors by advertising their own heightened privacy protections and the lax privacy practices of their competition.

    The appearance of privacy issues on the “playing field” of corporate advertising may be a significant step toward a market-based solution to privacy regulation. It may prompt a “race to the top” in which companies compete to offer consumers better privacy controls and make consumers aware of the privacy risks posed by competitors. This may seem like a welcome development, but critics warn that privacy-based campaigns may also be problematic. The term “privacy” is often vague, and the meaning of byzantine privacy policies is poorly understood by most internet users. Consequently, advertising campaigns can easily be set up to incite panic or concern where none should exist. Such campaigns could serve as a template for smear campaigns and weaken consumers by increasing the amount of misinformation surrounding privacy issues.

    Microsoft’s recent “Scroogled” campaign highlights how these two views interact. It is the largest advertising campaign to date that specifically targets privacy conduct. In it, Microsoft accuses Google’s Gmail service of sifting through user emails to target users with specific advertisements. The campaign also urges users to sign a petition protesting Google’s use of personal information. The petition informs users that Google reads through “every word of every email.” It then insinuates that private emails aren’t safe by stating that “email between a husband and wife, or two best friends, should be completely personal.” The campaign also provocatively asks users whether they “feel violated yet?” and clarifies that Microsoft’s mail client, Outlook, does not “go through your email to sell ads.” So far, the campaign has done little to disrupt Gmail’s dominance in the email market, but its significance has not gone unnoticed.

    Microsoft is hitting where tech companies have traditionally considered off limits; raising the privacy issue is considered by many to be alarmist and populist. Are Microsoft’s claims even justified? Microsoft’s message seems to fall, like most advertising, somewhere between reality and hyperbole. Microsoft argues that Google doesn’t care about privacy and is willing to monetize user data, violating privacy without scruple. Google’s scanning, however, does not involve a human reading any emails; it merely lets an algorithm scan your mail for keywords that might trigger relevant ads, arguably providing the user with a more relevant online experience. This algorithmic activity might still be problematic. In 2004, Mark Rasch wrote for SecurityFocus that such scanning practices could set a “dangerous legal precedent” allowing law enforcement to collect data on users in the same way. But if that is the case, Microsoft is no less a troublemaker: it, too, uses algorithms to scan the content of user mail. Microsoft’s own terms of use state that it “may occasionally use automated means to isolate information from email, chats or photos in order to help detect and protect against spam and malware or to improve the services with new features that makes them easier to use.” Accordingly, both companies seem to be scanning email, but only Google is monetizing that data through targeted advertising. Does that make Microsoft any better? Are its descriptions of what Google is doing bordering on the disingenuous? Further dampening Microsoft’s critique, Google lets users opt out of the demographic categories its advertising algorithms have placed them in; Microsoft does not offer these features on Bing.
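    The keyword-triggered matching described above can be illustrated with a toy sketch. Everything here is invented for illustration (the keyword-to-ad table, the `match_ads` function, the sample email); Gmail’s actual system is proprietary and far more sophisticated, but the basic idea – no human reads the mail, an algorithm maps words to ad categories – looks something like this:

    ```python
    # Toy illustration of keyword-triggered ad matching. All keywords and
    # ads are hypothetical; no real ad system works this simply.
    AD_KEYWORDS = {
        "flight": "Cheap airfare deals",
        "mortgage": "Refinance your home",
        "camera": "DSLR sale this week",
    }

    def match_ads(email_body: str) -> list[str]:
        """Scan an email body for keywords and return matching ad copy."""
        # Tokenize, strip common punctuation, lowercase - then look up keywords.
        words = {w.strip(".,!?").lower() for w in email_body.split()}
        return sorted(AD_KEYWORDS[k] for k in AD_KEYWORDS if k in words)

    ads = match_ads("Booking a flight to Denver; also need a new camera.")
    ```

    The point relevant to the privacy debate is that the scan is purely mechanical: the email body is reduced to a bag of tokens, and only the matched ad categories survive the process.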

    These incongruities lead most critics to believe that Microsoft’s campaign is less about altruism and more about money. Microsoft has been desperately trying to capture market share from Google in both search and email, and has been drastically less efficient at monetizing advertising on its search platforms. It has been phasing out Hotmail and converting its user base to outlook.com, and in a push to pull some market share away from Google, critics argue, it has chosen to inflammatorily target privacy concerns – ironically – because down the road it wants to do just what it accuses Google of doing overzealously: advertise. Supporters of Microsoft argue that it does not earn its money through advertising and is more interested in users turning to Outlook because it is a better-quality product. For a vertically integrated company, the use of one product is a segue toward purchasing more software and hardware devices. Accordingly, Microsoft claims it has a greater incentive to respect privacy, because its money does not come from selling user data.

    It is unclear whether the critics or the supporters of Microsoft are closer to the money. What is clear is that no tech company is above privacy scrutiny. Microsoft itself has been criticized for privacy policies relating to its Skype product. In late 2012, an open letter signed by 45 privacy-focused organizations demanded that Microsoft and Skype clarify their hazy privacy policy. The letter accused Microsoft of making “persistently unclear and confusing statements about the confidentiality of Skype conversations,” particularly regarding what access Microsoft was willing to provide governments to user and conversation data. Microsoft has also been criticized for recent changes to its services agreement that allow it to aggregate customer content from one product and apply it to another. This means usage patterns from one product, say Windows, can be used to engineer another, such as the Xbox. Previously, Microsoft limited data use to one product at a time.

    It seems that even if one accepts that Microsoft is a moral exemplar in its war with Gmail, its own privacy lapses suggest it, too, is willing to “cut corners” when there are big financial ramifications. Despite this, two wrongs just might make a right when it comes to consumer privacy. If one can stomach the hypocrisy, Microsoft’s campaign can be useful to consumers. Arguably, any dialogue that isn’t populist or smearing, and that brings privacy issues to the fore and educates the public, is welcome. For instance, a Mozaic Group survey found that 70% of Gmail users did not know that their data was being screened. Will those users now switch to Outlook? Doubtful. But at least they will be better educated as service consumers, and if the day comes when a flagrant violation is highlighted, they will be educated enough to move to another service. Perhaps this war of attrition between large technology companies, and the reputational damage they suffer, will leave the consumer as the only true beneficiary, as we become more aware of privacy issues and have more products to choose from. So bring on the privacy arms race.

     

    My Sources:

    http://washington.cbslocal.com/2013/02/10/sundaysecurity/

    http://www.ibtimes.com/microsoft-rips-email-snooping-google-outlook-any-more-private-gmail-1094118

    http://www.ucstrategies.com/unified-communications-newsroom/microsoft-has-to-tell-the-truth-about-skype-privacy.aspx

    http://www.nytimes.com/2012/10/20/technology/microsoft-expands-gathering-and-use-of-data-from-web-products.html?pagewanted=all&_r=0

    http://www.zdnet.com/three-sides-to-every-scroogled-microsofts-googles-and-the-truth-7000011202/

  • Netflix Privacy Violation Lawsuit

    Alina Mejer

     

    http://news.cnet.com/8301-1023_3-57377084-93/netflix-pays-$9-million-to-settle-privacy-violation-lawsuit/

     

    In keeping with this week’s discussion about statutes that regulate commercial entities’ use of personal data, this recent article about Netflix highlights how consumers can use those statutes to try to redress privacy violations.

     

    Netflix settled a class action lawsuit, filed in January 2011, for $9 million. The case was brought under the Video Privacy Protection Act, a statute discussed during Tuesday’s class. Professor Rubinstein noted how the statute was a reaction to prying reporters who tried to obtain information about Robert Bork’s video rentals during his Supreme Court nomination hearings. The law was passed in 1988 and essentially makes it illegal for video stores to disclose information about what their customers rent. The plaintiffs in this lawsuit claimed that Netflix directly violated the law by keeping records of what subscribers had watched for up to two years after they cancelled their accounts.

     

    Interestingly enough, however, this $9 million payout does little to impact Netflix’s profit margin. Though the settlement decreased the company’s fourth-quarter income by fourteen percent, 2011 was still Netflix’s most profitable year. The article highlights how such statutes are used by consumers to try to protect their privacy. However, it is discouraging to see that the incentives for companies like Netflix are not well aligned with the available remedies: this settlement seems more like a slap on the wrist, especially in light of the fact that Netflix admitted no wrongdoing under the terms of the settlement agreement. It will be interesting to see how future cases are litigated, considering the privacy risks that evolving technologies pose to consumers – a broader theme explored in this course.

  • FTC Cracks Down on Mobile Applications and COPPA Violations

    Anisha Mehta

     

    Earlier this month the Federal Trade Commission (FTC) reached an $800,000 settlement with Path for violations of the Children’s Online Privacy Protection Act (COPPA). The Act regulates websites’ collection and use of personal information from children under the age of 13. COPPA requires websites to post privacy policies describing what information is being collected and how it will be used, and to obtain verifiable parental consent before collecting such information. The settlement requires Path to delete the information it collected from children under 13, pay an $800,000 civil penalty, establish a comprehensive privacy program, and obtain independent privacy assessments every other year for the next 20 years. Approximately 3,000 accounts belonging to children under 13 were found among Path’s roughly 6 million users.

    Path targets families, allowing them to share personal moments by creating private social networks. The company drew FTC scrutiny when it was discovered that information from users’ iPhone address books was being uploaded to Path’s servers without their consent. In the course of that investigation, in February 2012, the COPPA violations came to light, and by May 2012 the startup had changed its sign-up process so that individuals under the age of 13 were automatically detected and blocked. According to an article on Gigaom.com, the FTC has stated, “This settlement with Path shows that no matter what new technologies emerge, the agency will continue to safeguard the privacy of Americans.”

     

    http://gigaom.com/2013/02/01/path-reaches-settlement-with-ftc-agrees-to-pay-800000-fine-for-coppa-violations/

     

    This case also shows the emphasis the FTC is placing on privacy considerations for mobile devices. At the beginning of the month the FTC also published suggested privacy guidelines for mobile apps, which, while not binding, show the seriousness with which the agency is treating mobile privacy. The FTC is not focusing only on major corporations but also on the small businesses that create apps, providing them with recommended strategies to lower their risk of privacy violations.

     

    http://www.nytimes.com/2013/02/02/technology/ftc-suggests-do-not-track-feature-for-mobile-software-and-apps.html

     

  • EU vs US Data Protection

    Johnston Chen

    http://www.nytimes.com/2013/01/26/technology/eu-privacy-proposal-lays-bare-differences-with-us.html?_r=0

     

    In January, the United States government and Silicon Valley lobbied against European efforts to strengthen consumer information privacy law in the European Union.  At that time, several proposed laws were working their way through the European Parliament.  These proposals are designed to give 500 million consumers the ability to block or limit many forms of online web tracking and targeted advertising.  While seen as a major boon for consumer privacy, the proposals have drawn lobbying in Brussels from all the major American tech companies, which argue that Europe should weaken or remove these limits.

    Ben Wizner of the American Civil Liberties Union highlights that, unlike Europe, the United States has no general data protection law.  As a result, he states that online companies in the United States may conduct “unfettered” data mining.  Under the European proposals, however, Web businesses would be unable to collect and profile individual users without their explicit consent.  Businesses would also have to permanently remove information upon a user’s request.

    Adoption of the bill is expected in early 2014, and the outcome is critical for both European and American consumers because these information privacy laws could significantly affect United States technology companies.  Although based in the United States, many Silicon Valley companies typically generate a third or more of their sales in the European Union.  The profitability and continued success of companies such as eBay, Amazon, Microsoft, Google, and Texas Instruments, among others, could depend in large part on how the European Parliament decides to shape its information privacy laws.  While these laws are designed to protect consumer privacy, many corporations fear that their loss of data could turn into a loss of sales, hurting both consumers and the corporations.  The tension between consumer privacy and profitability is thus on full display in Brussels’ current struggle over increased European privacy protections.

  • Leave my e-mail alone!

    Catalina Carmona

     

    For quite some time now, both industry and privacy advocates have pointed out the need to reform the Electronic Communications Privacy Act (ECPA). The main argument is that the act, which was passed in 1986, cannot adequately respond to new technologies and leaves important loopholes through which privacy can be violated.

     

    For example, ECPA requires law enforcement authorities to obtain a warrant only when searching through email that has not been opened and is less than 180 days old. For older emails, no warrant is required. At a time when people no longer store their emails on their hard drives, but in the cloud or on a server, this poses serious threats to privacy.

     

    In November 2012, the Senate Judiciary Committee approved a reform to ECPA, which would now require law enforcement authorities to obtain a warrant in all cases when searching through email.

    http://www.nytimes.com/2012/11/30/technology/senate-committee-approves-stricter-privacy-for-e-mail.html?_r=0

     

    The Committee approved this bill despite strong opposition from enforcement agencies. In fact, just a few days before the proposal was approved, Patrick Leahy, the Democratic chairman of the Senate Judiciary Committee, who also took part in drafting the original version of ECPA, was ready to go forward with a version that would have allowed several agencies – including the Securities and Exchange Commission and the Federal Communications Commission – to access email without a warrant. The FBI and Homeland Security would have had even greater powers under that version, as they could have fully accessed online accounts without a judge’s authorization or notification to the owner of the account.

    http://news.cnet.com/8301-13578_3-57552225-38/senate-bill-rewrite-lets-feds-read-your-e-mail-without-warrants/?part=rss&subj=news

     

    The online community has enthusiastically received the reforms to ECPA and now awaits the final vote in the Senate, which is expected to happen some time this year.

    (See, for example: https://www.cdt.org/pr_statement/senate-committee-takes-historic-step-privacy and https://www.netnanny.com/blog/the-ecpa-and-your-online-privacy/ )

     

    But the bill will still need to overcome resistance from more conservative groups, who believe that public safety should weigh more heavily in the analysis of online privacy.

  • Mistakes By Credit Reporting Agencies

    Zachary King

     

    This past Sunday 60 Minutes aired a report about the enormous number of mistakes made by credit reporting agencies.  (http://www.cbsnews.com/8301-18560_162-57567957/40-million-mistakes-is-your-credit-report-accurate/).

     

    In the report, Steve Kroft cites a newly released eight-year study conducted by the FTC into the big three credit reporting agencies (Experian, TransUnion, and Equifax), which found that 40 million Americans have an error on their credit reports and 20 million have a mistake significant enough to lower their credit score. This translates to one in every five adults with an error, which the Ohio attorney general has called “unconscionable.”

     

    The segment explains the harms faced by individuals with mistakes on their credit records. It concentrates on one woman who fought a six-year battle with the big three companies. She was denied credit and couldn’t refinance her mortgage or cosign a loan for her children. When she ordered her credit reports, there was nothing alarming in them. She only found out what the problem was by peeking at her file at a bank when nobody was looking. She learned that the credit reports banks receive are different from what the consumer can get. In her case, the large debts of a woman with the same first name, but a completely different last name and from a different state, had somehow been added to her file. While it seems this would be easy to fix, it turned out to be impossible. The companies refuse to conduct the reasonable investigations required by the FCRA. 60 Minutes interviewed former employees of Experian who said that they did not have the power to do even the most basic investigation and were instructed to always take the word of the creditor as true. The only way she was able to prevail was by filing a lawsuit. The show suggests that the credit reporting companies are not interested in improving their policies: they reason that it is cheaper to pay $1 million in punitive damages every so often than it would be to implement a system in line with basic fair information practice principles.

     

    60 Minutes described this story as “a horror story worthy of Hitchcock or Kafka.” While those analogies aren’t bad, a more apt one is the movie Brazil, in which a fly gets jammed in a typewriter, causing a slight change in a name printed on a government document and setting in motion a very unfortunate series of events. Rather than give spoilers, you should watch the movie (http://www.imdb.com/title/tt0088846/). In any event, now that there is some press about the practices of the credit reporting agencies, perhaps changes will be made and we can avoid the path currently set toward Terry Gilliam’s dystopian bureaucratic vision captured in Brazil.

  • Understanding Facebook Privacy

    Jessica Heimler

     

    http://www.nytimes.com/2013/02/07/technology/personaltech/protecting-your-privacy-on-the-new-facebook.html?smid=tw-nytimes

     

    With Facebook consistently rolling out new features and subsequent privacy settings, many people may be unaware of how best to protect their online information. This article, which appeared on February 6, 2013 in the New York Times, suggests four questions to ask yourself in order to best configure your privacy settings. First is “How would you like to be found?” It gives tips on how to stop search engines from linking to your Facebook timeline and how to determine the privacy settings for something posted by a friend. The next question is “What do you want the world to know about you?” It urges readers to reconsider including seemingly harmless pieces of information, such as gender and birthday, which can be exploited by hackers. The article also identifies online tools that can flag pieces of information, such as profanity, and give you the option of deleting them from your profile. The third question asks, “Do you mind being tracked by advertisers?” and explains how to remove targeted advertising from your homepage. Finally, the article asks, “Whom do you want to befriend?” and urges readers to carefully consider whom they connect with on Facebook. It identifies two more pieces of software that can prevent a Facebook friend’s actions from displaying pieces of your own information publicly.

     

    This article is an important read even for those who think they have a good handle on Facebook’s privacy settings. The new version of Facebook, released this past December, will allow all users, including strangers, to search for pieces of information such as what you do and where you go. It is imperative that users know how to protect this information in the best way possible.

  • US Interests behind proposed amendments to the EU’s planned General Data Protection Regulation.

    Akiva Miller

     

    The approaches to privacy regulation taken by Europe and the United States are often seen as being at odds with one another. The European regulatory scheme is characterized as overarching, comprehensive, principled, centrally controlled, and more protective of citizens’ rights, whereas the US regulatory system is characterized as a patchwork of sector-specific laws and regulations, lacking in unitary concepts, driven by a combination of FTC action and industry self-regulation, and less protective of citizens’ rights. (See, for example: http://www.nytimes.com/2013/02/03/technology/consumer-data-protection-laws-an-ocean-apart.html?_r=1& , which was featured in last week’s PRG blog post.)

     

    However, this impression may need to be revisited following closer scrutiny of the drafting process of the EU’s new Data Protection Regulation.  As technology news site GigaOm reports, a recent examination of the proposed amendments to the draft regulation, conducted by Max Schrems, an Austrian law student and vocal critic of Facebook, casts light on the extent to which US commercial interests are influencing the drafting process.  Schrems’s examination shows how language from lobbyists for US-based commercial giants Amazon and eBay, as well as the American Chamber of Commerce, has been copied and pasted directly into the opinion submitted by the European Parliament’s Committee on the Internal Market and Consumer Protection to amend the proposed General Data Protection Regulation. According to the report, these suggested changes water down the original protections of European citizens’ rights in favor of American business.

     

    http://gigaom.com/2013/02/11/amazon-ebay-privacy-lobbying-sparks-cut-and-paste-crowdsourcing-drive/

     

     

    So perhaps the guiding hands behind privacy regulation in the US and Europe are not so vastly different after all? If true, this is a vivid reminder that Europe’s principled approach to privacy does not necessarily translate into tougher privacy safeguards for citizens. It should also serve as food for thought for advocates of comprehensive privacy legislation in the United States and elsewhere around the world.

     

     

    Information on the proposed General Data Protection Regulation can be found at: http://ec.europa.eu/justice/newsroom/data-protection/news/130206_en.htm

     

    The proposed amendments by the Committee on the Internal Market and Consumer Protection  can be found at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bCOMPARL%2bPE-496.497%2b02%2bDOC%2bPDF%2bV0%2f%2fEN

     

  • FTC uses the Fair Credit Reporting Act to protect social media users

    Peter Kauffman

     

    http://www.nytimes.com/2012/06/13/technology/ftc-levies-first-fine-over-internet-data.html

    Last June, the Federal Trade Commission assessed an $800,000 penalty on Spokeo, a data collection agency, for distributing personal information as a way for potential employers to screen job applicants. According to the above New York Times article, this was “the F.T.C.’s first case addressing the sale of Internet and social media data for use in employment screening.” Like the Google Buzz case and the Path settlement discussed in the “FTC is getting serious about regulating mobile privacy” blog post, this indicates the FTC’s willingness to aggressively curb social media sites’ ability to disseminate their users’ private information. Unlike those two cases, the FTC assessed the fine against Spokeo under the Fair Credit Reporting Act.

    Based on this case, institutions can be considered consumer reporting agencies despite their best attempts not to fall under that label. In 2010, Spokeo changed its terms of service to state that it “was not a ‘consumer reporting agency’ and that consumers could not use its profiles for purposes that were covered by the Fair Credit Reporting Act.” As in the Google Buzz case, the FTC faulted the company for giving subscribers insufficient notice of such a change in its practices. The FTC then argued that the “coherent people profiles” Spokeo made available – which included an individual’s marital status, hobbies, ethnicity, religion, and photos – constituted a “consumer report” under the definition in 15 U.S.C. § 1681a(d). This case highlights an interesting strategy the FTC can employ in its quest to curb the dissemination of social media users’ private information.

  • Everyone can now mine open sources and social network information (but the government may have a new tool for doing it, too).

    Posted by: Akiva Miller

    It was recently reported that defense giant Raytheon has developed a system called “Rapid Information Overlay Technology” (RIOT), designed to powerfully mine information from social networks, including photos and the location information associated with them. RIOT reportedly has the ability to predict behavior based on people’s online habits.

    http://www.pcmag.com/article2/0,2817,2415340,00.asp

    http://www.guardian.co.uk/world/video/2013/feb/10/raytheon-software-tracks-online-video

    Although RIOT has not yet been sold to any client, the clear market for it is national security agencies and law enforcement.  The news of RIOT has already sparked strong negative reactions from rights advocates:

    http://www.rawstory.com/rs/2013/02/10/rights-groups-slam-ratheyon-secret-software-that-tracks-social-media-and-predicts-peoples-future-behavior/

    Meanwhile, it seems that many commercial entities are looking into technologies that would allow them to harness information from open sources, including social networks, in much more sophisticated ways than ever before. One company that provides this kind of software is ClearForest, a Thomson Reuters company. ClearForest offers a product called Calais, which allows users to “derive meaning from unstructured information, such as news articles, blog posts, research reports and more”.

    See: http://www.clearforest.com/

    The Israeli newspaper Haaretz reports that ClearForest software is used in a variety of ways: Reuters uses it to offer readers better access to its content; brand-monitoring services (such as Meltwater) use it to track brand reputation; pension funds and hedge funds use it to scour the internet for information that could impact their investments; and at least one journalist uses it to find hidden connections between government-owned enterprises and contractors who win public tenders.

    http://www.haaretz.co.il/misc/1.1196500 (Sorry, it’s only in Hebrew.)
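    In spirit, the simplest of these uses – brand monitoring – can be sketched in a few lines. The brand names, sample posts, and `brand_mentions` function below are all invented for illustration; commercial tools like ClearForest’s Calais apply full natural-language processing rather than this kind of naive substring matching:

    ```python
    from collections import Counter

    # Toy brand-monitoring sketch: count how often tracked brands are
    # mentioned across a stream of public posts. Brands and posts are
    # hypothetical; real services layer sentiment analysis and entity
    # resolution on top of this basic idea.
    TRACKED_BRANDS = ["Acme", "Globex"]

    def brand_mentions(posts: list[str]) -> Counter:
        counts = Counter()
        for post in posts:
            for brand in TRACKED_BRANDS:
                # Case-insensitive substring match, once per post.
                if brand.lower() in post.lower():
                    counts[brand] += 1
        return counts

    posts = [
        "Loving my new Acme phone!",
        "Globex customer service is terrible.",
        "Switched from Globex to Acme last week.",
    ]
    mentions = brand_mentions(posts)
    ```

    Even this crude version shows why public posts are commercially valuable: aggregated across millions of users, simple counts become a real-time signal about reputation, products, and behavior.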

    Here’s what was written about ClearForest when it was bought in 2007:

    http://www.reuters.com/article/2007/04/30/idUSNAAD300120070430

    The proliferation of tools to mine blogs and social media raises interesting questions about the new and potentially valuable ways the information ordinary people generate can be used by corporations and the government. How should we react to the knowledge that our blog posts and tweets are not merely visible to anyone but can also be mined for a myriad of new purposes?

    A few other sources on data mining of open sources and social media:

    More on data mining for brand management:

    http://www.ibm.com/developerworks/library/ba-social-media-spss-text-mining/

    Mining information for job applicant screening and employee monitoring (apparently, it’s not a violation of the FCRA):

    http://www.forbes.com/sites/kashmirhill/2011/06/15/start-up-that-monitors-employees-internet-and-social-media-footprints-gets-gov-approval/

    Mining social media for banking and credit assessment purposes (doesn’t this possibly run afoul of the FCRA?):

    http://thefinancialbrand.com/20160/analyzing-social-media-networks-for-financial-marketing/