Category: PRG Students

  • With the Launch of Zcash, Speculators Consider the Potential of an Untraceable Cryptocurrency

    By Eli Siems

A new digital currency was launched last Friday (28 Oct.) that threatens to give Bitcoin a run for its virtual money. It’s called Zcash. But there’s one major distinction between the two so-called cryptocurrencies that Zcash’s creators believe will give it an edge in the digital market. The currency’s official website puts it this way: “If Bitcoin is like http for money, Zcash is https.” In other words, this new cryptocurrency is designed to be secure, private, and virtually untraceable by anyone but the parties to a transaction.

Interest and speculation are high. On Monday, the New York Times reported that “investors were paying over $1000 for a single unit of Zcash.” The currency launched with considerable buzz and with the support of computer scientists at Johns Hopkins and MIT, privacy activists, and electronic currency traders, speculators, and aficionados.

    While it’s far too early to say if the currency will take off, its core principles and technology are already shaping conversations on the future of data privacy.

The difference between Zcash and other, less private cryptocurrencies is its handling of an essential component known as a blockchain, a permanent ledger that tracks coins. The blockchain is key to maintaining the integrity of the currency and proving that no counterfeiting or interference has taken place. For Bitcoin, the blockchain is public and can be accessed to analyze the flow of currency, which has raised more than a few eyebrows across the spectrum of potential Bitcoin users. As The Economist reports, “This is a serious barrier for banks: blockchains could reveal their trading strategies and information about their customers.”
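The integrity property described above can be illustrated with a toy hash-chained ledger. This is a sketch only (real blockchains add proof-of-work, Merkle trees, and decentralized validation), but it shows why a retroactive edit to any block is detectable:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain):
    """Tampering with an earlier block breaks every later link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify_chain(chain)

chain[0]["transactions"][0]["amount"] = 500  # retroactive edit
assert not verify_chain(chain)
```

Because each block stores the hash of its predecessor, rewriting history requires rewriting every subsequent block, which is what makes the public ledger trustworthy.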

    But Zcash is fundamentally different. Using a “zero-knowledge proof construction called a zk-SNARK,” the Zcash team has managed to create a secure ledger that keeps the identities of parties to a transaction and the amounts transferred undisclosed. Beyond cryptocurrency, the encryption technology is making waves on all shores of digital privacy and cryptography.

Aside from potential benefits to large players like banks, Zcash markets itself on its privacy protection for every user. But such a currency, readily accessible and exchangeable, will bring with it huge and probably obvious law enforcement concerns. Back in 2013, when the idea that became Zcash was first proposed by Johns Hopkins researchers, Global Financial Integrity argued that such a currency would do little more than facilitate a wide range of illicit transactions and cripple hard-won law enforcement tools. Monero, a similarly private but less anticipated cryptocurrency, has already shown up in countless illicit transactions.

On the other side, Zcash founder Zooko Wilcox insists that the currency has a different purpose: “All of the conversations I’ve had with businesses, banks, regulators and law enforcement have been about the need for data security for commercial applications.”

Matthew Green of Johns Hopkins, an originator of the concept behind Zcash, frames it differently: “The basic story is that we have been gradually losing our privacy in a whole bunch of ways that people don’t appreciate.” Zcash, on this view, is a way to take back that privacy in at least one area.

Whatever your opinion on the utility or dangers of an untraceable cryptocurrency, one thing is quite clear: Zcash is here, and it is reviving longstanding debates about privacy and law enforcement in the digital age with renewed immediacy.

  • Facebook Wants You to Get Even More Political

    By Sofia Grafanaki

Facebook rolled out a new feature last week allowing users to officially endorse a presidential candidate. It is very simple to use: a user goes to the candidate’s Facebook page and clicks the “endorsement” tab to add an endorsement, optionally with a message, presumably explaining their position. The feature has already sparked several interesting discussions, ranging from whether journalists should use this tool, given the conflicting values of neutrality and transparency in political journalism, to the potential harassment that can result from expressing political opinions.

Facebook seems to have a bigger agenda than just the upcoming presidential election, planning to make the feature more widely available, for instance to state and local election candidates. Detailed instructions on Facebook’s Help Center page explain that to receive endorsements, all a user needs to do is change the category of their page to “Politician, Political Candidate, or Government Official.”

The feature is not directed only at users who are open about their political opinions and positions: it allows you to select the audience who can view your endorsement post. The Help Center warns users:

    Keep in mind that if you choose Public as the audience of your endorsement, it may also appear on the candidate’s Page if the candidate chooses to feature your endorsement.

Interestingly, while this may seem somewhat respectful of voter privacy, it also helps a reluctant user feel more comfortable sharing their political preferences, almost as if the user were completing a missing piece of their profile, one that no one needs to see. The result, however, is that Facebook obtains more accurate data on its users, allowing for more precisely targeted advertising.

The fact that the company has been tracking political preferences is not news; it has done so since the launch of its ad personalization tool, in order to bring users ads that cater to their interests. Theoretically, users can see and somewhat control their political labels, among others, but as these are “tucked away” in the ad preferences section on Facebook, doing so is not always intuitive.

Most importantly, while these labels were previously based on inferences Facebook’s algorithms were “taught” to make from information collected from users’ profiles and activity, with the new endorsement feature those inferences can now be confirmed or even corrected by the users themselves.

Ultimately, this is just a glimpse of a much larger discussion on the acceptable boundaries of voter micro-targeting. Is it just the natural evolution of political campaigning, or are we starting to cross lines that affect our democratic process?

    https://www.facebook.com/help/1289003767810596

    https://techcrunch.com/2016/10/18/facebook-presidential-endorsements/

    http://money.cnn.com/2016/10/18/technology/facebook-endorsement-election-2016/

    http://www.poynter.org/2016/ask-the-ethicist-should-journalists-use-facebooks-new-endorsement-tool/435713/

    http://www.digitaltrends.com/social-media/facebook-endorsements/

    http://www.nytimes.com/2016/08/24/us/politics/facebook-ads-politics.html?_r=2

    http://www.digitaltrends.com/social-media/facebook-political-views-ads/

    https://www.facebook.com/ads/preferences


  • Privacy, Security and the Internet of Things: A Changing Landscape

    By: Yan Shvartzshnaider

There is no such thing as a “bullet-proof” system; a system’s security is a moving target. Breaking into a system used to require resources and time: the more resources you had, the less time you needed, and vice versa. To protect a system, you want to ensure that breaking it takes an attacker a prohibitive amount of time (in the best case, approaching infinity).

For a while, this was an achievable goal: resources were too expensive and hard to come by for the average perpetrator to bother with an attack. This was particularly true of well-established infrastructure. Things have changed, however. Cloud services like Amazon Web Services (AWS) let anyone spin up hundreds of servers with the click of a button, and we have connected fridges, toasters, thermostats, and other appliances to the Internet, forming the Internet of Things (IoT). Today, one needs neither money, expensive resources, nor much time to mount a serious attack. In one recent case, two teenagers were able to “coordinate more than 150,000 so-called distributed denial-of-service (DDoS) attacks” from the comfort of their home, while making money in the process.

While the technological landscape has changed, the attitude of consumers has not. The market is full of unpatched devices that make it easy for an attacker to compromise a system and use it as they see fit.

    In a recent blog post— Security Economics of the Internet of Things —Bruce Schneier discusses these issues and argues that we have reached a point where the government needs to intervene with adequate regulation:

    IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care.

    Whether or not government intervention is the correct answer remains to be seen, but we should all be grateful to Schneier for raising the question.

    https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

  • Google’s Clever Plan to Stop Aspiring ISIS Recruits

    By Sofia Grafanaki

A new and promising approach, presented at a recent event at the Brookings Institution, seeks to disrupt ISIS’s online recruiting efforts through targeted advertising. Google’s tech incubator Jigsaw (previously called Google Ideas), together with Moonshot CVE, Quantum Communications, and the Gen Next Foundation, developed a plan to help the fight against terrorism. The “Redirect Method” is described as a way to get inside the heads of potential terrorists before they are actually recruited and change their intentions.

The program works by placing “advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for.” The ads link to YouTube channels with videos that Jigsaw believes can “undo ISIS’s brainwashing.” According to Yasmin Green, Jigsaw’s head of research and development, “the Redirect Method is at its heart a targeted advertising campaign: Let’s take these individuals who are vulnerable to ISIS’ recruitment messaging and instead show them information that refutes it.” Early results suggest the program is effective: more than 300,000 people were drawn to the anti-ISIS YouTube channels in about two months.
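The mechanics of a keyword-triggered campaign like this can be sketched in a few lines. The keywords and the counter-content link below are invented placeholders, not Jigsaw’s actual lists:

```python
# Toy sketch of keyword-triggered redirect advertising.
# RISK_KEYWORDS and COUNTER_CONTENT are hypothetical examples.
RISK_KEYWORDS = {"how to join isis", "isis recruitment"}
COUNTER_CONTENT = "https://youtube.com/playlist?list=counter-narratives"

def redirect_ad_for(query: str):
    """Return a counter-narrative ad when a search matches a risk keyword."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in RISK_KEYWORDS):
        return {"headline": "Hear from those who left", "link": COUNTER_CONTENT}
    return None

assert redirect_ad_for("ISIS recruitment videos") is not None
assert redirect_ad_for("weather tomorrow") is None
```

The substance of the real system lies in curating the keyword lists and the counter-content; the matching itself is ordinary ad targeting.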

But could this “powerful tool for getting inside the minds of some of the least understood and most dangerous people on the Internet,” as Wired describes it, be put to other uses as well? There is no doubt that this specific use is desirable (and far more respectful of privacy than the NSA’s bulk surveillance methods). But once the tool exists, can it not be used for other causes? If it really is just a targeted advertising campaign, could Google develop a product out of it? Or is it already a product in some ways? How would we feel if the cause were not stopping terrorism but stopping a political candidate whom some deem dangerous? The minute we move away from extremism, the idea of using data and analytics to get inside people’s minds and change their intentions starts to sound much less appealing.

    https://www.wired.com/2016/09/googles-clever-plan-stop-aspiring-isis-recruits/

    http://www.slate.com/articles/technology/future_tense/2016/09/the_problem_with_google_jigsaw_s_anti_extremism_plan_redirect.html

    https://theintercept.com/2016/09/07/google-program-to-deradicalize-jihadis-will-be-used-for-right-wing-american-extremists-next/

    http://www.businessinsider.com/jigsaw-redirect-method-to-stop-isis-recruits-2016-9

  • Digital Ellis Islands

American tech companies, especially those running social networking sites, often pride themselves on giving voice and information to oppressed netizens around the world. Many commend Twitter’s role in facilitating coordination and information flow during the 2009 Iranian presidential election process. Even the U.S. State Department acknowledged this role when it asked the microblogging site to hold off on a software update so as not to interfere with use by Iranian protesters. Twitter is currently banned in Iran. Also in 2009, the Chinese government blocked access to Facebook in order to curtail communication between independence activists rioting in Xinjiang. Twitter, Google search, and YouTube are blocked behind the Great Firewall of China to this day.

Anonymous web browsing, such as onion routing via Tor or a comparable mechanism, provides a route around censorship and persecution. Individuals can breathe free on American-run sites; they just have to wear a hoodie.

Yet, according to a recent paper, some of the world’s largest sites are cutting off access to anonymous users. Site providers can easily detect when a user is arriving from an anonymous origin, and many now restrict certain uses or preclude access altogether. The authors describe this as “second-class treatment.”
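Detection of this kind is technically simple: Tor publishes the addresses of its exit nodes, so a site can compare a visitor’s IP against that list. A minimal sketch, assuming the list has already been fetched (the URL shown is illustrative; a real deployment would refresh the list regularly):

```python
# Sketch: flag requests arriving from known Tor exit nodes.
# EXIT_LIST_URL is an assumed endpoint for illustration only.
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def load_exit_nodes(raw_text: str):
    """Parse a one-IP-per-line exit list into a lookup set."""
    return {line.strip() for line in raw_text.splitlines() if line.strip()}

def is_anonymous_origin(client_ip: str, exit_nodes) -> bool:
    """Membership test a site might run on every incoming request."""
    return client_ip in exit_nodes

# Documentation-reserved example addresses:
exit_nodes = load_exit_nodes("203.0.113.7\n198.51.100.23\n")
assert is_anonymous_origin("203.0.113.7", exit_nodes)
assert not is_anonymous_origin("192.0.2.1", exit_nodes)
```

The ease of this check is precisely why differential treatment of anonymous users is so widespread: the infrastructure cost of discriminating is near zero.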

Google is one site that limits search functions for anonymous users. Some companies have done the opposite. In 2014, Facebook launched a “hidden service,” letting users access the site anonymously without being curtailed by algorithms that might otherwise block them for fraudulent use. Mark Zuckerberg once said, “How can you connect the whole world if you leave out 1.6 billion people?”

This state of affairs is a common one in the privacy v. security debate. Blocking anonymous use is meant to curtail criminal use, but it comes at the cost of denying access to innocent users, such as those seeking a refuge of communication and connection to the world when oppressive regimes won’t allow it.

    It is up to American companies, not the American government, to decide whether to stamp the ticket.

    Paper: https://www.internetsociety.org/sites/default/files/blogs-media/do-you-see-what-i-see-differential-treatment-anonymous-users.pdf

  • U.S. Gets Widespread Facial Recognition Technology

This past week, Fortune reported that U.S. retailers were using facial recognition software to target shoplifters.[1] The technology works by scanning the faces of customers entering a store and seeking to match each photograph against a group of previously identified individuals. According to the article, this previously identified group is created by the security personnel employed by the store.[2] The article raises several fascinating questions: (1) who owns an individual’s face; (2) what other databases can be compared against the facial scans; and (3) how accurate is the scanning technology.
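Matching systems of this kind typically reduce each face to a numeric feature vector and compare it against a watchlist, flagging anything above a similarity threshold. A simplified sketch (the vectors and threshold are invented for illustration; real systems derive embeddings from trained neural networks):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_watchlist(face_vec, watchlist, threshold=0.9):
    """Return the watchlist entry most similar to the captured face, if any."""
    best_name, best_score = None, threshold
    for name, vec in watchlist.items():
        score = cosine_similarity(face_vec, vec)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical three-dimensional "embeddings" standing in for real ones:
watchlist = {"suspect_a": [0.9, 0.1, 0.3], "suspect_b": [0.1, 0.8, 0.5]}
assert match_watchlist([0.88, 0.12, 0.31], watchlist) == "suspect_a"
assert match_watchlist([0.0, 0.0, 1.0], watchlist) is None
```

The accuracy debate below largely comes down to where that threshold sits: lower it and more shoplifters are flagged, but so are more innocent shoppers.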

Facebook and Google, among other online tech giants, have been using facial recognition software to “tag” individuals in photographs for several years now.[3] This allows these social media platforms to identify the users in a photograph posted to a profile. In addition, it allows the platform to gather information about friend groups to provide superior marketing information to its advertisers. The U.S. Department of Commerce created a working group to determine who owns the facial scan.[4] However, privacy groups dropped out of the working group after they were unable to get companies to agree to basic privacy controls. There is a question of whether taking continual photographs requires the consent of those photographed, which would likely render facial recognition software impractical.[5]

While relatively new to the U.S., this technology has been used in Europe for years. For instance, a music festival in the U.K. adopted this technology to scan the faces of concertgoers.[6] The police claimed that the system was to be used “to find organized criminals who prey on festivalgoers who are often victims of theft.”[7] This use of facial scanning shows the potential of the technology: software can incorporate broader databases into the facial database to catch individuals who may have a warrant out for failing to pay a parking ticket.

A final question is the accuracy of the technology. One study found that the FBI’s database, which contains the most pictures but is also one of the least technologically advanced, identifies the right person only 80% of the time.[8] Facebook, on the other hand, claims that its algorithm identifies the same person 97.25% of the time, which is almost equivalent to a human.[9] However, there are currently no good studies measuring the real-world accuracy of facial recognition software.[10]

Walmart’s experiment lasted only a couple of months before the company found that it did not have a good return on investment.[11] In addition, many other companies appear reluctant to use the technology. While Congress or the Department of Commerce may one day find a workable answer to these legal questions, many companies are already wary of the privacy backlash that adopting these scanners may provoke.

    [1] Jeff John Roberts, Walmart’s Use of Sci-fi Tech to Spot Shoplifters Raises Privacy Questions, Fortune (Nov. 9, 2015), http://fortune.com/2015/11/09/wal-mart-facial-recognition/.

    [2] Id.

    [3] Help Center: Tagging Photos, Facebook, https://www.facebook.com/help/122175507864081 (last visited Nov. 16, 2015).

    [4] Robinson Meyer, Who Owns Your Face, The Atlantic (Jul. 2, 2015), http://www.theatlantic.com/technology/archive/2015/07/how-good-facial-recognition-technology-government-regulation/397289/.

    [5] Id.

    [6] Paul Gallagher, Download Festival: Facial Recognition Technology Used at Event Could be Coming to Festivals Nationwide, Independent (London, U.K.) (Sept. 24, 2015), http://www.independent.co.uk/news/uk/crime/download-festival-facial-recognition-technology-used-at-event-could-be-coming-to-festivals-10316922.html.

    [7] Id.

    [8] Tim Cushing, The FBI’s Facial Recognition Database Combines Lo-Res Photos with Zero Civil Liberties Considerations, techdirt (Apr. 15, 2014), https://www.techdirt.com/articles/20140414/16045126909/fbis-facial-recognition-database-combines-lo-res-photos-with-zero-civil-liberties-considerations.shtml.

    [9] Meyer, supra note 4.

    [10] Id.

    [11] Roberts, supra note 1.

  • Industry Leaders Oppose CISA, Choosing User Privacy Over Liability Protection

The Cybersecurity Information Sharing Act (CISA), which passed the Senate on Tuesday, allows businesses to hand over users’ information to the U.S. government when the business deems it a cyber “threat indicator.” Businesses have been reluctant to volunteer this information, fearing exposure to liability from affected users. CISA assuages that fear by granting immunity. Privacy advocates like the EFF and Fight for the Future oppose CISA in part because it eases the flow of consumer data to the Intelligence Community. More interesting is how private industry came down.

Tech giants, including Microsoft, Apple, and Twitter, publicly opposed its passage (reversing their previous support, as discussed below). They claimed to oppose the bill out of respect for their users’ privacy. Yet under CISA, sharing is voluntary: a company can respect privacy by never sharing information that implicates a cyber threat. Opposition to the bill is more a message to consumers that these companies care about privacy interests in general. Some argue that sharing information about threat indicators is not, in fact, voluntary. The government has a history of attaching information-sharing requirements in roundabout ways, and the competitive advantage that might come with participating in the program may make sharing effectively necessary. That remains to be seen.

    The fact that companies felt it was in their interest to voice opposition, instead of supporting a bill that would grant them greater immunity from liability, suggests that letting customers know they care about privacy is becoming better and better business. In fact, Microsoft, Apple and Salesforce previously supported CISA, and only changed positions after advocacy and consumer rights groups petitioned them to reconsider. According to Fight for the Future’s scorecard, many tech companies, including AT&T and Verizon, still support CISA. They likely weighed liability protection more heavily.

If a company wants to protect its users’ information after the passage of CISA, it can simply refrain from sharing information with the state (at least as the bill is advertised in its current form). Yet many companies felt it necessary to show their support for privacy rights in general by opposing the bill. What’s more, they took this stance when it meant losing better liability protections. The market value of being Tough on Behalf of Privacy is increasing.

  • Ad Blockers and AppleNews: Apple’s iOS 9 Portends a Changing Landscape for Online Publishers and Advertisers

    By: Erin L. Bansal

    Apple’s recent update of its operating system (labeled iOS 9) includes two significant changes to the way online publishers, and their advertisers, may interact with users.  First, Apple now allows owners of newer mobile devices to download “ad blocker” apps.  These apps provide users with extensions to their Web browsers that can block ads from being shown while the user browses the Web.  In addition, Apple announced the release of its own AppleNews app, which directly provides users with content from over 50 leading media outlets such as New York Magazine and The Washington Post.

For some commentators, the inclusion of ad-blocking apps is a sea change in digital advertising that will protect consumers from unwarranted tracking and intrusion into their online experience.  Computer browsers have long allowed the use of ad-blocking software, but until now, Apple did not allow similar apps to list in its app store.  Use of ad blockers, and calls for their increased ubiquity, have grown in recent years.  A study released in August by Adobe and PageFair found that more than 198 million people worldwide actively use ad blockers when browsing the Web.  It is important to note, however, that even with ad blockers, users may still receive certain advertisements.  Some ad blockers allow advertisers to bypass the blocking if their ads meet certain standards, such as being clearly marked as advertisements, or if the advertisers commit to honoring Do Not Track.
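On iOS, these extensions are driven by declarative rule lists: JSON objects pairing a URL “trigger” with an “action” such as block. A rough sketch of the idea in Python (the hosts are hypothetical, and the tiny matcher is an illustration, not Apple’s actual rule engine):

```python
import json
import re

# Sketch of the JSON rule shape iOS content-blocker extensions consume:
# each rule pairs a "trigger" (a regex over the request URL) with an "action".
rules = [
    {"trigger": {"url-filter": "ads\\.example\\.com"},          # hypothetical ad host
     "action": {"type": "block"}},
    {"trigger": {"url-filter": ".*",
                 "if-domain": ["tracker.example.net"]},         # hypothetical tracker
     "action": {"type": "block-cookies"}},
]

def is_blocked(url: str, rule_list) -> bool:
    """Apply only the simple url-filter triggers with a "block" action."""
    for rule in rule_list:
        if rule["action"]["type"] == "block" and re.search(rule["trigger"]["url-filter"], url):
            return True
    return False

print(json.dumps(rules, indent=2))  # what the extension would ship to the browser
assert is_blocked("https://ads.example.com/banner.js", rules)
assert not is_blocked("https://news.example.org/story", rules)
```

The declarative design matters for privacy: the browser evaluates the rules itself, so the blocking extension never sees the user’s browsing history.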

    For smaller publishers and bloggers, the rise of ad blockers may be the next crack in the already crumbling world of digital advertising.  Many major online publishers, including The New York Times and the Wall Street Journal, have already shifted to a paywall where readers are only given access to the entire contents of a site in return for a subscription fee.  The rise of ad blockers may force smaller publishers who rely on digital advertising as a major portion of their revenue model to likewise seek other sources of revenue in order to survive.  These sources might include, for example, sponsored content where advertisers pay to provide content on the site, or increasing use of links to e-commerce sites, who will pay a fee for delivering users.

Apple’s inclusion of ad blockers and its AppleNews app in iOS 9 could simply be a consequence of technology’s seemingly inevitable march toward mobile devices and their apps.  Users increasingly spend their online time on smartphones and their apps.  Forrester Research reported that smartphone users spend 85% of their time on their devices in apps.  This shift has encouraged not only Apple but also other technology providers to move into partnerships with content providers.  Facebook recently launched Instant Articles, allowing it to directly host content provided by its partner-publishers, while Snapchat now includes original news articles within its app.

In the end, it is too early to tell whether the increased use of ad blockers will actually provide users with the content they want.  In any event, major technology companies like Apple are clearly going to play an increasingly large role in the provision of content on the Web.  At a minimum, Apple’s moves have once again reverberated throughout the advertising and technology sectors.

    Sources:

    http://www.apple.com/pr/library/2015/09/09iOS-9-Available-as-a-Free-Update-for-iPhone-iPad-iPod-touch-Users-September-16.html

    http://www.wired.com/2015/09/apple-taunting-publishers-ad-blocking-apple-news/

    http://www.bloomberg.com/news/articles/2015-09-09/apple-s-ad-blocking-feature-is-sending-publishers-scrambling

    https://www.eff.org/deeplinks/2015/09/adblockers-and-innovative-ad-companies-are-working-together-build-more-privacy

    http://blogs.wsj.com/cmo/2015/09/16/apple-software-update-brings-ad-blockers-along-with-apple-news-sponsors/tab/print/

    http://www.ft.com/intl/cms/s/0/a8daf5d0-7892-11e5-933d-efcdc3c11c89.html#axzz3pKBCV6a5

    https://iapp.org/news/a/the-privacy-consequences-in-the-rise-of-ad-blockers/


  • How Does the Trans-Pacific Partnership Affect Users, Security Researchers?

    Last Friday, Wikileaks released the copyright chapter of the Trans-Pacific Partnership (TPP). The chapter included a section stating that “judicial authorities shall, at least, have the authority to […] order the destruction of devices and products found to be involved in” any activity that circumvents software controls that manufacturers build into their devices, known as Digital Rights Management (DRM) technology. What effect will this language have on users and white hat security researchers who try to modify the software of the products they buy?

    The Electronic Frontier Foundation summarized how this language may negatively affect users who tinker with a product’s software:

    The odd effect of this is that someone tinkering with a file or device that contains a copyrighted work can be made liable (criminally so, if wilfullness and a commercial motive can be shown), for doing so even when no copyright infringement is committed. Although the TPP text does allow countries to pass exceptions that allow DRM circumvention for non-infringing uses, such exceptions are not mandatory, as they ought to be.

The language from the copyright chapter may also have an adverse effect on the work of private security researchers—sometimes referred to as “white hat” hackers—in detecting and preventing security defects. Privacy advocates should be concerned about the effect this text will have on the efforts of white hat hackers, who work to patch products’ security vulnerabilities before they become massive privacy breaches.

The “first sale doctrine,” which limits the ability of copyright holders to control uses of copies of their work after it has been sold or transferred to a consumer, has long been a feature of copyright law. The copyright chapter of the TPP, along with other recent developments in copyright law, represents a significant shift away from the first sale doctrine. This shift threatens not only the rights of consumers to tinker with and modify the software in the products they buy, but also the security of those products. While copyright should continue working to protect rights holders in the digital era, the costs imposed on consumers by the TPP copyright chapter may prove too high.

    References

    https://wikileaks.org/tpp-ip3/WikiLeaks-TPP-IP-Chapter/WikiLeaks-TPP-IP-Chapter-051015.pdf

    https://www.eff.org/deeplinks/2015/10/final-leaked-tpp-text-all-we-feared

    http://motherboard.vice.com/read/white-hat-hackers-would-have-their-devices-destroyed-under-the-tpp?utm_s

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474635

  • Facebook’s New Patent Application Identifies Camera Signatures Using Metadata, Faulty Pixels

    Earlier this year, Facebook filed a patent application claiming a method for identifying camera signatures based on features extracted from uploaded images, including faulty pixel positions in the camera and metadata available in files storing the images. The patent also claims a method for making inferences about the users associated with the cameras. For reference, the abstract of the patent application is included below:

    Images uploaded by users of a social networking system are analyzed to determine signatures of cameras used to capture the images. A camera signature comprises features extracted from images that characterize the camera used for capturing the image, for example, faulty pixel positions in the camera and metadata available in files storing the images. Associations between users and cameras are inferred based on actions relating users with the cameras, for example, users uploading images, users being tagged in images captured with a camera, and the like. Associations between users of the social networking system related via cameras are inferred. These associations are used beneficially for the social networking system, for example, for recommending potential connections to a user, recommending events and groups to users, identifying multiple user accounts created by the same user, detecting fraudulent accounts, and determining affinity between users.

    The “fingerprinting” of cameras claimed in the patent poses several privacy concerns. Although Facebook states that the claimed process could be used as a means of “identifying multiple user accounts created by the same user, detecting fraudulent accounts, and determining affinity between users,” the process also significantly diminishes the ability of individuals to anonymously take and upload photos online. Currently, individuals have several means to protect their privacy through the removal of geolocation and other metadata before uploading their photos to online services such as Facebook. The process claimed by Facebook in this patent application would essentially override the ability of users to remove metadata and protect their privacy by identifying data directly from the camera—such as lens scratches or flawed pixels.
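As a loose illustration of the claimed technique (not Facebook’s actual method), a camera signature might be derived from the recurring coordinates of stuck or faulty pixels across a user’s uploads:

```python
import hashlib

def faulty_pixels(image, threshold=250):
    """Toy detector: coordinates of suspiciously saturated pixels.
    `image` is a 2-D list of grayscale values; a real detector would
    compare many frames to isolate genuinely stuck sensor cells."""
    return sorted((r, c)
                  for r, row in enumerate(image)
                  for c, val in enumerate(row)
                  if val >= threshold)

def camera_signature(image):
    """Hash the faulty-pixel pattern into a stable camera fingerprint."""
    coords = faulty_pixels(image)
    return hashlib.sha256(repr(coords).encode()).hexdigest()

photo_a = [[10, 20, 255], [30, 40, 50]]   # stuck pixel at (0, 2)
photo_b = [[90, 80, 255], [70, 60, 50]]   # different scene, same defect
photo_c = [[10, 20, 30], [255, 40, 50]]   # a different camera's defect

assert camera_signature(photo_a) == camera_signature(photo_b)
assert camera_signature(photo_a) != camera_signature(photo_c)
```

The point of the sketch is that the signature depends on the sensor, not the scene or the metadata, which is why stripping EXIF data would not defeat it.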

Though technical solutions could be used to maintain anonymity even if Facebook’s patent application goes through (for example, an application that randomly introduces flawed pixels or minor lens-scratch artifacts into photographs before upload, without diminishing overall picture quality), we should nonetheless question whether the benefits of this new patent application outweigh the privacy risks.

    References:

    https://www.google.com/patents/US20150124107

    http://www.imaging-resource.com/news/2015/09/18/facebook-wants-to-be-able-to-fingerprint-a-single-im

    http://venturebeat.com/2015/09/18/facebook-files-patent-for-using-a-photos-camera-signature-to-connect-you-with-other-users/

    http://www.geek.com/news/facebook-developing-way-to-fingerprint-the-camera-you-used-to-take-a-photo-1634542/

    http://thenextweb.com/facebook/2015/09/22/facebook-seeks-patent-on-process-that-would-id-your-camera-and-images/