Blog

  • Facebook Wants You to Get Even More Political

    By Sofia Grafanaki

    Facebook rolled out a new feature last week, allowing users to officially endorse a presidential candidate. It is very simple to use: all a user needs to do is go to the candidate’s Facebook page and click on the “endorsement” tab to add their own endorsement. Users can also attach a message, presumably explaining their position. The feature has already sparked several interesting discussions, ranging from whether journalists should use this tool, given the conflicting values of neutrality and transparency in the context of political journalism, to the potential harassment that can result from expressing political opinions.

    Facebook seems to have a bigger agenda than the upcoming presidential election alone: it plans to make the feature more widely available, for instance to state and local election candidates. Detailed instructions on Facebook’s Help Center page explain that to receive endorsements, all a user needs to do is change the category of their page to “Politician, Political Candidate, or Government Official.”

    The feature is not directed only at users who are open about their political opinions and positions, as it allows you to select the audience that can view your endorsement post. The same Help Center instructions warn users to:

    Keep in mind that if you choose Public as the audience of your endorsement, it may also appear on the candidate’s Page if the candidate chooses to feature your endorsement.

    Interestingly, while this may seem somewhat respectful of voter privacy, it also helps a reluctant user feel more comfortable sharing their political preferences, making it almost as if the user were completing a missing piece of their profile, one that no one needs to see. The result, however, is that Facebook obtains more accurate data on its users, allowing for more precisely targeted advertising.

    The fact that the company has been tracking political preferences is not news; it has been doing so since the launch of its ad personalization tool, in order to bring users ads that cater to their interests. In theory, users can see and somewhat control their political labels, among others, but as these are “tucked away” in the ad preferences section on Facebook, finding them is not always intuitive.

    Most importantly, whereas these labels were previously based on inferences that Facebook’s algorithms were “taught” to make from information collected from users’ profiles and activity, with the new endorsement feature these inferences can now be confirmed, or even corrected, by the users themselves.
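
    As a minimal sketch of that shift, assuming a hypothetical profile store (the field names and confidence scores below are invented for illustration, not Facebook’s actual data model), an explicit endorsement might simply override the algorithmic inference:

```python
# Hypothetical illustration: an inferred political label, held with some
# confidence, is replaced by a user-confirmed label once the user endorses
# a candidate. All field names and scores are invented for this sketch.

profile = {
    "political_label": {"value": "liberal", "source": "inferred", "confidence": 0.6},
}

def record_endorsement(profile, label):
    """A self-reported endorsement overrides the algorithmic inference."""
    profile["political_label"] = {
        "value": label,
        "source": "self-reported",
        "confidence": 1.0,
    }
    return profile
```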

    Ultimately this is just a glance at a much larger discussion on the acceptable boundaries of voter micro-targeting. Is it just the natural evolution of political campaigning, or are we starting to cross lines that affect our democratic process?

    https://www.facebook.com/help/1289003767810596

    https://techcrunch.com/2016/10/18/facebook-presidential-endorsements/

    http://money.cnn.com/2016/10/18/technology/facebook-endorsement-election-2016/

    http://www.poynter.org/2016/ask-the-ethicist-should-journalists-use-facebooks-new-endorsement-tool/435713/

    http://www.digitaltrends.com/social-media/facebook-endorsements/

    http://www.nytimes.com/2016/08/24/us/politics/facebook-ads-politics.html?_r=2

    http://www.digitaltrends.com/social-media/facebook-political-views-ads/

    https://www.facebook.com/ads/preferences

  • Privacy, Security and the Internet of Things: A Changing Landscape

    By: Yan Shvartzshnaider

    There is no such thing as a “bullet-proof” system. A system’s security is in a constant state of becoming. Breaking into any system used to require resources and time: the more resources you had, the less time you needed, and vice versa. To protect your system, you would want to ensure that it takes a significant amount of time (in the best case, approaching infinity) for an attacker to break it.

    For a while, this was an achievable goal: resources were too expensive and hard to come by for the average perpetrator to even bother with an attack. This was particularly true of well-established infrastructure. Things have changed, however. Cloud services like Amazon Web Services (AWS) allow one to spin up hundreds of servers with the ease of clicking a button. We have connected fridges, toasters, thermostats and other appliances to the Internet, creating the Internet of Things (IoT). Today, one needs neither money, expensive resources, nor time to mount a serious attack. In one of the most recent attacks, two teenagers were able to “coordinate more than 150,000 so-called distributed denial-of-service (DDoS) attacks” from the comfort of their home, while making money in the process.

    While the technological landscape has changed, the attitude of consumers has not. The market is full of unpatched devices that make it easy for an attacker to compromise the system and use it as they see fit.

    In a recent blog post— Security Economics of the Internet of Things —Bruce Schneier discusses these issues and argues that we have reached a point where the government needs to intervene with adequate regulation:

    IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care.

    Whether or not government intervention is the correct answer remains to be seen, but we should all be grateful to Schneier for raising the question.

    https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

  • Google’s Clever Plan to Stop Aspiring ISIS Recruits

    By Sofia Grafanaki

    A new and promising approach, presented at a recent event at the Brookings Institution, seeks to disrupt ISIS’s online recruiting efforts through targeted advertising. Google’s tech incubator Jigsaw (previously called Google Ideas), together with Moonshot CVE, Quantum Communications, and the Gen Next Foundation, developed a plan to help the fight against terrorism. The “Redirect Method” is described as a way to get inside the heads of potential terrorists before they are actually recruited and change their intentions.

    The way the program seems to work is that it “places advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for”. The advertising links to YouTube channels with videos that Jigsaw believes can “undo ISIS’s brainwashing”. According to Yasmin Green, Jigsaw’s head of research and development, “the Redirect Method is at its heart a targeted advertising campaign: Let’s take these individuals who are vulnerable to ISIS’ recruitment messaging and instead show them information that refutes it.” Early results suggest the program is effective: more than 300,000 people were drawn to the anti-ISIS YouTube channels in just about two months.
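
    At its core, the method as described is keyword-triggered ad serving. A minimal sketch, assuming a hypothetical phrase list and playlist URLs (Jigsaw’s actual keyword set and serving pipeline are not public):

```python
# A minimal sketch of keyword-triggered ad serving, the mechanism the
# Redirect Method is described as using. The flagged phrases and playlist
# URLs below are invented placeholders.
from typing import Optional

FLAGGED_PHRASES = {
    "example recruitment phrase": "https://youtube.com/playlist-counter-1",
    "another flagged phrase": "https://youtube.com/playlist-counter-2",
}

def pick_redirect_ad(search_query: str) -> Optional[str]:
    """Return a counter-narrative playlist URL when the query contains a
    flagged phrase; return None when no ad should be shown."""
    normalized = search_query.lower()
    for phrase, playlist_url in FLAGGED_PHRASES.items():
        if phrase in normalized:
            return playlist_url
    return None
```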

    But could this “powerful tool for getting inside the minds of some of the least understood and most dangerous people on the Internet”, as described by Wired Magazine, be used for just about anything else as well? There is no doubt that this specific use is desirable (and far more respectful of privacy than the NSA’s bulk surveillance methods). But once it’s out there as a tool, can it not be used for other causes? If it’s really just a targeted advertising campaign, can Google develop a product out of it? Or is it already a product in some ways? How would we feel if the cause were not to stop terrorism but, for instance, to stop a political candidate whom some deem dangerous? The minute we move away from extremism, the idea of using data and analytics to get inside people’s minds and change their intentions starts to sound much less appealing.

    https://www.wired.com/2016/09/googles-clever-plan-stop-aspiring-isis-recruits/

    http://www.slate.com/articles/technology/future_tense/2016/09/the_problem_with_google_jigsaw_s_anti_extremism_plan_redirect.html

    https://theintercept.com/2016/09/07/google-program-to-deradicalize-jihadis-will-be-used-for-right-wing-american-extremists-next/

    http://www.businessinsider.com/jigsaw-redirect-method-to-stop-isis-recruits-2016-9

  • PRG News Roundup: October 19th

    By Eli Siems

    Researchers from the Center for Privacy & Technology at Georgetown Law released a major study on the police use of facial recognition software. The report, The Perpetual Line-Up: Unregulated Police Face Recognition in America, reveals that half of all Americans are catalogued in law enforcement facial recognition networks and that the use of such networks by at least 52 agencies is effectively unregulated. #PerpetualLineUp

    The Justice Department outlined a new initiative to collect data on the use of force by law enforcement. The plan seeks to “collect, maintain and report data . . . on all officer-involved shootings, whether fatal or nonfatal, as well as any in-custody death.” The DoJ will be collaborating with “local, state, tribal and federal agencies” to implement a comprehensive data collection program.

    Facebook is testing an update to its messenger app that will propose conversation topics based on information about a user’s activities and interests.

    Our own Helen Nissenbaum will be a panelist on the topic of data collection and sharing this Friday (10/21) at the Conference on Security and Privacy for the Internet of Things at Princeton University. The conference is to be videotaped and livestreamed.

    The European Digital Rights Initiative (EDRi) has released a charming illustrated guide to internet privacy for kids. Adults seeking similar information can check out this page maintained by Consumer Reports.

    And finally, famed naturalist David Attenborough has suggested that gorilla exhibits at zoos should utilize peepholes for visitor viewing rather than customary glass panes, TIME reports. The proposal is the result of evidence that the animals’ knowledge that they’re being watched affects their behavior and well-being, perhaps amounting to a suggestion that the chilling effect of surveillance is not limited to human subjects (though this did not come as news to primatologists).

  • PRG News Roundup: October 12

    By Nate Tisa

    The ACLU of Northern California has revealed that U.S. firm Geofeedia used social media metadata access to deliver location and monitoring information to law enforcement agencies engaged in tracking activists, particularly those involved in the Ferguson, MO protests surrounding the death of Michael Brown. Facebook, Instagram, and Twitter cut Geofeedia’s API access in late September for violation of their respective privacy policies.

    In the wake of Hurricane Matthew, government agencies and private firms are developing ways to use mobile application geolocation and metadata to track progress and compliance rates in areas of mandatory evacuation. Comparison of live data to existing baselines can give emergency planners an estimate of how many people remain in a given area that should be more or less devoid of activity.
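
    The comparison described reduces to a simple ratio. A minimal sketch, assuming device-activity counts serve as a rough proxy for people present (the function and its inputs are illustrative, not any agency’s actual method):

```python
def estimated_compliance(baseline_count: int, live_count: int) -> float:
    """Estimate evacuation compliance as the fraction of baseline device
    activity no longer observed in the zone. Assumes device counts are a
    rough proxy for the number of people present."""
    if baseline_count == 0:
        return 1.0  # nothing to compare against; treat the zone as clear
    remaining = min(live_count, baseline_count)  # cap at the baseline
    return 1.0 - remaining / baseline_count
```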

    U.K. Prime Minister Theresa May has banned wearable technology, including the Apple Watch, from all cabinet meetings out of fear that compromised devices could serve as microphones for foreign intelligence services. The decision comes on the heels of U.S. investigations into possible Russian hacks of the Democratic National Committee and other election-related entities.

    The Tactical Tech Collective is hosting an open event with Mozilla in New York City this November and seeking workshop leaders. For more information see their website: https://tacticaltech.org.

  • PRG News Roundup: October 5th

    By Taylor Black

    On Tuesday, Reuters reported that Yahoo secretly installed a program in all user email accounts to search incoming emails for specific information. Since that report, journalists have also uncovered allegations that Yahoo’s internal engineering team built a siphoning system on behalf of the NSA to scan every email for particular sets of characters. Yahoo states that it is “a law abiding company which complies with the laws of the United States.”

    An investigative journalist reported in the New York Review of Books on Oct. 2 that he had unmasked Italian author Elena Ferrante, who writes under a pseudonym, sparking controversy over the ethics of revealing her identity.

    Apple text message metadata: the content of messages is encrypted, but Apple retains logs of whom you’re writing to for roughly 30 days. More information may be forthcoming.

    Signal received a grand jury subpoena earlier in 2016, which it was permitted to disclose this week.

    Johnson &amp; Johnson warns that it has recently learned of a security vulnerability in one of its insulin pumps that could leave patients open to a malicious exploit, though it also stated that the risk of such an exploit is low.

    An Austrian teenager is suing her parents for violating her privacy by posting childhood pictures to Facebook, and for refusing to take the photos down at her request.

  • PRG News Roundup: September 28th

    By Eliana Pfeffer

    Yahoo has experienced a number of embarrassing security failures over the last four years. Last week, the company disclosed that hackers backed by what it believed was an unnamed foreign government stole the credentials of 500 million users in a breach that went undetected for two years. It was the biggest known intrusion into one company’s network, and the episode is now under investigation by both Yahoo and the Federal Bureau of Investigation. The company is currently facing lawsuits from people who fear their accounts have been hacked and claim the company was “grossly negligent,” putting their financial and personal data at risk. http://www.nytimes.com/2016/09/29/technology/yahoo-data-breach-hacking.html ; http://money.cnn.com/2016/09/23/news/companies/yahoo-sued-data-breach/

    What Facebook Thinks You Like is a new project from ProPublica. The tool, an extension for Google’s Chrome browser, lets users see exactly what activities, brands and products Facebook, based on its data, thinks they like. The tool also tells users which — and how many — advertising categories those interests place them in. https://www.propublica.org/article/breaking-the-black-box-what-facebook-knows-about-you

    Snapchat will start selling sunglasses that record 10-second snippets of video this fall: http://www.cnbc.com/2016/09/26/why-snapchats-new-glasses-could-be-more-than-just-a-toy.html

    A woman is suing the maker of the We-Vibe, a vibrator that can be controlled by a smartphone, for secretly amassing “highly sensitive, personally identifiable information” from the device. https://www.theguardian.com/us-news/2016/sep/14/wevibe-sex-toy-data-collection-chicago-lawsuit

    Intel’s new office in Israel will be ultra-smart, featuring face recognition software that replaces the need for identification badges, software that suggests carpooling with other users if an individual is often late to work, and software that recommends an individual eat healthier based on their lunchtime diet. http://www.cnbc.com/2016/09/26/intels-office-of-the-future-is-a-micromanaging-monster.html?utm_source=twitterfeed&utm_medium=twitter

    A German data protection commissioner ordered Facebook on Tuesday to stop collecting and storing data on WhatsApp users in Germany. http://www.nytimes.com/2016/09/28/technology/whatsapp-facebook-germany.html

  • PRG News Roundup: September 21st

    The U.S. government officially endorsed driverless cars: http://www.nytimes.com/2016/09/20/technology/self-driving-cars-guidelines.html?_r=0

    Google unveiled its highly anticipated messaging app Allo which partially relies on Artificial Intelligence (AI) technologies: http://www.nytimes.com/2016/09/22/technology/personaltech/allos-tryout-5-days-with-googles-annoying-office-intern.html

    There’s a class-action lawsuit over the privacy policies of a Canadian sex toy producer: http://arstechnica.com/tech-policy/2016/09/sex-toys-and-the-internet-of-things-collide-what-could-go-wrong/

    The Chelsea bombing was one of the first test beds for New York City’s “ring of steel” as well as the Wireless Emergency Alerts messaging system: http://www.nbcnews.com/storyline/ny-nj-bombings/more-8-000-cameras-helped-snare-bomb-suspect-ahmad-rahami-n650891 and http://www.nytimes.com/2016/09/20/nyregion/cellphone-alerts-used-in-search-of-manhattan-bombing-suspect.html

    Two new court cases on suspicion requirements came out of the 10th Circuit and the Massachusetts Supreme Judicial Court.

  • Laura Poitras at the Whitney

    By: Kayla Wieche

    http://whitney.org/Exhibitions/LauraPoitras

    http://www.nytimes.com/2016/02/05/arts/design/laura-poitras-astro-noise-examines-surveillance-and-the-new-normal.html?_r=0

    http://www.newyorker.com/podcast/political-scene/laura-poitras-and-david-remnick-visit-the-whitney-museum

    Until May 1, visitors to the Whitney Museum’s eighth floor will encounter ‘Astro Noise,’ the multi-sensory exhibit by artist and journalist Laura Poitras. Poitras is best known for her involvement with the Snowden revelations and her documentary Citizenfour, which features NSA whistleblower Edward Snowden detailing and describing classified documents on government surveillance. ‘Astro Noise,’ named after an encrypted file that Snowden gave to Poitras in their initial communication over two years ago, continues to probe the tension between privacy rights and government surveillance.

    The exhibit features visual presentations of various components of the government surveillance program – detention, torture, drones, data mining – and the legal reasoning that enables and supports it. After exiting the elevator, visitors are greeted by large prints depicting images of an American and British intelligence hack of Israeli drone feeds. The first room houses a screen with one side streaming video footage of passersby’s faces reacting to the site where the Twin Towers had stood in the days after the Sept. 11 attacks, and the opposite side projecting video of prisoner interrogations in Afghanistan. Following this striking display is an interactive video and sound exhibit relating to drone surveillance. Next, the visitor is guided through a dark hallway perforated with brightly lit peepholes through which intelligence documents legally justifying these programs are displayed. The exhibit ends with indications that all visitors have been under surveillance throughout their visit.

    The sense of unease generated by visiting ‘Astro Noise’ is purposeful and powerful; it is intended to make the visitor critically question the validity of, and take action against, privacy violations committed in the name of national security. Poitras told The New Yorker, “we create the political landscape in which we live and we can change that landscape.” The gift shop sells copies of the US Constitution, perhaps suggesting that visitors use it as a tool to begin to enact that change.

  • Your Next Ride Might Be Used by The Government and Third Parties to Track Your Steps

    By: Felipe Palhares

    April 21, 2016

    Link: https://www.theguardian.com/technology/2016/apr/12/uber-us-regulators-data-passengers-report

    Taking a ride with Uber might reveal more than you think about your whereabouts, especially to the government and to regulatory agencies. Uber recently disclosed that state and local transport agencies requested data on more than 11 million user accounts and half a million drivers between July and December. This includes GPS coordinates, route maps and addresses.

    Although this data is supposedly anonymized, and thus does not directly reveal users’ names, it is not clear exactly what data Uber is providing to the authorities beyond that identified above, and this raises serious concern regarding the privacy of Uber’s users. Even if users’ names are not disclosed, it should not be difficult to discover this information by looking through the other kinds of data being disclosed to the regulators. If Uber is being forced to reveal the model and color of the car, plate numbers, and a specific ID number unique to each user, it would only take a little research and surveillance for someone to discover a user’s real identity.

    Furthermore, considering that you can set your home and work addresses in your Uber account, those data could also be used to easily match an ID number to a person’s identity. The implications of this type of data being provided to third parties are fairly dangerous. For one, according to the article some of the data is available to the public through record requests, which means that anyone could discover where you live, where you work, the places you frequent, how often you frequent them, what time of day you usually leave home and what time you come back, along with a lot of other information that you might not want disclosed to the world.
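
    The linkage attack described above can be sketched in a few lines, assuming a hypothetical set of “anonymized” ride records and a public address directory (all names, IDs and addresses below are invented for illustration):

```python
# Hypothetical illustration of a linkage attack: join "anonymized" ride
# records against a public address directory to recover identities.

rides = [
    {"rider_id": "u-7281", "pickup": "12 Oak St"},
    {"rider_id": "u-7281", "pickup": "400 Main Ave"},
    {"rider_id": "u-9044", "pickup": "75 Elm Rd"},
]

address_directory = {
    "12 Oak St": "Alice Example",
    "75 Elm Rd": "Bob Example",
}

def reidentify(rides, directory):
    """Map pseudonymous rider IDs to names wherever a pickup address
    appears in the public directory."""
    identified = {}
    for ride in rides:
        name = directory.get(ride["pickup"])
        if name:
            identified[ride["rider_id"]] = name
    return identified
```

    A single matching address is enough to strip the pseudonym from every other ride the same ID took, which is why removing names alone does little to anonymize this kind of data.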

    After all, the places that you frequent might reveal a lot about you, such as your political, religious and sexual preferences, aspects of your life that you would not expect to have revealed simply for choosing to take a ride with Uber. This could also be dangerous for your safety. According to a study conducted by the CDC (National Intimate Partner and Sexual Violence Survey: 2010 Summary Report), one in six women (16.2%) and one in 19 men (5.2%) in the United States have experienced stalking victimization at some point during their lifetime. Hence, revealing your whereabouts to the public could allow stalkers to track you more easily and create unnecessary risks to your personal safety.

    Moreover, if this data is readily available to everyone, or at least to the authorities, it could also be used by the government or the police to track your steps and investigate your life without applying for or being granted a search warrant. Therefore, collecting and providing all this information to transport regulators pursuant to blanket requests, without explaining why the information is needed, raises serious concerns about users’ privacy. This should be clearly and expressly communicated to users, allowing them to make an informed decision before calling their next Uber ride.