Month: October 2016

  • PRG News Roundup: October 26th

    By Alexia Ramirez

    AARP has filed a lawsuit against the Equal Employment Opportunity Commission in response to the growing number of employers who financially incentivize their workers to sign up for wellness programs. AARP argues that the programs, which force individuals to choose between financial penalties and the disclosure of private medical information, violate anti-discrimination laws meant to protect workers’ medical information.

ProPublica reported that Google quietly changed its privacy policy over the summer. Users’ browsing habits “may be” combined with data from Gmail and other Google tools (e.g., DoubleClick). Existing users were prompted to opt in to the change, and it has become the default setting for new users. Here’s how to opt out.

    The Pentagon has prioritized artificial intelligence as central to the United States’ defense strategy. The military is examining the use of artificial intelligence to create autonomous and semi-autonomous weapons, such as drones that can identify targets. This development has sparked a debate amongst legal and military experts about the ethics of implementing this technology.

Last Friday, Dyn, a company whose domain name system (DNS) servers facilitate internet traffic, experienced a distributed denial-of-service attack. The troubling aspect of this attack was that the hackers relied on new weapons: hundreds of thousands of internet-connected devices, such as cameras, baby monitors, and home routers. These everyday devices were infected with software that allowed hackers to command them to flood a target with overwhelming traffic.

    Sweden’s highest court has banned drones with cameras. “Cameras attached to drones fall foul of Sweden’s strict surveillance laws, the country’s highest court has ruled by slapping an outright ban on drone filming—unless the kit is used by a law enforcement agency or an expensive permit has been issued.”


  • Facebook Wants You to Get Even More Political

    By Sofia Grafanaki

Facebook rolled out a new feature last week, allowing users to officially endorse a presidential candidate. It is very simple to use: a user goes to the candidate’s Facebook page and clicks the “endorsement” tab to add their own endorsement. One can also add a message, presumably explaining their position. The feature has already sparked several interesting discussions, ranging from whether journalists should use this tool, given the conflicting values of neutrality and transparency in political journalism, to the potential harassment that can result from expressing political opinions.

Facebook seems to have a bigger agenda than just the upcoming presidential election: it plans to make the feature more widely available, for instance to candidates in state and local elections. Detailed instructions on Facebook’s Help Center page explain that to receive endorsements, all a user needs to do is change the category of their page to “Politician, Political Candidate, or Government Official.”

Nor is the feature directed only at users who are open about their political opinions and positions: it allows you to select the audience that can view your endorsement post. The Help Center instructions warn users to:

    Keep in mind that if you choose Public as the audience of your endorsement, it may also appear on the candidate’s Page if the candidate chooses to feature your endorsement.

Interestingly, while this may seem somewhat respectful of voter privacy, it also helps a reluctant user feel more comfortable sharing their political preferences, making it almost as if the user were completing a missing piece of their profile, one that no one needs to see. The result, however, is that Facebook obtains more accurate data on its users, allowing for more precisely targeted advertising.

The fact that the company has been tracking political preferences is not news; it has been doing so since the launch of its ad personalization tool, in order to bring users ads that cater to their interests. In theory, users can see and somewhat control their political labels, among others, but as they are “tucked away” in the ad preferences section on Facebook, finding them is not always intuitive.

Most importantly, these labels were previously based on inferences Facebook’s algorithms were “taught” to make from information collected from users’ profiles and activity; with the new endorsement feature, those inferences are confirmed, or even corrected, by the users themselves.

Ultimately this is just a glance at a much larger discussion about the acceptable boundaries of voter micro-targeting. Is it just the natural evolution of political campaigning, or are we starting to cross lines that affect our democratic process?

    https://www.facebook.com/help/1289003767810596

    https://techcrunch.com/2016/10/18/facebook-presidential-endorsements/

    http://money.cnn.com/2016/10/18/technology/facebook-endorsement-election-2016/

    http://www.poynter.org/2016/ask-the-ethicist-should-journalists-use-facebooks-new-endorsement-tool/435713/

    http://www.digitaltrends.com/social-media/facebook-endorsements/

    http://www.nytimes.com/2016/08/24/us/politics/facebook-ads-politics.html?_r=2

    http://www.digitaltrends.com/social-media/facebook-political-views-ads/

    https://www.facebook.com/ads/preferences


  • Privacy, Security and the Internet of Things: A Changing Landscape

By Yan Shvartzshnaider

There is no such thing as a “bullet-proof” system; a system’s security is a constant work in progress. Breaking into any system used to require resources and time: the more resources you had, the less time you needed, and vice versa. To protect your system, you would want to ensure that it takes a significant amount of time (in the best case, approaching infinity) for an attacker to break in.

For a while, this was an achievable goal: resources were too expensive and hard to come by for the average perpetrator to even bother with an attack. This was particularly true of well-established infrastructure. Things have changed, however. Cloud services like Amazon Web Services (AWS) let one spin up hundreds of servers with the click of a button, and we have connected fridges, toasters, thermostats, and other appliances to the internet, creating the Internet of Things (IoT). Today, one needs neither money, expensive resources, nor time to mount a serious attack. In one of the most recent attacks, two teenagers were able to “coordinate more than 150,000 so-called distributed denial-of-service (DDoS) attacks” from the comfort of their home, while making money in the process.

    While the technological landscape has changed, the attitude of consumers has not. The market is full of unpatched devices that make it easy for an attacker to compromise the system and use it as they see fit.

In a recent blog post, “Security Economics of the Internet of Things,” Bruce Schneier discusses these issues and argues that we have reached a point where the government needs to intervene with adequate regulation:

    IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care.

    Whether or not government intervention is the correct answer remains to be seen, but we should all be grateful to Schneier for raising the question.

    https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

  • Google’s Clever Plan to Stop Aspiring ISIS Recruits

    By Sofia Grafanaki

    A new and promising approach seeks to disrupt ISIS online recruiting efforts through targeted advertising, as presented at a recent event at the Brookings Institution. Google’s tech incubator Jigsaw (previously called Google Ideas), together with Moonshot CVE, Quantum Communications, and the Gen Next Foundation, developed a plan to help the fight against terrorism. The “Redirect Method” is described as a way to get inside the heads of potential terrorists before they are actually recruited and change their intentions.

The program reportedly “places advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for”. The advertising links to YouTube channels with videos that Jigsaw believes can “undo ISIS’s brainwashing”. According to Yasmin Green, Jigsaw’s head of research and development, “the Redirect Method is at its heart a targeted advertising campaign: Let’s take these individuals who are vulnerable to ISIS’ recruitment messaging and instead show them information that refutes it.” Early results suggest the program is effective: more than 300,000 people were drawn to the anti-ISIS YouTube channels in roughly two months.
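The mechanism Green describes can be caricatured in a few lines of code. This is purely an illustrative sketch: the flagged phrases, the counter-messaging URL, and the `pick_ad` function are all invented for the example; Jigsaw’s actual keyword lists and ad-serving pipeline are not public.

```python
from typing import Optional

# Invented placeholders; Jigsaw's real keyword lists are not public.
FLAGGED_PHRASES = {
    "how to join example-group",
    "example-group recruitment",
}

COUNTER_CONTENT = "https://www.youtube.com/playlist?list=EXAMPLE"

def pick_ad(search_query: str) -> Optional[str]:
    """Return a counter-messaging link when the query matches a flagged phrase."""
    normalized = search_query.lower().strip()
    for phrase in FLAGGED_PHRASES:
        if phrase in normalized:
            return COUNTER_CONTENT
    return None
```

In effect, the mechanism is ordinary keyword-targeted advertising; only the payload differs.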

But could this “powerful tool for getting inside the minds of some of the least understood and most dangerous people on the Internet”, as Wired describes it, be used for just about anything else as well? There is no doubt that this specific use is desirable (and far more respectful of privacy than the NSA’s bulk surveillance methods). But once it’s out there as a tool, can it not be used for other causes? If it’s really just a targeted advertising campaign, can Google develop a product out of it? Or is it already a product in some ways? How would we feel if the cause were not to stop terrorism but, for instance, to stop a political candidate whom some deem dangerous? The minute we move away from extremism, the idea of using data and analytics to get inside people’s minds and change their intentions starts to sound much less appealing.

    https://www.wired.com/2016/09/googles-clever-plan-stop-aspiring-isis-recruits/

    http://www.slate.com/articles/technology/future_tense/2016/09/the_problem_with_google_jigsaw_s_anti_extremism_plan_redirect.html

    https://theintercept.com/2016/09/07/google-program-to-deradicalize-jihadis-will-be-used-for-right-wing-american-extremists-next/

    http://www.businessinsider.com/jigsaw-redirect-method-to-stop-isis-recruits-2016-9

  • PRG News Roundup: October 19th

    By Eli Siems

    Researchers from the Center for Privacy & Technology at Georgetown Law released a major study on the police use of facial recognition software. The report, The Perpetual Line-Up: Unregulated Police Face Recognition in America, reveals that half of all Americans are catalogued in law enforcement facial recognition networks and that the use of such networks by at least 52 agencies is effectively unregulated. #PerpetualLineUp

The Justice Department outlined a new initiative to collect data on the use of force by law enforcement. The plan seeks to “collect, maintain and report data . . . on all officer-involved shootings, whether fatal or nonfatal, as well as any in-custody death.” The DoJ will collaborate with “local, state, tribal and federal agencies” to implement a comprehensive data collection program.

    Facebook is testing an update to its messenger app that will propose conversation topics based on information about a user’s activities and interests.

    Our own Helen Nissenbaum will be a panelist on the topic of data collection and sharing this Friday (10/21) at the Conference on Security and Privacy for the Internet of Things at Princeton University. The conference is to be videotaped and livestreamed.

    The European Digital Rights Initiative (EDRi) has released a charming illustrated guide to internet privacy for kids. Adults seeking similar information can check out this page maintained by Consumer Reports.

And finally, famed naturalist David Attenborough has suggested that gorilla exhibits at zoos should use peepholes for visitor viewing rather than the customary glass panes, TIME reports. The proposal stems from evidence that the animals’ knowledge that they are being watched affects their behavior and well-being, perhaps amounting to a suggestion that the chilling effect of surveillance is not limited to human subjects (though this did not come as news to primatologists).

  • PRG News Roundup: October 12th

    By Nate Tisa

The ACLU of Northern California has revealed that U.S. firm Geofeedia used its access to social media data to deliver location and monitoring information to law enforcement agencies engaged in tracking activists, particularly those involved in the Ferguson, MO protests surrounding the death of Michael Brown. Facebook, Instagram, and Twitter cut Geofeedia’s API access in late September for violations of their respective privacy policies.

In the wake of Hurricane Matthew, government agencies and private firms are developing ways to use mobile application geolocation data and metadata to track progress and compliance rates in areas under mandatory evacuation. Comparing live data to existing baselines can give emergency planners an estimate of how many people remain in a given area that should be more or less devoid of activity.
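As a rough illustration of that baseline comparison, here is a minimal sketch. The zone names, device counts, and the `compliance_rate` helper are invented for the example and do not reflect any agency’s actual methodology.

```python
# Hypothetical sketch: estimate evacuation compliance by comparing live
# counts of active mobile devices in each zone to a pre-storm baseline.
# Zone names and counts are invented placeholders.

def compliance_rate(baseline: dict, live: dict) -> dict:
    """Fraction of baseline activity that has left each zone (0.0 to 1.0)."""
    rates = {}
    for zone, before in baseline.items():
        now = live.get(zone, 0)
        rates[zone] = 1.0 - now / before if before else 0.0
    return rates

print(compliance_rate({"Zone A": 1000, "Zone B": 400},
                      {"Zone A": 150, "Zone B": 400}))
```

A zone whose live device count matches its baseline (Zone B above) scores 0.0, flagging it for planners as an area where evacuation orders may not be reaching residents.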

    U.K. Prime Minister Theresa May has banned wearable technology, including the Apple Watch, from all cabinet meetings out of fear that compromised devices could serve as microphones for foreign intelligence services. The decision comes on the heels of U.S. investigations into possible Russian hacks of the Democratic National Committee and other election-related entities.

    The Tactical Tech Collective is hosting an open event with Mozilla in New York City this November and seeking workshop leaders. For more information see their website: https://tacticaltech.org.

  • PRG News Roundup: October 5th

    By Taylor Black

On Tuesday, Reuters reported that Yahoo secretly installed a program in all user email accounts to search incoming emails for specific information. Since that report, journalists have also uncovered allegations that Yahoo’s own engineers built, on behalf of the NSA, a system that siphoned every incoming email and scanned it for a particular set of characters. Yahoo states that it is “a law abiding company which complies with the laws of the United States.”

On Oct. 2, an investigative journalist writing in the New York Review of Books claimed to have unmasked Italian author Elena Ferrante, who writes under a pseudonym, sparking controversy over the ethics of revealing her identity.

Apple retains text message metadata: the content of messages is encrypted, but Apple keeps logs of whom you are writing to for roughly 30 days. More information may be forthcoming.

Signal received a grand jury subpoena earlier in 2016, which it was permitted to disclose this week.

Johnson & Johnson warned that it has recently learned of a security vulnerability in one of its insulin pumps that could leave patients open to a malicious exploit, though it also stated that the risk of such an exploit is low.

    An Austrian teenager is suing her parents for violating her privacy by posting childhood pictures to Facebook, and for refusing to take the photos down at her request.


  • PRG News Roundup: September 28th

    By Eliana Pfeffer

Yahoo has experienced a number of embarrassing security failures over the last four years. Last week, the company disclosed that hackers backed by what it believed was an unnamed foreign government stole the credentials of 500 million users in a breach that went undetected for two years. It was the biggest known intrusion into one company’s network, and the episode is now under investigation by both Yahoo and the Federal Bureau of Investigation. The company is currently facing lawsuits from people who fear their accounts have been hacked and claim the company was “grossly negligent,” putting their financial and personal data at risk. http://www.nytimes.com/2016/09/29/technology/yahoo-data-breach-hacking.html ; http://money.cnn.com/2016/09/23/news/companies/yahoo-sued-data-breach/

What Facebook Thinks You Like is a new project from ProPublica. The tool, an extension for Google’s Chrome browser, lets users see exactly what activities, brands, and products Facebook, based on its data, thinks they like. The tool also tells users which advertising categories those interests place them in, and how many. https://www.propublica.org/article/breaking-the-black-box-what-facebook-knows-about-you

Snapchat will start selling sunglasses that record 10-second snippets of video this fall. http://www.cnbc.com/2016/09/26/why-snapchats-new-glasses-could-be-more-than-just-a-toy.html

A woman is suing the maker of We-Vibe, a smartphone-controlled vibrator, for secretly amassing “highly sensitive, personally identifiable information” from the device. https://www.theguardian.com/us-news/2016/sep/14/wevibe-sex-toy-data-collection-chicago-lawsuit

Intel’s new office in Israel will be ultra-smart, featuring face recognition software that replaces the need for identification badges, software that suggests carpooling with other users if an individual is often late to work, and software that recommends healthier eating based on an individual’s lunchtime diet. http://www.cnbc.com/2016/09/26/intels-office-of-the-future-is-a-micromanaging-monster.html

    A German data protection commissioner ordered Facebook on Tuesday to stop collecting and storing data on WhatsApp users in Germany. http://www.nytimes.com/2016/09/28/technology/whatsapp-facebook-germany.html