Month: March 2016

  • Redefining Fourth Amendment Law for The Digital Age

Redefining Fourth Amendment Law for The Digital Age

    By Macarena Troncoso

On March 11, Brian Farrell – accused of being a staff member of Silk Road 2.0 – pleaded guilty to conspiracy to distribute illegal drugs using the Tor network.

Silk Road 2.0 was a hidden service operating on the Deep Web until it was shut down by the FBI in November 2014. Farrell’s plea agreement could be the final chapter of a case that raises important questions about the protection the Fourth Amendment affords internet users.

Let’s begin with the undisputed facts: Carnegie Mellon University was funded by the US Department of Defense to carry out research that revealed the identities of several dark-market users. The information obtained during the study was later accessed by the FBI through a subpoena and permitted the identification of Brian Farrell as a prominent user of Silk Road 2.0.

To put it simply: Carnegie Mellon engaged in prolonged, prospective surveillance of the Deep Web, used government funding, and obtained results that were put to law enforcement purposes.

What differentiates this conduct from outsourcing police work to universities? Where is the line between private searches and government searches? Was Carnegie Mellon University acting as an agent of the government? Unfortunately, these essential questions will remain unanswered, since the court determined that the case did not involve a search, rendering irrelevant any discussion of state action as the necessary trigger for Fourth Amendment safeguards.

Indeed, in denying the defendant’s motion to compel discovery, Judge Richard Jones ruled that Tor users have no reasonable expectation of privacy in their IP addresses when using the Tor network, even though the purpose of Tor is precisely to hide the identity of its users, enabling them to communicate privately and securely and to access the internet anonymously. Relying on Forrester[1], the judge reasoned that in the course of using the Tor network, “an individual would necessarily be disclosing his identifying information to complete strangers”[2], and that this submission of information “is made despite the understanding communicated by the Tor project that the Tor network has vulnerabilities and that users might not remain anonymous”.
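The design at issue can be illustrated with a toy model. The sketch below is plain Python, not real cryptography; the relay names and the `build_onion`/`peel` helpers are invented for illustration. It shows the core idea of onion routing: each relay removes only its own layer, so the entry node learns who is connecting but not what is requested, while the exit node sees the request but not the sender.

```python
# Toy model of onion routing (no real cryptography): each nested tuple
# stands in for a layer of encryption addressed to one relay.

def build_onion(message, relay_keys):
    # Wrap the message once per relay; the innermost layer is for the exit.
    onion = message
    for key in reversed(relay_keys):
        onion = (key, onion)  # stand-in for "encrypt to this relay"
    return onion

def peel(onion, key):
    # A relay can remove only the layer addressed to it.
    wrap_key, inner = onion
    assert wrap_key == key, "relay can only remove its own layer"
    return inner

keys = ["entry", "middle", "exit"]
onion = build_onion("GET example.com", keys)

# The entry relay sees the client's IP address, but after peeling its
# layer it holds only another opaque onion addressed to the middle relay.
after_entry = peel(onion, "entry")
after_middle = peel(after_entry, "middle")
plaintext = peel(after_middle, "exit")  # only the exit sees the request
```

The point relevant to the court’s reasoning is that the “complete stranger” who learns the user’s IP address (the entry node) is, by design, the one party that never learns the content or destination of the communication.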

Applying the third-party doctrine announced in Smith v. Maryland[3], the judge presumed that individuals who convey information to third parties assume the risk of its eventual disclosure to the government. Is this notion workable in the digital era? Assuming this kind of risk appears to be an integral part of life in the twenty-first century. Every day, people turn over vast amounts of information to private and public entities through computers, mobile apps, and other devices connected to the web. Does that mean we have surrendered our expectations of privacy?

I believe we should not yield. In her concurrence in Jones[4], Justice Sotomayor called for a reevaluation of the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties, considering this approach “ill-suited” for the digital age. It is imperative for the courts to rethink and reshape the third-party doctrine and other fundamental notions, such as state action and the reasonable-expectation-of-privacy test, to attune them to the challenges posed by the internet era.

[1] United States v. Forrester, 495 F.3d 1041 (9th Cir. 2007).

[2] The mention of “complete strangers” refers to the individuals who host the network’s nodes.

    [3] Smith v. Maryland, 442 U.S. 735 (1979)

    [4] United States v. Jones, 132 S. Ct. 945 (2012)

  • HIPAA, Gun Control, and Mental Health

    HIPAA, Gun Control, and Mental Health

    By: Erika Asgeirsson

A new HIPAA rule issued in January will allow certain health agencies and medical facilities (“covered entities” under HIPAA) to report the identity of individuals subject to mental health disqualifications to the federal background-check database, to prevent them from purchasing firearms. 45 C.F.R. § 164.512(k)(7). Currently, those involuntarily committed to mental health institutions, those found incompetent to stand trial, and those deemed a danger to themselves or others are prohibited from shipping, transporting, possessing, or receiving a firearm. In the past, certain covered entities often did not disclose the identities of these individuals to the federal database for fear of violating HIPAA. The new rule, which the administration asserts only clarifies and does not change the law, is part of President Obama’s broader action on gun control.

As the Washington Post article notes, mental health advocates are split on this rule. While encouraged by increased attention to the need to effectively care for those suffering from mental illness, some advocates argue the rule unfairly targets the mentally ill, stigmatizes those suffering from mental illness, and is not based on data about gun violence actually committed by this community.

Analyzing this rule from a privacy perspective shows that this issue is much more complicated than it often appears. I agree that we need to take action to reduce gun violence. Attention to mental illness and its intersection with gun violence has recently become a common talking point. However, there are compelling interests on both sides that should make even those supportive of gun control think more critically about this rule. Such restrictions have the potential to reduce gun violence by ensuring that firearms do not fall into the hands of those who should not have them. On the other hand, the attitudes underlying this rule stigmatize those suffering from mental illness. The rule might discourage people from seeking needed treatment or unduly target those suffering from a mental illness without supporting evidence. Important privacy interests are at stake because HIPAA deals with very sensitive information that is often relayed through a health care provider, who has a protected relationship with the patient. (Note: the final rule does not apply to most health care providers but applies to entities the provider may report to.)

Given this context, it is important to think deeply about this rule and its implementation. The points below are preliminary suggestions, but I hope they encourage further conversation on this important issue. None of these problems is easy to solve, and addressing them will take a great deal of time and effort. They are, however, a starting point for ensuring that the competing interests, including privacy, are properly balanced. Some appear to be addressed in the administration’s rule, while others may require further action.

1. Keep the circle close. Ensure the information is shared only with the database and not with other related agencies. The new rule explicitly addresses whom the covered entity discloses information to (the database or a designated entity per 45 C.F.R. 164.512(k)(7)), but it is also important to address whom the federal database shares information with and what information is shared. Given the sensitive nature of the information, the database may need more rigorous limits on the extent of its disclosures than those imposed for other information it holds.
    2. Disclose as little as possible. Under the rule, the entity only discloses certain demographic and other data, and does not include the specific diagnosis or other clinical information. The extent of information disclosed to the database should be consistently reassessed to ensure only the information necessary is disclosed.
3. Use an evidence-based approach. Thresholds triggering the prohibition should be based on clear supporting data so that those suffering from a mental illness are not unnecessarily targeted. In addition to ensuring fair treatment, this also protects against overbroad disclosure and other infringements on privacy.
    4. Right to appeal. Just as it is important that a consumer has the opportunity to correct data collected on her, the information and determination must be subject to appeal. This includes appealing misidentification or incorrect classification. Procedures for appeal need to respect the privacy and dignity of the individual contesting the identification or determination.
5. Explore alternatives that are less intrusive to patient privacy. Other actions, such as increased funding for mental health treatment or gun training and licensing requirements, may be just as or more effective at reducing gun violence with a more limited intrusion into patient privacy. More research should be done to better evaluate the efficacy of these alternatives.
    6. Operate on principles that protect the dignity of those suffering from mental illness. Ensure that the rules and implementation do not stigmatize those suffering from mental illness. Privacy is often a central element to human dignity.


  • Germany Is Putting Facebook Through the Rounds

    Germany Is Putting Facebook Through the Rounds

    By Ryan P. K. Brown

Things do not bode well for Facebook in Germany. The country’s government has been stepping up its enforcement of user-protective laws against Facebook’s data collection practices. EU privacy laws are already far more restrictive than U.S. law on how websites may collect and use user data. Yet in less than two weeks, Germany pushed back three separate times against Facebook’s data collection and use policies, each time through a different means of restriction.

In late February of this year, the social media giant was fined 100,000 euros for failing to bring its terms of service into compliance with a 2012 order grounded in the European Union’s laws protecting user data. After the fine was issued, Facebook agreed to change its terms of service and said it would pay. Obviously, this sum is not much of a blow to the media giant’s massive bank account, but it is only one way in which the German government has put Facebook on notice about its data collection and use policies.

The next warning came on March 2 of this year. The German Federal Cartel Office (FCO), Germany’s competition watchdog, issued a statement claiming that Facebook may be abusing its dominant market position to violate user data privacy laws. The FCO announced an investigation into the terms of use of Facebook’s social media services, concerned in particular that Facebook is exploiting its dominance in the social media market to conduct illegal and unethical data collection and use practices.

Finally, media outlets reported that this past Wednesday, March 9, a German court ruled that Facebook’s “like” button may violate the law when embedded on commercial websites. The court specifically noted that the violation occurred when users were not informed that their data might be shared if they clicked a “like” button on a commercial site. The court warned that it could fine commercial websites that host the “like” button without any notice of how a user’s data may be shared. This warning was not aimed directly at Facebook, but the implications for the social media company (namely, increased friction around data collection and use) are clear.

Obviously these fines, rulings, and investigations are not, individually, much of a threat to a social media giant like Facebook, but the path Germany appears to be treading could lead to long-term difficulties and a shift in power. EU law is already much more user- and consumer-friendly than that of the United States. This tightening of the grip on Facebook, so to speak, is an indication of the further measures the German government is willing to take to protect users.

  • First Alexa, Now Fox! From valuable personal assistant to home and outdoor external spy…

    First Alexa, Now Fox!

    From valuable personal assistant to home and outdoor external spy…

    By: Annabelle Divoy

Why open a dictionary when you can ask Alexa the height of Mount Everest? Why painfully reach for the timer on the kitchen shelf when Alexa can tell you when to turn off the oven? Why even bother telling jokes to your children when Alexa can do it for you? This is only a very small, and seductive, preview of all the tasks that Alexa, the voice-controlled personal assistant behind Amazon’s Echo, is able to perform[1]. Launched in November 2014, Alexa has been welcomed into many homes at the reasonable price of $199 ($99 if you are an Amazon Prime member) and has definitely stolen Siri[2]’s thunder.

     


Although Amazon does not disclose its sales figures, its artificially intelligent personal assistant seems well received by consumers, despite all the threats it poses to their privacy. Indeed, in order to hear and execute the commands that consumers direct to “her” by calling “her” name, Alexa is constantly listening to everything that happens in the home. While this well-crafted gadget will certainly help you with many of your daily tasks and chores, entertain you, and stimulate your knowledge, it will also seriously invade your most private moments. Alexa will hear you narrating your full day of work to your husband, listen to your telephone call with your best friend Carrie, become your children’s best new companion, learn that you prefer pop music to jazz, know that you added chocolate and wine to your shopping list, and even know that you let your vegetables burn for the third time this week.

Alexa may thus quickly shift from valuable personal assistant to home-robot intruder and spy[3]. The amount of personal data it is able to collect, analyze, use, and/or disclose is high enough to be worrisome. And these privacy concerns grow considerably when you consider the risk that Alexa’s gigantic trove of data could not only be used by Amazon and its commercial partners, but also be pirated by outsiders. A look at Amazon’s Echo Terms of Use does little to calm that anxiety: “Alexa” does not even have its own privacy policy, referring only to Amazon’s general Privacy Policy[4].

Yet only a few seem truly concerned about Alexa’s dangers. For now, most consumers focus on the attractive functions of this high-tech gadget and are filled with excitement, as “Pringles-can-sized” Alexa will soon have a shorter, portable sibling[5]. As reported by the Wall Street Journal in January 2016, Amazon recently announced the upcoming launch of “Fox”, a voice-controlled personal assistant using Amazon Echo’s technology but fitting in the palm of your hand and requiring no power cord, allowing little Fox to be used outdoors rather than being placed under house arrest like tall Alexa.

Amazon’s business strategy and technological innovation certainly deserve applause and pose serious competition to others in the field. But when trading Alexa for Fox, or, even more so, combining both, and giving company to our already indiscreet iPhones, Androids, and computers, there might be very little room left for our still-precious privacy.

[1] “Introducing Amazon Echo”, Amazon’s official video, November 6, 2014. https://www.youtube.com/watch?v=KkOCeAtKHIc

    [2] Siri (Speech Interpretation and Recognition Interface) is Apple’s intelligent personal assistant.

[3] “Goodbye Privacy, Hello Alexa: Amazon Echo, the Home Robot who hears it all”, The Guardian, November 21, 2015,

    http://www.theguardian.com/technology/2015/nov/21/amazon-echo-alexa-home-robot-privacy-cloud

    [4] https://www.amazon.com/gp/help/customer/display.html?nodeId=201625490, linking to https://www.amazon.com/gp/help/customer/display.html?nodeId=468496

    [5] http://www.wsj.com/articles/amazon-to-release-portable-version-of-echo-speaker-in-coming-weeks-1452532671

  • Privacy Blog Assignment – Panel 6

Privacy Blog Assignment – Panel 6

    By: Ricardo Leite Ribeiro

The article linked here was published in the Wall Street Journal on March 2, 2016. It reports that the German antitrust regulator, the “Bundeskartellamt”, has opened an investigation against Facebook for “abuse of dominant position” in its harvesting of personal data from consumers. In the words of the head of the agency: “It needs to be clarified whether consumers are being sufficiently informed about the nature and scale of data collection”.

The news is relevant because it points to the possibility of enforcing competition law as a vehicle to guarantee privacy protections for consumers. This is a path that Europe may follow, especially with regard to abuse-of-dominant-position violations. Might antitrust be a new frontier for advancing privacy protection? Is there a role for it to play in this field? Are its instruments and tools suitable for the task? What remedies would be applied in this case?

From the article, it becomes clear that the accusation motivating the investigation is that Facebook is leveraging its monopolistic advantage as a social network to obtain advantages in the data market. As an antitrust problem, this might be classified as using market power acquired in one specific market to restrain competition in an upstream market. This is particularly interesting because in the U.S. this conduct is very unlikely to be a violation of § 2 of the Sherman Act, particularly after Trinko.

    http://www.wsj.com/articles/facebook-faces-antitrust-investigation-in-germany-1456920796

     

     

     

  • Pierre-Paul’s Medical Disclosure Claim Against ESPN: Issues of Intersecting Privacy Torts

    Panel 5

    Pierre-Paul’s Medical Disclosure Claim Against ESPN: Issues of Intersecting Privacy Torts.

    By: Eliza Marshall

The New York Giants’ Jason Pierre-Paul’s suit against ESPN[1], which pertains to ESPN’s publication[2] of medical records linked to Pierre-Paul’s index-finger amputation last summer, provides fruitful grounds for exploring the territory covered by intersecting, and perhaps under-inclusive, legal regimes in the medical information context. Pierre-Paul’s claim appears to fall between two broad legal regimes: HIPAA and Florida’s state medical information statute.[3] HIPAA’s protection is broad in that it focuses on source rather than content or publication, avoiding questions of harm in the context of medical information, but narrow in that it covers only certain entities and their business associates. Florida’s law, in contrast, is more limited in terms of content, publication, and harm, but broader in that it applies outside the covered entities listed in HIPAA.[4] Yet Pierre-Paul’s claim may lie in territory covered by neither law, demonstrating a gap between regimes that is arguably worth addressing by expanding one or both.

Under HIPAA, the content of the disclosure is clearly covered. But the statute does not regulate the behavior of ESPN. ESPN is not a “covered entity,” and it falls outside even the more expansive definition of “business associate” because it does not (and did not) receive, maintain, or transmit personal health information for any of the functions or activities listed in the regulation.[5] HIPAA is premised on the notion that medical information is uniquely sensitive and that its disclosure inherently involves privacy harm, so the statute requires no inquiry into harm or publicity and protects the entire category of medical data as vulnerable to disclosure, even if no unauthorized access ever occurs. One can question, therefore, why entities like ESPN should not be forced to treat this information with care. But the answer seems clearly to be that HIPAA does not cover them, so any claim thereunder would have to be against the health care provider who gave the records to ESPN in the first place, and theirs are the only (presumably shallower) pockets Pierre-Paul can tap.

State law picks up where HIPAA leaves off,[6] but, like the wider genre of Prosser’s privacy torts, presents Pierre-Paul with its own set of obstacles. ESPN is covered under Florida’s statute, but it is not clear that the disclosure that occurred is actionable. First, it is not clear that a private right of action exists. Second, unlike HIPAA, Florida’s statute requires Pierre-Paul to prove concrete harm from the disclosure of his medical records. Especially in light of First Amendment limits on the publication of true facts, Pierre-Paul faces an uphill battle. It is not clear what information other than the amputation was included in the medical records. But arguing on the basis of the amputation alone, he will have to craft a convincing explanation of an injury suffered simply by the timing of the disclosure; as a professional athlete whose occupation is highly public, this information would not have been secret for long.[7] His absence, or his finger’s, would surely lead to speculation and would be easy to detect with the naked eye even without detailed medical records. As for information beyond the fact of amputation, Pierre-Paul may have a harder time describing how ESPN’s disclosure harmed him in any concrete way. Still, the Shulman[8] case supports a court finding offensiveness, and the potential existence of a special zone of privacy that journalists must respect in the medical context and in the relationship between a medical provider and a patient, which gives Pierre-Paul some hope. Intuitively, having a journalist publish one’s medical records is a highly offensive and unacceptable invasion of privacy; most people would certainly object to it happening to them. Yet the legal result is far less straightforward. This may suggest the need for new methods of protecting privacy that avoid the difficulties of proving harm.

    [1] Link to Article: https://www.law360.com/articles/764455/nfl-player-must-tackle-common-privacy-pratfall-in-espn-suit

    [2] An ESPN reporter tweeted an image of Pierre-Paul’s medical records, reaching nearly 4 million twitter followers.

    [3] Fla. Stat. § 456.057.

    [4] The Florida statute applies to any “records custodian” which is defined as any person or entity that “obtains medical records from a records owner,” which seems to include ESPN. § 456.057(3)-(4).

    [5] These include “claims processing or administration, data analysis, processing or administration, utilization review, quality assurance, patient safety activities listed at 42 CFR 3.20, billing, benefit management, practice management, and repricing.” 42 CFR § 160.103.

    [6] 42 CFR § 160.203(b).

    [7] The article references the Hulk Hogan case and its potential for revealing the promise of Pierre-Paul’s claim for harm in this case, but surely the expectation of privacy is far higher in intimate sexual activity than it is in the presence or absence of a publicly visible body part—regardless of celebrity status.

[8] Shulman v. Grp. W Prods., Inc., 955 P.2d 469, 479 (Cal. 1998), as modified on denial of reh’g (July 29, 1998).

  • Privacy Blog (1)

    Privacy Blog (1)

    By: Maggie Kornreich

    Professor Rubinstein

March 24, 2016

    http://www.natlawreview.com/article/health-apps-and-hipaa-ocr-publishes-new-guidance-health-app-developers

    This article addresses whether mobile device applications are subject to HIPAA regulations. In February, the Department of Health and Human Services’ Office for Civil Rights (OCR) released Health App Use Scenarios & HIPAA to examine if HIPAA applies to apps that “collect, store, manage, organize, or transmit health information.”

The Health App Guidance provides six scenarios and determines whether HIPAA would apply to the app developer in each instance. The first scenario involves a consumer who downloads a health app and provides it with her personal information in order to organize that information without involving her healthcare providers. Here, the consumer is neither a covered entity nor a business associate, so the app developer is not subject to HIPAA. The second scenario involves a consumer who downloads a health app to manage a chronic condition. The consumer retrieves data from her doctor’s electronic health record, as well as her own information, to put into the app. The consumer is not a covered entity or business associate, and the healthcare provider did not hire the app developer for the service, so the developer is not subject to HIPAA. The third scenario involves a consumer who downloads an app at her doctor’s recommendation to track diet and exercise, and who sends a report to the doctor before her next appointment. The doctor did not hire the app developer, so the developer is not subject to HIPAA.

The fourth scenario involves a consumer downloading an app to manage a chronic condition, where the app developer and the healthcare provider have an interoperability agreement, at the consumer’s request, to exchange consumer information. The consumer inputs her own information into the app. The developer is not subject to HIPAA because it is not creating, maintaining, or transmitting personal health information on behalf of a covered entity or business associate. In the fifth scenario, a healthcare provider contracts with the app developer for patient management services, and the provider instructs patients to use the app. Here, because the provider is a covered entity and the developer is considered a business associate, the developer is subject to HIPAA. The sixth scenario involves a health plan that offers a health app allowing members to store health records, check the status of claims, and track their wellness information; the health plan analyzes that information. The developer is considered a business associate and the health plan a covered entity, so the developer is subject to HIPAA.
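The common thread across the six scenarios can be distilled into a single question. The sketch below is a rough, illustrative summary of the guidance, not legal advice; the function name and boolean inputs are my own simplification rather than OCR's language. In essence, a developer becomes a business associate subject to HIPAA when it handles health information on behalf of a covered entity, not when the consumer adopts and feeds the app on her own initiative.

```python
# Illustrative distillation of the OCR scenarios (a simplification, not
# the regulation itself): an app developer is subject to HIPAA as a
# "business associate" only when it creates, receives, maintains, or
# transmits health information ON BEHALF OF a covered entity such as a
# provider or a health plan.

def developer_subject_to_hipaa(handles_health_info: bool,
                               on_behalf_of_covered_entity: bool) -> bool:
    """Rough first-pass test suggested by the six OCR scenarios."""
    return handles_health_info and on_behalf_of_covered_entity

# Scenarios 1-4: the consumer downloads and feeds the app on her own
# initiative (even at a doctor's suggestion), so HIPAA does not apply.
consumer_driven = developer_subject_to_hipaa(True, False)   # False
# Scenarios 5-6: a provider or health plan contracts with the developer,
# so the developer handles the data on a covered entity's behalf.
entity_driven = developer_subject_to_hipaa(True, True)      # True
```

The real analysis turns on facts the booleans gloss over (who hired whom, and for what function), but the sketch captures why the doctor's mere recommendation in scenario three does not trigger HIPAA while the contract in scenario five does.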

This article is interesting and informative because it outlines when a developer or company will be subject to HIPAA. This is increasingly important as people rely on their phones and mobile apps for most, if not all, of their personal affairs. It is also significant in that it highlights instances where people share health information, which many deem to be extremely private, in electronic form.

  • Information Privacy: Blog Post (Panel 6)

    Information Privacy: Blog Post (Panel 6)

    By:Harry Grabow

    http://www.marketplace.org/2016/02/10/business/new-frontier-voter-tracking

    http://fusion.net/story/268108/dstillery-clever-tracking-trick/

    http://www.usatoday.com/story/news/politics/onpolitics/2016/02/08/company-tracked-iowa-caucusgoers-phones/80005966/

In the first United States presidential campaign since the 2013 Snowden disclosures, candidates have expressed varying opinions of both the man and the reality he exposed: some have mildly praised his impact on the privacy discourse while questioning the legality and intelligence of his conduct (see here and here), while others have gone as far as insisting that he is a traitor, or even an active Russian spy, prompting a response from the Kremlin itself.

However, while the candidates’ focus remains on the legitimate national security and civil liberties implications of government surveillance, another use of data collection is taking the campaign season by storm: the creation of voter profiles based on the mobile-device advertising profiles of individuals at polling places. Voter-preference tracking has traditionally been the domain of phone-based public polling and on-site exit polls, but digital advertising company Dstillery engineered a way, both creative and creepy in equal measure, to track voter preferences by matching mobile device identifiers present at the geographical locations of caucus sites to the digital advertising profiles of the devices’ owners. Specifically, the company monitored the real-time ad bidding that occurs each time a mobile device opens an app or website, captured the identifier associated with the serving of an ad, and then researched additional characteristics associated with that device.
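The matching step described above can be sketched in a few lines. Everything in the snippet is hypothetical — the record fields, coordinates, radius, and profile data are invented for illustration and are not Dstillery's actual system; it simply shows how device identifiers seen in a bid stream could be geofenced to a caucus site and joined to pre-existing advertising profiles:

```python
# Toy sketch of geofencing bid-stream device IDs to a caucus site and
# joining them to advertising profiles. All data and field names are
# hypothetical.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def devices_at_site(bid_requests, site, radius_km=0.2):
    """Device IDs whose reported location falls within the geofence."""
    return {r["device_id"] for r in bid_requests
            if km_between(r["lat"], r["lon"],
                          site["lat"], site["lon"]) <= radius_km}

# Hypothetical bid-stream records and one invented caucus-site location.
bids = [
    {"device_id": "aaa", "lat": 41.5868, "lon": -93.6250},
    {"device_id": "bbb", "lat": 41.9000, "lon": -93.1000},  # far away
]
site = {"lat": 41.5868, "lon": -93.6250}
profiles = {"aaa": {"interests": ["home improvement"]}}

# Join the geofenced device IDs to their existing ad profiles.
matched = {d: profiles.get(d) for d in devices_at_site(bids, site)}
```

The privacy-relevant point is that no new data collection is needed for this step: the location observations and the interest profiles already exist in the advertising ecosystem, and the "tracking" is just a join between them.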

    Though not conducted with the scientific rigor of a public poll, the results captured voter characteristics not usually polled by campaigns. For example, according to an analysis of the results conducted by USA Today: voters expecting a newborn child tend to be Republican and had a greater concentration at Senator Marco Rubio’s caucus sites; voters in locations with strong support for Donald Trump had a penchant for outdoor activities and home improvement; and tech-industry workers and enthusiasts were more concentrated in Senator Bernie Sanders’ caucus sites than those of former Secretary of State Hillary Clinton.

    In an interview with news site Fusion, a representative from Dstillery explained, “One thing that isn’t in the data is personal identifiable information. The data and system are completely anonymous. We have no idea, for example, what your name is. All we see are behaviors and everything we do is based on analyzing those behaviors writ-large.”

So, while identifying individual voters and their preferences through this method of data collection might not be a present concern, the prospect of campaigns themselves utilizing these tactics might subject Iowans to an even greater saturation of political advertising in 2020, on top of the astounding $70 million spent in 2016. And, with many Iowans lamenting the constant barrage of TV and radio ads, campaigns might be eager to attempt this new, more subtle approach.

  • PRG News Roundup: March 23rd

    PRG long-time participant Luke Stark’s research on emotion has been published in Information Society: https://starkcontrastdotco.files.wordpress.com/2014/09/01972243-2015.pdf

    DOJ postpones hearing on Apple case: http://fortune.com/2016/03/21/doj-court-hearing-apple-postpone/

    Utah tests online voting in state caucus: http://www.npr.org/2016/03/22/471474745/utah-republicans-to-test-online-voting-in-states-caucus

     

  • Privacy Blog

    Privacy Blog

    By: Mary Churan Huang

    http://www.irishtimes.com/business/technology/google-eu-s-data-protection-authorities-have-absolute-focus-on-privacy-1.2556597

This article discusses the positions taken by some of the world’s largest multinational companies with respect to the EU’s incoming General Data Protection Regulation (GDPR). The GDPR is set to replace the existing EU Data Protection Directive of 1995. The EU’s existing privacy laws need updating: technology has evolved rapidly, and the current framework does not properly cover critical issues such as social networks and cloud computing. The GDPR is designed to be a more comprehensive and wide-ranging legal framework that will apply EU-wide, superseding local privacy laws. It is intended to strengthen and unify data protection within the EU under a single legal framework, allowing international businesses to comply with EU privacy law more easily while providing more comprehensive protection for EU residents.

Both Google and Microsoft seem to regard the GDPR as a necessary step in data protection, and neither anticipates a major change in its policies, as both have already taken significant steps to enhance privacy over the years. Microsoft is concerned about the heavy penalties set out in the GDPR and has indicated that the new regime will be a great test of whether its privacy controls are working the way they should. Google indicated that it will work closely with regulators to find a rational way of interpreting some of the ambiguities in the GDPR.

Adobe seems more critical of the GDPR, expressing disappointment that the framework is not as unifying as intended. Adobe is of the view that the new framework will still be fragmented and subject to interpretation by local authorities. The responses of these major multinational companies are unsurprising, considering that any new regime brings much uncertainty. However, all of these companies have expressed a willingness to comply and to work with the relevant regulators to find a solution.