Month: April 2013

  • Comments to Dept of Commerce on Protecting Critical Infrastructure

    As a result of a recent Executive Order, the Administration is seeking comments on ways to protect national security. I was invited to submit comments to the Department of Commerce on this topic. There is a genuine difficulty in understanding and developing public policies that protect privacy or achieve secure IT systems.

    Balance.

    How much privacy should we have? How much security should there be? No one really knows, yet everyone has an opinion. And most opinions are reasonable. In the case of IT security, this has been an outstanding question for 20 years now; for privacy, maybe about half that. In my Comment, I argue that while most consumer advocates want “more spending!”, “more” may not be “better.” The reason is waste. It is wasteful to spend more for a benefit that is less than the cost. So firms, just like individuals, should balance costs with benefits. It’s wasteful to do otherwise.
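    The balancing point can be made concrete with a toy calculation. Here is a minimal sketch in Python, with entirely invented numbers and a made-up diminishing-returns curve, showing that the spending level maximizing net benefit sits well below “spend as much as possible”:

    ```python
    # Toy model: security spending buys avoided losses, but with diminishing
    # returns. All numbers and the shape of the curve are invented for
    # illustration only.

    def net_benefit(spend, baseline_loss=100.0):
        """Expected loss avoided minus cost, for a given spending level."""
        # Each extra dollar avoids less expected loss than the one before.
        loss_avoided = baseline_loss * (1 - 0.5 ** (spend / 20.0))
        return loss_avoided - spend

    # The best level balances marginal cost against marginal benefit;
    # spending the maximum possible amount is strictly worse.
    best = max(range(0, 101), key=net_benefit)
    ```

    Under these made-up numbers, the optimum lands somewhere in the middle of the range: spending more past that point destroys value, which is the sense in which “more” is not “better.”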

    In my Comment I next present policy mechanisms that can be used to address this balance: not necessarily ways to find the optimal level of security or privacy protection, but ways the government can induce better (i.e., optimal) behaviors. I talk about regulation, disclosure, taxes, liability, nudging, etc. These approaches all have their benefits AND limitations. So it’s not a matter of which is best, but of understanding the conditions under which each is appropriate (or not). I find it all very fascinating, and hopefully you do too.

    Next, I discuss cyberinsurance. As you might imagine, this is an insurance product that firms purchase in order to reduce the cost of data breaches and security incidents. In short, this insurance covers losses that the firm itself suffers from being hacked (for instance), fines and regulatory sanctions, and third-party liability from any resulting lawsuits. The market may not be big now, but it is expected to approach $1 billion in total premiums. That’s a lot. (Though, to put it in perspective, it would be nice to know the size of other corporate insurance markets. If any reader knows, please send me a note.)

    What is most interesting about insurance is the ability — or at least the potential — to help reduce risky behavior for the insured, and across an industry. Despite moral hazard, there do appear to be practical ways to reduce risky behavior, and even to induce actors to become safer. It’s a wonderful opportunity. And moreover, insurance companies have available to them data that would be invaluable in determining which security controls are best at preventing data and privacy breaches. My Comment concludes with a plea to insurance carriers to work with researchers like me in answering those questions. It can be done, and I’d love to try!

     

    The formal call: http://www.ntia.doc.gov/federal-register-notice/2013/notice-inquiry-incentives-adopt-improved-cybersecurity-practices

    My comments: http://www.ntia.doc.gov/federal-register-notice/2013/comments-incentives-adopt-improved-cybersecurity-practices-noi#comment-29922

     

    cheers,

    Sasha

  • FAA

    The FISA Amendments Act (FAA), enacted in 2008 and extended in 2012, has been the subject of much controversy of late. The Act authorizes the Attorney General or the Director of National Intelligence to gather intelligence information on individuals who are “reasonably believed to be out of the United States.”[1] Of course, the Act places several restrictions on the government in order to prevent the warrantless seizure of information on U.S. citizens. Beyond these restrictions, which mainly prohibit intentional misuse of the Act to collect information from people in the United States or U.S. persons abroad, the FAA also provides for judicial review of the targeting procedures that the government uses to gather information. However, one large concern of the Act’s opponents is that the information gathered and the judicial review process are largely confidential.

     

    The Foreign Intelligence Surveillance Court (FISC) handles judicial review of FAA cases. The court writes full, binding opinions on the permissibility of certain targeting and surveillance practices of the federal government. Proponents of the Act support the confidentiality interests of those conducting foreign intelligence gathering to keep us safe: keeping this information confidential is essential to protecting their efforts to thwart potential foreign attacks. Many citizens, including several senators, however, were not keen on extending the life of the Act without increasing transparency in an otherwise opaque process.

     

    One major concern over transparency is that while the government may not intend to collect information on people in the United States or our citizens abroad, we have no idea how much inadvertent surveillance of American citizens the government has conducted. We also don’t know how the FISC is interpreting the statute, or whether its interpretation is markedly different from Congress’s intent. Organizations like the ACLU believe that the American people have a right to know how effective current procedures are at keeping American citizens and those in the U.S. from mistakenly having their privacy interests infringed upon.[2] At the same time, Congress has no way of knowing whether it should change the wording of the Act to ensure that the court interprets the statute as intended. Two proposed amendments to the FAA arose out of this concern.

     

    In 2012, when the Senate voted on extending the deadline for the FAA, Senator Jeff Merkley (D-OR) introduced S. 3515[3] and “put the Senate to a vote on whether the administration should be forced to release the court opinions, supply unclassified summaries of them, or explain why they should be kept secret.”[4] Finding that “Secret law is inconsistent with democratic governance. In order for the rule of law to prevail, the requirements of the law must be publicly discoverable,” Merkley’s proposed Amendment would require the Attorney General to disclose each decision, order, or opinion of the FISC that includes significant interpretations of FISA. If declassification of the full text would compromise national security, then the AG should provide summaries of the opinions. If even that would compromise national security, the Amendment asks the AG to report on the status of declassifying these materials.

     

    Those who opposed this Amendment worried about the potential dangers of requiring the administration to broadcast classified information to the world, putting all Americans at grave risk. Further, they believed the Amendment was unrealistic: these opinions contain facts about current surveillance techniques and targeted subjects that cannot be separated out. Finally, though Senator Merkley’s Amendment allows for summaries and updates that might avoid some of these national security issues, a major concern for the Senate was timing. Indeed, the Senate discussed this proposed amendment on December 27, 2012, just four days before the President had to sign the bill.[5]

     

    What do you think? Was this proposed amendment worth holding up a bill that helps monitor potential foreign threats? Should we be concerned about “secret legal opinions”? Is this just the price we pay for a safer America?



    [1] 50 U.S.C. §§ 1801-1885 (2012), available at http://uscode.house.gov/download/pls/50C36.txt

    [2] Press Release, ACLU Background on FISA Amendments Reauthorization Act of 2012 (December, 27, 2012).

    [3] Protect America’s Privacy Act, proposed Apr. 2, 2012, available at http://thomas.loc.gov/cgi-bin/bdquery/D?d112:28:./temp/~bdhJ2T::

    [4] Michelle Richardson, Warrantless Wiretapping Wins Again, ACLU Blog of Rights (Jan. 2, 2013), http://www.aclu.org/blog/national-security/warrantless-wiretapping-wins-again.

    [5] Congressional Record for Senate, 112th Congress (Dec. 27, 2012).

  • CISPA and Cyberspace Anonymity

    By: Ross Woessner

    Great controversy surrounds the proposed Cyber Intelligence Sharing and Protection Act (“CISPA”), which passed the House and is currently in the Senate.  The bill provides for voluntary information sharing between private companies and the government in order to prevent or mitigate cyberattacks.  For example, if the government detects a cyberattack threatening Google or Twitter, it could inform those companies of the threat; likewise, Google could notify the government if it detects suspicious activity on its networks.  Part of the bill’s rationale is the increasing number of cyberattacks on American companies emanating from China and Iran.

    This has alarmed civil liberties groups because of the ease with which private communications companies can share users’ information with the government.  CISPA is written broadly enough that such companies could provide someone’s text messages, emails, or cloud-shared files.  The bill authorizes such disclosure “notwithstanding any other law,” which according to the Electronic Frontier Foundation, “essentially means CISPA would override the relevant provisions in all other laws,” and thus creates “a cybersecurity loophole in all existing privacy laws.”

    But as Solove and Schwartz note on page 590, Internet “anonymity is quite fragile, and in some cases illusory.”  Indeed, Business Insider has noted that CISPA merely legalizes already common cybersecurity practices.  The Electronic Privacy Information Center (“EPIC”), through a FOIA request, obtained documents describing a well-established information-sharing program between the Department of Defense, the Department of Homeland Security, and private companies, including immunity provisions for those companies.  This is particularly worrisome because the Obama administration has publicly threatened to veto CISPA “while privately granting immunity to [private companies] as they collaborate with government agencies to evade wiretapping laws.”  Thus, CISPA’s practical impact would be minimal, because the practices it authorizes are already widely used.

    http://www.pcmag.com/article2/0,2817,2417993,00.asp (“What is CISPA, and Why Should You Care?”)

    http://www.businessinsider.com/cispa-legalizes-common-secret-practices-2013-4

  • Domestic Security

    By: Elena D. Lobo

     

    The past two weeks have brought events that are surely making many government officials and privacy scholars think about our current policies in a new light. In some ways, what occurred in Boston reawakened fears we felt in the aftermath of the 9/11 attacks in 2001. In the same week, mysterious ricin-laden envelopes were sent to the White House. Homeland security officials are now forced to make decisions with respect to many of the issues we examine in a class like Information Privacy. The Boston bombing turned into a manhunt that upended what was set to be a beautiful, patriotic Monday, and a suspect has been apprehended in the mailings incident. The main difference between these events and those of 2001, however, is that the perpetrators (as far as we know, and as far as the news media and government have told us) were American citizens.

     

    The aftermath of the 2001 attacks resulted in an overhaul of our privacy regulations. The Patriot Act was passed with very little opposition. Many were generally OK with it because the people we were being protected against, the terrorists, were “out there;” they were the “other.” Well, it appears that now terrorists can be “one of us.” Once again, privacy laws are being questioned, and similar discussions are taking place about how much privacy we are willing to give up in the name of anti-terrorism and public safety. The information privacy measures once reserved for foreign terrorism suspects now threaten to be used at home. Does the fact that more and more American citizens are participating in terrorist activities mean our privacy policies will have to expand to include more and more surveillance of Americans?

     

    What is becoming apparent is that the once nebulous idea of “terrorism” that we have generally been so quick to blame for various atrocities we fall victim to as Americans is starting to bump up against a thinning border between “us” and “them.” And our government has to respond. In fact, all governments do. Scott Helfstein argues in an article in Foreign Affairs that security surveillance needs to become more globally cooperative. Of course, this sounds ludicrous. Why would we share our intelligence with, say, post-Arab Spring countries? We may be able to help each other… but would it endanger us much more than it would help? That is the fear, but is there a way to get the benefit without compromising our own national security? http://www.foreignaffairs.com/articles/139337/scott-helfstein/intelligence-lessons-from-the-boston-attacks

     

    As far as we know, the channels are already open for increased surveillance. In fact, it is nearly impossible to know how much nonconsensual surveillance is already being conducted. We know that the CIA and the FBI can request access to emails sent 180 days prior without a warrant or judicial review of any kind. We know that FISA allows surveillance of international communications made by Americans. We know that the Department of Homeland Security trolls our Facebook and Twitter accounts for buzzwords that may lead to further monitoring. And now we know that the IRS can access our emails without a warrant, in the name of policing tax crimes. (http://www.washingtonpost.com/blogs/post-politics/wp/2013/04/23/ma-senate-candidates-feud-over-homeland-security/).

     

    Our laws are not adapting quickly enough to our changing environment. It’s a dilemma that can only be fixed by making more laws, and faster. But with that comes the fear of carelessness, and in an area like homeland security, that is something we just can’t afford. Is it crazy to think the next step may be a computer that can draft and adapt laws for us? After all, it would be faster…

     

  • Could Immigration Reform Lead to Biometric ID Cards?

    By: Zach Portnoy

     

    A new proposal has emerged in the ever-controversial debate on immigration, one that would affect not only immigrants but also U.S. citizens.  The proposal, headlined by Senator Chuck Schumer (D-NY), would require all U.S. citizens and legal immigrants who wish to work to obtain biometric Social Security ID cards.  While many, if not most, of the details are still being hashed out, the general idea is as follows.  All employers would be required to scan their workers’ biometric ID cards in order to verify their identities.  According to the Senators, “each card’s unique biometric identifier would be stored only on the card; no government database would house everyone’s information.  The cards would not contain any private information, medical information, or tracking devices.”  The cards would be used in place of the current E-Verify system, which has not been particularly effective, to prevent unauthorized persons from working in the U.S.

     

    These statements raise a number of questions.  First, what type of biometric identification system would be used?  Biometrics refers to information about a person’s body, including anything from height and weight to fingerprints and iris scans.  The Senators propose that the biometric ID cards would not contain any private or medical information.  Yet for the biometric ID cards to work, they necessarily must use a piece of information that is unique, does not change, and is not duplicable.  That would seem to fall in line much better with an identifier such as a fingerprint, something that is most definitely private information.

     

    Moreover, the Senators propose that there would be no central database holding everyone’s information.  But when employers scan these biometric ID cards, won’t they need some way to independently verify the information?  Yet a central government database storing unique biometric information would raise serious privacy concerns.

     

    There are numerous other concerns with mandating biometric ID cards.  How long would it be before everyone, not just workers, was required to carry such a card?  And if everyone is carrying a national ID card, how long until it must be shown to travel on airplanes, purchase a gun, or identify yourself to law enforcement?  Each step would seem to be the logical extension of such a program.

     

     

    http://www.huffingtonpost.com/2013/01/31/immigration-reform-biometric-id_n_2594285.html

     

    http://communities.washingtontimes.com/neighborhood/tekknotes/2013/apr/23/tech-tuesday-what-biometric-id-card-and-why-do-we-/

  • Online Voting

    By: Elizabeth Filatova

    Voting in the United States is a huge hassle, and after every presidential election there is discussion at all levels of government of ways voting can be improved. Unlike the United States, Estonia introduced online voting, in 2005. Estonians are very happy with the convenience of their system: the percentage of the population voting online rose from 2% in 2005 to 25% in 2011. Estonians are issued a government ID which gives them a unique online identity. After Estonians vote, their ballots are encrypted to preserve anonymity. Even though the government guarantees secure transactions, Estonians’ identities are authenticated by a party independent of the government. Furthermore, to ensure that voters are not voting under duress, the system allows them to override a prior electronic vote by voting again online or at a polling site.
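    The override rule described above amounts to a “last vote counts, and paper beats electronic” policy. A minimal sketch in Python (all names and data structures here are illustrative assumptions, not Estonia’s actual system):

    ```python
    # Toy sketch of a "last vote counts, paper beats electronic" override rule:
    # a voter may re-vote online any number of times, and a ballot cast at a
    # polling site cancels all electronic votes. Illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Vote:
        voter_id: str
        choice: str
        timestamp: int          # later = more recent
        on_paper: bool = False  # True if cast at a polling site

    def tally(votes):
        """Return {voter_id: choice}, applying the override rules."""
        final = {}
        for v in sorted(votes, key=lambda v: v.timestamp):
            prev = final.get(v.voter_id)
            # A paper vote always wins; otherwise the latest electronic vote
            # counts, but an electronic vote never displaces a paper vote.
            if prev is None or v.on_paper or not prev.on_paper:
                final[v.voter_id] = v
        return {vid: v.choice for vid, v in final.items()}
    ```

    The point of the design is that a coerced voter can always cast another, later vote in private, so an observer watching any single vote being cast learns nothing reliable about the final ballot.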

    Estonians also use their ID for a variety of purposes, like paying bills or taxes online. Inside the ID is a chip that holds information about the card’s owner and two certificates, one used to authenticate identity and the other to render a digital signature. Each person who uses the ID online has a card reader attached to their computer. The ID card is secure because a PIN code is assigned to each chip and is required every time the card is used. Estonians can also use their cellular phones for identification, which means they don’t need an ID card reader for their computer; the phone acts as both the card and the reader. Over 90% of Estonians have an electronic ID that they use for an ever-increasing variety of purposes.

    According to the Estonian President, Toomas Hendrik Ilves, this identification system makes Estonia’s economy stronger and helped lessen the effects of the DDoS attacks of 2007. Furthermore, Estonians have legal ownership of their own data and are thus able to access their financial and medical information online. This makes them more comfortable with their ability to maintain privacy.

    However, computer scientists are not convinced. They say that a system able to accurately count votes while keeping the information anonymous has not been invented, and that anything short of perfection is not acceptable for the purposes of voting. There is no way to tell whether existing systems, like the one in Estonia, are secure, because discrepancies are so hard to detect.

    http://www.washingtonpost.com/blogs/wonkblog/wp/2012/11/06/estonians-get-to-vote-online-why-cant-america/

    http://www.nytimes.com/2013/04/12/opinion/global/cybersecurity-a-view-from-the-front.html?pagewanted=all

    http://estonia.eu/about-estonia/economy-a-it/e-estonia.html

    http://www.technologyreview.com/news/506741/why-you-cant-vote-online/

  • The government is attempting to create a de facto national ID database (the struggle over REAL ID and the proposed amendments to E-Verify).

    By: Piotr Semeniuk

    According to the National Conference of State Legislatures, last week the Department of Homeland Security confirmed that an additional six states – Alabama, Florida, Kansas, Nebraska, Utah and Vermont – comply with the REAL ID Act. The REAL ID Act, enacted to enhance national security after 9/11, sets minimum document criteria for state-issued driver’s licenses and identification cards.

    Under the REAL ID Act, non-compliant state IDs will be starkly disadvantaged under federal law. The bottom line is that non-compliant IDs will not be accepted for so-called federal “official purposes,” e.g., boarding a commercial plane or entering a federal facility.

    Beloved by some conservative think tanks (such as the Heritage Foundation) and vigorously questioned by civil rights advocates (such as the ACLU), the Act has also triggered opposition among the states themselves. Last year several state officials, including Montana’s governor Brian Schweitzer (listen to the governor discussing his opposition to REAL ID here), sent formal statements to Congress underlining the exorbitant costs of the REAL ID Act’s implementation as well as privacy concerns.

    The opposition of some states has given rise to a strange legal and political landscape, in which DHS regularly sets deadlines for implementation of the REAL ID Act and states have constant trouble meeting them (to say the least). As a result, full implementation is constantly delayed while the states and their citizens face no sanctions. The most recent deadline was to lapse on January 15, 2013. However, in December 2012 DHS announced that after January 15, 2013, “states not found to meet the standards will receive a temporary deferment.” This means that residents of the non-compliant states (still the majority of states) will be allowed to enter federal buildings and board interstate flights. So far, the length of the deferment remains undefined, and DHS has promised to develop a schedule for the phased enforcement of the REAL ID commitments “by early fall 2013.”

    Where is this all going? It seems that nationwide implementation of the Act is stuck in limbo. My guess is that the federal government will not resort to the sanction of rejecting cards issued by non-compliant states. Such rejection would cause, with regard to the ban on boarding planes, a paralysis of movement within the whole country. DHS even admits its unreadiness to hit ordinary people with sanctions by announcing that, while developing a schedule for the phased enforcement of the Act, residents of all states “will be treated in a fair manner.” Hence, at least so far, the rebellious states will likely have the final say on the implementation of the REAL ID Act.

    However, advocates focused on the REAL ID Act should be cautious not to overlook another legislative effort that comes close to what some people call “a de facto national ID database.” What I have in mind is the so-called E-Verify system. E-Verify is a national, electronic database administered by the Department of Homeland Security (you can access E-Verify here) where employers can check whether a person can legally work in the US. So far the system has been voluntary for employers. Those who participate are required, when they hire an employee, to enter the employee’s information into the system via the web. E-Verify then determines whether the employee is approved. The system has been criticized for many flaws, including frequent errors leading to mischaracterization of employees’ status (watch Chris Calabrese of the ACLU discussing the downsides of E-Verify here).

    According to the ACLU, last week a group of eight bipartisan senators (the so-called Gang of Eight) proposed a reform to federal immigration laws expanding the scope of E-Verify. If the proposal were passed, E-Verify would come even closer to a de facto federal ID database. First, the proposal calls for mandatory employer participation in E-Verify; second, if passed, it would require states to supply E-Verify with data on state driver’s licenses (including photographs).

    It is up for discussion whether it is a successful implementation of the REAL ID Act or a potential modification to the E-Verify system that will bring the US closer to having a de facto national ID database. One thing is certain: there are forces in DC obstinately pushing for electronic collection of more and more identifying data.

  • Blogger Anonymity in Defamation Lawsuit: Thomas Cooley Law School v. Doe

    By Sisi Wu

    In 2011, Thomas Cooley Law School filed a defamation lawsuit against a former student who criticized the school on his blog, which he called “Thomas M. Cooley Law School Scam.” The blogger, “John Doe,” sought a protective order from the trial court to prevent Cooley from disclosing his real name in court documents. The trial court ruled against Doe, finding that slander per se (which Cooley sufficiently alleged in its complaint) is not protected by the First Amendment.

    On April 4, 2013, the Michigan Court of Appeals reversed. The opinion surveyed various standards in other jurisdictions for determining when a plaintiff has the right to learn the identity of an anonymous defendant. Without adopting a clear standard, the appeals court determined that the trial court had abused its discretion in refusing Doe’s protective order by failing to properly consider Doe’s First Amendment rights.

    Although the decision was lauded by free speech advocates for being protective of anonymous speech, observers (links below) criticized the court for failing to provide a clear standard for future cases and, particularly, for not establishing a notice requirement for subpoenas issued to obtain the identity of anonymous defendants. Without mandatory notice, defendants may not be aware that their personal information is being sought, and thus won’t file motions to quash. This uncertainty could have a chilling effect on anonymous speech.

    More information and commentary:

     

    http://www.citizen.org/litigation/forms/cases/getlinkforcase.cfm?cID=691

     

    http://www.law.com/jsp/nlj/PubArticleNLJ.jsp?id=1202595256890&Cooley_Law_loses_bid_to_unmask_online_critic_on_appeal&slreturn=20130324223310

     

    http://thefire.org/article/15705.html

     

    http://www.techdirt.com/articles/20130405/15314122604/appeals-court-protects-anonymity-critics-cooley-law-school-could-have-done-more.shtml

  • Medical devices test privacy limits

    By Josh Stager

     

    Medical devices have the potential to significantly improve the quality of patient care, but recent innovations demonstrate that the convergence of health information technology and Big Data is testing the limits of health privacy law. As the Wall Street Journal recently explained, many new devices collect vast amounts of patient data – often without the patient’s knowledge. Medtronic is a leader in this field, as it manufactures many devices that wirelessly collect and transmit data from technology implanted inside patients’ bodies. For example, a defibrillator implant tracks a patient’s heartbeat and provides a shock if the heartbeat stops. It is an important device for people with serious heart conditions, and doctors can use the data collected by the device to provide better treatment. But patients wanting to see data about their own heartbeats are rebuffed.

     

    The pivotal question is: who owns the data collected by such devices? The Health Insurance Portability and Accountability Act of 1996 allows patients to access medical data from hospitals and physicians. However, the data collected by many medical devices is transmitted wirelessly to the device maker. Doctors can only access the data through websites maintained by the device maker – and patients have only been able to access that data from doctors who are willing to share it. Consequently, the data falls outside the scope of HIPAA’s patient access provisions.

     

    While the medical community apparently considers this data to be owned by the companies who develop the technology and store the data, the legal community is less certain. Some argue that HIPAA is too outdated to adequately address the issue, and many patients (and their doctors) have an instinctive sense that the patient must have some ownership rights to the data, given that it is derived from their own bodily functions. Stanford cardiologist Paul Zei articulates the question thusly: “Is the device itself a depository for medical records, or is it part of the patient, and an extension of vital signs that we download into a medical chart?”

     

    While a few enterprising patients have gone to great lengths to access data from their implanted devices (the Wall Street Journal described a man who took a $2,000 training course to learn how to read his device’s data transmissions and persuaded his doctor to copy his data from the manufacturer’s website), patient demand is relatively weak – for now. Few patients actually realize their device is transmitting data until they learn about it through some happenstance disclosure during a checkup. As public awareness increases, patient demand for access to this data will likely grow. Health data analytics is a fast-growing area of smartphone app development, as many people use apps such as Fitbit to track their physical activity or monitor sleep patterns.

     

    Big Data companies also have an interest in the data collected by medical devices.  Medtronic has indicated that it is looking into ways to monetize the data by selling it to interested third parties.  While existing regulations prevent device makers and other third parties from selling data that is patient-identifiable, it is possible that anonymized data could be sold.

     

    Smartphone apps raise another important question: what happens to medical data collected by apps? Such programs are not subject to FDA approval and fall outside the ambit of HIPAA. Nonetheless, phones are increasingly being used to collect and analyze medical data. In addition to health monitoring applications, phone and texting logs have been used by researchers to predict the onset of depression and stress disorders. In this environment, the definition of “medical data” is unclear. Technological innovation appears to be broadening the understanding of what constitutes medical data, but privacy law is stuck in a 20th Century framework.

     

    Unprotected data from implanted devices, smartphone apps, and other medical technology could ultimately be used against patients. Medtronic envisions a future in which health insurers require those at risk of heart disease to wear monitoring devices or face higher premiums. Harvard research fellow Tolu Odumosu worries that an auto insurance company might buy unprotected medical data to prove that a driver’s sleepiness was to blame for a car accident.

     

    The potential for abuse of medical data is substantial, which is what motivated Congress to enact HIPAA 17 years ago. However, HIPAA is clearly straining to keep up with health information technology, as the advances in medical devices demonstrate. New devices reveal a loophole in privacy laws that device makers, data companies, and app developers have exploited. It seems the only actor not benefitting from outdated laws is the patient. Indeed, the FDA offers little guidance to patients seeking access to their device data, other than telling them to ask their doctors for it. The unsustainability of this situation and the inherent privacy risks should be a call to action for Congress to revise HIPAA for the 21st century.

     

  • Healthcare Privacy: New Protections in the Law, New Vulnerabilities from Technology

    By Scott Snyder

    Earlier this month, the 11th Circuit Court of Appeals ruled in favor of greater privacy protection for the medical data of deceased nursing home patients.  The issue arose when family members of a deceased patient in Florida sought medical records and were denied access.  Under the Health Insurance Portability and Accountability Act of 1996, a federal law, medical records may be released only to a designated “personal representative.”  This conflicted with a less restrictive Florida state law that required nursing homes to release records of deceased residents to spouses, guardians, surrogates, or attorneys.  According to the 11th Circuit, the more restrictive federal law preempts the state law.

    However, while privacy advocates can celebrate this small victory, they face growing challenges from new technologies that spread medical information across more devices and media.  One such medium is health social networking websites, on which users can share information and connect with individuals with similar afflictions.  This creates a significant privacy concern, especially as users frequently do not understand the privacy settings on these websites.  There is also uncertain accountability for third parties who may wish to access and use data from the sites.

    In addition, the growing prevalence of Bring Your Own Device policies raises concerns that sensitive medical information could be gleaned from lost or stolen devices.  These policies can cut costs for businesses that would otherwise have to provide electronic accessories to their employees, but they create vulnerabilities even as they reduce expenses.  A Cisco survey found that 89% of U.S. healthcare workers use their personal smartphones for work purposes.  Another survey of hospitals found that 85% of physicians and staff use personal devices at work; this usage includes reviewing medical records and transferring files, including radiology images and lab results.  These findings contrast starkly with a sample White House BYOD policy that would require users to refrain from downloading or transferring sensitive business data to their personal devices.

    While the decision in Florida demonstrates the availability of legal protection for private medical information, gaps clearly remain.  More widespread use of technology is rapidly exacerbating the problem; policymakers will need to work quickly to ensure that the law keeps pace.