April 23rd, 2015
Fitness apps may pose legal problems for doctors
By: Emma Trotter
The February 2015 Associated Press article “Challenges for Doctors Using Fitness Trackers & Apps,” available at http://www.theepochtimes.com/n3/1257858-challenges-for-doctors-using-fitness-trackers-apps/, raises several issues related to topics covered in this week’s class on health privacy. The article reads as a list of potential trouble spots for doctors and offers few solutions.
First, the article points out that, because HIPAA was written to apply only narrowly to covered entities such as health plans, health care providers, and clearinghouses, the law’s privacy protections do not extend to the many new apps and devices that help users keep track of their health and fitness. As mentioned in class, this information might come as a shock to users, who tend to assume that HIPAA is much broader than it really is. It could also lead users to over-share, believing their information is protected because they collect and provide it in a health context, in Helen Nissenbaum’s sense of contextual integrity. If an app were to sell that normatively sensitive health information to third parties, it could, in theory, be used in secret to deny a less fit person a job or to offer that person insurance only at a higher rate.
The article also mentions that certain apps have one purpose but could be put to others. For example, if a person wearing a step counter that tracks location meets up with another person wearing the same brand of step counter, the device manufacturer probably has the ability to determine that those two people are together. While this may not seem like a privacy harm in and of itself, we have learned over the course of the semester from several theorists, including Neil Richards, that surveillance can curtail intellectual freedom and exploration.
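To make that inference concrete, here is a minimal sketch of how a manufacturer could detect co-location from the data its devices upload, assuming each device reports timestamped GPS fixes. The data format, function names, and thresholds are all invented for illustration and do not reflect any vendor’s actual system.

```python
# Hypothetical sketch: inferring that two users were together from the
# timestamped GPS fixes their step counters upload. Everything here is
# an assumption for illustration, not any manufacturer's real pipeline.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def were_together(track_a, track_b, max_meters=50, max_seconds=300):
    """Flag co-location: any pair of fixes close in both space and time.

    Each track is a list of (unix_timestamp, latitude, longitude) tuples.
    """
    for t1, lat1, lon1 in track_a:
        for t2, lat2, lon2 in track_b:
            if abs(t1 - t2) <= max_seconds and haversine_m(lat1, lon1, lat2, lon2) <= max_meters:
                return True
    return False

# Two users whose devices report fixes two minutes apart at the same spot:
alice = [(1429800000, 40.7128, -74.0060)]
bob   = [(1429800120, 40.7129, -74.0061)]
print(were_together(alice, bob))  # True
```

The point is how little is required: no names, no stated purpose, just two streams of routine telemetry compared against a distance and time threshold.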
Additionally, the article points out reliability problems with certain types of data. For example, smart pillboxes that purport to track when patients take medication really only show when patients pick up the boxes. For now, doctors still rely on patients to self-report accurately. That information could be supplemented by FICO’s new Medication Adherence Score, which we learned about from Parker-Pope’s NYT article, but because that score relies on information such as home ownership and job stability rather than actual health data, it is fundamentally inference-based and reflects statistical averages rather than the actual behavior of any individual patient.
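To illustrate what “inference-based” means here, consider a toy score built only from socioeconomic proxies. The features, weights, and numbers below are invented for this example and bear no relation to FICO’s actual, proprietary model.

```python
# Illustrative only: a toy adherence score built from socioeconomic
# proxies rather than health data. All features and weights are invented.
def toy_adherence_score(owns_home: bool, years_in_job: float, age: int) -> int:
    """Return a 0-100 'likelihood of taking medication as prescribed'."""
    score = 50.0
    score += 15 if owns_home else -10    # stability proxy, not health data
    score += min(years_in_job, 10) * 2   # job-tenure proxy
    score += (age - 40) * 0.2            # age enters as a group statistic
    return max(0, min(100, round(score)))

# Two patients with identical actual pill-taking behavior can score very
# differently, because the score reflects their groups, not them:
print(toy_adherence_score(owns_home=True,  years_in_job=12, age=60))  # 89
print(toy_adherence_score(owns_home=False, years_in_job=1,  age=28))  # 40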
Another reliability issue the article brings up stems from the fact that many of the apps and devices aren’t regulated by the FDA. The article suggests that this means some of the claims made by these businesses might not deserve doctors’ trust; for example, Fitbit sleep tracking might be oversensitive to movement and show a user as getting far less sleep than she actually gets. This concern could be mitigated somewhat by the FTC’s ability to use its Section 5 jurisdiction, which we studied earlier in the semester, to hold these companies accountable for deceptive or unfair business practices based on grossly overstated claims. But, as the article also points out, this limited recourse would only address data reliability; it wouldn’t prevent the apps from selling data to third parties, violating contextual integrity, so long as their posted privacy policies allow them to do so.
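The oversensitivity problem is easy to see in a sketch. Suppose, purely hypothetically, that a tracker labels any minute with wrist movement above a threshold as “awake.” The classifier, threshold, and data below are invented for illustration, not Fitbit’s actual algorithm.

```python
# A minimal sketch of why movement-based sleep tracking can undercount:
# a naive classifier treats any minute with movement above a threshold
# as "awake," so restless-but-asleep minutes are misclassified.
def estimate_sleep_minutes(movement_per_minute, threshold=0.2):
    """Count minutes whose movement intensity falls below the threshold."""
    return sum(1 for m in movement_per_minute if m < threshold)

# Eight hours genuinely asleep, with a toss or turn every ten minutes:
night = [0.05 if i % 10 else 0.5 for i in range(480)]
print(estimate_sleep_minutes(night))  # 432, so 48 minutes of sleep "lost"
```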
Yet another reliability issue raised by the article is that, for now, the data collected by these apps and devices skews toward the younger people who are more likely to use or wear them. Since younger people are statistically healthier than older people, this skew could bias any conclusions drawn from the aggregate data.
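A quick worked example shows the mechanism. All of the age brackets, shares, and step counts below are made-up numbers chosen only to demonstrate the arithmetic of sampling bias.

```python
# Invented numbers: if tracker owners skew young, a naive average of
# their data overstates how active the general population is.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.60, "35-54": 0.30, "55+": 0.10}  # tracker owners
avg_daily_steps  = {"18-34": 9000, "35-54": 7000, "55+": 5000}

naive = sum(sample_share[g] * avg_daily_steps[g] for g in sample_share)
true  = sum(population_share[g] * avg_daily_steps[g] for g in population_share)
print(naive)  # 8000.0, what the app data suggests
print(true)   # 6900.0, what the population actually does
```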
Finally, the article touches on the issue of liability. Imagine that a fitness tracking app shows something worrisome, a spike in blood pressure, for instance, and a doctor fails to notice it. Is that doctor liable, under traditional tort theories of medical malpractice, for an injury that then befalls the patient? The article suggests developing technological systems to scan the data and automatically flag potential trouble spots, but that doesn’t completely eliminate the issue: what if the technology fails, or the doctor still fails to act? The problem is of course compounded by the possibility, discussed above, that the data may be unreliable.
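For what it’s worth, the kind of automated flagging the article envisions could be quite simple at its core. Here is a minimal sketch using a fixed threshold; the limits and data are placeholders for illustration, not clinical guidance, and a real system would be far more sophisticated (and could still fail).

```python
# A minimal sketch of automated flagging: scan incoming readings and
# surface anomalies for a clinician to review. Thresholds are invented
# placeholders, not medical advice.
def flag_bp_readings(readings, systolic_limit=180, diastolic_limit=120):
    """Return (index, systolic, diastolic) for readings at or above the limits."""
    return [(i, s, d) for i, (s, d) in enumerate(readings)
            if s >= systolic_limit or d >= diastolic_limit]

week = [(118, 76), (122, 80), (185, 122), (120, 78)]
for i, s, d in flag_bp_readings(week):
    print(f"reading {i}: {s}/{d} needs clinician review")
```

Even a sketch this small makes the liability question vivid: if the threshold is set wrong, the sensor misreads, or the flagged reading sits unread in an inbox, it is not obvious which failure, human or technological, the tort system should reach.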