News

A U.S. District Court in San Francisco dismissed without prejudice a lawsuit claiming that Meta “misled shareholders in [its] proxy statement about their ability to ensure the safety of children who use Facebook and Instagram.” The judge noted that the plaintiffs failed to show that these disclosures led to economic losses for shareholders. Meta, however, still must deal with a bevy of other suits centering on children’s safety concerns. To wit, a Suffolk County Superior Court judge held that Meta must stand trial in a case filed by the Massachusetts Attorney General’s Office alleging that the company lied about the dangers its products pose to teenagers’ mental health and deliberately built features designed to deepen addiction to its platforms. Meta argued that it was immune from liability under Section 230 of the Communications Decency Act, but the judge held that the Act shields neither false claims nor a business’s own design choices.

X recently revised its privacy policies to, among other changes, add a provision telling users that, unless they opt out, third-party companies may scrape their data to train AI models. X also adjusted its data-retention provisions, telling users that different types of user content may be kept for different periods of time “in order to provide [users] with our products and services, to comply with our legal requirements and for safety and security reasons.”

The CFPB finalized a rule covering the rights that consumers have over their personal financial data. The rule requires certain financial entities, including banks and credit card companies, to transfer consumer data at the consumer’s request free of charge. The rule’s privacy provisions also permit companies to use consumer data only in ways the consumer has authorized. The Bureau envisions that the rule will facilitate greater consumer choice and give users greater control over their financial data.

The Massachusetts Supreme Judicial Court has agreed to hear Commonwealth v. Govan, a case involving a defendant subject to pretrial monitoring in which law enforcement used data collected from that monitoring in an unrelated case. The case implicates the right to privacy in one’s location data and the extent of pretrial protections for defendants, among other state and Fourth Amendment privacy issues.

A Florida teenager died by suicide after months spent conversing with chatbots on Character.AI, an app that lets users create A.I.-generated characters and chat with them or with bots created by other users. The teenager’s mother has sued the company, arguing that it bears responsibility for her son’s death. The incident comes amid growing fears that technology is contributing to adolescent mental health struggles and that unregulated A.I. may exacerbate the problem.

The California Civil Rights Department proposed revisions to its hiring-discrimination rules to account for employers’ use of A.I. tools in their hiring processes. Notably, following industry pushback, the revised rules would reduce liability for the third parties that create these tools.

The American Law Institute will launch a Principles of the Law project on Artificial Intelligence, led by NYU Law Professor Mark Geistfeld. The project, which will be directed to legislatures, private actors, administrators, and courts, will focus solely on physical harms. ALI noted that other types of harm will be addressed in forthcoming Principles projects, though it may revisit the project’s scope to determine whether a more comprehensive approach is warranted.

(Compiled by Student Fellow Shreyas Iyer)