Blog

  • PRG News Roundup, February 28, 2024

    News
    On February 20, 2024, Nevada Attorney General Aaron Ford filed a motion to prevent Meta from providing end-to-end encryption on Messenger for users residing in the state who are under the age of eighteen. Since December 2023, Meta has made end-to-end encryption the default for all messages on Messenger. The AG has sought rapid hearing on the matter, citing the “extreme urgency” affecting the safety and well-being of minors in Nevada. Meta responded by noting the value of encryption in protecting communications and personal information.

    The Supreme Court heard a pair of cases (Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton) on February 26, 2024. The Court appeared skeptical of laws in Florida and Texas that regulate how large social media companies exercise their editorial discretion over content moderation. The Court’s decision will have an enormous impact on the scope of the First Amendment and the nature of speech in the internet era.

    UnitedHealth, the nation’s largest insurer, was hit by a cyberattack on Change Healthcare, a unit of its Optum division. The attack was discovered on February 21, 2024, and appeared to be a ransomware attack launched by a foreign nation-state actor. The incident foregrounded the vulnerability of healthcare data and patients’ private medical records. The cyberattack disrupted UnitedHealth’s prescription drug ordering services and even affected the U.S. military overseas.

    Canada has introduced a new bill—the Online Harms Act—that requires social media platforms to remove posts exposing children to online abuse. The Canadian Parliament needs to vote on the bill, but the proposed Act aims to create a “digital safety commission” to regulate social media companies and offer more effective means to protect children online.

    President Biden issued an Executive Order on February 28, 2024, to protect the sensitive personal data of Americans. The Executive Order authorizes the Attorney General to “prevent the large-scale transfer of Americans’ personal data to countries of concern” and provides relevant safeguards. The “countries of concern” specified in the Order include China, Russia, Iran, North Korea, Cuba, and Venezuela. The restrictions mark the first broad U.S. prohibition on the sale of Americans’ digital data to specific countries.

    Wendy’s has announced its plan to spend $20 million on enhanced features, including dynamic pricing and digital menu boards that allow for more flexible in-store menus. The company has since clarified that it will not use Uber-style surge pricing, which raises prices when demand is highest, after CEO Kirk Tanner’s comments to investors sparked commotion over the possibility of adopting the practice.

    (Compiled by Student Fellow Stephanie Shim)

  • PRG News Roundup, February 22, 2024

    News 

    The Centers for Medicare & Medicaid Services (CMS) announced changes to the current research data request and access policies in the name of data security that will limit individual researchers’ access to data.

    The European Commission has opened investigations to assess whether TikTok has breached the Digital Services Act.

    Reddit has signed a contract to allow a company to train its AI Models on the platform’s content, ahead of its IPO.

    Signal is testing a beta version that hides users’ phone numbers and lets them pick usernames instead.

    The European Court of Human Rights ruled that weakening end-to-end encryption presents a disproportionate risk of undermining human rights.

    Events

    Abrams Institute Conversations will host Yale Law Professor Jack Balkin to discuss the cases before the Supreme Court concerning the power of states to regulate content moderation on social media platforms. Monday, March 4 · 12 – 1:30pm EST

    Papers

    Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing authored by Christopher Morten (Columbia Law School), Gabriel Nicholas (New York University School of Law) and Salome Viljoen (University of Michigan Law School; Harvard University). 

    (Compiled by Student Fellow Marina Garrote)

  • PRG News Roundup, February 7, 2024

    News

    Google agreed to a $350 million settlement over a lawsuit related to a security lapse that exposed Google Plus users’ data, amidst other legal challenges for privacy and competition law violations.

    The FTC has issued proposed settlements to ban the sale of sensitive geolocation data by data brokers, marking a significant step in addressing privacy concerns and emphasizing the need for informed consumer consent.

    Apple is reportedly considering acquiring the German AI startup Brighter AI to integrate its Precision Blur and Deep Natural Anonymisation technologies into the Vision Pro, aiming to enhance privacy by anonymizing faces and license plates in photos and videos.

    The EU requires large tech platforms like TikTok, X, and Facebook to identify AI-generated content to safeguard the upcoming European election against disinformation.

    Nightshade v1.0 ‘poisons’ AI models by embedding imperceptible pixel-level changes into images to prevent unauthorized use of artworks for AI training, with some critics labeling the tool as a form of ‘illegal’ hacking.

    A new report criticizes state privacy laws as being significantly weakened by the tech industry’s influence, with most states enacting ineffective legislation that fails to protect consumer data adequately or offer meaningful enforcement.

    After over two years of development, the EU’s Artificial Intelligence Act (AI Act) is nearing approval, with the latest text offering a final compromise on high-risk AI systems, General Purpose AI, and governance and enforcement mechanisms; however, critiques note that last-minute concessions may limit its protective potential, especially due to industry lobbying and the possibility of insufficient enforcement resources.

    An investigation into Microsoft’s design practices across Windows 10 and 11, Edge, and Bing reveals the company’s use of harmful design techniques—such as coercive, manipulative, and deceptive patterns—to push users towards using Edge browser, leading to potential consumer, social, and market harms. The report concludes that Microsoft’s practices distort user choice and undermine trust in technology, advocating for the cessation of these practices and regulatory intervention if necessary.

    A Nigerian man has been arrested and charged with various offenses, including child pornography and attempted extortion, following the suicide of a Canadian teen, who fell victim to an online sextortion scheme.

    US police departments are attempting to use facial recognition on 3D models of suspects’ faces generated from DNA evidence, despite concerns from civil liberties groups and experts who argue that this practice is based on unproven science and could lead to wrongful identification, as shown in a controversial case by the East Bay Regional Park District Police Department.

    Bumble has introduced an AI-powered feature called “Deception Detector” to its dating app, designed to identify and block fake profiles, scams, and spam, reducing member reports of such issues by 45% during initial testing and supporting a 95% success rate in blocking undesirable accounts. 

    Events

    The Workshop for Junior Scholars on March 11, 2024, at MIT Stata Center, organized by Aniket Kesari and Sarah Scheffler, aims to build a community and provide guidance for early-career individuals in Law and Computer Science. The half-day event includes panels on academic and non-academic careers, mentoring sessions, and discussions on conducting interdisciplinary research, followed by dinner. Registration is available online. It precedes another conference: ACM Symposium on Computer Science and Law (CSLAW 2024).

    There is an open application for a two-year residential postdoctoral program at Harvard Law School aimed at developing scholars early in their careers who have a primary interest in private law, including common law subjects and statutory areas like intellectual property. Selected from recent graduates, academics, and practitioners, Fellows focus on their research, contribute to the Project on the Foundations of Private Law, mentor students, present and attend workshops, help with events, and engage in blogging.

    (Compiled by Student Fellow Rebecca Kahn)

  • PRG News Roundup, January 31, 2024

    News

    Children’s online safety and privacy are at the forefront of issues being addressed politically. On Wednesday, January 31, five CEOs from major tech companies, including Meta’s CEO Mark Zuckerberg and TikTok’s CEO Shou Zi Chew, testified at a Senate hearing about the protection of children from online sexual exploitation as congressional leaders explore how to tackle these issues.

    Additionally, California Attorney General Rob Bonta introduced two bills, one on privacy and one on social media. “The privacy bill, deemed the proposed Children’s Data Privacy Act, aims to amend the California Consumer Privacy Act to tighten youth coverage. The proposed Protecting Youth from Social Media Addiction Act focuses on measures to moderate content and limit luring features or techniques on social media platforms.”

    23andMe’s stock price tumbled as the company faces a class action, filed last week, over a data breach that specifically impacted Jewish and Chinese customers.

    Courts have begun to scrutinize the use of AI chatbots in legal briefs, as attorney Jae Lee “reports that she relied on a generative artificial intelligence tool, ChatGPT, to identify precedent that might support her arguments, and did not read or otherwise confirm the validity of the (non-existent) decision she cited.”

    TikTok continues to struggle to prevent the sharing of user data with its Chinese parent company, even as it tries to show U.S. lawmakers that its video-sharing application is a safe form of social media through limits on data sharing.

    OpenAI removed its blanket prohibition on military use of ChatGPT by deleting the language from its usage policy. The blanket ban on “military and warfare” uses is gone, but the policy still forbids using the tool “to harm yourself or others” and to “develop or use weapons.”

    Events

    LunchGPT, an organization focused on the intersection of law and artificial intelligence, has planned its first lunch for Friday, February 16 at 12 PM. If you are interested, please reach out to Kevin Fraizer with questions.

    Registration is now open for the 2024 ACM Symposium on Computer Science and Law, which will take place on March 12-13, 2024, at Boston University. The Symposium is a leading venue for cross-disciplinary scholarship at the intersection of computer science and law.

    (Compiled by Student Fellow Molly Pushner)

  • PRG News Roundup, January 24, 2024

    News

    Earlier this month, New Jersey became the thirteenth state to pass a comprehensive privacy bill. The law, which will go into effect January 15, 2025, is said to borrow many features from previously enacted state-level privacy legislation while also bearing several unique characteristics: it covers non-profit organizations as well as for-profit ones, adopts a broad definition of “sensitive data,” and does not apply any revenue-based thresholds to organizations within its remit.

    Deepfake phone calls have already factored into 2024 U.S. electoral races: an AI-generated robocall purporting to be from Joe Biden told New Hampshire voters to refrain from voting in the state’s primary, and an AI-generated audio clip of New York politician Keith Wright criticizing Inez Dickens circulated last Sunday.

    Even though New York City has recently passed and begun enforcing a law that requires companies to disclose how AI factors into their hiring decisions, few employers have complied.

    Apple disabled a blood oxygen monitoring feature from two Apple Watch models sold in the U.S. in response to a successful patent litigation challenge from medical technology company Masimo.

    In December, Israel published a draft of an amendment to a 2002 surveillance law that experts worry would allow its domestic security agency, Shin Bet, to secretly search electronic devices and databases using spyware.

    The European Court of Justice (ECJ) issued a preliminary ruling in a landmark case on data protection in credit reporting, affirming the GDPR’s jurisdiction over automated credit reporting and scoring activities.

    France fined Amazon 32 million Euros for violating the GDPR by “using an ‘excessively intrusive system’ to monitor worker performance and activity” within its warehouses.

    Events

    A weekly conversation series focused on the intersection of law and artificial intelligence, entitled LunchGPT, is set to be launched in the coming weeks. If you are interested in participating in this series, navigate here.

    The Cyber Program at Columbia’s School of International and Public Affairs (SIPA) is hosting a social event Tuesday, February 6 from 6:30 to 9pm EST at Arte Café. To RSVP, navigate here.

    The 3rd ACM Computer Science and Law Symposium will be taking place March 12-13, 2024 in Boston. The day before this symposium, there will be a Workshop for Junior Scholars geared toward law students, post-docs, and recently appointed professors. The Workshop “aims to build community, provide advice in navigating interdisciplinary careers, and foster discussion about future research in Computer Science and Law.”

    (Compiled by Student Fellow Cooper Aspegren)

  • PRG News Roundup, November 29, 2023

    Meta’s paid ad-free service, launched in Europe in November, was targeted in an Austrian privacy complaint. The complaint was filed by the digital rights group NOYB with Austria’s Data Protection Authority. The group disagrees with Meta on the concept of consent, arguing that a privacy fee does not guarantee the user’s freely given consent.

    18 countries, including the United States and Britain, unveiled what is described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.” The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

    Meta’s attempt to drag the Federal Trade Commission into federal court over its plans to bar the tech giant from monetizing children’s data was shot down by a judge. The decision is a significant victory for the agency, paving the way for it to move forward with its sweeping proposed restrictions, which children’s safety advocates say could serve as a template for keeping the tech giants’ privacy practices in check.

    An analysis of the effects of the U.S. Supreme Court decision in Dobbs v. Jackson Women’s Health Organization on fertility indicates that states with abortion bans experienced an average increase in births of 2.3 percent relative to states where abortion was not restricted. The decision sparked the most profound transformation of the landscape of abortion access in 50 years.

    The California Privacy Protection Agency (CPPA) proposed a regulatory framework for Automated Decision Making Technology, which defines important new protections related to businesses’ use of these technologies. The proposed regulations outline how the new privacy protections that Californians voted for in 2020 could be implemented.

    Congressional leaders are discussing a controversial proposal to reauthorize Section 702 surveillance, including by attaching it to the National Defense Authorization Act.

    (Compiled by Student Fellow Júlia Strack) 

  • PRG News Roundup, November 15, 2023

    News

    The European Parliament adopted the final version of the Data Act on November 9, 2023. The Data Act aims to create a new single market for data sharing and grants entities in the public sector access to data held by private companies in certain circumstances of high public interest. The Data Act will reinforce data availability, sharing measures, and portability among EU member states.

    In response to the Biden administration’s executive order on AI governance, the Cybersecurity and Infrastructure Security Agency (CISA) launched a Roadmap for Artificial Intelligence to pursue five lines of effort, in partnership with its parent agency, the Department of Homeland Security. CISA’s roadmap underscored the importance of building risk mitigation into AI/ML systems as a design feature and maintaining a transparent approach via information sharing.

    Clearview’s facial recognition technology has become the Ukrainian government’s “secret weapon” against Russia in its ongoing war. As Ukrainian authorities have come to rely heavily upon this private U.S. tech company for its wartime efforts, their partnership has raised critical questions over the deployment of controversial or invasive technology in an armed conflict as well as the extension of digital privacy rights.

    Meta and YouTube face criminal complaints in Ireland “for alleged unlawful surveillance of EU citizens via tracking scripts.” Alexander Hanff, a privacy consultant and advocate, alleges that both Meta’s and YouTube’s tracking code and ad-block detection violated Ireland’s computer abuse law.

    Human Rights Watch (HRW) raised concerns over a new vehicle tracking system in Uganda, which allows the government to track the real-time location of all vehicles in the country. HRW has criticized Uganda’s Intelligent Transport Monitoring System (ITMS) as a surveillance mechanism infringing on the rights to privacy, expression, and association.

    Meta, Google, TikTok, and other social media giants are facing a deluge of lawsuits based on the theory of addiction, especially as to children. Judge Yvonne Gonzalez Rogers in Oakland, CA, dismissed some claims while permitting others to proceed. Judge Gonzalez Rogers further rejected the companies’ arguments that they are immune from personal injury claims under the First Amendment and Section 230 of the Communications Decency Act, federal laws invoked by social media platforms to block suits concerning content created and posted by their users.

    Events

    Columbia Law will host its Accountability and Liability in Generative AI: Challenges and Perspectives symposium on November 17, 2023, featuring a wide range of viewpoints on how civil liability and institutional accountability can address the harms from generative AI.

    (Compiled by Student Fellow Stephanie Shim)

  • PRG News Roundup, October 25, 2023

    News

    The Consumer Financial Protection Bureau (“CFPB”) proposed the Personal Financial Data Rights rule to give people a legal right to give third parties access to their data related to their credit card, checking, prepaid, and digital wallet accounts. This change will allow people to switch service providers and manage multiple accounts without paying junk fees or permitting risky methods of data collection. 

    The French Data Protection Authority (“CNIL”) published a set of guidelines in the form of AI how-to sheets addressing compliance with personal data regulation, including the GDPR, while developing AI systems. The guidelines are intended to provide greater legal certainty to relevant parties. 

    New attorneys for the Fugees rapper, Pras, filed a motion for a new trial on the grounds that his previous defense attorney was ineffective because the attorney used an “experimental” Generative AI program to help him write the closing argument and it caused mistakes. 

    Pew Research Center published a report on a survey of Americans’ views on data privacy. Key highlights include:

    • American adults are concerned about, and don’t understand, how companies and the government use the data they collect; the share reporting concern has increased among Republican respondents
    • Americans don’t trust companies to use AI responsibly and worry that the use of AI for data collection and analysis will result in unintended consequences and uses people would not be comfortable with
    • Americans feel their privacy choices don’t really matter
    • There is bipartisan support for increased regulation of companies’ use of personal data

    The New York Court of Appeals ruled that independent oversight agencies, the Commission on Forensic Sciences and the DNA Subcommittee, had the authority to promulgate a regulation that permits law enforcement to request a familial DNA search of the state DNA Databank — which stores genetic information of New Yorkers convicted of certain felonies — when an initial search results in no match or a partial match. NY Court of Appeals Decision.

    (Compiled by Student Fellow Lindsey Schwartz)

  • PRG News Roundup, November 1, 2023

    News

    The Biden-Harris Administration issued a landmark executive order entitled “Safe, Secure, and Trustworthy AI.” The order aims to standardize federal procurement of AI and to lay the groundwork for new standards for AI safety and security. The order proposes several key measures, including requiring agencies to work with NIST to develop responsible AI testing frameworks and guidance, requiring developers of powerful AI systems to share their safety test and performance results with the government, requiring agencies to evaluate how commercially available PII is collected (including from data brokers), and directing agencies to investigate civil rights violations and unlawful discrimination practices enabled by AI tools.

    The European Data Protection Board (EDPB) adopted a final ban on Meta’s data processing for behavioral advertising across EU member states and European Economic Area countries. This decision follows a petition from the Norwegian Data Protection Authority urging the EDPB to extend and make permanent their own previously-issued interim ban in Norway. In effect, the EDPB decision clarifies that Meta’s subscription-based consent model does not provide a valid legal basis for its behavioral advertising practices under GDPR.

    A bipartisan coalition of 42 U.S. attorneys general across the nation filed suit against Meta in federal and state courts, claiming that Meta’s business practices violate state consumer protection laws and the federal Children’s Online Privacy Protection Act (COPPA). The suit alleges that Meta knowingly designed and deployed features on Instagram and other social media platforms that purposefully harm children’s mental health, while falsely assuring the public that these features are safe and suitable for young users.

    The U.S. Supreme Court will hear arguments in a series of cases concerning state action and constitutional free speech on social media platforms. The cases will examine whether public officials can constitutionally block their constituents on social media, whether social media content moderation laws originating in Texas and Florida violate the First Amendment, and whether the Biden administration’s and social media companies’ joint efforts to curb misinformation online — particularly regarding the COVID-19 vaccine — constitute censorship by the government.

    The U.S. Securities and Exchange Commission announced charges against SolarWinds Corporation, a Texas-based software company, for defrauding securities investors. The SEC alleges that SolarWinds’ public statements on their website regarding their cybersecurity practices were overstated and at odds with multiple internal assessments, which identified specific and known deficiencies in their cybersecurity practices. 

    The G7 reached an agreement on a set of International Guiding Principles on Artificial Intelligence (AI) and a Code of Conduct for AI developers. The voluntary Guiding Principles are intended to help organizations mitigate the risks and potential misuses of AI systems. The Code of Conduct is intended to provide detailed and practical guidance for developers of AI. Both documents are intended to be living and voluntary, to be updated and reviewed as necessary to stay responsive to developments in AI technology. 

    (Compiled by Student Fellow Jennifer Kim)

  • PRG News Roundup, October 18, 2023

    News

    California governor Gavin Newsom signed the Delete Act, which requires data brokers to register with the California Privacy Protection Agency (CPPA) and charges the CPPA with developing a one-stop-shop deletion mechanism for consumers to request the deletion of their data held by registered brokers.

    Clearview AI successfully appealed a multimillion-pound fine imposed last year by the UK Information Commissioner’s Office; a tribunal found the ICO lacked jurisdiction, since from 2020 Clearview has only accepted law enforcement agencies or national security bodies as clients.

    Google is hosting a discussion on potential new protocols that could be used to allow online creators to prevent the inclusion of their data in AI training datasets. 

    Meta released a new product allowing users to talk to AI chatbots, many using the likenesses of partner celebrities.

    Events

    The Journal of Legislation and Public Policy is hosting a symposium (co-sponsored by PRG) on Monday, Oct. 23, about the legal and policy challenges surrounding telehealth. RSVP here.

    The Information Law Institute is hosting a symposium on Thursday, Oct. 26, discussing the recent slate of child privacy laws restricting youth access to social media and the internet. RSVP by emailing ILIChildPrivacyRSVP@gmail.com.

    (Compiled by Student Fellow Stephanie Chen)