Blog

  • PRG News Roundup, April 3, 2024

    News

    The California Privacy Protection Agency issues a bulletin advising businesses to implement strong data minimization principles, including when processing consumer requests under the CCPA itself. [April 2]

    Google settles a class action lawsuit alleging it illegally stored data about users’ “incognito” browsing by agreeing to destroy billions of records and alter its tracking policies moving forward. [April 1]

    OpenAI announces — but does not release — a new “Voice Engine” model which can recreate a person’s voice based on only 15 seconds of audio. [March 29]

    OMB announces new guidance for federal agencies using AI, requiring agencies that cannot implement certain mandatory evaluation and monitoring safeguards by December 2024 to cease using AI systems until they can comply.

    The European Court of Justice rejects a request from Amazon to temporarily suspend the application of the Digital Services Act while the company litigates whether or not it is a “very large online platform” subject to the Act’s heightened requirements. [March 27]

    The Department of Justice and 16 states sue Apple for restricting competition by, among other things, artificially constraining messaging features between iPhones and Android devices and preventing third party developers from competing with Apple’s tap-to-pay digital wallet feature. [March 21]

    (Compiled by Student Fellow Micah Musser)

  • PRG News Roundup, March 27, 2024

    News

    Florida signed into law a social media ban for minors 13 and under; minors aged 14 and 15 will need parental consent.

    Portugal’s data regulator has ordered Worldcoin to stop collecting biometric data for 90 days, given concerns about unauthorized data collection from minors, as well as Worldcoin’s lack of consent and data deletion mechanisms.

    The NTIA finishes up its public comment period on open-source AI models on 3/29.

    The FTC opened a new investigation into TikTok over alleged unfair and deceptive business practices, as well as violations of the Children’s Online Privacy Protection Act.

    A man in Montana pleaded guilty earlier this month to wildlife trafficking charges as part of an effort to create cloned giant sheep hybrids.

    In its investigation of the 2020 SolarWinds cyberattack, the SEC is asking technology and telecom companies for internal communications as to how they handled the hack, prompting business pushback.

    Events

    Professor Strandburg is speaking on a panel as part of the Journal of Law & Business’s spring symposium, Artificial Intelligence and Antitrust: Global Regulatory Changes, from 6-8pm on 4/3.

    (Compiled by Student Fellow Stephanie Chen)

  • PRG News Roundup, March 6, 2024

    News

    The House of Representatives will be performing a Full Committee markup of H.R. 7521, the Protecting Americans’ Data from Foreign Adversaries Act, on March 7, 2024. The markup responds to national security and privacy concerns over sensitive data about Americans, including health data, being collected and sold abroad, particularly to foreign adversaries. President Biden issued an Executive Order last week outlining steps to protect Americans’ sensitive data, including genetic and biometric information, from exploitation by foreign adversaries.

    The National Institute of Standards and Technology (NIST), responsible for promoting innovation and new technologies in the US, is struggling with funding. The agency is so underfunded that it is unable to do foundational work. President Biden’s AI Executive Order relies heavily on NIST to oversee AI models and the data privacy and security of such models. A Washington Post article describes the NIST campus as in a state of decay, with black mold forcing an evacuation, leaks, and failing technology.

    A bill introduced in Florida banning all minors under the age of 16, with or without parental consent, from addictive social media platforms was vetoed by Governor Ron DeSantis. Governor DeSantis previously seemed on board with the bill, citing concerns about minors’ privacy. He is now believed to support a bill that would ban access for younger minors but allow access, with parental consent, for minors 14 and older.

    Google and Reddit came to an agreement allowing Google to use Reddit posts to train its AI, in addition to other services. Reddit, in turn, is allowed to use Google’s AI model to improve its own site. The $60 million deal sets a precedent for how tech companies obtain data for AI models in the future. As part of the deal, Google must comply with Reddit’s privacy policy and fully delete data once a user has deleted a post, preventing Google from retaining shadow data.

    Germany is amending its Federal Data Protection Act. The amendment draft institutionalizes the German Data Protection Conference and aims to improve data protection law enforcement. The German press release states that there will additionally be legal certainty for consumer protective scoring, which has been developed with the Federal Ministry for the Environment and Consumer Protection.

    (Compiled by Student Fellow Paulina Andrews)

  • PRG News Roundup, February 28, 2024

    News
    On February 20, 2024, Nevada Attorney General Aaron Ford filed a motion to prevent Meta from providing end-to-end encryption on Messenger for users residing in the state who are under the age of eighteen. Since December 2023, Meta has made end-to-end encryption the default for all messages on Messenger. The AG has sought an expedited hearing on the matter, citing the “extreme urgency” affecting the safety and well-being of minors in Nevada. Meta responded by noting the value of encryption in protecting communications and personal information.

    The Supreme Court heard a pair of cases (Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton) on February 26, 2024. The Court appeared skeptical of laws in Florida and Texas that regulate how large social media companies exercise their editorial discretion over content moderation. The Court’s decision would have an enormous impact on the scope of the First Amendment and the nature of speech in the internet era.

    UnitedHealth, the nation’s largest insurer, was hit by a cyberattack on Change Healthcare, a division of its Optum unit. The attack was discovered on February 21, 2024, and appeared to be a ransomware attack launched by a foreign nation-state actor. This latest attack highlighted the vulnerability of healthcare data and patients’ private medical records. The cyberattack disrupted UnitedHealth’s prescription drug order services and even affected the U.S. military overseas.

    Canada has introduced a new bill—the Online Harms Act—that requires social media platforms to remove posts exposing children to online abuse. The Canadian Parliament needs to vote on the bill, but the proposed Act aims to create a “digital safety commission” to regulate social media companies and offer more effective means to protect children online.

    President Biden issued an Executive Order on February 28, 2024, to protect the sensitive personal data of Americans. The Executive Order authorizes the Attorney General to “prevent the large-scale transfer of Americans’ personal data to countries of concern” and provides relevant safeguards. The “countries of concern” specified in the Order include China, Russia, Iran, North Korea, Cuba, and Venezuela. Such restrictions mark the first broad U.S. prohibition on the sale of Americans’ digital data to specific countries.

    Wendy’s has announced its plan to spend $20 million on enhanced features, including dynamic pricing and digital menu boards that allow for a more flexible menu in stores. The company has further clarified that it will not use surge pricing, similar to that used by Uber, after comments its CEO Kirk Tanner made to investors sparked commotion around the possibility of adopting this practice, which raises prices when demand is highest.

    (Compiled by Student Fellow Stephanie Shim)

  • PRG News Roundup, February 22, 2024

    News 

    The Centers for Medicare & Medicaid Services (CMS) announced changes to the current research data request and access policies in the name of data security that will limit individual researchers’ access to data.

    The European Commission has opened investigations to assess whether TikTok has breached the Digital Services Act.

    Reddit has signed a contract to allow a company to train its AI Models on the platform’s content, ahead of its IPO.

    Signal is testing a beta version that hides users’ phone numbers and lets them pick a username instead.

    The European Court of Human Rights ruled that weakening end-to-end encryption presents a disproportionate risk of undermining human rights.

    Events

    Abrams Institute Conversations will host Yale Law Professor Jack Balkin to discuss the cases before the Supreme Court concerning the power of states to regulate content moderation on social media platforms. Monday, March 4 · 12 – 1:30pm EST

    Papers

    Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing authored by Christopher Morten (Columbia Law School), Gabriel Nicholas (New York University School of Law) and Salome Viljoen (University of Michigan Law School; Harvard University). 

    (Compiled by Student Fellow Marina Garrote)

  • PRG News Roundup, February 7, 2024

    News

    Google agreed to a $350 million settlement over a lawsuit related to a security lapse that exposed Google Plus users’ data, amidst other legal challenges for privacy and competition law violations.

    The FTC has issued proposed settlements to ban the sale of sensitive geolocation data by data brokers, marking a significant step in addressing privacy concerns and emphasizing the need for informed consumer consent.

    Apple is reportedly considering acquiring the German AI startup Brighter AI to integrate its Precision Blur and Deep Natural Anonymisation technologies into the Vision Pro, aiming to enhance privacy by anonymizing faces and license plates in photos and videos.

    The EU requires large tech platforms like TikTok, X, and Facebook to identify AI-generated content to safeguard the upcoming European election against disinformation.

    Nightshade v1.0 ‘poisons’ AI models by embedding imperceptible pixel-level changes into images to prevent unauthorized use of artworks for AI training, with some critics labeling the tool as a form of ‘illegal’ hacking.

    A new report criticizes state privacy laws as being significantly weakened by the tech industry’s influence, with most states enacting ineffective legislation that fails to protect consumer data adequately or offer meaningful enforcement.

    After over two years of development, the EU’s Artificial Intelligence Act (AI Act) is nearing approval, with the latest text offering a final compromise on high-risk AI systems, General Purpose AI, and governance and enforcement mechanisms; however, critiques note that last-minute concessions may limit its protective potential, especially due to industry lobbying and the possibility of insufficient enforcement resources.

    An investigation into Microsoft’s design practices across Windows 10 and 11, Edge, and Bing reveals the company’s use of harmful design techniques—such as coercive, manipulative, and deceptive patterns—to push users towards using Edge browser, leading to potential consumer, social, and market harms. The report concludes that Microsoft’s practices distort user choice and undermine trust in technology, advocating for the cessation of these practices and regulatory intervention if necessary.

    A Nigerian man has been arrested and charged with various offenses, including child pornography and attempted extortion, following the suicide of a Canadian teen, who fell victim to an online sextortion scheme.

    US police departments are attempting to use facial recognition on 3D models of suspects’ faces generated from DNA evidence, despite concerns from civil liberties groups and experts who argue that this practice is based on unproven science and could lead to wrongful identification, as shown in a controversial case by the East Bay Regional Park District Police Department.

    Bumble has introduced an AI-powered feature called “Deception Detector” to its dating app, designed to identify and block fake profiles, scams, and spam, reducing member reports of such issues by 45% during initial testing and supporting a 95% success rate in blocking undesirable accounts. 

    Events

    The Workshop for Junior Scholars on March 11, 2024, at MIT Stata Center, organized by Aniket Kesari and Sarah Scheffler, aims to build a community and provide guidance for early-career individuals in Law and Computer Science. The half-day event includes panels on academic and non-academic careers, mentoring sessions, and discussions on conducting interdisciplinary research, followed by dinner. Registration is available online. It precedes another conference: ACM Symposium on Computer Science and Law (CSLAW 2024).

    There is an open application for a two-year residential postdoctoral program at Harvard Law School aimed at developing scholars early in their careers who have a primary interest in private law, including common law subjects and statutory areas like intellectual property. Selected from recent graduates, academics, and practitioners, Fellows focus on their research, contribute to the Project on the Foundations of Private Law, mentor students, present and attend workshops, help with events, and engage in blogging.

    (Compiled by Student Fellow Rebecca Kahn)

  • PRG News Roundup, January 31, 2024

    News

    Child access and privacy work are at the forefront of issues being addressed politically. On Wednesday, January 31, five CEOs from major tech companies, including Meta’s CEO Mark Zuckerberg and TikTok’s CEO Shou Zi Chew, testified at a Senate hearing about the protection of children from online sexual exploitation as congressional leaders explore how to tackle these issues.

    Additionally, California Attorney General Rob Bonta introduced two bills, one on privacy and one on social media. “The privacy bill, deemed the proposed Children’s Data Privacy Act, aims to amend the California Consumer Privacy Act to tighten youth coverage. The proposed Protecting Youth from Social Media Addiction Act focuses on measures to moderate content and limit luring features or techniques on social media platforms.”

    23andMe’s stock price tumbled as the company faces a class action, filed last week, over a data breach specifically impacting Jewish and Chinese customers.

    Courts have begun to scrutinize the use of AI chatbots in legal briefs, as attorney Jae Lee “reports that she relied on a generative artificial intelligence tool, ChatGPT, to identify precedent that might support her arguments, and did not read or otherwise confirm the validity of the (non-existent) decision she cited.”

    TikTok continues to struggle to prevent the sharing of data with its Chinese parent company. Through these limits on data sharing, TikTok is trying to show U.S. lawmakers that its video-sharing application is a safe form of social media.

    OpenAI removed its blanket prohibition on military use of ChatGPT by deleting the text from its usage policy. While the blanket ban on “military and warfare” uses has been removed, the policy continues to prohibit using the tool “to harm yourself or others” and to “develop or use weapons.”

    Events

    As an organization focused on the intersection of law and artificial intelligence, LunchGPT’s first lunch is planned for Friday February 16 at 12 PM. If you are interested, please reach out to Kevin Fraizer with questions.

    Registration is now open for the 2024 ACM Symposium on Computer Science and Law, which will take place on March 12-13, 2024, at Boston University. The Symposium is a leading venue for cross-disciplinary scholarship at the intersection of computer science and law.

    (Compiled by Student Fellow Molly Pushner)

  • PRG News Roundup, January 24, 2024

    News

    Earlier this month, New Jersey became the thirteenth state to pass a comprehensive privacy bill. The law, which will go into effect January 15, 2025, is said to borrow many features from previously enacted state-level privacy legislation while also bearing several unique characteristics: it covers non-profit organizations as well as for-profit ones, bears a broad definition of “sensitive data,” and does not apply any revenue-based thresholds to organizations within its remit.

    Deepfake phone calls have already factored into 2024 U.S. electoral races: an AI-generated robocall purporting to be from Joe Biden told New Hampshire voters to refrain from voting in the state’s primaries, and an AI-generated audio clip of New York politician Keith Wright criticizing Inez Dickens was disseminated last Sunday.

    Even though New York City has recently passed and begun enforcing a law that requires companies to disclose how AI factors into their hiring decisions, few employers have complied.

    Apple disabled a blood oxygen monitoring feature from two Apple Watch models sold in the U.S. in response to a successful patent litigation challenge from medical technology company Masimo.

    In December, Israel published a draft of an amendment to a 2002 surveillance law that experts worry would allow its domestic security agency, Shin Bet, to secretly search electronic devices and databases using spyware.

    The European Court of Justice (ECJ) issued a preliminary ruling in a landmark case on data protection in credit reporting, affirming the GDPR’s jurisdiction over automated credit reporting and scoring activities.

    France fined Amazon 32 million Euros for violating the GDPR by “using an ‘excessively intrusive system’ to monitor worker performance and activity” within its warehouses.

    Events

    A weekly conversation series focused on the intersection of law and artificial intelligence, entitled LunchGPT, is set to be launched in the coming weeks. If you are interested in participating in this series, navigate here.

    The Cyber Program at Columbia’s School of International and Public Affairs (SIPA) is hosting a social event Tuesday, February 6 from 6:30 to 9pm EST at Arte Café. To RSVP, navigate here.

    The 3rd ACM Computer Science and Law Symposium will be taking place March 12-13, 2024 in Boston. The day before this symposium, there will be a Workshop for Junior Scholars geared toward law students, post-docs, and recently appointed professors. The Workshop “aims to build community, provide advice in navigating interdisciplinary careers, and foster discussion about future research in Computer Science and Law.”

    (Compiled by Student Fellow Cooper Aspegren)

  • PRG News Roundup, November 29, 2023

    Meta’s paid ad-free service, launched in Europe in November, was targeted in an Austrian privacy complaint. The complaint was filed by the digital rights group NOYB with Austria’s Data Protection Authority. The group disagrees with Meta on the concept of consent, arguing that a privacy fee does not guarantee the genuine free will of the user.

    18 countries, including the United States and Britain, unveiled what is described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.” The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

    Meta’s attempt to drag the Federal Trade Commission into federal court over its plans to bar the tech giant from monetizing children’s data was shot down by a judge. This decision delivers the agency a significant victory as it paves the way for the agency to move forward with its sweeping proposed restrictions, which children’s safety advocates say could serve as a template for keeping the tech giants’ privacy practices in check.

    An analysis of the effects of the U.S. Supreme Court decision in Dobbs v. Jackson Women’s Health Organization on fertility indicates that states with abortion bans experienced an average increase in births of 2.3 percent relative to states where abortion was not restricted. The decision sparked the most profound transformation of the landscape of abortion access in 50 years.

    The California Privacy Protection Agency (CPPA) proposed a regulatory framework for Automated Decision Making Technology, which defines important new protections related to businesses’ use of these technologies. The proposed regulations outline how the new privacy protections that Californians voted for in 2020 could be implemented.

    Congressional leaders are discussing a controversial proposal to reauthorize Section 702 surveillance, including by attaching it to the National Defense Authorization Act.

    (Compiled by Student Fellow Júlia Strack) 

  • PRG News Roundup, November 15, 2023

    News

    The European Parliament adopted the final version of the Data Act on November 9, 2023. The Data Act aims to create a new single market for data sharing and grants entities in the public sector access to data held by private companies in certain circumstances of high public interest. The Data Act will reinforce data availability, sharing measures, and portability among EU member states.

    In response to the Biden administration’s executive order on AI governance, the Cybersecurity and Infrastructure Security Agency (CISA) launched a Roadmap for Artificial Intelligence to pursue five lines of effort, in partnership with its parent agency, the Department of Homeland Security. CISA’s roadmap underscored the importance of building risk mitigation into AI/ML systems as a design feature and maintaining a transparent approach via information sharing.

    Clearview’s facial recognition technology has become the Ukrainian government’s “secret weapon” against Russia in its ongoing war. As Ukrainian authorities have come to rely heavily upon this private U.S. tech company for its wartime efforts, their partnership has raised critical questions over the deployment of controversial or invasive technology in an armed conflict as well as the extension of digital privacy rights.

    Meta and YouTube face criminal complaints in Ireland “for alleged unlawful surveillance of EU citizens via tracking scripts.” Alexander Hanff, a privacy consultant and advocate, alleged that Meta’s and YouTube’s tracking and ad-blocking detection code violated Ireland’s computer abuse law.

    Human Rights Watch (HRW) raised concerns over a new vehicle tracking system in Uganda, which allows the government to track the real-time location of all vehicles in the country. HRW has criticized Uganda’s Intelligent Transport Monitoring System (ITMS) as a surveillance mechanism infringing on the rights to privacy, expression, and association.

    Meta, Google, TikTok, and other social media giants are facing a deluge of lawsuits based on the theory of addiction, especially as to children. Judge Yvonne Rogers in Oakland, CA, dismissed some claims while permitting others to proceed. Judge Rogers further rejected the companies’ arguments that they are immune from personal injury claims under the First Amendment and Section 230 of the Communications Decency Act – federal laws invoked by social media platforms to block suits concerning content created and posted by their users.

    Events

    Columbia Law hosted its Accountability and Liability in Generative AI: Challenges and Perspectives symposium on November 17, 2023, featuring a wide range of viewpoints on how civil liability and institutional accountability can address the harms from generative AI.

    (Compiled by Student Fellow Stephanie Shim)