Month: November 2025

  • PRG News Roundup 11/19/25

    Opinion editors at Scientific American argue that AI deepfakes pose escalating risks to democracy and personal privacy, and point to Denmark’s proposed law granting people rights over their face and voice as a potential model for the US to follow. 

    A new analysis from Georgetown Law’s Institute for Technology Law & Policy explains how existing U.S. consumer protection and privacy laws already apply to AI chatbots designed for kids and teens.

    Bloomberg Law reports that California is finalizing a privacy-law specialization for attorneys. The proposed standards include continuing education requirements (45 hours to qualify for initial certification and 36 hours for recertification), proof of significant engagement in privacy matters, and options to qualify without a written exam if certain thresholds are met. 

    On November 19, the European Commission proposed major reforms to Europe’s GDPR, AI Act, ePrivacy Directive and the Data Act, aiming to simplify digital regulations and encourage AI development. The changes would delay implementation of key parts of the AI Act, and would allow AI companies to use personal data for model training without user consent if in compliance with other GDPR requirements.

    (Compiled by Karinna Gerhardt)

  • PRG News Roundup 11/12/25


    The New York Algorithmic Pricing Disclosure Act took effect on November 10, 2025, requiring businesses to display a clear disclosure near prices stating that the price was set by an algorithm using personal customer data.

    New research from the European Broadcasting Union and the BBC has found that four leading chatbots routinely generate flawed summaries of news stories.

    At the 2025 Joint Mathematics Meetings, Meta’s chief AI scientist Yann LeCun said that even “a house cat has better intelligence than our most advanced AI systems.” He invoked Moravec’s paradox – “the observation that tasks difficult for humans are relatively easy for computers, while tasks that seem effortless to humans remain extraordinarily challenging for AI.” LeCun reportedly plans to leave Meta to launch his own startup.

    The European Commission is expected to unveil the “Digital Omnibus” reform package on November 19, which could roll back provisions of the General Data Protection Regulation, the AI Act, and other privacy-related regulations.

    A new opinion piece in The New York Times discusses whether chatbot conversations should be entitled to legal protections.

    Several journalists offer think pieces on how New York City mayor-elect Zohran Mamdani might rein in the New York Police Department’s surveillance apparatus, given his commitment to working with current Police Commissioner Jessica Tisch and his plan to divert some resources into creating a $1B Department of Community Safety.

    (Compiled by Sarah Wang)

  • PRG News Roundup 11/5/25

    Florida’s novel lawsuit against Roku under the state’s privacy law has drawn attention from lawyers and industry insiders.

    A new preprint suggests that AI models are effectively trained to hallucinate: their training rewards confident answers and disincentivizes “I don’t know” responses.

    A new paper examines a layer of the AI industry that is frequently overlooked: the human labor that goes into “collect[ing] and annotat[ing] data, monitor[ing] and maintain[ing] algorithmic systems, keep[ing] data centers running, and min[ing] rare earth minerals—not to mention the artists, translators, writers, and actors whose work fuels so-called generative AI.”

    A recent audit found that continuing budget and staffing cuts at the CFPB have created major data security risks.

    A network of global privacy regulators announced an enforcement sweep into digital services’ use of underage users’ data.

    The Fifth Circuit heard argument in Computer & Comm. Ind. Ass’n v. Paxton, a challenge to Texas’s law requiring content filtering for minors, though the panel seemed wary of deciding the case directly rather than remanding it to the district court.

    A new bill introduced in the Senate, the GUARD Act, would regulate minors’ use of chatbots.

    The FCC will vote later this month to reverse a Biden-era policy that imposed cybersecurity requirements on telecommunications carriers.

    OpenAI has updated its terms of service to state that its models cannot be used to provide legal or medical advice, though the company described this as “not a new change to our terms.”

    More than a dozen states have filed a motion to submit an amicus brief in Huiskamp v. ZoomInfo Tech. LLC, arguing that selling people’s phone numbers should be treated as commercial speech.

    (Compiled by Tobit Glenhaber)

  • PRG News Roundup 10/29/25

    Meta’s new “smart glasses” raise issues similar to those posed by Google Glass, prompting questions about whether privacy law is equipped to deal with the heightened level of private surveillance they enable.

    The Guardian and +972 report that Israel’s cloud contracts with Amazon and Google include “unorthodox ‘controls.’” A so-called “winking mechanism” requires the companies to secretly signal, via coded payments, the identity of foreign countries whose law enforcement has requested Israeli data. The contracts also limit the companies’ ability to revoke Israel’s access to their cloud platforms, even if they find that Israel’s use of the technology violates their terms of service or non-Israeli law.

    Reddit has sued Perplexity for scraping data from its website, following a similar lawsuit Reddit filed against Anthropic earlier this year.

    DHS has published a final rule requiring that all non-citizens be photographed at all border entries.

    ICE and CBP have been using facial recognition technology in their enforcement raids.

    Character.AI has modified its terms of service to bar minors from using its chatbots.

    Contact Clay Venetis, cvenetis@cspi.org, if you are interested in diving into the MTA’s alcohol ad policy change.

    (Compiled by Tobit Glenhaber)