  • PRG News Roundup 3/11/26

    The Electronic Frontier Foundation analyzed the SAFE Act, a new legislative proposal to reform and reauthorize federal surveillance powers under national security law. 

    Meta is facing a class-action lawsuit over privacy concerns with its AI smart glasses. 

    A former DOGE employee is accused of retaining sensitive Social Security information about more than 500 million living and deceased Americans for commercial use.

    AI-assisted coding tools were linked to production outages at Meta this week. 

    From the PRG community: 

    Nina Loshkajian, Technology & Racial Justice Collaborative Fellow at the NYU Law Center on Race, Inequality and the Law, argues for renewed attention to disparate impact protections in the age of AI. 

    Upcoming events: 

    Tomorrow, March 12, the Innovation Policy Colloquium will host Feng Fu, Associate Professor of Mathematics at Dartmouth, for a presentation on steering collective behavior in hybrid human-AI systems.

    (compiled by Nala Sharadjaya)

  • PRG News Roundup 3/4/26

    Join the Engelberg Center on Innovation Law and Policy, the Information Law Institute, and S.T.O.P. (the Surveillance Technology Oversight Project) for the US launch of Albert Fox Cahn’s new book “Move Slow and Upgrade.” Albert will discuss the book, co-authored by Evan Selinger of the Rochester Institute of Technology, which takes a deep dive into some of the most disastrous innovations of recent years while highlighting some of the unsung upgraders pushing real progress each day. The event will take place on Wednesday, March 4, 2026 at NYU School of Law. The book discussion begins at 7:30; a reception follows at 8:15. Register here.

    Following its earlier threats against Anthropic, the DoD has designated the company a “supply chain risk” and told agencies to stop working with it.

    The Dutch telecom company Odio has been hit with a data breach, raising questions about GDPR compliance and what the next steps will be.

    Following its conversations with the Pentagon, OpenAI has also said that it is considering a contract to integrate its technology with NATO’s “unclassified” networks.

    Vietnam’s law regulating AI has entered into force, mirroring the EU law and emphasizing “digital sovereignty,” “domestic AI capacity,” and “national strategic interests.”

  • PRG News Roundup 2/25/26

    Join the Engelberg Center on Innovation Law and Policy, the Information Law Institute, and S.T.O.P. (the Surveillance Technology Oversight Project) for the US launch of Albert Fox Cahn’s new book “Move Slow and Upgrade.” Albert will discuss the book, co-authored by Evan Selinger of the Rochester Institute of Technology, which takes a deep dive into some of the most disastrous innovations of recent years while highlighting some of the unsung upgraders pushing real progress each day. The event will take place on Wednesday, March 4, 2026 at NYU School of Law. The book discussion begins at 7:30; a reception follows at 8:15. Register here.

    U.S. Defense Secretary Pete Hegseth gave AI company Anthropic a Friday deadline to rescind its self-imposed restrictions on military and surveillance uses of its technology or face potential blacklisting and loss of Pentagon contracts, amid legal and policy disputes over AI governance. The Department of Defense also threatened to invoke the Defense Production Act over the dispute.

    A new paper shows that large language models can be used to perform at-scale deanonymization.

    The UK Information Commissioner’s Office fined Reddit £14.5 million under the UK’s Online Safety Act for unlawfully processing the personal data of children under 13.

    US Rep. Lori Trahan announced the release of a bipartisan staff report outlining recommendations to modernize the Privacy Act of 1974. In a case under the Privacy Act, the DC Circuit recently permitted the IRS to share taxpayer data with DHS.

    The U.S. District Court for the Southern District of New York held that documents a defendant created using an artificial intelligence tool and subsequently sent to his attorneys were not protected by the attorney-client privilege or attorney work-product privilege.

  • PRG News Roundup 2/11/26

    Join the Engelberg Center on Innovation Law and Policy, the Information Law Institute, and S.T.O.P. (the Surveillance Technology Oversight Project) for the US launch of Albert Fox Cahn’s new book “Move Slow and Upgrade.” Albert will discuss the book, co-authored by Evan Selinger of the Rochester Institute of Technology, which takes a deep dive into some of the most disastrous innovations of recent years while highlighting some of the unsung upgraders pushing real progress each day. The event will take place on Wednesday, March 4, 2026 at NYU School of Law. The book discussion begins at 7:30; a reception follows at 8:15. Register here.

    The UK has proposed legislation to mandate age verification for VPN use. 

    The French offices of Elon Musk’s X have been raided by the Paris prosecutor’s cyber-crime unit, as part of an investigation into suspected offences including unlawful data extraction and complicity in the possession of child sexual abuse material (CSAM).

    Amazon ran a Super Bowl ad promoting its AI-driven Search Party feature. The ad was met with backlash that extended beyond traditional privacy supporters. 

    Steve Yegge took a deep dive into Anthropic’s organizational and AI development philosophy. Relatedly, the New Yorker published an in-depth piece detailing the limits of Anthropic’s LLM, Claude.

    The reporting explores the epistemic limits of large language models like Claude and the attendant legal and policy questions about transparency, explainability, and the regulation of autonomous systems.

    U.S. Immigration and Customs Enforcement (ICE) is using facial recognition and AI-driven surveillance systems, often integrated with contractors like Palantir, to identify and harass peaceful protesters in Minneapolis.

    Discord updated its terms of service to require biometric or ID-based user verification. This follows a data breach in which thousands of ID images were leaked from its servers.

    (compiled by Anthony Perrins)

  • PRG News Roundup 1/28/26

    As ICE continues to use the facial recognition app Mobile Fortify, the Illinois AG has sued to block its use and prevent other invasions of privacy.

    The Wall Street Journal interviewed a lawyer representing a client who alleges that xAI’s Grok made non-consensual deepfakes of her. The interview features a discussion of the legal pathways available to such clients.

    Child safety cases continue to percolate up through the courts, with several states passing statutes restricting the ages at which minors can join social media, and corresponding litigation from the companies.

    Events coming up:

    NYU’s LPE, ACS, Energy Law Society, and Rights Over Tech are hosting “Under the Hood of AI,” a discussion of the infrastructure and financing undergirding the AI craze. The event will be at 1pm on February 9th. RSVP here.

    Profs. Michal Shur-Ofry & Katherine Strandburg are teaching the Innovation Policy Colloquium this semester at NYU, on Law and Complex Systems. The colloquium meets Thursdays, 4:45-6:45; reach out to Prof. Strandburg for more information. On February 5, the colloquium will host Prof. Albert-László Barabási, a professor of network science.

    Welcome back!

  • PRG News Roundup 11/19/25

    Opinion editors at Scientific American argue that AI deepfakes pose escalating risks to democracy and personal privacy, and point to Denmark’s proposed law granting people rights over their face and voice as a potential model for the US to follow. 

    A new analysis from Georgetown Law’s Institute for Technology Law & Policy explains how existing U.S. consumer protection and privacy laws already apply to AI chatbots designed for kids and teens.

    Bloomberg Law reports that California is finalizing a privacy-law specialization for attorneys. The proposed standards include continuing education requirements (45 hours to qualify for initial certification and 36 hours for recertification), proof of significant engagement in privacy matters, and options to qualify without a written exam if certain thresholds are met. 

    On November 19, the European Commission proposed major reforms to Europe’s GDPR, AI Act, ePrivacy Directive and the Data Act, aiming to simplify digital regulations and encourage AI development. The changes would delay implementation of key parts of the AI Act, and would allow AI companies to use personal data for model training without user consent if in compliance with other GDPR requirements.

    (compiled by Karinna Gerhardt)

  • PRG News Roundup 11/12/25

    The New York Algorithmic Pricing Disclosure Act took effect on November 10, 2025, requiring businesses to display a clear disclosure near prices stating that the price was set by an algorithm using personal customer data.

    New research from the European Broadcasting Union and the BBC has found that four leading chatbots routinely generate flawed summaries of news stories.

    At the 2025 Joint Mathematics Meetings, Meta’s AI Chief Yann LeCun said that even “a house cat has better intelligence than our most advanced AI systems.” He explained the Moravec paradox – “the observation that tasks difficult for humans are relatively easy for computers, while tasks that seem effortless to humans remain extraordinarily challenging for AI.” LeCun reportedly plans to leave Meta to build his own startup.

    The European Commission is expected to unveil the “Digital Omnibus” reform package on November 19, which could roll back the General Data Protection Regulation, the AI Act, and many other privacy-related regulations.

    A new opinion piece in The New York Times discusses whether chatbot conversations should be entitled to legal protections.

    Several journalists offer think pieces on how New York City mayor-elect Zohran Mamdani might reform the surveillance state enforced by the New York Police Department, given his commitment to working with current Police Commissioner Jessica Tisch and his plan to divert some resources into creating a $1B Department of Community Safety.

    (Compiled by Sarah Wang)

  • PRG News Roundup 11/5/25

    Florida’s novel lawsuit against Roku under its privacy law has drawn attention from lawyers and industry insiders.

    A new preprint suggests that AI models are trained to hallucinate because their training rewards confident answers and disincentivizes “I don’t know” responses.

    A new paper examines a rarely discussed layer of the AI industry: the human labor that goes into “collect[ing] and annotat[ing] data, monitor[ing] and maintain[ing] algorithmic systems, keep[ing] data centers running, and min[ing] rare earth minerals—not to mention the artists, translators, writers, and actors whose work fuels so-called generative AI”

    A recent audit found that the continuing budget and staffing cuts at the CFPB have left major data security risks.

    A network of global privacy regulators announced an enforcement sweep into digital services’ use of underage users’ data.

    The Fifth Circuit heard a case, Computer & Comm. Ind. Ass’n v. Paxton, regarding Texas’ law that would require content filtering for minors, although the panel seemed wary of deciding the case directly rather than remanding to the district court.

    A new bill introduced in the Senate, the GUARD Act, would regulate minors’ use of chatbots.

    The FCC will vote later this month to reverse a Biden-era policy that added cybersecurity requirements.

    OpenAI has updated its terms of service to say its models cannot be used to provide legal or medical advice. OpenAI characterized this as “not a new change to our terms.”

    More than a dozen states have filed a motion to submit an amicus brief in Huiskamp v. ZoomInfo Tech. LLC, arguing that selling people’s phone numbers should be treated as commercial speech.

    (compiled by Tobit Glenhaber)

  • PRG News Roundup 10/29/25

    Meta’s new “smart glasses” raise issues similar to those of Google Glass, prompting questions about whether privacy law is equipped to deal with the heightened level of private surveillance they enable.

    The Guardian and +972 report that Israel’s contracts with Amazon and Google provide for “unorthodox ‘controls’” in the deal. A “winking mechanism” requires the companies to secretly divulge, through coded payments, the identities of foreign countries whose law enforcement has requested Israeli data. The contract also limits the companies’ ability to revoke Israel’s access to the cloud platforms even if they find that Israel’s use of the technology violates their terms of service or non-Israeli law.

    Reddit sued Perplexity for scraping data from its website. This follows a similar lawsuit Reddit filed against Anthropic earlier this year.

    DHS has published a final rule requiring that all non-citizens be photographed at all border entries.

    ICE and CBP have been using facial recognition technology in their enforcement raids.

    Character AI has modified its terms of service to bar minors from using its chatbots.

    Contact Clay Venetis, cvenetis@cspi.org, if you are interested in diving into the MTA’s alcohol ad policy change.

    (Compiled by Tobit Glenhaber)

  • PRG News Roundup 10/15/25

    Representatives in the Michigan state legislature have proposed a ban on VPNs as a part of a larger bill that aims to ban online pornography in the state. 

    Mother Jones recently published an article detailing how a secretive surveillance firm called First Wap exploits telecom network loopholes to track, intercept, and surveil phones worldwide—including those of public figures, politicians, and dissidents—often without legal oversight. Lighthouse Reports has also published an investigatory report on First Wap’s activities.

    The U.S. Privacy Consortium, a bipartisan collective of U.S. regulators that collaborates on the implementation and enforcement of their states’ data privacy regimes, recently welcomed the attorneys general of Minnesota and New Hampshire as the group’s newest members.

    CA Governor Gavin Newsom signed a bill that requires social media companies to make canceling an account straightforward and clear, ensures that cancellation triggers full deletion of the user’s personal data, and provides additional data protections for Californians.

    Scouting America, formerly known as the Boy Scouts, announced two new badges that scouts can earn: one in artificial intelligence, and another in cybersecurity.

    Federal law enforcement has arrested a suspect in connection with starting the fire that became the Palisades blaze, which killed 12 people in early 2025. Among the evidence cited is an AI image of a burning city that the suspect allegedly generated with ChatGPT.

    (Compiled by Audrey Kim)