Month: November 2023

  • PRG News Roundup, November 29, 2023

    Meta’s paid ad-free subscription service, launched in Europe in November, was targeted in an Austrian privacy complaint. The complaint was filed by the digital rights group NOYB with Austria’s Data Protection Authority. The group disputes Meta’s concept of consent, arguing that charging a fee for privacy does not guarantee that users’ consent is freely given.

    18 countries, including the United States and Britain, unveiled what is described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.” The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

    Meta’s attempt to drag the Federal Trade Commission into federal court over the agency’s plan to bar the tech giant from monetizing children’s data was rejected by a judge. The decision hands the FTC a significant victory, paving the way for its sweeping proposed restrictions, which children’s safety advocates say could serve as a template for keeping tech giants’ privacy practices in check.

    An analysis of the effects of the U.S. Supreme Court decision in Dobbs v. Jackson Women’s Health Organization on fertility indicates that states with abortion bans experienced an average increase in births of 2.3 percent relative to states where abortion was not restricted. The decision sparked the most profound transformation of the landscape of abortion access in 50 years.

    The California Privacy Protection Agency (CPPA) proposed a regulatory framework for Automated Decision Making Technology, which defines important new protections related to businesses’ use of these technologies. The proposed regulations outline how the new privacy protections that Californians voted for in 2020 could be implemented.

    Congressional leaders are discussing controversial plans to reauthorize the Section 702 surveillance program, including by attaching reauthorization to the National Defense Authorization Act.

    (Compiled by Student Fellow Júlia Strack) 

  • PRG News Roundup, November 15, 2023

    News

    The European Parliament adopted the final version of the Data Act on November 9, 2023. The Data Act aims to create a new single market for data sharing and grants entities in the public sector access to data held by private companies in certain circumstances of high public interest. The Data Act will reinforce data availability, sharing measures, and portability among EU member states.

    In response to the Biden administration’s executive order on AI governance, the Cybersecurity and Infrastructure Security Agency (CISA) launched a Roadmap for Artificial Intelligence to pursue five lines of effort, in partnership with its parent agency, the Department of Homeland Security. CISA’s roadmap underscored the importance of building risk mitigation into AI/ML systems as a design feature and maintaining a transparent approach via information sharing.

    Clearview’s facial recognition technology has become the Ukrainian government’s “secret weapon” against Russia in its ongoing war. As Ukrainian authorities have come to rely heavily upon this private U.S. tech company for its wartime efforts, their partnership has raised critical questions over the deployment of controversial or invasive technology in an armed conflict as well as the extension of digital privacy rights.

    Meta and YouTube face criminal complaints in Ireland “for alleged unlawful surveillance of EU citizens via tracking scripts.” Alexander Hanff, a privacy consultant and advocate, alleged that both Meta’s and YouTube’s tracking and ad-block detection scripts violate Ireland’s computer abuse law.

    Human Rights Watch (HRW) raised concerns over a new vehicle tracking system in Uganda, which allows the government to track the real-time location of all vehicles in the country. HRW has criticized Uganda’s Intelligent Transport Monitoring System (ITMS) as a surveillance mechanism infringing on the rights to privacy, expression, and association.

    Meta, Google, TikTok, and other social media giants are facing a deluge of lawsuits based on the theory of addiction, especially as to children. Judge Yvonne Gonzalez Rogers in Oakland, CA, dismissed some claims while permitting others to proceed. Judge Gonzalez Rogers further rejected the companies’ arguments that they are immune from personal injury claims under the First Amendment and Section 230 of the Communications Decency Act – protections invoked by social media platforms to block suits concerning content created and posted by their users.

    Events

    Columbia Law will host its Accountability and Liability in Generative AI: Challenges and Perspectives symposium on November 17, 2023, featuring a wide range of viewpoints on how civil liability and institutional accountability can address the harms from generative AI.

    (Compiled by Student Fellow Stephanie Shim)

  • PRG News Roundup, October 25, 2023

    News

    The Consumer Financial Protection Bureau (“CFPB”) proposed the Personal Financial Data Rights rule, which would give people a legal right to grant third parties access to data tied to their credit card, checking, prepaid, and digital wallet accounts. The change would let people switch service providers and manage multiple accounts without paying junk fees or submitting to risky methods of data collection.

    The French Data Protection Authority (“CNIL”) published a set of guidelines in the form of AI how-to sheets addressing compliance with personal data regulation, including the GDPR, while developing AI systems. The guidelines are intended to provide greater legal certainty to relevant parties. 

    New attorneys for the Fugees rapper Pras filed a motion for a new trial, arguing that his previous defense attorney was ineffective because the attorney used an “experimental” generative AI program to help write the closing argument, which introduced errors.

    PEW Research published a report on a survey of Americans’ views on data privacy. Key highlights include:

    • American adults are concerned about, and do not understand, how companies and the government use the data they collect; concern has grown particularly among Republican respondents
    • Americans do not trust companies to use AI responsibly and worry that using AI for data collection and analysis will produce unintended consequences and uses people would not be comfortable with
    • Americans feel their privacy choices do not really matter
    • There is bipartisan support for increased regulation of companies’ use of personal data

    The New York Court of Appeals ruled that independent oversight agencies, the Commission on Forensic Sciences and the DNA Subcommittee, had the authority to promulgate a regulation that permits law enforcement to request a familial DNA search of the state DNA Databank — which stores genetic information of New Yorkers convicted of certain felonies — when an initial search results in no match or a partial match.

    (Compiled by Student Fellow Lindsey Schwartz)

  • PRG News Roundup, November 1, 2023

    News

    The Biden-Harris Administration issued a landmark executive order entitled “Safe, Secure, and Trustworthy AI.” The order aims to standardize federal procurement of AI and to lay the groundwork for new standards on AI safety and security. It proposes several key measures, including requiring agencies to work with NIST to develop responsible AI testing frameworks and guidance, requiring developers of powerful AI systems to share their safety test and performance results with the government, requiring agencies to evaluate how commercially available PII is collected (including from data brokers), and directing agencies to investigate civil rights violations and unlawful discrimination enabled by AI tools.

    The European Data Protection Board (EDPB) adopted a final ban on Meta’s data processing for behavioral advertising across EU member states and European Economic Area countries. This decision follows a petition from the Norwegian Data Protection Authority urging the EDPB to extend and make permanent its own previously issued interim ban in Norway. In effect, the EDPB decision clarifies that Meta’s subscription-based consent model does not provide a valid legal basis for its behavioral advertising practices under the GDPR.

    A bipartisan coalition of 42 U.S. attorneys general filed suit against Meta in federal and state courts, claiming that Meta’s business practices violate state consumer protection laws and the federal Children’s Online Privacy Protection Act (COPPA). The suit alleges that Meta knowingly designed and deployed features on Instagram and other social media platforms that purposefully harm children’s mental health, while falsely assuring the public that these features are safe and suitable for young users.

    The U.S. Supreme Court will hear arguments in a series of cases concerning state action and constitutional free speech on social media platforms. The cases will examine whether public officials can constitutionally block their constituents on social media, whether social media content moderation laws originating in Texas and Florida violate the First Amendment, and whether the Biden administration’s and social media companies’ joint efforts to curb misinformation online — particularly regarding the COVID-19 vaccine — constitute censorship by the government.

    The U.S. Securities and Exchange Commission announced charges against SolarWinds Corporation, a Texas-based software company, for defrauding securities investors. The SEC alleges that SolarWinds’ public statements on their website regarding their cybersecurity practices were overstated and at odds with multiple internal assessments, which identified specific and known deficiencies in their cybersecurity practices. 

    The G7 reached an agreement on a set of International Guiding Principles on Artificial Intelligence (AI) and a Code of Conduct for AI developers. The voluntary Guiding Principles are intended to help organizations mitigate the risks and potential misuses of AI systems. The Code of Conduct is intended to provide detailed and practical guidance for developers of AI. Both documents are intended to be living and voluntary, to be updated and reviewed as necessary to stay responsive to developments in AI technology. 

    (Compiled by Student Fellow Jennifer Kim)