The “Facebook Files,” a series of Wall Street Journal articles based on internal Facebook research reports recently leaked to the paper, has provided a window into Facebook’s understanding of many of the flaws on its platform. Notable revelations include that Facebook knows Instagram use is harmful to a “sizable percentage” of teenage girls (a finding that led Facebook to delay the introduction of Instagram for Kids), that 2018 tweaks to the News Feed algorithm increased engagement but also amplified hate speech and anger, and that Mark Zuckerberg’s personally directed efforts to curb vaccine misinformation on the platform largely failed. In responding to the leaks, Facebook faces a difficult “Snowden revelation” scenario: it must decide whether to release more information about these issues (to show that the WSJ’s data is incomplete) or to withhold it (inviting accusations of hypocrisy). (Link, Podcast, Facebook rebuttal)
The Senate Commerce Committee held a hearing on consumer privacy. The main decision points appear to be whether to expand FTC authority over privacy (possibly by creating a new bureau within the agency and/or increasing its funding) and whether to enact a federal privacy law along the lines of California’s or Colorado’s. (Link, Source)
Amazon released a surveillance robot capable of moving autonomously around a house while taking pictures and video with its onboard security camera. The robot is designed to look friendly, but privacy advocates have been quick to point out troubling implications for anyone who can afford the $999 sticker price. (Link, Link)
YouTube has updated its internal policies on misinformation, becoming more stringent about medical and vaccine misinformation in particular. It will be more proactive in removing content that “falsely alleges that approved vaccines are dangerous and cause chronic health effects, claims that vaccines do not reduce transmission or contraction of disease, or contains misinformation on the substances contained in vaccines.” (Link)
The UK is considering removing or amending Article 22 of the GDPR, which protects people from automated processing by providing a right to human review of automated decisions. This comes after mixed empirical evidence about the success of human review within the GDPR framework. (Link)
An article highlighted the use of refugees and displaced people to label the data behind machine learning systems, often by annotating videos, transcribing audio, or doing similar “clickwork.” Major firms, including Microsoft, Facebook, Amazon, and Tesla, rely substantially on this labor. This appears to be an important and concrete instance of the machine learning pipeline causing real-world harm. (Link)
ICE recently signed a $3.9 million contract for a “rapid” AI-powered facial recognition tool for use at migrant detention facilities. So far, the agency has released only the bare minimum of detail on how the tool will be used; the contract suggests only that it will be deployed for “rapid alternatives to detention enrollments through facial confirmation application.” (Link, Link)
(compiled by Student Fellow Andrew Mather)