The Biden administration is urging the tech and financial industries to combat the proliferation of AI-generated sexual abuse images, Time reports. Generative AI tools have made it easy to create explicit deepfakes, often targeting women, children, and LGBTQ+ individuals, with little recourse for the victims. The White House is calling for voluntary cooperation from companies to implement measures to stop these nonconsensual images, as no federal legislation currently addresses the issue.
Biden’s chief science adviser, Arati Prabhakar, noted the rapid increase in such abusive content and the need for companies to take responsibility. A document shared with the Associated Press outlines actions for various stakeholders, including AI developers, financial institutions, cloud providers, and app store gatekeepers, to restrict the monetisation and distribution of explicit images, particularly those involving minors. The administration also stressed the importance of stronger enforcement of terms of service and better mechanisms for victims to remove nonconsensual images online.
Why does it matter?
Last summer, major tech companies committed to AI safeguards, followed by an executive order from Biden to ensure AI development prioritises public safety, including measures to detect AI-generated child abuse imagery. However, high-profile incidents, like AI-generated deepfakes of Taylor Swift and the rise of such images in schools, reveal an urgent need for action and the potential insufficiency of voluntary commitments from companies. Recently, Forbes reported that AI-generated images of young girls in provocative outfits are spreading on TikTok and Instagram, drawing inappropriate comments from older men and raising concerns about potential exploitation.
GLAAD, the LGBTQ media advocacy organisation, gave failing grades to most major social media platforms for their handling of safety, privacy, and expression for the LGBTQ community online, as reported by The Hill. In the fourth annual Social Media Safety Index, GLAAD assessed hate, disinformation, anti-LGBTQ tropes, content suppression, AI, data protection, and the link between online hate and real-world harm.
Five of the six leading social media platforms assessed, namely X (formerly Twitter), YouTube, Facebook, Instagram, and Threads, received failing grades, most of them for the third consecutive year. TikTok was the only platform not to receive an F, instead earning a D+ due to improvements in its Anti-Discrimination Ad Policy, which included preventing advertisers from wrongfully targeting or excluding users from content. Meanwhile, Threads received its first F since its launch in 2023, and Facebook and Instagram’s ratings worsened from the previous year.
Why does it matter?
GLAAD uses this index to urge social media leaders to create safer environments for the LGBTQ community, noting a lack of enforcement of current policies in the digital sector and a clear link between online hate and increasing real-world violence and legislative attacks.
A new UN report, UNESCO’s latest Global Education Monitoring (GEM) report, explores how technology affects girls’ education from a gender perspective.
The report celebrates two decades of reduced discrimination against girls but also notes technology’s negative effects on their educational outcomes. It addresses challenges such as online harassment, access disparities in ICT, and the harmful influences of social media on mental health and body image, which can impede academic performance. Additionally, the report sheds light on the gender gap in STEM fields, underscoring the underrepresentation of women in STEM education and careers.
While highlighting that appropriately used social media can enhance girls’ awareness and knowledge of social issues, the GEM team also calls for increased educational investment and stricter digital regulations to promote safer, more inclusive environments for girls worldwide.
Why does it matter?
The report coincided with the International Girls in ICT Day, supported by the ITU, during which the UN Secretary-General emphasised the need for greater support and resources for girls in Information and Communication Technology (ICT), noting that globally, women (65%) have less access to the internet than men (70%). The persistent access gap in ICT and its disproportionately adverse effects on girls, despite years of acknowledgement, suggest a need for a more aggressive approach in policy and resource allocation to truly level the playing field.
The Raisi administration in Iran has allocated millions of dollars towards bolstering the country’s internet infrastructure, focusing on tightening control over information flow and reducing the influence of external media.
This decision, part of a broader financial strategy for the Ministry of Communications and Information Technology, reflects a 25% increase from the previous year’s budget, totalling over IRR 195,830 billion (approximately $300 million). Additionally, over IRR 150,000 billion (over $220 million) in miscellaneous credits have been earmarked to expand the national information network.
The Ministry of Communications and Information Technology’s efforts aim to reduce dependency on the global internet, leading to a more isolated and state-controlled national information network.
Why does it matter?
Popular social media platforms like Instagram and Facebook are blocked in Iran, and the government appears to be tightening internet control. Cloudflare has observed a significant decrease in internet traffic from Iran over the past two years, suggesting a trend of increased control and isolation. However, widespread internet disruptions have sparked discontent, leading the Tehran Chamber of Commerce to call for policy reassessment, citing economic concerns.
Pakistan’s interior ministry confirmed that it had blocked access to the social media platform X (formerly Twitter) around February’s national election due to national security concerns. Despite reports from users experiencing difficulties accessing X since mid-February, the government had not previously acknowledged the shutdown. The interior ministry made this revelation in a written submission to the Islamabad High Court, responding to a petitioner’s plea challenging the ban.
The ministry cited X’s alleged failure to comply with lawful directives and address concerns regarding platform misuse as reasons for imposing the ban. According to the ministry, X was reluctant to resolve these issues, prompting the government’s decision to uphold national security, maintain public order, and preserve the country’s integrity.
Why does it matter?
The temporary ban on X coincided with the 8 February national election, contested by the party of jailed former prime minister Imran Khan, which alleges rigging. Khan’s party heavily relies on social media platforms for communication, especially after facing censorship by traditional media ahead of the polls. Khan, with over 20 million followers on X, remains prominent despite being incarcerated on multiple convictions preceding the election.
The decision to block X was based on confidential reports from Pakistan’s intelligence and security agencies, which indicated nefarious intentions by hostile elements on the platform to create chaos and instability in the country. This move has raised concerns among rights groups and marketing advertisers, with activists arguing that such restrictions hinder democratic accountability and access to real-time information crucial for public discourse and transparency. Marketing consultants also highlight challenges in convincing Pakistani advertisers to use X for brand communications due to governmental restrictions on the platform.
Portugal’s far-right political party, Chega, has initiated legal action against Meta Platforms, the parent company of Facebook, following a 10-year ban imposed on the party’s Facebook account. The reasons behind the ban remain unspecified, raising concerns about potential political censorship across Meta’s platforms.
Led by André Ventura, Chega has gained traction in Portugal with its anti-immigration and anti-establishment rhetoric. Chega has responded by calling the restrictions ‘clearly illegal’ and an act of ‘unspeakable persecution’ in a post on X.
Why does it matter?
Chega’s legal action against Meta Platforms underscores broader issues surrounding content moderation and political speech on social media platforms. The outcome of this case may establish precedents for how such platforms are held accountable for their moderation policies and their impact on political discourse (see Iran’s recent case). However, the lack of transparency regarding the reasons for Chega’s ban raises questions about the fairness and consistency of content moderation practices.
Rights groups are intensifying their calls for restrictions on the US government’s use of facial recognition technology (FRT). The Electronic Frontier Foundation (EFF) has submitted comments to the US Commission on Civil Rights, asserting that FRT lacks the reliability needed for decisions that affect constitutional rights or social benefits and that it poses risks to marginalised communities and privacy. EFF advocates for a ban on government use of FRT and strict limits on private sector use to safeguard against the perceived threats posed by this technology.
Joining EFF, the immigrant advocacy organisation United We Dream and over 30 civil rights partners have also submitted comments to the commission. They highlight concerns that a legal loophole has enabled agencies like ICE and CBP to use facial recognition for extensive surveillance of immigrants and people of colour. The alliance argues that FRT’s algorithmic biases often lead to incorrect identifications, unjust arrests, detentions, and deportations within immigrant communities.
The US Commission on Civil Rights has been conducting hearings with various stakeholders presenting their perspectives on FRT. While rights groups and advocates have raised concerns, government and law enforcement agencies, vendors, and institutions such as NIST have defended the technology. The Department of Justice emphasised its interim facial recognition policy prioritising First Amendment rights, while HUD submitted written testimony in recent weeks.
Why does it matter?
Official data from 2021 reveals that 18 out of 24 federal agencies surveyed were employing facial recognition technology, predominantly for law enforcement and digital access purposes. The ongoing hearings underscore the growing scrutiny of FRT use in government operations and its impact on civil liberties and marginalised communities.
The House of Representatives has approved the reauthorisation of Section 702 of the Foreign Intelligence Surveillance Act (FISA), allowing US intelligence agencies to conduct foreign communications surveillance without a warrant. The bill passed by a vote of 273–147, extending Section 702 beyond its 19 April expiration. The debate over amendments to the bill revealed unexpected alliances, with bipartisan efforts to impose a warrant requirement for surveillance of Americans narrowly defeated.
Speaker Mike Johnson faced challenges securing enough votes for reauthorisation, with former President Trump weighing in against FISA on social media. After earlier failures to advance the bill, a revised version shortened the extension to two years to gain support from reluctant Republicans. The amendment requiring a warrant for accessing Americans’ data did not pass, with concerns raised about privacy and national security implications.
The reauthorisation underscores ongoing debates over privacy rights and national security measures in the United States. Senator Ron Wyden strongly criticised the House bill, expressing concerns about increased government surveillance authority and the lack of oversight in accessing Americans’ communications data.
While some lawmakers argued that the bill expanded surveillance powers, supporters emphasised its role in disrupting activities like fentanyl trafficking. However, the Senate must still vote on the reauthorisation before the 19 April deadline.
UK MPs urge the government to develop a TikTok strategy to tackle misinformation targeting young people. A cross-party committee emphasises the need for the government to adapt to new platforms like TikTok, which have become significant sources of news for the youth. The recommendation is part of a broader report advocating for the use of trusted voices, such as scientists and doctors, to combat conspiracy theories and misinformation spreading on social media.
Data from Ofcom reveals that TikTok is cited as the leading news source for one in 10 individuals in the UK aged 12 to 15, while 71% of 16 to 24-year-olds prefer social media over traditional news websites. TikTok welcomes the suggestion for government engagement on social media platforms, highlighting the rapid evolution of information sources and audience habits in the digital age.
The committee stresses the importance of broadcasters being active on social media to counter disinformation effectively. The government’s ban on TikTok from official electronic devices underscores security concerns, although some departments still utilise the platform. MPs advocate for a more transparent approach from the government, urging it to leverage experts and boost trust by publishing evidence used in policymaking, particularly in areas susceptible to misinformation.
The European Parliament approved the Asylum and Migration Pact, a controversial measure that includes reforms to the EURODAC biometric database and the collection of biometric data from minors. Three and a half years in the making, the document aims to bolster border security and streamline asylum processes.
However, critics fear it may usher in repressive policies and expand biometric surveillance, particularly regarding minors, as it provides for the collection of biometric data from children as young as seven. Despite these concerns, proponents argue it aids family reunification efforts and combats document fraud.
The pact’s complexity has sparked debate over its effectiveness and ethics. While some view it as progress, others see it as a missed opportunity for a more compassionate system. The implications of biometrics and facial recognition technology, which critics warn could grant excessive control over migrants’ movements, are central to the discourse.