AI protections included in new Hollywood workers’ contracts

The International Alliance of Theatrical Stage Employees (IATSE) has reached a tentative three-year agreement with major Hollywood studios, including Disney and Netflix. The deal promises significant pay hikes and protections against the misuse of AI, addressing key concerns of the workforce.

Under the terms of the agreement, IATSE members, such as lighting technicians and costume designers, will receive pay raises of 7%, 4%, and 3.5% over the three-year period. These increases mark a substantial improvement in compensation for the crew members who are vital to film and television production.

A crucial element of the deal is language stipulating that employees cannot be required to provide AI prompts if doing so could result in job displacement. The provision aims to safeguard jobs against the potential threats posed by AI technologies in the industry.

The new agreement comes on the heels of a similar labor deal reached in late 2023 between the SAG-AFTRA actors’ union and the studios. That contract, which ended a nearly six-month production halt, provided substantial pay raises, streaming bonuses, and AI protections, amounting to over $1 billion in benefits over three years.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

UNESCO warns of AI’s role in distorting Holocaust history

A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on Generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation because many AI systems are trained on internet data that includes harmful content. Such content has already led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.

The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.

UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.

Former Meta engineer sues over Gaza post suppression

A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court for discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating the use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.

Why does it matter?

The lawsuit reflects ongoing criticism by human rights groups of Meta’s content moderation regarding Israel and the Palestinian territories. These concerns were amplified following the conflict that erupted in Gaza after a Hamas attack on Israel and Israel’s subsequent offensive.

Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.

Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.

New York to require parental consent for social media access

New York lawmakers are preparing to bar social media companies from serving algorithmically curated content to minors without parental consent. The legislation, expected to be voted on this week, would also restrict notifications to minors during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and its impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

Earlier this month, a public statement from Meta employees followed a 2023 internal petition with over 450 signatures, whose author faced an HR investigation for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.

OpenAI’s use of Scarlett Johansson’s voice faces Hollywood backlash

OpenAI’s use of Scarlett Johansson’s voice likeness in its AI model, ChatGPT, has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right to publicity.

Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.

The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.

Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.

Biden administration urges action against AI-generated sexual abuse images

The Biden administration is urging the tech and financial industries to combat the proliferation of AI-generated sexual abuse images, Time reports. Generative AI tools have made it easy to create explicit deepfakes, often targeting women, children, and LGBTQ+ individuals, with little recourse for the victims. The White House is calling for voluntary cooperation from companies to implement measures to stop these nonconsensual images, as no federal legislation currently addresses the issue.

Biden’s chief science adviser, Arati Prabhakar, noted the rapid increase in such abusive content and the need for companies to take responsibility. A document shared with the Associated Press outlines actions for various stakeholders, including AI developers, financial institutions, cloud providers, and app store gatekeepers, to restrict the monetisation and distribution of explicit images, particularly those involving minors. The administration also stressed the importance of stronger enforcement of terms of service and better mechanisms for victims to remove nonconsensual images online.

Why does it matter?

Last summer, major tech companies committed to AI safeguards, followed by an executive order from Biden to ensure AI development prioritises public safety, including measures to detect AI-generated child abuse imagery. However, high-profile incidents, like AI-generated deepfakes of Taylor Swift and the rise of such images in schools, reveal an urgent need for action and the potential insufficiency of voluntary commitments from companies. Recently, Forbes reported that AI-generated images of young girls in provocative outfits are spreading on TikTok and Instagram, drawing inappropriate comments from older men and raising concerns about potential exploitation.

GLAAD report: major social media platforms fail LGBTQ safety standards

GLAAD, the LGBTQ media advocacy organisation, gave failing grades to most major social media platforms for their handling of safety, privacy, and expression for the LGBTQ community online, as reported by The Hill. In the fourth annual Social Media Safety Index, GLAAD assessed hate, disinformation, anti-LGBTQ tropes, content suppression, AI, data protection, and the link between online hate and real-world harm.

Five of the six leading social media platforms assessed, X (formerly Twitter), YouTube, Facebook, Instagram, and Threads, received failing grades, most of them for the third consecutive year. TikTok was the only platform not to receive an F, instead earning a D+ due to improvements in its Anti-Discrimination Ad Policy, which now prevents advertisers from wrongfully targeting or excluding users. Meanwhile, Threads received its first F since its launch in 2023, and Facebook and Instagram’s ratings worsened from the previous year.

Why does it matter?

GLAAD uses this index to urge social media leaders to create safer environments for the LGBTQ community, noting a lack of enforcement of current policies in the digital sector and a clear link between online hate and increasing real-world violence and legislative attacks.

UNESCO report reveals technology’s mixed impact on girls’ education

A new UN report from UNESCO’s Global Education Monitoring (GEM) team explores how technology affects girls’ education from a gender perspective.

The report celebrates two decades of reduced discrimination against girls but also notes technology’s negative effects on their educational outcomes. It addresses challenges such as online harassment, access disparities in ICT, and the harmful influences of social media on mental health and body image, which can impede academic performance. Additionally, the report sheds light on the gender gap in STEM fields, underscoring the underrepresentation of women in STEM education and careers.

While highlighting that appropriately used social media can enhance girls’ awareness and knowledge of social issues, the GEM team also calls for increased educational investment and stricter digital regulations to promote safer, more inclusive environments for girls worldwide.

Why does it matter?

The report coincided with International Girls in ICT Day, supported by the ITU, during which the UN Secretary-General emphasised the need for greater support and resources for girls in Information and Communication Technology (ICT), noting that globally women have less access to the internet than men (65% of women use the internet, compared with 70% of men). The persistent access gap in ICT and its disproportionately adverse effects on girls, despite years of acknowledgement, suggest a need for a more aggressive approach to policy and resource allocation to truly level the playing field.

Iran allocates funds to expand state-controlled internet infrastructure

The Raisi administration in Iran has allocated hundreds of millions of dollars towards bolstering the country’s internet infrastructure, focusing on tightening control over information flow and reducing the influence of external media.

This decision, part of a broader financial strategy for the Ministry of Communications and Information Technology, reflects a 25% increase from the previous year’s budget, totalling over IRR 195,830 billion (approximately $300 million). Additionally, over IRR 150,000 billion (over $220 million) in miscellaneous credits have been earmarked to expand the national information network.

The Ministry of Communications and Information Technology’s efforts aim to reduce dependency on the global internet, leading to a more isolated and state-controlled national information network.

Why does it matter?

Popular social media platforms like Instagram and Facebook are blocked in Iran, and the government appears to be tightening internet control. Cloudflare has observed a significant decrease in internet traffic from Iran over the past two years, suggesting a trend of increased control and isolation. However, widespread internet disruptions have sparked discontent, leading the Tehran Chamber of Commerce to call for policy reassessment, citing economic concerns.