UN adopts China-led AI resolution

The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.

The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.

China’s resolution acknowledges the UN’s role in AI capacity-building and calls on Secretary-General António Guterres to report on the unique challenges developing countries face and provide recommendations to address them.

GenAI revolution: Challenges and opportunities for marketing agencies

In the evolving landscape of marketing and advertising, the integration of generative AI presents both promise and challenges, as highlighted in a recent Forrester report. Key obstacles include a lack of AI expertise among agency employees and concerns over job obsolescence. These human factors pose a significant hurdle that the industry must address urgently if it is to fully harness the potential of genAI.

The potential economic impact of genAI on agencies is profound. Seen as a transformative force akin to the advent of smartphones, genAI promises to redefine creativity in marketing by combining data intelligence with human intuition. Agency leaders overwhelmingly recognise it as a disruptive technology, with 77% acknowledging its potential to fundamentally alter business operations. However, the fear of job displacement among employees remains palpable, exacerbated by recent industry disruptions and the rapid automation of white-collar roles.

To mitigate these concerns and fully embrace genAI, there is a pressing need for comprehensive AI literacy and training within agencies. While existing educational programmes and certifications provide a foundation, they are insufficient to meet the demands of integrating AI into everyday creative processes. Investment in reskilling and upskilling initiatives is crucial to empower agency employees to confidently navigate the AI-driven future of marketing and advertising.

Industry stakeholders, including agencies, technology partners, universities, and trade groups, must collaborate to establish robust training frameworks. Such a concerted effort will not only bolster agency capabilities in AI adoption but also ensure that the creative workforce remains agile and competitive in an increasingly AI-centric landscape. By prioritising AI literacy and supporting continuous learning initiatives, agencies can position themselves at the forefront of innovation, delivering enhanced value to clients through AI-powered creativity.

Detroit adopts new rules for the use of facial recognition after settlement

The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology after a legal settlement was reached with Robert Williams, who was wrongfully arrested based on the technology in 2020. Williams was detained for over 30 hours after facial recognition software matched him to surveillance footage of another Black man stealing watches. With the support of the American Civil Liberties Union of Michigan, he filed a complaint in 2020 and then sued in 2021.

So far, Detroit police are responsible for three of the seven reported instances in which facial recognition has led to a wrongful arrest. Detroit’s police chief, James White, has blamed ‘human error’ rather than the software, saying his officers relied too much on the technology.

What does this change concretely?

To combat human error, Detroit police officers will now be trained in the risks of facial recognition in policing. Another change requires that suspects identified by the technology be linked to the crime by other evidence before their photos can be used in lineups. Along with other policy changes, the police department will have to launch an audit of facial recognition searches conducted since 2017, when it first started using the technology.

Despite these incidents, police say facial recognition technology is too useful a tool to be abandoned entirely. According to the head of informatics with Detroit’s crime intelligence unit, Stephen Lamoreaux, the Police Department remains ‘very keen to use technology in a meaningful way for public safety.’
However, some cities, such as San Francisco, have banned its use because of concerns about privacy and racial bias. Microsoft has also said it will not provide its facial recognition software to US police until a national framework for the use of facial recognition, grounded in human rights, is put in place.

Study finds ChatGPT biased against disability in job screening

A recent study from the University of Washington has exposed troubling biases in the use of AI for job application processes. The research found that OpenAI’s chatbot, ChatGPT, showed significant biases against disabled job applicants when used to screen CVs.

The research underscores concerns that, despite being designed to reduce human bias in hiring, existing AI tools may perpetuate biases rather than mitigate them. Many companies rely on AI to streamline and expedite candidate screening, aiming to enhance recruitment efficiency.

Lead author Kate Glazko pointed out that ChatGPT’s biases can adversely affect how disabled jobseekers’ qualifications are perceived. Descriptions generated by ChatGPT tended to overshadow entire resumes based on disability-related content, potentially undermining the comprehensive evaluation of candidates.

Shari Trewin, Program Director of the IBM Accessibility Team, noted that AI systems, which typically rely on established norms, may inherently disadvantage individuals with disabilities. Addressing these biases requires implementing specific rules within AI systems to ensure fair treatment, as suggested by Glazko’s study advocating for AI to adopt principles aligned with Disability Justice values.

Why does it matter?

The study also calls for further efforts to mitigate AI biases and promote a more inclusive approach to technology development. It highlights the need for greater awareness and vigilance in using AI for sensitive real-world tasks like job recruitment, where fairness and equity are paramount concerns.

AI protections included in new Hollywood workers’ contracts

The International Alliance of Theatrical Stage Employees (IATSE) has reached a tentative three-year agreement with major Hollywood studios, including Disney and Netflix. The deal promises significant pay hikes and protections against the misuse of AI, addressing key concerns of the workforce.

Under the terms of the agreement, IATSE members, such as lighting technicians and costume designers, will receive pay raises of 7%, 4%, and 3.5% over the three-year period. These increases mark a substantial improvement in compensation for the crew members who are vital to film and television production.

A crucial element of the deal is the inclusion of language that prevents employees from being required to provide AI prompts if it could result in job displacement. The provision aims to safeguard jobs against the potential threats posed by AI technologies in the industry.

The new agreement comes on the heels of a similar labour deal reached in late 2023 between the SAG-AFTRA actors’ union and the studios. That contract, which ended a nearly six-month production halt, provided substantial pay raises, streaming bonuses, and AI protections, amounting to over $1 billion in benefits over three years.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

UNESCO warns of AI’s role in distorting Holocaust history

A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation, as many AI systems are trained on internet data that includes harmful content. Such content has led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.

The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.

UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.

Former Meta engineer sues over Gaza post suppression

A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court for discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating the use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.

Why does it matter?

The lawsuit reflects ongoing criticisms by human rights groups of Meta’s content moderation regarding Israel and the Palestinian territories. These concerns were amplified following the conflict that erupted in Gaza after a Hamas attack on Israel and Israel’s subsequent offensive.

Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.

Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.

New York to require parental consent for social media access

New York lawmakers are preparing to bar social media companies from using algorithmic feeds to serve content to minors without parental consent. The legal initiative, expected to be voted on this week, aims to protect minors from automated feeds and from notifications during overnight hours unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive nature and their impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups, calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

Earlier this month, a public statement from Meta employees followed an internal petition in 2023 with over 450 signatures, whose author faced an HR investigation for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.

OpenAI’s use of Scarlett Johansson’s voice faces Hollywood backlash

OpenAI’s use of Scarlett Johansson’s voice likeness in its AI chatbot, ChatGPT, has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right to publicity.

Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.

The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.

Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.