Microsoft reveals VALL-E 2 AI, achieving human-like speech

Microsoft has made a significant leap forward in AI speech generation with its VALL-E 2 text-to-speech (TTS) system. VALL-E 2 achieves human parity, meaning it can produce voices indistinguishable from real people. The system needs only a few seconds of audio to learn and mimic a speaker’s voice.

Tests on speech datasets like LibriSpeech and VCTK showed that VALL-E 2’s voice quality matches or even surpasses human quality. Features like ‘Repetition Aware Sampling’ and ‘Grouped Code Modeling’ allow the system to handle complex sentences and repetitive phrases naturally, ensuring smooth and realistic speech output.
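The idea behind Repetition Aware Sampling, as described in Microsoft’s paper, is that the decoder normally uses nucleus (top-p) sampling, but falls back to random sampling from the full distribution whenever the chosen token keeps recurring in the recent decoding history, which stops the output from getting stuck in loops. A minimal sketch of that idea (function and parameter names are illustrative, not Microsoft’s code):

```python
import numpy as np

def repetition_aware_sample(probs, history, window=10, threshold=3,
                            top_p=0.9, rng=None):
    """Sample the next codec token, falling back from nucleus sampling to
    random sampling when the chosen token repeats too often recently.

    Illustrative sketch only; parameter names are assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalise.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]
    p = probs[keep] / probs[keep].sum()
    token = int(rng.choice(keep, p=p))
    # If this token already repeats heavily in the recent window,
    # resample from the full distribution to break the loop.
    if list(history[-window:]).count(token) >= threshold:
        token = int(rng.choice(len(probs), p=probs))
    return token
```

The window size and repetition threshold would be tuned in practice; the point is only the conditional fallback between the two sampling strategies.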

Despite releasing audio samples, Microsoft considers VALL-E 2 too advanced for public release due to potential misuse like voice spoofing. This cautious approach aligns with the wider industry’s concerns, as seen with OpenAI’s restrictions on its voice technology.

While VALL-E 2 represents a significant breakthrough, it remains a research project for now. The development of AI continues apace, with companies striving to balance innovation with ethical considerations.

GSMA announces global effort to improve smartphone access

The GSMA has announced the formation of a global coalition to make smartphones more accessible and affordable for some of the world’s poorest populations. The coalition will include mobile operators, vendors, and significant institutions such as the World Bank Group, the United Nations’ ITU agency, and the WEF Edison Alliance.

The group aims to reduce the barriers to entering the digital economy for low-income populations, particularly in Sub-Saharan Africa and South Asia. The GSMA highlighted that handset affordability is the most significant obstacle preventing people from going online.

In many low and middle-income countries, mobile phones are often the only means of accessing the internet. Currently, 38% of the global population cannot use mobile internet due to high costs and lack of skills. The coalition will work together to improve access to affordable internet-enabled devices, aiming to close the ‘Usage Gap’ that hinders around three billion people from fully participating in the global digital economy.

Healthcare experts demand transparency in AI use

Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.

The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.

To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.

French study uncovers Russian disinformation tactics amid legislative campaign

Russian disinformation campaigns are targeting social media to destabilise France’s political scene during its legislative campaign, according to a study by the French National Centre for Scientific Research (CNRS). The study highlights Kremlin strategies such as normalising far-right ideologies and weakening the ‘Republican front’ that opposes the far-right Rassemblement National (RN).

Researchers noted that Russia’s influence tactics, including astroturfing and meme wars, have been used previously during the 2016 US presidential elections and the 2022 French presidential elections to support RN figurehead Marine Le Pen. The Kremlin’s current efforts aim to exploit ongoing global conflicts, such as the Israeli-Palestinian conflict, to influence French political dynamics.

Despite these findings, the actual impact of these disinformation campaigns remains uncertain. Some experts argue that while such interference may sway voter behaviour or amplify tensions, the overall effect is limited. The CNRS study focused on activity on X (formerly Twitter) and acknowledged that further research is needed to understand the broader implications of these digital disruptions.

Microsoft settles California leave discrimination case for $14.4 million

Microsoft will pay $14.4 million to settle a discrimination case alleging that the company illegally penalised workers who took medical and family care leave. The settlement, pending a judge’s approval, will conclude a lengthy investigation by California’s Civil Rights Department, and the money will go to the affected workers.

The California Civil Rights Department had filed accusations in state court against the tech giant, claiming that since 2017 the company had been unfairly penalising its California employees for taking parental, disability, pregnancy, and family-care leave by withholding raises, promotions, or stock awards. According to the department, many of the affected workers were women and people with disabilities, who received lower performance reviews that impacted their overall career growth.

Microsoft, however, stated that it did nothing wrong and disagreed with the accusations. Nonetheless, alongside the $14.4 million settlement, Microsoft has agreed to bring in an independent consultant to ensure its policies are fair to employees taking leave. The consultant will also ensure that workers can voice their concerns without any repercussions. Additionally, Microsoft will train managers and HR staff to prevent future workplace violations of employment rights.

Meta responds to photo tagging issues with new AI labels

Meta has announced a significant update to its AI labels across its platforms, replacing the ‘Made with AI’ tag with ‘AI info’. The change comes after widespread complaints about photos being tagged incorrectly. For instance, a historical photograph captured on film four decades ago was mistakenly labelled AI-generated after being edited with basic tools like Adobe’s cropping feature.

Kate McLaughlin, a spokesperson for Meta, emphasised that the company is continuously refining its AI products and collaborating closely with industry partners on AI labelling standards. The new ‘AI info’ label aims to clarify that content may have been modified with AI tools rather than solely created by AI.

The issue primarily stems from how editing tools like Adobe Photoshop embed metadata in images, and how platforms then interpret that metadata. Following the expansion of Meta’s AI content labelling policies, everyday photos shared on its platforms, such as Instagram and Facebook, were erroneously tagged as ‘Made with AI’.
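The metadata in question is the IPTC ‘digital source type’ that C2PA-aware tools write into an image’s embedded XMP, using standard values such as trainedAlgorithmicMedia (fully AI-generated) and compositeWithTrainedAlgorithmicMedia (AI-assisted edits like Generative Fill). A crude, illustrative check, nothing like Meta’s actual pipeline, that simply scans a file for those markers:

```python
# Illustrative only: real parsers read the XMP packet properly rather
# than scanning raw bytes, but the markers below are the standard IPTC
# digital-source-type values for AI-generated or AI-edited media.
AI_SOURCE_MARKERS = (
    b"trainedAlgorithmicMedia",               # fully AI-generated
    b"compositeWithTrainedAlgorithmicMedia",  # AI-assisted edits
)

def looks_ai_touched(path: str) -> bool:
    """Return True if the file's embedded metadata mentions an
    AI digital-source-type value."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_SOURCE_MARKERS)
```

Full C2PA verification also validates a cryptographically signed manifest; byte-scanning is only a heuristic to show which metadata values are in play, and why a routine Photoshop edit could trip an over-broad label.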

Initially, the updated labelling will roll out on mobile apps before extending to web platforms. Clicking on the ‘AI info’ tag will display a message similar to the previous label, explaining why it was applied and acknowledging the use of AI-powered editing tools like Generative Fill. Despite advancements in metadata tagging technology like C2PA, distinguishing between AI-generated and authentic images remains a work in progress.

UN adopts China-led AI resolution

The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.

The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.

China’s resolution acknowledges the UN’s role in AI capacity-building and calls on Secretary-General Antonio Guterres to report on the unique challenges developing countries face and provide recommendations to address them.

GenAI revolution: Challenges and opportunities for marketing agencies

In the evolving landscape of marketing and advertising, the integration of generative AI presents both promise and challenges, as highlighted in a recent Forrester report. Key obstacles include a lack of AI expertise among agency employees and concerns over job obsolescence. The human factor, in particular, poses a significant hurdle that the industry must address urgently to fully harness the potential of genAI.

The potential economic impact of genAI on agencies is profound. Seen as a transformative force akin to the advent of smartphones, genAI promises to redefine creativity in marketing by combining data intelligence with human intuition. Agency leaders overwhelmingly recognise it as a disruptive technology, with 77% acknowledging its potential to fundamentally alter business operations. However, the fear of job displacement among employees remains palpable, exacerbated by recent industry disruptions and the rapid automation of white-collar roles.

To mitigate these concerns and fully embrace genAI, there is a pressing need for comprehensive AI literacy and training within agencies. While existing educational programmes and certifications provide a foundation, they are insufficient to meet the demands of integrating AI into everyday creative processes. Investment in reskilling and upskilling initiatives is crucial to empower agency employees to confidently navigate the AI-driven future of marketing and advertising.

Industry stakeholders, including agencies, technology partners, universities, and trade groups, must collaborate to establish robust training frameworks. Such a concerted effort will not only bolster agency capabilities in AI adoption but also ensure that the creative workforce remains agile and competitive in an increasingly AI-centric landscape. By prioritising AI literacy and supporting continuous learning initiatives, agencies can position themselves at the forefront of innovation, delivering enhanced value to clients through AI-powered creativity.

Detroit adopts new rules for the use of facial recognition after settlement

The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology after a legal settlement with Robert Williams, who was wrongfully arrested based on the technology in 2020. Williams was detained for over 30 hours after the software matched him to surveillance video of another Black man stealing watches. With the support of the American Civil Liberties Union of Michigan, he filed a complaint in 2020 and then sued in 2021.

So far, Detroit police are responsible for three of the seven reported instances in which the use of facial recognition has led to a wrongful arrest. Detroit’s police chief, James White, has blamed ‘human error’ rather than the software, saying his officers relied too much on the technology.

What does this change concretely?

To combat human error, Detroit police officers will now be trained in the risks of facial recognition in policing. In addition, suspects identified by the technology must be linked to the crime by other evidence before their photos can be used in lineups. Along with other policy changes, the police department will have to launch an audit of its facial recognition searches going back to 2017, when it first started using the technology.

Despite this incident, police say facial recognition technology is too useful a tool to be abandoned entirely. According to Stephen Lamoreaux, head of informatics with Detroit’s crime intelligence unit, the Police Department remains ‘very keen to use technology in a meaningful way for public safety.’

However, some cities, such as San Francisco, have banned its use over concerns about privacy and racial bias. Microsoft has also said it will not provide its facial recognition software to US police until a national framework for the use of facial recognition, grounded in human rights, is put in place.

Study finds ChatGPT biased against disability in job screening

A recent study from the University of Washington has exposed troubling biases in the use of AI for job application processes. The research specifically found that OpenAI’s chatbot, ChatGPT, showed significant biases against disabled job applicants when used to screen CVs.

The research underscores concerns that existing AI tools, despite being designed to reduce human bias in hiring, may perpetuate biases rather than mitigate them. Many companies rely on AI to streamline and expedite candidate screening, aiming to enhance recruitment efficiency.

Lead author Kate Glazko pointed out that ChatGPT’s biases can adversely affect how disabled jobseekers’ qualifications are perceived. Summaries generated by ChatGPT tended to let disability-related content overshadow the rest of a CV, potentially undermining the comprehensive evaluation of candidates.

Shari Trewin, Program Director of the IBM Accessibility Team, noted that AI systems, which typically rely on established norms, may inherently disadvantage individuals with disabilities. Addressing these biases requires implementing specific rules within AI systems to ensure fair treatment, as suggested by Glazko’s study advocating for AI to adopt principles aligned with Disability Justice values.

Why does it matter?

The study also calls for further efforts to mitigate AI biases and promote a more inclusive approach to technology development. It highlights the need for greater awareness and vigilance in using AI for sensitive real-world tasks like job recruitment, where fairness and equity are paramount concerns.