WSIS+20 spotlights urgent need for global digital skills

The WSIS+20 High-Level Event in Geneva brought together global leaders to address the digital skills gap as one of the most urgent challenges of our time. As moderator Jacek Oko stated, digital technologies are rapidly reshaping work and learning worldwide, and equipping people with the necessary skills has become a matter of equity and economic resilience.

Dr Cosmas Zavazava of ITU emphasised that the real threat is not AI itself but people being displaced by others who know how to use it. ‘Workers risk losing their jobs, not because of AI, but because someone else knows how to use AI-based tools,’ he warned.

He underscored the importance of including informal workers like artisans and farmers in reskilling initiatives. He noted that 2.6 billion people remain offline while many of the 5.8 billion connected lack meaningful digital capabilities.

Costa Rica’s Vice Minister of Telecommunications, Hubert Vargas Picado, shared how the country transformed itself into a regional tech hub by combining widespread internet access with workforce development. ‘Connectivity alone is insufficient,’ he said, advocating for cross-sectoral training systems and targeted scholarships, especially for rural youth and women.

Similarly, Celeste Drake from the ILO pointed to gendered impacts of automation, revealing that administrative roles held mainly by women are most vulnerable. She insisted that upskilling must go hand-in-hand with policies promoting decent work, inclusive social dialogue, and regional equity.

The EU’s Michele Cervone d’Urso acknowledged the bloc’s shortfall in digital specialists and described Europe’s multipronged response, including digital academies and international talent partnerships.

Georgia’s Ekaterine Imedadze shared the success of embedding media literacy in public education and training local ambassadors to support digital inclusion in villages. Meanwhile, Anna Sophie Herken of GIZ warned of ‘massive talent waste’ in the Global South, where highly educated data workers are confined to low-value roles. Herken called for more equitable participation in the global digital economy and local AI innovation.

Private sector voices echoed the need for systemic change. EY’s Gillian Hinde stressed community co-creation and inclusive learning models, noting that only 22% of women pursue AI-related courses.

She outlined EY’s efforts to support neurodiverse learners and validate informal learning through digital badges. India’s Professor Himanshu Rai added a powerful sense of urgency, declaring, ‘AI is not the future. It’s already passing us by.’ He showcased India’s success in scaling low-cost digital access, training 60 million rural citizens, and adapting platforms to local languages and user needs.

His call for ‘compassionate’ policymaking underscored the moral imperative to act inclusively and decisively.

Speakers across sectors agreed that infrastructure without skills development risks widening the digital divide. Targeted interventions, continuous monitoring, and structural reform were repeatedly highlighted as essential.

The event’s parting thought, offered by Jacek Oko, summed up the transformative mindset required: ‘Let AI teach us about AI.’ The road ahead demands urgency, innovation, and collective action to ensure digital transformation uplifts all, especially the most vulnerable.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment, and many other sectors.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.
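
As a quick sanity check, those figures imply annual growth of roughly 14%. A minimal sketch of the arithmetic, assuming straight compounding over the ten-year span:

```python
# Implied compound annual growth rate (CAGR) between the two market estimates.
start, end, years = 4.5, 16.6, 10  # USD billions, 2023 -> 2033

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 13.9% per year
```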

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.
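
To make the face-swapping idea concrete, here is a toy sketch, assuming PyTorch and 64×64 face crops, of the shared-encoder, per-identity-decoder design that classic deepfake tools popularised. All layer sizes and names are illustrative rather than any specific tool’s implementation.

```python
# Toy face-swap autoencoder: one shared (variational) encoder learns a common
# face representation; each identity gets its own decoder. Decoding person A's
# latent with person B's decoder performs the swap.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.mu = nn.Linear(128 * 8 * 8, latent)      # variational heads
        self.logvar = nn.Linear(128 * 8 * 8, latent)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise

class Decoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

face_a = torch.rand(1, 3, 64, 64)             # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))          # A's pose and expression, B's face
print(swapped.shape)                          # torch.Size([1, 3, 64, 64])
```

In training, each decoder learns to reconstruct its own identity from the shared latent space; the swap itself is simply the inference-time trick of routing one person’s encoding through the other person’s decoder.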

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, contaminating the information environment and harming public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’ (manipulated media that do not rely on AI) may explain why the worst predictions have not yet materialised.

Nonetheless, experts warn that the use of AI in campaigns will only grow more sophisticated. Without clear regulation and ethical safeguards, future elections may be far less able to withstand the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Rather than being treated as simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how such videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began dismissing genuine footage as fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.
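
To illustrate the provenance idea, here is a minimal sketch using only the Python standard library and a hypothetical JSON manifest format. Real provenance standards such as C2PA are far richer and rely on asymmetric signatures rather than the shared-key HMAC used below:

```python
# Content-provenance sketch: the creator publishes a signed digest of the media
# file; anyone holding the key can later check that the bytes are unmodified
# and that the signature matches. Hypothetical format, not the C2PA spec.
import hashlib
import hmac
import json

SECRET_KEY = b"creator-signing-key"  # stand-in for a real signing key


def make_manifest(media: bytes, creator: str) -> str:
    digest = hashlib.sha256(media).hexdigest()
    sig = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"creator": creator, "sha256": digest, "sig": sig})


def verify(media: bytes, manifest_json: str) -> bool:
    manifest = json.loads(manifest_json)
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(expected, manifest["sig"])


video = b"...raw media bytes..."
manifest = make_manifest(video, creator="newsroom@example.org")
print(verify(video, manifest))                # True: provenance intact
print(verify(video + b"tamper", manifest))    # False: content was altered
```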

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from real content.

As detection methods lag behind, organisations such as Reality Defender and Witness are working to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their adoption there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust compared to human influencers when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of the circumstances, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman shrugs off Meta poaching, backs Trump, jabs at Musk

OpenAI CEO Sam Altman addressed multiple hot topics during the Sun Valley conference, including Meta’s aggressive recruitment of top AI researchers, his strained relationship with Elon Musk, and a surprising show of support for Donald Trump.

Altman downplayed Meta’s talent raids, saying he had not spoken to Mark Zuckerberg since the Meta CEO lured away three OpenAI researchers with a $100 million signing bonus. All three had worked at OpenAI’s Zurich office, which opened in 2024.

Despite the losses, Altman described the situation as ‘fine’ and ‘good’, suggesting OpenAI’s mission continues to retain top talent.

The OpenAI chief also took a subtle swipe at Meta’s smart glasses, saying he doesn’t like wearable tech and implying his company has no plans to follow suit.

On the topic of Elon Musk, Altman laughed off their rivalry, saying only that Musk has bust-ups with everybody, and hinting at the long-running tension between the two OpenAI co-founders.

Perhaps most notably, Altman expressed disillusionment with the Democratic Party, saying he no longer feels represented by mainstream figures he once supported.

He praised Donald Trump’s focus on AI infrastructure. He even donated $1 million to Trump’s inaugural fund — a gesture reflecting a broader shift among Silicon Valley leaders warming to Trump as his popularity rises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, more broadly reframed as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism and transparency about political biases, including the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Accelerating SDGs through digital innovation: SMEs take center stage at WSIS+20

At the WSIS+20 High-Level Event in Geneva, the session ‘Collaborative Innovation Ecosystem and Digital Transformation’ spotlighted how digital ecosystems can empower small and medium-sized enterprises (SMEs) to drive global progress toward the Sustainable Development Goals (SDGs). Organised by the China Academy of Information and Communication Technology (CAICT) and the International Telecommunication Union (ITU), the event drew experts from governments, industry, and international organisations to strategise on digital solutions for sustainable development.

Dr Cosmas Zavazava of ITU emphasised that SMEs are the heartbeat of global economies, yet many still lack the digital capacity to thrive. Through the ITU Innovation and Entrepreneurship Alliance—comprising over 100 stakeholders and 17 acceleration centres—efforts are underway to provide universal connectivity and foster sustainable digital transformation.

Xiaohui Yu of CAICT echoed this vision, highlighting the crucial role of developing nations in closing the digital gap and announcing CAICT’s expanded role as an ITU acceleration centre dedicated to tech innovation and SME support.

One key milestone from the session was launching a global case collection initiative to identify best practices in ICT-enabled SME transformation. Countries like South Africa and Kenya shared success stories—South Africa’s Digitech platform and foresight-driven policymaking, and Kenya’s Hustler Fund, which digitises SME financing via mobile platforms like M-Pesa while integrating over 20,000 government services. These examples underscore the need for inclusive infrastructure, affordable digital tools, and coherent policies to bridge divides.

The discussion culminated in a unified call for action: build a ‘platform of platforms’ that connects regional innovation efforts, harmonises cross-border policies, and fosters capacity-building to ensure digital transformation reaches even the most marginalised entrepreneurs. As participants agreed, collaboration must move beyond goodwill to coordinated, sustained action if SMEs are to unlock their full potential in achieving the SDGs.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate, one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions: it detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, Agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
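
As a rough illustration of that autonomy, the core of such a system reduces to a detect-assess-respond loop that maps findings to containment actions without waiting for an operator. The sketch below is entirely hypothetical: every rule and function name is illustrative, and a real deployment would integrate with SIEM/EDR platforms and keep humans in the loop for destructive actions.

```python
# Hypothetical agentic detect -> assess -> respond loop for cyber defence.
from dataclasses import dataclass


@dataclass
class Alert:
    host: str
    kind: str       # e.g. "beaconing", "brute_force"
    severity: int   # 1 (low) .. 10 (critical)


def assess(alert: Alert) -> str:
    """Choose an action autonomously from simple policy rules."""
    if alert.kind == "beaconing" and alert.severity >= 8:
        return "isolate_host"
    if alert.kind == "brute_force":
        return "block_source_ip"
    return "open_ticket"  # low confidence: defer to a human analyst


def respond(alert: Alert, action: str) -> None:
    # Stand-in for real EDR or firewall API calls.
    print(f"[{alert.host}] {alert.kind} (severity {alert.severity}) -> {action}")


for alert in (Alert("db-01", "beaconing", 9), Alert("vpn-gw", "brute_force", 5)):
    respond(alert, assess(alert))
```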

A 2025 Deloitte report predicts that 25% of firms using generative AI will pilot Agentic AI this year, and SailPoint found that 98% of organisations plan to expand their use of AI agents within the next 12 months. But rapid adoption also raises concern: 96% of tech workers see AI agents as a security risk.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power come new vulnerabilities, from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US targets Southeast Asia to stop AI chip leaks to China

The US is preparing stricter export controls on high-end Nvidia AI chips destined for Malaysia and Thailand, in a move to block China’s indirect access to advanced GPU hardware.

According to sources cited by Bloomberg, the new restrictions would require exporters to obtain licences before sending AI processors to either country.

The change follows reports that Chinese engineers have hand-carried data to Malaysia for AI training after Singapore began restricting chip re-exports.

Washington suspects Chinese firms are using Southeast Asian intermediaries, including shell companies, to bypass existing export bans on AI chips like Nvidia’s H100.

Although some easing has occurred between the US and China in areas such as ethane and engine components, Washington remains committed to its broader decoupling strategy. The proposed measures will reportedly include safeguards to prevent regional supply chain disruption.

Malaysia’s Trade Minister confirmed earlier this year that the US had requested detailed monitoring of all Nvidia chip shipments into the country.

As the global race for AI dominance intensifies, Washington appears determined to tighten enforcement and limit Beijing’s access to advanced computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung profits slump as US chip ban hits AI exports

Samsung Electronics expects its second-quarter operating profit to fall by more than half, citing Washington’s export controls on advanced AI chips to China.

The company announced a projected 56% year-on-year drop in operating profit, falling to 4.6 trillion won ($3.3 billion), with revenue down 6.5% from the previous quarter.

The semiconductor division, a core part of Samsung’s business, suffered due to reduced utilisation and inventory value adjustments.

US restrictions have made it difficult for South Korea’s largest conglomerate to ship high-end chips to China, forcing some of its production lines to run below capacity.

Despite weak performance in the foundry sector, the memory business remained relatively stable. Analysts pointed to weaker-than-expected sales of HBM chips used for AI and a drop in NAND storage prices, while a declining won-dollar exchange rate further pressured earnings.

Looking ahead, Samsung expects a modest recovery as demand for memory chips, mainly from AI-driven data centres, improves in the year’s second half.

The company is also facing political pressure from Washington, with threats of new tariffs prompting talks between Seoul and the US administration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta hires Apple’s top AI executive amid tech talent war

Apple has lost a key AI executive to Meta, dealing a fresh blow to the tech giant’s internal AI ambitions.

Ruoming Pang, who led Apple’s foundation models team, is joining Meta’s newly formed superintelligence group, according to people familiar with the matter.

Meta reportedly offered Pang a lucrative package worth tens of millions annually, continuing its aggressive hiring streak.

The company, led by Mark Zuckerberg, has already brought in several high-profile AI experts from Scale AI, OpenAI, Anthropic and elsewhere, with Zuckerberg personally involved in recruitment efforts.

Pang’s team at Apple had been responsible for the core language models behind Apple Intelligence and Siri.

However, internal dissatisfaction had been mounting as the company considered shifting to third-party models, including from OpenAI and Anthropic.

That shift, combined with recent leadership changes and reduced responsibilities for Apple’s AI chief John Giannandrea, has weakened morale across the team.

Following Pang’s exit, the team will now be managed by Zhifeng Chen under a new multi-tier structure.

Several engineers are also reportedly planning to leave, raising concerns about Apple’s ability to retain AI talent as Meta increases its investment and influence in the race for advanced AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!