Meta brings back Robert Fergus to lead AI lab

Meta Platforms has brought back Robert Fergus to lead its AI research lab, FAIR, which he helped found in 2014 alongside Yann LeCun. After spending five years as a research director at Google’s DeepMind, Fergus returns to replace Joelle Pineau, who steps down on 30 May.

Fergus, who previously spent six years as a research scientist at Facebook, announced his return on LinkedIn, expressing gratitude to Pineau and reaffirming Meta’s long-term commitment to AI.

FAIR, Meta’s Fundamental AI Research division, focuses on areas such as voice translation and image recognition, research that underpins the company’s open-source Llama language models.

The move comes as Meta ramps up its AI investment, with CEO Mark Zuckerberg allocating up to $65 billion in capital spending for 2025 to expand the company’s AI infrastructure.

AI is now deeply integrated into Meta’s services, including Facebook, Instagram, Messenger, WhatsApp, and a new standalone Meta AI app meant to rival OpenAI’s ChatGPT.

By bringing Fergus back instead of appointing an outsider, Meta signals its intent to build on its existing AI legacy while pushing further toward human-level machine intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK police struggle to contain online misinformation

Sir Andy Cooke, His Majesty’s Chief Inspector of Constabulary, has urged that Ofcom be granted stronger powers to swiftly remove harmful online posts, particularly misinformation linked to public unrest. He criticised delays in tackling false content during the 2024 riots, which allowed damaging narratives to spread unchecked.

The UK Online Safety Act, though recently passed, does not permit Ofcom to delete individual posts. Ofcom acknowledged the connection between online posts and the disorder but stated it is responsible for overseeing platforms’ safety systems, not moderating content directly.

Critics argue this leaves a gap in quickly stopping harmful material from spreading. The regulator has faced scrutiny for its perceived lack of action during last summer’s violence. Over 30 people have already been arrested for riot-related posts, with some receiving prison sentences.

Police forces were found to have limited capability to counter online misinformation, according to a new report. Sir Andy stressed the need for improved policing strategies and called for legal changes to deter inflammatory online behaviour.

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using the paid version of ChatGPT to generate ingredient pairings, such as peanuts and wild garlic, that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.

Meta introduces face recognition to help UAE users recover hacked accounts

Meta is introducing facial recognition tools to help UAE users recover hacked accounts on Facebook and Instagram and stop scams that misuse public figures’ images. The technology compares suspicious ads to verified profile photos and removes them automatically if a match is found.

Well-known individuals in the region are automatically enrolled in the programme but can opt out if they choose. A new video selfie feature has also been rolled out to help users regain access to compromised accounts.

This allows identity verification through a short video matched with existing profile photos, offering a faster and more secure alternative to document-based checks.

Meta confirmed that all facial data used for verification is encrypted, deleted immediately after use, and never repurposed.

The company says this is part of a broader effort to fight impersonation scams and protect both public figures and regular users, not just in the UAE but elsewhere too.

Meta’s regional director highlighted the emotional and financial harm such scams can cause, reinforcing the need for proactive defences.

Microsoft says AI now writes nearly a third of its code

Microsoft CEO Satya Nadella revealed that AI now writes between 20% and 30% of the company’s internal code.

He shared this figure during a fireside conversation with Meta CEO Mark Zuckerberg at the recent LlamaCon conference. Nadella added that the share of AI-generated code varies by programming language.

Nadella’s comments came in response to a question from Zuckerberg, who admitted he didn’t know the figure for Meta. Google’s CEO Sundar Pichai recently reported similar figures, saying AI now generates over 30% of Google’s code.

Despite these bold claims, there’s still no industry-wide standard for measuring AI-written code. The ambiguity suggests such figures should be interpreted cautiously. Nevertheless, the trend highlights the growing impact of generative AI on software development.

Amazon launches first Kuiper satellites to challenge Starlink

Amazon has launched the first 27 satellites of its Project Kuiper broadband network into low-Earth orbit, marking a major step in its $10bn plan to deliver global internet coverage and rival Elon Musk’s Starlink.

The satellites were launched aboard a United Launch Alliance Atlas V rocket from Cape Canaveral, Florida, after weather delays earlier this month. They are the first of over 3,200 that Amazon intends to deploy, with the aim of reaching underserved and remote areas around the world.

Project Kuiper, announced in 2019, has been slow to get off the ground. Amazon must deploy at least half its satellite constellation—1,618 units—by mid-2026 to meet US regulatory requirements, though analysts expect the company to seek an extension.

The launch puts Amazon into direct competition with SpaceX, which has already deployed over 8,000 Starlink satellites and serves more than 5 million users across 125 countries.

While SpaceX dominates the sector, Amazon hopes its strengths in cloud computing and consumer devices will give Kuiper an edge.

Jeff Bezos said he expects both Kuiper and Starlink to succeed, citing strong global demand for satellite internet. Kuiper consumer terminals will sell for under $400 and come in various sizes, including one comparable to a Kindle.

Amazon has booked 83 future launches with partners including ULA, Arianespace, and Bezos’s Blue Origin, in what the company calls the largest commercial launch procurement in history.

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content within 48 hours of a victim’s request, instead of allowing it to remain online unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

AI voice hacks put fake Musk and Zuckerberg at crosswalks

Crosswalk buttons in several Californian cities have been hacked to play AI-generated voices impersonating tech moguls Elon Musk and Mark Zuckerberg, delivering bizarre and satirical messages to pedestrians.

The spoof messages, which mock the CEOs with lines like ‘Can we be friends?’ and ‘Cooking our grandparents’ brains with AI slop,’ have been heard in Palo Alto, Redwood City, and Menlo Park.

Palo Alto officials confirmed that 12 intersections were affected and said the audio systems have since been disabled.

While the crosswalk signals themselves remain operational, authorities are investigating how the hack was carried out. Similar issues are being addressed in nearby cities, with local governments moving quickly to secure the compromised systems.

The prank, which uses AI voice cloning, appears to layer these spoofed messages on top of the usual accessibility features rather than replacing them entirely.

Though clearly comedic in intent, the incident has raised concerns about the growing ease with which public systems can be manipulated using generative technologies.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics question how these models learned the formatting of official documents, with some alleging that sensitive datasets may have been used in model training.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.

For more information on these topics, visit diplomacy.edu.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.
