OpenAI’s chief people officer, Julia Villagra, has left the company, marking the latest leadership change at the AI pioneer. Villagra, who joined the San Francisco firm in early 2024 and was promoted in March, previously led its human resources operations.
Her responsibilities will temporarily be overseen by chief strategy officer Jason Kwon, while chief applications officer Fidji Simo will lead the search for her successor.
OpenAI said Villagra is stepping away to pursue her personal interest in art, music and storytelling as tools to help people understand the shift towards artificial general intelligence, a stage when machines surpass human performance in most forms of work.
The departure comes as OpenAI navigates a period of intense competition for AI expertise. Microsoft-backed OpenAI is valued at about $300 billion, with a potential share sale set to raise that figure to $500 billion.
The company faces growing rivalry from Meta, where Mark Zuckerberg has reportedly offered $100 million signing bonuses to attract OpenAI talent.
While OpenAI expands, public concerns over the impact of AI on employment continue. A Reuters/Ipsos poll found 71% of Americans fear AI could permanently displace too many workers, despite the unemployment rate standing at 4.2% in July.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has published new estimates on the environmental footprint of Gemini, claiming a single text prompt uses about five drops of water and 0.24 watt-hours of electricity. The company says this equates to 0.03 grams of carbon dioxide emissions.
According to Google, efficiencies have reduced Gemini’s energy consumption and carbon footprint per text prompt by factors of 33 and 44 over the past year. Chief technologist Ben Gomes said the model now delivers higher-quality responses with a significantly lower footprint.
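A quick back-of-the-envelope check is possible from the figures above. The sketch below uses only the 0.24 watt-hour and 0.03 gram values Google reported; the implied carbon intensity it derives is an inference, not a number Google published:

```python
# Google's reported per-prompt figures for a Gemini text prompt
energy_wh = 0.24   # watt-hours of electricity
co2_g = 0.03       # grams of CO2 emissions

# Implied carbon intensity of the electricity powering the model
intensity_g_per_kwh = co2_g / (energy_wh / 1000)
print(f"{intensity_g_per_kwh:.0f} g CO2/kWh")  # prints "125 g CO2/kWh"
```

An implied intensity of roughly 125 g CO2 per kWh sits well below typical grid averages, which would be consistent with Google's claims about cleaner electricity procurement, though the critique below notes that such per-prompt accounting can exclude indirect factors.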
The company argued that these figures are far lower than those suggested in earlier research. However, Shaolei Ren, the author of one of the cited papers, said Google’s comparisons were misleading and incomplete.
Ren noted that Google compared its latest onsite-only water figures against his study’s highest total figures, creating the impression that Gemini was far more efficient. He also said Google omitted indirect water use, such as electricity-related consumption, from its estimates.
The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.
Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.
The legal system of Hong Kong struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.
The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.
Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.
A controversial study that once claimed evidence of elusive Majorana quasiparticles has received a 20-page correction in Science five years after its publication.
The paper, led by researchers in Copenhagen and affiliated with Microsoft, originally described signals from nanowires that were said to match those expected from Majoranas, exotic entities believed to be useful for quantum computing due to their resistance to noise.
Independent investigators concluded that, although the data selection was narrow, no misconduct occurred. The omitted data did not invalidate the main claims, but raised concerns about transparency and selection bias in reporting.
The authors argue the correction merely clarifies their methods. Yet the wider research community remains divided, and no group has successfully replicated the findings. Some experts now see the approach as too fragile for practical use in quantum computing.
South Korea’s new administration has unveiled a five-year economic plan to build what it calls a ‘super-innovation economy’ by integrating AI across all sectors of society.
The strategy, led by President Lee Jae-myung, commits 100 trillion won (approximately US$71.5 billion) to position the country among the world’s top three AI powerhouses. Private firms will drive development, with government support for nationwide adoption.
Plans include a sovereign Korean-language AI model, humanoid robots for logistics and industry, and commercialising autonomous vehicles by 2027. Unmanned ships are targeted for completion by 2030, alongside widespread use of drones in firefighting and aviation.
AI will also be introduced into drug approvals, smart factories, welfare services, and tax administration, with AI-based tax consultations expected by 2026. Education initiatives and a national AI training data cluster will nurture talent and accelerate innovation.
Five domestic firms, including Naver Cloud, SK Telecom, and LG AI Research, will receive state support to build homegrown AI foundation models. Industry reports currently rank South Korea between sixth and 10th in global AI competitiveness.
Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.
The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.
Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.
AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete links.
OpenAI said in a court filing that, according to sworn interrogatory responses, Musk had discussed possible financing arrangements with Meta chief executive Mark Zuckerberg as part of his bid for the company. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.
In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.
Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. A US judge ruled that Musk must face OpenAI’s claims of attempting to harm the company through public remarks and what it described as a sham takeover attempt.
The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.
AI-enabled cameras in Devon and Cornwall have detected 6,000 people failing to wear seat belts over the past year. That figure is 50% higher than the number penalised for using mobile phones while driving, police confirmed.
Road safety experts warn that the long-standing culture of belting up may be fading among newer generations of drivers. Geoff Collins of Acusensus noted a rise in non-compliance and said stronger legal penalties could help reverse the trend.
Current UK law imposes a £100 fine for not wearing a seat belt, with no points added to a driver’s licence. Campaigners now urge the government to make such offences endorsable, potentially adding penalty points and risking licence loss.
A new study in Robot Learning has introduced a robotic system that combines machine learning with decision-making to analyse water samples. The approach enables robots to detect water sources and classify samples as drinkable or not, on Earth and potentially on other planets.
Researchers used a hybrid method that merged the TOPSIS decision-making technique with a Random Forest Classifier trained on the Water Quality and Potability Dataset from Kaggle. By applying data balancing techniques, classification accuracy rose from 69% to 73%.
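The TOPSIS component of the hybrid method can be sketched in a few lines. The implementation below is a generic illustration of the technique, not the authors' code; the sample values and criteria (dissolved oxygen as a benefit, turbidity as a cost) are hypothetical:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    norm = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalise each criterion column
    v = norm * weights                              # apply criterion weights
    # Ideal point takes the max of benefit criteria and min of cost criteria
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                  # closeness score in [0, 1]

# Three hypothetical water samples, two criteria:
# dissolved oxygen (benefit) and turbidity (cost)
samples = np.array([[9.0, 1.0],
                    [5.0, 5.0],
                    [2.0, 9.0]])
scores = topsis(samples,
                weights=np.array([0.5, 0.5]),
                benefit=np.array([True, False]))
print(scores.argmax())  # prints 0: the high-oxygen, low-turbidity sample ranks best
```

In a pipeline like the one described, a ranking of this kind could be combined with the classifier's drinkable/non-drinkable predictions to prioritise which samples to analyse further.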
The robotic prototype includes thrusters, motors, solar power, sensors, and a robotic arm for sample collection. Water samples are tested in real time, with the onboard model classifying them as drinkable or not.
The system has the potential for rapid crisis response, sustainable water management, and planetary exploration, although challenges remain regarding sensor accuracy, data noise, and scalability. Researchers emphasise that further testing is necessary before real-world deployment.
A new MIT study has found that 95% of corporate AI projects fail to deliver returns, mainly due to difficulties integrating them with existing workflows.
The report, ‘The GenAI Divide: State of AI in Business 2025’, examined 300 deployments and interviewed 350 employees. Only 5% of projects generated value, typically when focused on solving a single, clearly defined problem.
Executives often blamed model performance, but researchers pointed to a workforce ‘learning gap’ as the bigger barrier. Many projects faltered because staff were unprepared to adapt processes effectively.
More than half of GenAI budgets were allocated to sales and marketing, yet the most substantial returns came from automating back-office tasks, such as reducing agency costs and streamlining roles.
The study also found that tools purchased from specialised vendors were roughly twice as successful as in-house systems, with success rates of 67% versus 33%.