Meta offers $100M bonuses to poach OpenAI talent but Altman defends mission-driven culture

Meta has reportedly attempted to lure top talent from OpenAI with signing bonuses exceeding $100 million, according to OpenAI’s CEO Sam Altman.

Speaking on a podcast hosted by his brother, Jack Altman, he revealed that Meta has offered extremely high compensation to key OpenAI staff, yet none have accepted the offers.

Meta CEO Mark Zuckerberg is said to be directly involved in recruiting for a new ‘superintelligence’ team as part of the latest AI push.

The tech giant recently announced a $14.3 billion investment in Scale AI and brought Scale’s CEO, Alexandr Wang, on board. Altman believes Meta sees ChatGPT not only as a competitor to Google but also as a potential rival to Facebook in the contest for user attention.

Altman questioned whether such high-compensation strategies foster the right environment, suggesting that culture cannot be built on upfront financial incentives alone.

He stressed that OpenAI prefers aligning rewards with its mission instead of offering massive pay packets. In his view, sustainable innovation stems from purpose, not payouts.

While recognising Meta’s persistence in the AI race, Altman suggested that the company will likely try again if the current effort fails. He highlighted a cultural difference, saying OpenAI has built a team focused on consistent innovation — something he believes Meta still struggles to understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake technology fuels new harassment risks

The growing availability of AI-generated media is reshaping workplace harassment, with deepfakes used to impersonate colleagues and circulate fabricated explicit content in the US. Recent studies found that almost all deepfakes circulating by 2023 were sexually explicit, often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation such as the TAKE IT DOWN Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Oxford physicists set new qubit accuracy record

Physicists at the University of Oxford have achieved a ground‑breaking error rate in quantum logic operations, reducing it to just 0.000015 percent, one mistake in 6.7 million operations. The result marks nearly a ten‑fold improvement over their previous record set in 2014.
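The two figures quoted are two ways of stating the same rate; a quick arithmetic check (a minimal sketch, not taken from the paper) confirms they agree:

```python
# Convert the reported error rate (0.000015 percent) into
# the equivalent "one mistake in N operations" figure.
error_rate_percent = 0.000015
error_rate = error_rate_percent / 100      # as a fraction: 1.5e-7
operations_per_error = 1 / error_rate      # roughly 6.7 million

print(f"about one error in {operations_per_error:,.0f} operations")
```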

The team used a trapped calcium ion qubit controlled by microwave signals instead of lasers, achieving high stability at room temperature and eliminating the need for magnetic shielding. The method also offers cheaper, more robust control that fits with ion‑trap chip technology.

Reducing the error rate helps shrink the infrastructure needed for error correction, meaning future quantum computers could be smaller, faster and more efficient. Two‑qubit operations, however, still lag behind, with error rates of around one in 2,000, highlighting further challenges for full‑scale quantum systems.

The findings, published in Physical Review Letters, bring practical quantum computing a significant step closer. The Oxford researchers involved include Professor David Lucas, Molly Smith, Aaron Leu and Dr Mario Gely.

Nvidia’s Jensen Huang clashes with Anthropic CEO over AI job loss predictions

A fresh dispute has erupted between Nvidia and Anthropic after CEO Dario Amodei warned that AI could eliminate 50% of entry-level white-collar jobs in the next five years, potentially causing a 20% unemployment spike.

Nvidia’s Jensen Huang dismissed the claim, saying at VivaTech in Paris that he ‘pretty much disagreed with almost everything’ Amodei says, accusing him of fearmongering and advocating for a monopoly on AI development.

Huang emphasised the importance of open, transparent development, stating, ‘If you want things to be done safely and responsibly, you do it in the open… Don’t do it in a dark room and tell me it’s safe.’

Anthropic pushed back, saying Amodei supports national AI transparency standards and never claimed only Anthropic can build safe AI.

The clash comes amid growing scrutiny of Anthropic, which faces a lawsuit from Reddit for allegedly scraping content without consent and controversy over a Claude 4 Opus test that simulated blackmail scenarios.

The companies have also clashed over AI export controls to China, with Anthropic urging tighter rules and Nvidia denying reports that its chips were smuggled using extreme methods like fake pregnancies or shipments with live lobsters.

Huang maintains an optimistic outlook, saying AI will create new jobs in fields like prompt engineering. At the same time, Amodei has consistently warned that the economic fallout could be severe, rejecting universal basic income as a long-term solution.

G7 trip could shift political balance for President Lee

President Lee Jae-myung is making his first major diplomatic appearance at the G7 summit in Canada, just two weeks into office. The trip marks a reset of South Korea’s foreign policy, focusing on pragmatic diplomacy prioritising national interest.

Officials say the visit aims to restart high-level talks after six months of stagnation, and could include a pivotal meeting with US President Donald Trump. Trade tensions, defence costs and the future of US troops in South Korea are expected to dominate any bilateral agenda.

Lee is also preparing for potential talks with Japanese Prime Minister Shigeru Ishiba as his administration tests its strategy amid rising US-China rivalry. A trilateral summit is under consideration as well, adding further weight to this diplomatic debut.

The summit’s outcome could influence Lee’s political standing at home, where leaders have often used foreign success to strengthen domestic reforms. However, failure to secure tangible results could expose the new administration to early criticism.

Switzerland’s unique AI path: Blending innovation, governance, and local empowerment

In his recent blog post ‘Advancing Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and Communal subsidiarity,’ Jovan Kurbalija proposes a distinctive roadmap for Switzerland to navigate the rapidly evolving landscape of AI. Rather than mimicking the AI power plays of the United States or China, Kurbalija argues that Switzerland can lead by integrating three national strengths: Zurich’s thriving innovation ecosystem, Geneva’s global leadership in governance, and the country’s foundational principle of subsidiarity rooted in local decision-making.

Zurich, already a global tech hub, is positioned to drive cutting-edge development through its academic excellence and robust entrepreneurial culture. Institutions like ETH Zurich and the presence of major tech firms provide a fertile ground for collaborations that turn research into practical solutions.

With AI tools becoming increasingly accessible, Kurbalija emphasises that success now depends on how societies harness the interplay of human and machine intelligence—a field where Switzerland’s education and apprenticeship systems give it a competitive edge. Meanwhile, Geneva is called upon to spearhead balanced international governance and standard-setting for AI.

Kurbalija stresses that AI policy must go beyond abstract discussions and address real-world issues—health, education, the environment—by embedding AI tools in global institutions and negotiations. He notes that Geneva’s experience in multilateral diplomacy and technical standardisation offers a strong foundation for shaping ethical, inclusive AI frameworks.

The third pillar—subsidiarity—empowers Swiss cantons and communities to develop AI that reflects local values and needs. By supporting grassroots innovation through mini-grants, reimagining libraries as AI learning hubs, and embedding AI literacy from primary school to professional training, Switzerland can build an AI model that is democratic and inclusive.

Why does it matter?

Kurbalija’s call to action is clear: with its tools, talent, and traditions aligned, Switzerland must act now to chart a future where AI serves society, not the other way around.

Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called ‘human’,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.
