Linguists find new purpose in the age of AI

In his latest blog post, part of a series expanding on ‘Don’t Waste the Crisis: How AI Can Help Reinvent International Geneva’, Dr Jovan Kurbalija explores how linguists can shift from fearing AI to embracing a new era of opportunity. Geneva, home to over a thousand translators and interpreters, has felt the pressure as AI tools like ChatGPT have begun automating language tasks.

Yet, rather than rendering linguists obsolete, AI is transforming their role, highlighting the enduring importance of human expertise in bridging syntax and semantics—AI’s persistent blind spot. Dr Kurbalija emphasises that while AI excels at recognising patterns, it often fails to grasp meaning, nuance, and cultural context.

This is where linguists step in, offering critical value by enhancing AI’s understanding of language beyond mere structure. From supporting low-resource languages to ensuring ethical AI outputs in sensitive fields like law and diplomacy, linguists are positioned as key players in shaping responsible and context-aware AI systems.

Calling for adaptation over resistance, Dr Kurbalija advocates for linguists to upskill, specialise in areas where human judgement is irreplaceable, collaborate with AI developers, and champion ethical standards. Rather than facing decline, the linguistic profession is entering a renaissance, where command of both syntax and semantics ensures that AI amplifies human expression instead of diminishing it.

With Geneva’s vibrant multilingual community at the forefront, linguists have a pivotal role in guiding how language and technology evolve together in this new frontier.

TSMC struggles to block chip exports to China

Taiwan Semiconductor Manufacturing Company (TSMC) has acknowledged it faces significant challenges in ensuring its advanced chips do not end up with sanctioned entities in China, despite tightening export controls.

The company admitted in its latest annual report that its position as a contract chipmaker limits its visibility into how and where its semiconductors are ultimately used.

Instead of directly selling finished products, TSMC manufactures chips for firms like Nvidia and Qualcomm, which are then integrated into a wide range of devices by third parties.

A layered supply chain structure like this makes it difficult for the company to guarantee full compliance with export restrictions, especially when intermediaries may divert shipments intentionally.

TSMC halted deliveries to a customer last year after discovering one of its AI chips had been diverted to Huawei, a Chinese tech giant on the US sanctions list. The company promptly notified both Washington and Taipei and has since cooperated with official investigations and information requests.

The US continues to tighten restrictions on advanced chip exports to China, urging companies like TSMC and Samsung to apply stricter scrutiny.

Recently, Washington blacklisted 16 Chinese entities, including firms allegedly linked to the unauthorised transfer of TSMC chips. Despite its best efforts, TSMC says there is no assurance it can completely prevent such incidents.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.
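Meta has not disclosed which behavioural signals its age classifier uses. Purely to illustrate the idea of inferring age from behaviour rather than a stated birthdate, the toy sketch below folds a few invented signals into a logistic score; every feature name, weight, and threshold is a placeholder, not Meta’s method.

```python
import math

# Invented behavioural signals; the real ones are not public.
signals = {
    "follows_mostly_teen_accounts": 1.0,   # binary flag
    "birthday_wishes_mention_16": 1.0,     # binary flag
    "account_age_days": 40.0,
}

# Toy logistic model: weights are illustrative placeholders.
weights = {
    "follows_mostly_teen_accounts": 1.2,
    "birthday_wishes_mention_16": 2.0,
    "account_age_days": -0.005,
}
bias = -1.0

score = bias + sum(weights[name] * value for name, value in signals.items())
p_under_18 = 1 / (1 + math.exp(-score))   # squash the score into a probability

# Only high-confidence cases are reclassified, and users can contest the
# decision manually, mirroring the correction option described earlier.
if p_under_18 > 0.8:
    print("apply Teen Account settings")
```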

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Hamburg Declaration champions responsible AI

The Hamburg Declaration on Responsible AI for the Sustainable Development Goals (SDGs) is a new global initiative jointly launched by the United Nations Development Programme (UNDP) and Germany’s Federal Ministry for Economic Cooperation and Development (BMZ).

The Declaration seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.

The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.

In addition to the declaration itself, the initiative also features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums like the Internet Governance Forum and the AI Action Summit in Paris, reflecting its growing significance in leveraging responsible AI for the SDGs.

The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. By establishing shared principles and promoting collaboration across sectors and regions, it sets a foundation for responsible AI that serves both people and the planet.

Microsoft unveils powerful lightweight AI model for CPUs

Microsoft researchers have introduced the largest 1-bit AI model to date, called BitNet b1.58 2B4T, designed to run efficiently on standard CPUs instead of relying on GPUs. This ‘bitnet’ model, now openly available under the MIT license, can even operate on Apple’s M2 chips.

Bitnets use extreme weight quantisation, storing each weight as only -1, 0, or 1, which makes them far more memory- and compute-efficient than most conventional models.
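As a concrete illustration, the BitNet b1.58 paper describes an ‘absmean’ scheme: scale each weight matrix by its mean absolute value, then round and clip every entry to -1, 0, or 1. The minimal NumPy sketch below is a simplification of that idea, not Microsoft’s bitnet.cpp implementation.

```python
import numpy as np

def ternary_quantise(weights):
    """Absmean quantisation: divide by the mean absolute weight,
    then round and clip every entry to -1, 0, or 1."""
    scale = np.mean(np.abs(weights)) + 1e-8        # guard against all-zero input
    quantised = np.clip(np.round(weights / scale), -1, 1)
    return quantised.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
x = rng.normal(size=4).astype(np.float32)

wq, s = ternary_quantise(w)
print(wq)            # every entry is -1, 0, or 1
print(w @ x)         # full-precision matrix-vector product
print((wq @ x) * s)  # ternary approximation of the same product
```

Because the quantised matrix holds only -1, 0, and 1, the inner loop of a matrix multiply needs no multiplications at all, which is what makes CPU-only inference practical.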

With 2 billion parameters and trained on 4 trillion tokens, roughly the equivalent of 33 million books, BitNet b1.58 2B4T outperforms several similarly sized models in key benchmarks.

Microsoft claims it beats Meta’s Llama 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B on tasks like grade-school maths and physical reasoning. It also runs up to twice as fast while using significantly less memory, offering a potential edge for lower-end or energy-constrained devices.

The main limitation lies in its dependence on Microsoft’s custom bitnet.cpp framework, which supports only select hardware and does not yet work with GPUs.

Instead of being broadly compatible with existing AI systems, BitNet depends on a narrower infrastructure, a hurdle that may limit adoption despite its promise for lightweight AI deployment.

New Apple AI model uses private email comparisons

Apple has outlined a new approach to improving its AI features by privately analysing user data with the help of synthetic data. The move follows criticism of the company’s AI products, especially notification summaries, which have underperformed compared to competitors.

The new method relies on ‘differential privacy’, where Apple generates synthetic messages that resemble real user data without containing any actual content.

These messages are used to create embeddings (abstract representations of message characteristics), which are then compared with real emails on the devices of users who have opted in to share analytics.

Devices send back signals indicating which synthetic data most closely matches real content, without sharing the actual messages with Apple.
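Apple has not published the exact protocol, but a minimal sketch of such a nearest-match report with local differential privacy might look like the following; the cosine-similarity matching and the randomised-response noise are assumptions for illustration, and all names are hypothetical.

```python
import numpy as np

def report_nearest_synthetic(device_embedding, synthetic_embeddings,
                             epsilon=4.0, rng=None):
    """Report which synthetic embedding best matches the on-device one.

    Only an index leaves the device, never the email or its embedding,
    and randomised response gives each report plausible deniability.
    """
    rng = rng or np.random.default_rng()

    # Cosine similarity between the device's embedding and each candidate.
    sims = synthetic_embeddings @ device_embedding / (
        np.linalg.norm(synthetic_embeddings, axis=1)
        * np.linalg.norm(device_embedding)
    )
    true_index = int(np.argmax(sims))

    # k-ary randomised response: occasionally send a uniformly random
    # index instead, so no single report can be fully trusted.
    k = len(synthetic_embeddings)
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_truth:
        return true_index
    return int(rng.integers(k))

# Toy usage: five synthetic candidates; the device's data is closest to #2.
rng = np.random.default_rng(1)
candidates = rng.normal(size=(5, 16))
on_device = candidates[2] + 0.1 * rng.normal(size=16)
print(report_nearest_synthetic(on_device, candidates))
```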

Apple said the technique is already being used to improve its Genmoji models and will soon be applied to other features, including Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence.

The company also confirmed plans to improve email summaries using the same privacy-focused method, aiming to refine its AI tools while maintaining a strong commitment to user data protection.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.
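Google has not described the screening pipeline in technical detail. As a rough sketch of what blocking bad actors before a first advert could look like, the toy example below scores a signup on invented signals and routes borderline cases to human review; the rule-based scorer is a hypothetical stand-in for the LLM classifiers mentioned above, and every name and threshold is a placeholder.

```python
from dataclasses import dataclass

@dataclass
class AdvertiserSignup:
    business_name: str
    payment_verified: bool
    domain_age_days: int

def risk_score(signup: AdvertiserSignup) -> float:
    """Hypothetical stand-in for an LLM-based classifier of business details."""
    score = 0.0
    if signup.domain_age_days < 30:
        score += 0.4            # freshly registered domains are riskier
    if not signup.payment_verified:
        score += 0.4            # fraudulent payment details
    if len(signup.business_name.strip()) < 3:
        score += 0.3            # fake or placeholder business names
    return min(score, 1.0)

def screen_before_first_ad(signup: AdvertiserSignup) -> str:
    """Suspend high-risk accounts before any advert runs; send borderline
    cases to human reviewers, mirroring the process described above."""
    risk = risk_score(signup)
    if risk >= 0.7:
        return "suspend"
    if risk >= 0.4:
        return "human review"
    return "allow"

print(screen_before_first_ad(
    AdvertiserSignup("x", payment_verified=False, domain_age_days=5)))
# -> suspend
```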

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions by far, with the full 39.2 million figure coming from there alone, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion, instead of allowing harmful content to spread unchecked. Nearly half a billion of those removed were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.

Inephany raises $2.2M to make AI training more efficient

London-based AI startup Inephany has secured $2.2 million in pre-seed funding to develop technology aimed at making the training of neural networks—particularly large language models—more efficient and affordable.

The investment round was led by Amadeus Capital Partners, with participation from Sure Valley Ventures and AI pioneer Professor Steve Young, who joins as both chair and angel investor.

Founded in July 2024 by Dr John Torr, Hami Bahraynian, and Maurice von Sturm, Inephany is building an AI-driven platform that improves training efficiency in real time.

By increasing sample efficiency and reducing computing demands, the company hopes to dramatically cut the cost and time of training cutting-edge models.

The team claims its solution could make AI model development at least ten times more cost-effective than current methods.

The funding will support growth of Inephany’s engineering team and accelerate the launch of its first product later this year.

With the costs of training state-of-the-art models now reaching into the hundreds of millions, the startup’s platform aims to make high-performance AI development more sustainable and accessible across industries such as healthcare, weather forecasting, and drug discovery.

AI chip production begins at TSMC’s Arizona facility

Nvidia has announced a major initiative to produce AI supercomputers in the US in collaboration with Taiwan Semiconductor Manufacturing Co. (TSMC) and several other partners.

The effort aims to create up to US$500 billion worth of AI infrastructure products domestically over the next four years, marking a significant shift in Nvidia’s manufacturing strategy.

Alongside TSMC, other key contributors include Taiwanese firms Hon Hai Precision Industry Co. and Wistron Corp., both known for producing AI servers. US-based Amkor Technology and Taiwan’s Siliconware Precision Industries will also provide advanced packaging and testing services.

Nvidia’s Blackwell AI chips have already begun production at TSMC’s Arizona facility, with large-scale operations planned in Texas through partnerships with Hon Hai in Houston and Wistron in Dallas.

The move could impact Taiwan’s economy, as many Nvidia components are currently produced there. Taiwan’s Economic Affairs Minister declined to comment specifically on the project but assured that the government will monitor overseas investments by Taiwanese firms.

Nvidia said the initiative would help meet surging AI demand while strengthening semiconductor supply chains and increasing resilience amid shifting global trade policies, including new US tariffs on Taiwanese exports.

Nvidia hit by new US export rules

Nvidia is facing fresh US export restrictions on its H20 AI chips, dealing a blow to the company’s operations in China.

In a filing on Tuesday, Nvidia revealed it now needs a licence to export these chips, a requirement the government said will remain in effect for the indefinite future, after US officials cited concerns the chips could be used in a Chinese supercomputer.

The company expects a $5.5 billion charge linked to the controls in its first fiscal quarter of 2026, which ends on 27 April. Shares dropped around 6% in after-hours trading.

The H20 is currently the most advanced AI chip Nvidia can sell to China under existing regulations.

Last week, reports suggested CEO Jensen Huang might have temporarily eased tensions during a dinner at Donald Trump’s Mar-a-Lago resort, by promising investments in US-based AI data centres instead of opposing the rules directly.

Just a day before the filing, Nvidia announced plans to manufacture some chips in the US over the next four years, though the specifics were left vague.

Calls for tighter controls had been building, especially after it emerged that China’s DeepSeek used the H20 to train its R1 model, a system that surprised the US AI sector earlier this year.

Government officials had pushed for action, saying the chip’s capabilities posed a strategic risk. Nvidia declined to comment on the new restrictions.
