Singapore sets jobs as top priority amid global uncertainty

Singapore’s Prime Minister Lawrence Wong said employment for citizens will remain the government’s top priority as the nation confronts global trade tensions and the rapid advance of AI.

Speaking at the annual National Day Rally to mark Singapore’s 60th year, Wong pointed to the risks created by the US-China rivalry, renewed tariff policies under President Donald Trump, and the pressure technology places on workers.

In his first major address since the May election, Wong emphasised the need to reinforce the trade-reliant economy, expand social safety nets and redevelop parts of the island.

He pledged to protect Singaporeans from external shocks by maintaining stability instead of pursuing risky shifts. ‘Ultimately, our economic strategy is about jobs, jobs and jobs. That’s our number one priority,’ he said.

The government has introduced new welfare measures, including the country’s first unemployment benefits and wider subsidies for food, utilities and education.

Wong also announced initiatives to help enterprises use AI more effectively, such as a job-matching platform and a government-backed traineeship programme for graduates.

Looking ahead, Wong said Singapore would draw up a new economic blueprint to secure its future in a world shaped by protectionism, climate challenges and changing energy needs.

After stronger-than-expected results in the first half of the year, the government recently raised its growth forecast for 2025 to between 1.5% and 2.5%.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains power to end harmful chats

Anthropic has unveiled a new capability in its Claude AI models that allows them to end conversations they deem harmful or unproductive.

The feature, part of the company’s broader exploration of ‘model welfare’, is designed to let AI systems disengage from toxic inputs or ethical contradictions, reflecting a push toward safer and more autonomous behaviour.

The decision follows an internal review of over 700,000 Claude interactions, where researchers identified thousands of values shaping how the system responds in real-world scenarios.

By enabling Claude to exit problematic exchanges, Anthropic hopes to improve trustworthiness while protecting its models from situations that might degrade performance over time.

Industry reaction has been mixed. Many researchers praised the step as a blueprint for responsible AI design, while others expressed concern that allowing models to end conversations on their own could limit user engagement or introduce unintended biases.

Critics also warned that the concept of model welfare risks over-anthropomorphising AI, potentially shifting focus away from human safety.

The update arrives alongside other recent Anthropic innovations, including memory features that allow users to maintain conversation history. Together, these changes highlight the company’s balanced approach: enhancing usability where beneficial, while ensuring safeguards are in place when interactions become potentially harmful.


Geoffrey Hinton warns AI could destroy humanity

AI pioneer Geoffrey Hinton has warned that AI could one day wipe out humanity if its growth is unchecked.

Speaking at the Ai4 conference in Las Vegas, the former Google executive estimated a 10 to 20 percent chance of such an outcome and criticised the approach taken by technology leaders.

He argued that efforts to keep humans ‘dominant’ over AI will fail once systems become more intelligent than their creators. According to Hinton, powerful AI will inevitably develop goals such as survival and control, making it increasingly difficult for people to restrain its influence.

In an interview with CNN, Hinton compared the potential future to a parent-child relationship, noting that AI systems may manipulate humans just as easily as an adult can bribe a child.

To prevent disaster, he suggested giving AI ‘maternal instincts’ so that the technology genuinely cares about human well-being.

Hinton, often called the ‘Godfather of AI’ for his pioneering work in neural networks, cautioned that without embedding such safeguards, society risks creating beings that will ultimately outsmart and overpower us.


New OpenAI hire shares savvy interview strategies

Bas van Opheusden, who joined OpenAI as a technical staff member in July, has published a comprehensive eight-page guide for aspiring applicants, offering strategic advice spanning recruiter calls, coding interviews, compensation discussions and more.

He suggests treating recruiter conversations as strategic briefings, which are key for understanding the hiring manager’s priorities, team dynamics, role expectations, and organisational goals.

Van Opheusden recommends taking notes during calls, ideally using a dual-screen setup, and arranging windows so it appears you’re maintaining eye contact.

He also shared a common mistake from his own experience: arriving at a coding interview without remembering the exact role he’d applied for, underscoring the importance of clear preparation and role alignment.


Candidates urged to balance AI support with integrity

Taylor Wessing has released guidance for early-career applicants on using AI tools such as ChatGPT, Copilot, Claude and Bing Chat during the application process. The firm frames AI as a helpful ally, not a shortcut, and emphasises responsible and authentic use.

AI can assist with refining cover letters, improving structure, and articulating motivations. It can also support interview preparation through mock question practice and help candidates deepen their understanding of legal issues.

However, authenticity is paramount. Taylor Wessing encourages applicants to ensure their work reflects their voice. Using AI to complete online assessments is explicitly discouraged, as these are designed to evaluate natural ability and personal fit.

According to the firm, while AI can bolster readiness for training schemes, over-reliance or misuse may backfire. They advise transparency about any AI assistance and underscore the importance of integrity throughout the process.


Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.


How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, after the team detected outdated voting information in responses, it added a TurboVote banner to ensure users saw only accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.


Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, Google faced criticism after private ChatGPT chats appeared in its search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.


Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround, but are often blocked. This summer, further restrictions included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.


YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

On X, users have dubbed it a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews reports that creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.
