Singapore sets jobs as top priority amid global uncertainty

Singapore’s Prime Minister Lawrence Wong said employment for citizens will remain the government’s top priority as the nation confronts global trade tensions and the rapid advance of AI.

Speaking at the annual National Day Rally to mark Singapore’s 60th year, Wong pointed to the risks created by the US-China rivalry, renewed tariff policies under President Donald Trump, and the pressure technology places on workers.

In his first major policy address since the May election, Wong emphasised the need to reinforce the trade-reliant economy, expand social safety nets and redevelop parts of the island.

He pledged to protect Singaporeans from external shocks by maintaining stability instead of pursuing risky shifts. ‘Ultimately, our economic strategy is about jobs, jobs and jobs. That’s our number one priority,’ he said.

The government has introduced new welfare measures, including the country’s first unemployment benefits and wider subsidies for food, utilities and education.

Wong also announced initiatives to help enterprises use AI more effectively, such as a job-matching platform and a government-backed traineeship programme for graduates.

Looking ahead, Wong said Singapore would draw up a new economic blueprint to secure its future in a world shaped by protectionism, climate challenges and changing energy needs.

After stronger-than-expected results in the first half of the year, the government recently raised its growth forecast for 2025 to between 1.5% and 2.5%.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West Midlands to train 2.3 million adults in AI skills

All adults in the West Midlands will be offered free training on using AI in daily life, work and community activities. Mayor Richard Parker confirmed the £10m initiative, designed to reach 2.3 million residents, as part of a wider £30m skills package.

A newly created AI Academy will lead the programme, working with tech companies, education providers and community groups. The aim is to equip people with everyday AI know-how and the advanced skills needed for digital and data-driven jobs.

Parker said AI should become as fundamental as English or maths and warned that failure to prioritise training would risk deepening a skills divide. The programme will sit alongside other £10m projects focused on bespoke business training and a more inclusive skills system.

The West Midlands Combined Authority (WMCA), established in 2017, covers Birmingham, Coventry, Wolverhampton and 14 other local authority areas in the UK. Officials say the AI drive is central to the region’s Growth Plan and its ambition to become the UK’s leading hub for AI skills.

GPT-5 doubles usage limits and adds smarter features

OpenAI has rolled out GPT-5 as the default AI model powering ChatGPT, bringing new features designed to boost productivity for personal and business users.

The new model seamlessly switches between quick search and in-depth reasoning, allowing more fluid and intelligent responses. Users can prompt ChatGPT to ‘think hard’ to trigger the deeper reasoning mode.

ChatGPT Plus users now benefit from double the previous message limit, with 160 messages allowed every three hours. Meanwhile, Team and Pro plan subscribers enjoy unlimited GPT-5 access unless accounts are misused.

Free users have a limit of 10 messages every five hours and one daily ‘Thinking’ mode message. Older GPT models such as GPT-4.1 and GPT-3 have been discontinued but remain accessible via web settings for paying customers.

All built-in tools are automatically enabled according to user needs, removing the need to toggle features like web search, image generation, or data analysis on and off. OpenAI also revealed plans to support third-party plugins to expand ChatGPT’s development capabilities further.

The new voice mode now follows instructions more accurately and will be available to all users.

Overall, GPT-5 marks a significant leap forward, improving reasoning, creativity, and alignment with user intent. OpenAI aims to make ChatGPT an even more powerful assistant by integrating enhanced capabilities and streamlining the user experience.

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.
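The report does not publish RoKRAT’s exact encoding scheme. As a generic sketch of the underlying idea, least-significant-bit (LSB) steganography stores payload bits in the low bit of each pixel byte, where the change is visually imperceptible; the function names and cover data below are purely illustrative:

```python
def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide each payload bit in the LSB of one pixel byte (LSB-first per byte)."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover image too small for payload")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # clear LSB, then set it to the payload bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes from the pixel LSBs."""
    payload = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        payload.append(value)
    return bytes(payload)

cover = bytearray(range(256)) * 4   # stand-in for raw pixel data
secret = b"shellcode"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

Because only the lowest bit of each byte changes, the carrier file remains a valid image and passes casual inspection, which is why detection efforts focus on behaviour (script execution, process injection) rather than the file itself.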

Researchers stress the urgency for organisations to adopt layered cybersecurity measures, including behavioural monitoring, robust endpoint management and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Moflin, Japan’s AI-powered robot pet with a personality

A fluffy, AI-powered robot pet named Moflin is capturing the imagination of consumers in Japan with its unique ability to develop distinct personalities based on how it is ‘raised.’ Developed by Casio, Moflin recognises its owner and learns their preferences through interactions such as cuddling and stroking, boasting over four million possible personality variations.

Priced at ¥59,400, Moflin has become more than just a companion at home, with some owners even taking it along on day trips. To complement the experience, Casio offers additional services, including a specialised salon to clean and maintain the robot’s fur, further enhancing its pet-like feel.

Erina Ichikawa, the lead developer, says the aim was to create a supportive sidekick capable of providing comfort during challenging moments, blending technology with emotional connection in a new way.

Similar AI-powered ‘smart pets’, such as BooBoo, are gaining popularity in China, especially among young people, offering emotional support and companionship. Valued for easing anxiety and isolation, the market is projected to reach $42.5 billion by 2033, reflecting shifting social and family dynamics.

Microsoft study flags 40 jobs highly vulnerable to AI automation

Microsoft Research released a comprehensive AI impact assessment, ranking 80 occupations by exposure to generative AI tools such as Copilot and ChatGPT. Roles heavily involved in language, writing, client communication, and routine digital tasks showed the highest AI overlap. Notable examples include translators, historians, customer service agents, political scientists, and data scientists.

By contrast, jobs requiring hands-on work, empathy, or real-time physical and emotional engagement, such as nurses, phlebotomists, construction trades, embalmers and housekeeping staff, were classified as low risk under current AI capabilities. Experts suggest these positions remain essential because they involve physical presence, human interaction and complex real-time decision-making.

Although certain professions scored high for AI exposure, Microsoft and independent analysts emphasise that most jobs won’t disappear entirely. Instead, generative AI tools are expected to augment workflows, creating hybrid roles where human judgement and oversight remain critical, especially in sectors such as financial services, healthcare, and creative industries.

Physicists remain split on what quantum theory really means

One hundred years after its birth, quantum mechanics continues to baffle physicists, despite underpinning many of today’s technologies. While its equations accurately describe the behaviour of subatomic particles, experts remain deeply divided on what those equations actually reveal about reality.

A recent survey by Nature, involving more than 1,100 physicists, highlighted the lack of consensus within the field. Just over a third supported the Copenhagen interpretation, which claims a particle only assumes a definite state once it is observed.

Others favour alternatives such as the many-worlds interpretation, which holds that every possible outcome exists in parallel universes rather than collapsing into a single reality. The concept challenges traditional notions of observation, space and causality.

Physicists also remain split on whether there is a boundary between classical and quantum systems. Only a quarter expressed confidence in their chosen interpretation, with most believing a better theory will eventually replace today’s understanding.

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack adequate age verification and moderation, leaving children exposed. The eSafety Commissioner noted that many children spend hours daily with AI companions, sometimes discussing topics such as mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions, most notably in advertising, content recommendation and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
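Google has not disclosed how its model works internally. Purely as an illustration of the general approach, behavioural signals can be combined into an age-likelihood score that is thresholded to decide whether protective defaults apply; every feature name and weight below is invented for the sketch:

```python
import math

def minor_likelihood(features: dict[str, float]) -> float:
    """Weighted sum of behavioural signals squashed to [0, 1] with a sigmoid."""
    # Hypothetical weights; a real model would learn these from data.
    weights = {
        "teen_content_watch_ratio": 2.0,   # share of viewing skewing to teen content
        "school_hours_activity": 1.5,      # activity concentrated in school hours
        "account_age_years": -0.8,         # older accounts lower the score
    }
    z = sum(weights.get(k, 0.0) * v for k, v in features.items()) - 1.0  # -1.0 = bias term
    return 1 / (1 + math.exp(-z))

user = {"teen_content_watch_ratio": 0.9, "school_hours_activity": 0.7, "account_age_years": 1.0}
score = minor_likelihood(user)
apply_protections = score > 0.5  # would trigger restricted ads, bedtime prompts, etc.
```

The thresholding step is why a verification path (ID upload or selfie) matters: any score-based classifier will sometimes flag adults, and the appeal mechanism is what keeps false positives reversible.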

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.
