GPT-5 doubles usage limits and adds smarter features

OpenAI has rolled out GPT-5 as the default AI model powering ChatGPT, bringing new features designed to boost productivity for personal and business users.

The new model switches seamlessly between quick responses and in-depth reasoning, allowing more fluid and intelligent answers. Users can prompt ChatGPT to ‘think hard’ to trigger the deeper reasoning mode.
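
For API users, the same behaviour can be nudged from the prompt side. The sketch below uses the OpenAI Python SDK; the model identifier ‘gpt-5’ is an assumption based on the announcement, and the ‘think hard’ phrasing mirrors the trigger described above.

```python
# Minimal sketch: nudging GPT-5 towards its deeper reasoning mode via the
# prompt. Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is
# set; the model name "gpt-5" is taken from the announcement above.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": "Think hard about this: outline the trade-offs "
                       "between fast answers and deeper reasoning.",
        },
    ],
)
print(response.choices[0].message.content)
```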

ChatGPT Plus users now benefit from double the previous message limit, with 160 messages allowed every three hours. Meanwhile, Team and Pro plan subscribers enjoy unlimited GPT-5 access unless accounts are misused.

Free users have a limit of 10 messages every five hours and one daily ‘Thinking’ mode message. Older models such as GPT-4o and GPT-4.1 have been retired as defaults but remain accessible via web settings for paying customers.

All built-in tools are now invoked automatically as needed, removing the need to toggle features like web search, image generation, or data analysis on and off. OpenAI also revealed plans to support third-party integrations that will further expand ChatGPT’s capabilities.

The new voice mode now follows instructions more accurately and will be available to all users.

Overall, GPT-5 marks a significant leap forward, improving reasoning, creativity, and alignment with user intent. OpenAI aims to make ChatGPT an even more powerful assistant by integrating enhanced capabilities and streamlining the user experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Available under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than an answer engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.
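
In its simplest form, steganography hides data in the least significant bits of pixel values, where single-bit changes are invisible to the eye. The snippet below is a benign, minimal illustration of that general idea using the Pillow library; it is not RoKRAT’s actual encoding scheme, and it writes lossless PNG output because naive LSB data would not survive JPEG recompression.

```python
# Benign illustration of least-significant-bit (LSB) steganography, the
# general technique the report describes; not RoKRAT's actual scheme.
# Requires Pillow (pip install pillow).
from PIL import Image

def embed(image_path: str, payload: bytes, out_path: str) -> None:
    """Hide payload bits in the lowest bit of each pixel's red value."""
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this image")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)
    img.putdata(pixels)
    img.save(out_path, "PNG")  # lossless, so the hidden bits survive

def extract(image_path: str, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed()."""
    pixels = list(Image.open(image_path).convert("RGB").getdata())
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j][0] & 1) << j
        out.append(byte)
    return bytes(out)
```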

Researchers stress the urgency for organisations to adopt layered cybersecurity measures, including behavioural monitoring, robust endpoint management, and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moflin, Japan’s AI-powered robot pet with a personality

A fluffy, AI-powered robot pet named Moflin is capturing the imagination of consumers in Japan with its unique ability to develop distinct personalities based on how it is ‘raised.’ Developed by Casio, Moflin recognises its owner and learns their preferences through interactions such as cuddling and stroking, boasting over four million possible personality variations.

Priced at ¥59,400, Moflin has become more than just a companion at home, with some owners even taking it along on day trips. To complement the experience, Casio offers additional services, including a specialised salon to clean and maintain the robot’s fur, further enhancing its pet-like feel.

Erina Ichikawa, the lead developer, says the aim was to create a supportive sidekick capable of providing comfort during challenging moments, blending technology with emotional connection in a new way.

A similar trend is emerging in China, where AI-powered ‘smart pets’ like BooBoo are gaining popularity, especially among young people, for offering emotional support and companionship. Valued for easing anxiety and isolation, these companions are fuelling a market projected to reach $42.5 billion by 2033, reflecting shifting social and family dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft study flags 40 jobs highly vulnerable to AI automation

Microsoft Research released a comprehensive AI impact assessment, ranking 80 occupations by exposure to generative AI tools such as Copilot and ChatGPT. Roles heavily involved in language, writing, client communication, and routine digital tasks showed the highest AI overlap. Notable examples include translators, historians, customer service agents, political scientists, and data scientists.

By contrast, jobs requiring hands-on work, empathy, and real-time physical or emotional engagement, such as nursing, phlebotomy, the construction trades, embalming, and housekeeping, were classified as low risk under current AI capabilities. Experts suggest that these positions remain essential because they involve physical presence, human interaction, and complex real-time decision-making.

Although certain professions scored high for AI exposure, Microsoft and independent analysts emphasise that most jobs won’t disappear entirely. Instead, generative AI tools are expected to augment workflows, creating hybrid roles where human judgement and oversight remain critical, especially in sectors such as financial services, healthcare, and creative industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Physicists remain split on what quantum theory really means

One hundred years after its birth, quantum mechanics continues to baffle physicists, despite underpinning many of today’s technologies. While its equations accurately describe the behaviour of subatomic particles, experts remain deeply divided on what those equations actually reveal about reality.

A recent survey by Nature, involving more than 1,100 physicists, highlighted the lack of consensus within the field. Just over a third supported the Copenhagen interpretation, which claims a particle only assumes a definite state once it is observed.

Others favour alternatives such as the many-worlds interpretation, which suggests every possible outcome plays out in parallel universes rather than collapsing into a single reality. The concept challenges traditional notions of observation, space and causality.

Physicists also remain split on whether there is a boundary between classical and quantum systems. Only a quarter expressed confidence in their chosen interpretation, with most believing a better theory will eventually replace today’s understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack adequate age verification and moderation features, leaving children exposed. The eSafety Commissioner noted that many children spend hours each day with AI companions, sometimes discussing topics such as mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
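
Google has not published the model’s internals, but age estimation from behavioural signals is, at heart, a supervised classification problem. The toy sketch below, with invented feature names and synthetic data, illustrates the general shape of such a system rather than Google’s implementation; note how the decision threshold encodes the trade-off between over- and under-flagging, with ID or selfie checks as the appeal path.

```python
# Purely illustrative: a toy age-bracket classifier over behavioural
# signals. Feature names and data are invented; this is not Google's
# model. Requires scikit-learn and NumPy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-user features: share of gaming-related searches,
# average video session length (scaled), share of late-night activity.
X = rng.random((1000, 3))
# Hypothetical labels: 1 = likely under 18, 0 = likely adult.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 1000) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# A high threshold limits false flags; flagged users can appeal via
# ID or selfie verification, as the article describes.
probs = clf.predict_proba(X_test)[:, 1]
flagged = probs > 0.8
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}, "
      f"flagged as likely minors: {flagged.sum()} of {len(flagged)}")
```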

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step-by-step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode helps users approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Rather than spoon-feeding answers, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
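
Study Mode ships as a built-in toggle, but its Socratic behaviour can be loosely approximated over the API with a system prompt. The sketch below is illustrative only; the prompt wording and model choice are assumptions, not OpenAI’s actual Study Mode implementation.

```python
# Loose approximation of a Socratic 'study mode' via a system prompt.
# The prompt text is illustrative, not OpenAI's actual Study Mode.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are a patient tutor. Never give the final answer outright. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and only confirm a solution the student has worked out themselves."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat model works here
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "Solve 2x + 6 = 14 for x."},
    ],
)
# Expected style of reply: "What could you subtract from both sides first?"
print(response.choices[0].message.content)
```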

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. By prompting users to think aloud, it mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. These include encouraging users to reflect on their learning and on why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners.

Study Mode’s interactive style may help level the playing field for students without regular academic support. It also gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI’s earlier education efforts included teacher guides and classroom use cases. Study Mode, however, marks a more direct push to reshape how students use AI in learning.

It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: Can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, if used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot captures veteran workers’ knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

With 35 years of experience, Jinks was frequently approached by colleagues seeking advice, creating time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, which mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.
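
The article does not describe how My AskAI works internally, but assistants of this kind typically follow a retrieval pattern: match the incoming question against a curated bank of expert-written answers (here, the responses Jinks wrote) and return the closest entry, deferring to a human when nothing matches well. A minimal sketch of that pattern, with invented example entries:

```python
# Minimal sketch of a retrieval-style staff assistant. TF-IDF stands in
# for the embedding model a platform like My AskAI would use; the Q&A
# entries are invented examples, not the council's real content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

answer_bank = {
    "How do I support a patient with memory issues?":
        "Start with a memory aid assessment and involve family early.",
    "What should discharge planning cover?":
        "Confirm home equipment, care visits, and a follow-up contact.",
}

questions = list(answer_bank)
vectoriser = TfidfVectorizer().fit(questions)
question_vecs = vectoriser.transform(questions)

def ask(query: str) -> str:
    """Return the stored answer whose question best matches the query."""
    sims = cosine_similarity(vectoriser.transform([query]), question_vecs)
    best = sims.argmax()
    if sims[0, best] < 0.2:  # weak match: defer to a human expert
        return "I'm not sure; please check with the duty therapist."
    return answer_bank[questions[best]]

print(ask("memory problems support"))
```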

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!