Teachers and students warn: AI is eroding engagement

A student from San Jose and an English teacher in Chicago co-authored a Boston Globe opinion warning that widespread use of AI in schools damages the vital student-teacher bond.

Though marketed as efficiency boosters, AI tools often encourage students to forgo independent thinking, the authors argue.

Many students simply generate entire assignments with AI and reformat the text to avoid detection, undermining honest academic exchange.

Educators report feeling increasingly marginalised as AI takes over much of their workload, including grading, lesson planning, and classroom feedback.

Though schools and tech companies promote these tools as educational enhancements, trust has eroded in many classrooms, as teachers struggle to assess students' real abilities.

The authors call for a return to supervised, pen-and-paper in-class assignments, stricter scrutiny of AI vendors in education, and outright bans on unsupervised classroom AI tools to help reset the learning relationship.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube Shorts brings image-to-video AI tool

Google has rolled out new AI features for YouTube Shorts, including an image-to-video tool powered by its Veo 2 model. The update lets users convert still images into six-second animated clips, such as turning a static group photo into a dynamic scene.

Creators can also experiment with immersive AI effects that stylise selfies or simple drawings into themed short videos. These features aim to enhance creative expression and are currently available in the US, Canada, Australia and New Zealand, with global rollout expected later this year.

A new AI Playground hub has also been launched to house all generative tools, including video effects and inspiration prompts. Users can find the hub by tapping the Shorts camera’s ‘create’ button and then the sparkle icon in the top corner.

Google plans to introduce even more advanced tools with the upcoming Veo 3 model, which will support synchronised audio generation. The company is positioning YouTube Shorts as a key platform for AI-driven creativity in the video content space.

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student says she uses AI to simplify everyday tasks, such as choosing outfits or planning events, though she avoids using it for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts worry that such frequent AI interaction could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may grow dependent on AI's constant validation, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

New benchmark exposes limits of current AI tools

A new coding competition has exposed the limitations of current AI models, with the winner solving just 7.5% of programming problems. The K Prize, launched by Databricks and Perplexity co-founder Andy Konwinski, aims to challenge smaller models using real-world GitHub issues in a contamination-free format.

Despite the low score, Eduardo Rocha de Andrade took home the $50,000 top prize. Konwinski says the intentionally tough benchmark helps avoid inflated results and encourages realistic assessments of AI capability.

Unlike the better-known SWE-Bench, which may allow models to train on test material, the K Prize uses only new issues submitted after a set deadline. Its design prevents exposure during training, making it a more reliable measure of generalisation.
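
The safeguard described above amounts to a date filter: only issues created after the model-submission deadline can enter the test set, so no model could have seen them during training. A minimal sketch of that idea follows; the field names, dates, and helper are illustrative, not the K Prize's actual schema.

```python
from datetime import datetime, timezone

# Hypothetical issue records; field names are illustrative only.
issues = [
    {"id": 1, "created_at": "2024-11-02T10:00:00+00:00"},
    {"id": 2, "created_at": "2025-03-15T09:30:00+00:00"},
    {"id": 3, "created_at": "2025-06-01T12:00:00+00:00"},
]

# Models submitted before this cutoff cannot have trained on later issues.
cutoff = datetime(2025, 3, 12, tzinfo=timezone.utc)

def contamination_free(issues, cutoff):
    """Keep only issues created after the model-submission deadline."""
    return [i for i in issues if datetime.fromisoformat(i["created_at"]) > cutoff]

fresh = contamination_free(issues, cutoff)
print([i["id"] for i in fresh])  # issues 2 and 3 postdate the cutoff
```

The design choice is that freshness, not secrecy, provides the guarantee: even a model trained on the entire public GitHub corpus has never seen an issue that did not yet exist.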

A $1 million prize remains for any open-source model that scores over 90%. The low results are being viewed as a necessary wake-up call in the race to build competent AI software engineers.

Amazon buys Bee AI, the startup that listens to your day

Amazon has acquired Bee AI, a San Francisco-based startup known for its $50 wearable that listens to conversations and provides AI-generated summaries and reminders.

The deal was confirmed by Bee co-founder Maria de Lourdes Zollo in a LinkedIn post on Wednesday, but the acquisition terms were not disclosed. Bee gained attention earlier this year at CES in Las Vegas, where it unveiled a Fitbit-like bracelet using AI to deliver personal insights.

The device received strong feedback for its ability to analyse conversations and create to-do lists, reminders, and daily summaries. Bee also offers a $19-per-month subscription and an Apple Watch app. It raised $7 million before being acquired by Amazon.

‘When we started Bee, we imagined a world where AI is truly personal,’ Zollo wrote. ‘That dream now finds a new home at Amazon.’ Amazon confirmed the acquisition and is expected to integrate Bee’s technology into its expanding AI device strategy.

The company recently updated Alexa with generative AI and added similar features to Ring, its home security brand. Amazon’s hardware division is now led by Panos Panay, the former Microsoft executive who led Surface and Windows 11 development.

Bee’s acquisition suggests Amazon is exploring its own AI-powered wearable to compete in the rapidly evolving consumer tech space. It remains unclear whether Bee will operate independently or be folded into Amazon’s existing device ecosystem.

Privacy concerns have surrounded Bee, as its wearable records audio in real time. The company claims no recordings are stored or used for AI training. Bee insists that users can delete their data at any time. However, privacy groups have flagged potential risks.

The AI hardware market has seen mixed success. Meta’s Ray-Ban smart glasses gained traction, but others like the Rabbit R1 flopped. The Humane AI Pin also failed commercially and was recently sold to HP. Consumers remain cautious of always-on AI devices.

OpenAI is also moving into hardware. In May, it acquired Jony Ive’s AI startup, io, for a reported $6.4 billion. OpenAI has hinted at plans to develop a screenless wearable, joining the race to create ambient AI tools for daily life.

Bee’s transition from startup to Amazon acquisition reflects how big tech is absorbing innovation in ambient, voice-first AI. Amazon’s plans for Bee remain to be seen, but the move could mark a turning point for AI wearables if executed effectively.

US researchers expose watermark flaws

A team at the University of Maryland found that adversarial attacks easily strip most watermarking technologies designed to label AI‑generated images. Their study reveals that even visible watermarks fail to indicate content provenance reliably.

The US researchers tested low‑perturbation invisible watermarks and more robust visible ones, demonstrating that adversaries can easily remove or forge marks. Lead author Soheil Feizi noted the technology is far from foolproof, warning that ‘we broke all of them’.
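
As a toy illustration of the fragility the study describes (not the authors' actual attack), consider a least-significant-bit watermark: perturbing each pixel by as little as ±2, far below visibility, is enough to corrupt the embedded bits. The helper names below are hypothetical.

```python
import random

def embed_lsb(pixels, bits):
    # Hide one watermark bit in the least-significant bit of each pixel.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

def noise_attack(pixels, strength=2):
    # Tiny random perturbations, as an adversary might apply.
    return [max(0, min(255, p + random.randint(-strength, strength))) for p in pixels]

random.seed(0)
bits = [random.randint(0, 1) for _ in range(64)]   # the watermark payload
img = [random.randint(0, 255) for _ in range(64)]  # toy grayscale "image"

marked = embed_lsb(img, bits)
assert extract_lsb(marked) == bits  # survives a clean copy

attacked = noise_attack(marked)
errors = sum(a != b for a, b in zip(bits, extract_lsb(attacked)))
print(f"{errors} of {len(bits)} watermark bits corrupted")
```

Real schemes spread the signal more robustly than this, but the study's finding is that the same trade-off persists: perturbations small enough to preserve image quality can still erase or forge the mark.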

Despite these concerns, experts argue that watermarking can still be helpful in a broader detection strategy. UC Berkeley professor Hany Farid said robust watermarking is ‘part of the solution’ when combined with other forensic methods.

Tech giants and researchers continue to develop watermarking tools like Google DeepMind’s SynthID, though such systems are not considered infallible. The consensus emerging from recent tests is that watermarking alone cannot be relied upon to counter deepfake threats.

UK and OpenAI deepen AI collaboration on security and public services

OpenAI has signed a strategic partnership with the UK government aimed at strengthening AI security research and exploring national infrastructure investment.

The agreement was finalised on 21 July by OpenAI CEO Sam Altman and science secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office. Research and engineering teams will grow to support AI development and provide assistance to UK businesses and start-ups.

Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.

According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.

He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.

AI governance needs urgent international coordination

A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.

Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.

The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.

Meanwhile, technologies such as facial recognition and deepfake generators threaten civic trust, amplify disinformation, and could even provoke geopolitical incidents if weaponised in defence systems.

The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.

Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer — a pen — within less than a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.

Isambard-AI launch brings Britain closer to AI breakthroughs

The UK’s most powerful AI supercomputer, Isambard-AI, has officially launched in Bristol. Developed with HPE and NVIDIA, the £225 million system marks a major step in national research capability.

It can compute in one second what the global population would take 80 years to process. Housed at the National Composites Centre, it aims to drive breakthroughs in healthcare, robotics, climate science and more.

Built by the University of Bristol’s Centre for Supercomputing, the machine is part of the UK Government’s AI Research Resource (AIRR) and was launched by Science Secretary Peter Kyle.

Alongside the Dawn supercomputer in Cambridge, Isambard-AI will deliver 23 AI ExaFLOPs — equal to the UK population working non-stop for 85,000 years. It is 100,000 times faster than an average laptop and supports drug discovery, personalised medicine and advanced data analysis.

Powered by 5,400 NVIDIA GH200 Grace Hopper Superchips and HPE’s Cray EX platform, it is among the greenest supercomputers globally, running on zero-carbon electricity and using direct liquid cooling to cut energy use by up to 90%.

Plans are underway to reuse its waste heat for nearby homes and businesses. Its sustainable design cuts emissions by 72% versus traditional builds.

The University of Bristol, chosen for its AI and HPC expertise, also offers a government-backed master’s through the Sparck AI scholarship. Vice-Chancellor Professor Evelyn Welch called the launch a milestone for British AI.

Researchers are already using Isambard-AI to analyse data from wearable cameras for assisted living support, and to scan MRI data to speed up cancer detection and treatment planning.

Other teams are modelling disease-related proteins and using AI to detect illness in dairy cattle by monitoring herd behaviour — showing the system’s broad real-world impact.
