Nearly half of UK youths aged 16 to 21 say they would prefer to grow up without the internet, a new survey reveals. The British Standards Institution found that 68% feel worse after using social media and half would support a digital curfew after 10 p.m.
These findings come as the government considers app usage limits for platforms like TikTok and Instagram. The study also showed that many young people in the UK feel compelled to hide their online behaviour: 42% admitted lying to their parents, and a similar number have fake or burner accounts.
More worryingly, 27% said they have shared their location with strangers, while others admitted pretending to be someone else entirely. Experts argue that digital curfews alone won’t reduce exposure to online harms without broader safeguards in place.
Campaigners and charities are calling for urgent legislation that puts children’s safety before tech profits. The Molly Rose Foundation stressed the danger of algorithms pushing harmful content, while the NSPCC urged a shift towards less addictive and safer online spaces.
The majority of young people surveyed want more protection online and clearer action from tech firms and policymakers.
Canva has introduced Sheets, a new spreadsheet platform combining data, design, and AI to simplify and visualise analytics. Announced at the Canva Create: Uncharted event, it redefines spreadsheets by enabling users to turn raw data into charts, reports and content without leaving the Canva interface.
With built-in tools like Magic Formulas, Magic Insights, and Magic Charts, Canva Sheets supports automated analysis and visual storytelling. Users can generate dynamic charts and branded content across platforms in seconds, thanks to Canva AI and features like bulk editing and multilingual translation.
Data Connectors allow seamless integration with platforms such as Google Analytics and HubSpot, ensuring live updates across all connected visuals. The platform is designed to reduce manual tasks in recurring reports and keep teams synchronised in real time.
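How a live connector keeps visuals in sync is easiest to see in miniature. The sketch below is a hypothetical illustration of the poll-and-refresh pattern, not Canva's actual API: Chart, fetch_metrics and refresh_connected_visuals are invented names standing in for a real connector call and the visuals bound to it.

```python
# Hypothetical sketch of the live-connector pattern: pull fresh metrics
# from an external source and push the same values into every visual
# bound to that source. Names are illustrative, not Canva's API.
from dataclasses import dataclass, field


@dataclass
class Chart:
    title: str
    series: list[int] = field(default_factory=list)

    def refresh(self, values: list[int]) -> None:
        # Replace the chart's data so any report embedding it stays current.
        self.series = list(values)
        print(f"{self.title}: refreshed with {len(values)} data points")


def fetch_metrics() -> list[int]:
    # Stand-in for a real connector request (e.g. to Google Analytics or
    # HubSpot); here it simply returns fixed sample values.
    return [120, 134, 129, 151]


def refresh_connected_visuals(charts: list[Chart]) -> None:
    # One sync cycle: fetch once, then update every connected visual, so
    # all charts and reports reflect the same snapshot of the source data.
    values = fetch_metrics()
    for chart in charts:
        chart.refresh(values)


if __name__ == "__main__":
    dashboard = [Chart("Weekly sessions"), Chart("Campaign summary")]
    refresh_connected_visuals(dashboard)
```

In a real deployment this cycle would run on a schedule or be triggered by the data source, which is what keeps all connected visuals in step without manual edits.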
That initiative aims to integrate digital connectivity as a core component of property development. The draft manual provides a standardised methodology for Digital Connectivity Rating Agencies (DCRAs) to evaluate properties and offers guidance for Property Managers (PMs) to plan and build Digital Connectivity Infrastructure (DCI).
It also promotes a collaborative approach among all stakeholders, including service providers, to ensure consistent and transparent assessments. The rating system addresses the growing importance of in-building digital connectivity, as most data usage occurs indoors and high-frequency 4G/5G signals often struggle to penetrate walls.
Properties will be evaluated on factors such as fibre readiness, mobile network availability, Wi-Fi infrastructure, and service performance, enabling prospective tenants and buyers to compare properties based on digital connectivity. Well-rated properties are expected to attract more users, buyers, and investors, increasing their market value.
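To make the factor-based rating concrete, here is a minimal sketch of how a DCRA-style score could be aggregated. The factor names follow those listed above, but the weights, the 0 to 100 scale and the aggregation rule are assumptions for illustration and are not taken from the draft manual.

```python
# Assumed weights for illustration only; the draft manual's actual
# methodology may differ in factors, scales and weighting.
FACTOR_WEIGHTS = {
    "fibre_readiness": 0.30,
    "mobile_coverage": 0.30,
    "wifi_infrastructure": 0.20,
    "service_performance": 0.20,
}


def connectivity_rating(scores: dict[str, float]) -> float:
    """Combine per-factor scores (0-100) into one weighted property rating."""
    missing = set(FACTOR_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing factor scores: {sorted(missing)}")
    return round(sum(FACTOR_WEIGHTS[f] * scores[f] for f in FACTOR_WEIGHTS), 1)


# Example: a fibre-ready building whose indoor mobile coverage is weak.
example = {
    "fibre_readiness": 90,
    "mobile_coverage": 55,
    "wifi_infrastructure": 80,
    "service_performance": 70,
}
print(connectivity_rating(example))  # 73.5
```

A single comparable number like this is what would let prospective tenants and buyers weigh one property's connectivity against another's.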
With Google I/O 2025 around the corner, concerns are growing about artificial intelligence creeping into every corner of Google’s ecosystem. While AI has enhanced tools like Gmail and Photos, some users are urging Google to leave certain apps untouched.
These include fan favourites like Emoji Kitchen, Google Keep, and Google Wallet, which continue to shine due to their simplicity and human-focused design. Critics argue that introducing generative AI to these apps could diminish what makes them special.
Emoji Kitchen’s handcrafted stickers, for example, are widely praised compared to Apple’s AI-driven alternatives. Likewise, Google Keep and Wallet are valued for their light, efficient interfaces that serve clear purposes without AI interference.
Even in environments where AI might seem useful, such as Android Auto and Google Flights, the call is for restraint. Users appreciate clear menus and limited distractions over chatbots making unsolicited suggestions.
As AI continues to dominate tech conversations, a growing number of voices are asking Google to preserve the balance between innovation and usability.
AI tools such as pain-detecting apps, night-time sensors, and even training robots are increasingly shaping social care in the UK.
Care homes now use the Painchek app to scan residents’ faces for pain indicators, while sensors like AllyCares monitor unusual activity, reducing preventable hospital visits.
Meanwhile, Oxford researchers have created a robot that helps train carers by mimicking patients’ reactions to pain. Families often adjust to the technology after seeing improvements in their loved ones’ care, but transparency and human oversight remain essential.
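As a rough illustration of what "monitoring unusual activity" can mean in practice, the sketch below flags a night whose sensor activity departs sharply from a resident's own recent baseline. It is a deliberate simplification, not the method used by AllyCares or Painchek; the threshold and variable names are assumptions.

```python
# Simplified night-time monitoring: compare tonight's activity count with
# the resident's own recent baseline and flag large deviations for staff.
# This is an illustrative toy model, not a vendor's actual algorithm.
from statistics import mean, stdev


def flag_unusual_night(history: list[int], tonight: int, z_threshold: float = 2.0) -> bool:
    """Return True when tonight's activity sits far outside the recent baseline."""
    if len(history) < 3:
        return False  # not enough history to judge reliably
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return tonight != baseline
    return abs(tonight - baseline) / spread > z_threshold


# Example: a resident who usually registers around five movement events a night.
recent_nights = [4, 5, 6, 5, 4, 6, 5]
print(flag_unusual_night(recent_nights, tonight=18))  # True -> prompt a staff check-in
print(flag_unusual_night(recent_nights, tonight=6))   # False -> nothing unusual
```

The point of such a flag is to prompt a human check rather than replace it, which is why transparency and oversight remain central to how these tools are deployed.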
Despite the promise of these innovations, experts urge caution. Dr Caroline Green from the University of Oxford warns that AI must remain a support, not a replacement, and raises concerns about bias, data privacy, and potential overdependence on technology.
With the UK's ageing population and staffing shortages straining social care, technology offers valuable assistance.
Specialists stress that investment in skilled human carers remains crucial. The government has endorsed the role of AI in care but has yet to establish clear national policies guiding its ethical use.
Google has announced that children under the age of 13 will soon be able to access its Gemini AI chatbot through Family Link accounts. The service will allow parents to monitor their child’s use, set screen time limits, and disable access if desired.
Gemini, designed to assist with tasks like homework and storytelling, includes safeguards to prevent inappropriate content and protect child users. Google acknowledged the possibility of errors in the AI’s responses and urged parental oversight.
Google emphasised that data collected from child users will not be used to train AI models. Parents will be notified when their child first uses Gemini and are advised to encourage critical thinking and remind children not to share personal information with the chatbot.
Despite these precautions, child safety advocates have voiced concerns. Organisations such as Fairplay argue that allowing young children to interact with AI chatbots could expose them to risks, citing previous incidents involving other AI platforms.
International bodies, including UNICEF, have also highlighted the need for stringent regulations to safeguard children’s rights in an increasingly digital world.
Huawei Technologies is preparing to test its newest AI processor, the Ascend 910D, as it seeks to offer an alternative to Nvidia’s products following US export restrictions. The company has approached several Chinese tech firms to assess the technical feasibility of the new chip.
Extensive testing will follow to ensure the chip’s performance before it reaches the wider market. Sources claim Huawei aims for the Ascend 910D to outperform Nvidia’s H100 chip, widely used for AI training since 2022.
Huawei is already shipping large volumes of its earlier Ascend 910B and 910C models to state-owned carriers and private AI developers like ByteDance. Demand for these processors has risen as US restrictions tightened Nvidia’s ability to sell its H20 chip to China.
Increased domestic demand for Huawei’s AI hardware signals a shift in China’s semiconductor market amid geopolitical tensions. Analysts believe this development strengthens Huawei’s ambition to compete globally in the AI chip market.
A group of former OpenAI employees, supported by Nobel laureates and AI experts, has urged the attorneys general of California and Delaware to block the company’s proposed transition from a nonprofit to a for-profit structure.
They argue that such a shift could compromise OpenAI’s founding mission to develop artificial general intelligence (AGI) that benefits all of humanity, potentially prioritising profit over public safety and accountability, not just in the US, but globally.
The coalition, which includes economists Oliver Hart and Joseph Stiglitz and AI pioneers Geoffrey Hinton and Stuart Russell, expressed concern that the restructuring would reduce nonprofit oversight and increase investor influence.
They fear this change could lead to diminished ethical safeguards, especially as OpenAI advances toward creating AGI. OpenAI responded by stating that any structural changes would aim to ensure broader public benefit from AI advancements.
The company plans to adopt a public benefit corporation model while maintaining a nonprofit arm to uphold its mission. The final decision rests with the state authorities, who are reviewing the proposed restructuring.
Smartphones, computers, and key tech components have been granted exemption from the latest round of US tariffs, providing relief to American technology firms heavily reliant on Chinese manufacturing.
The decision, which covers products such as semiconductors, solar cells, and memory cards, marks the first major rollback in President Donald Trump's trade war with China.
The exemptions, retroactively effective from 5 April, come amid concerns from US tech giants that consumer prices would soar.
Analysts say this move could be a turning point, especially for companies like Apple and Nvidia, which source most of their hardware from China. Industry reaction has been overwhelmingly positive, with suggestions that the policy shift could reshape global tech supply chains.
Despite easing tariffs on electronics, Trump has maintained a strict stance on Chinese trade, citing national security and economic independence.
The White House claims the reprieve gives firms time to shift manufacturing to the US. However, electronic goods will still face a separate 20% tariff linked to China's role in the fentanyl trade. Meanwhile, Trump insists high tariffs are essential leverage to renegotiate fairer global trade terms.
Google DeepMind is enforcing strict non-compete agreements in the United Kingdom, preventing employees from joining rival AI companies for up to a year. The length of the restriction depends on an employee’s seniority and involvement in key projects.
Some DeepMind staff, including those working on Google’s Gemini AI, are reportedly being paid not to work while their non-competes run. The policy comes as competition for AI talent intensifies worldwide.
Employees have voiced concern that these agreements could stall their careers in a rapidly evolving industry. Some are seeking ways around the restrictions, such as moving to countries with less rigid employment laws.
While DeepMind claims the contracts are standard for sensitive work, critics say they may stifle innovation and mobility. The practice remains legal in the UK, even though similar agreements have been banned in the US.
For more information on these topics, visit diplomacy.edu.