Vodafone UK has teamed up with IBM to explore quantum-safe cryptography in a new Proof of Concept (PoC) test for its mobile and broadband services, particularly for users of its ‘Secure Net’ anti-malware service. While quantum computers are still in the early stages of development, a sufficiently powerful machine could eventually break the public-key encryption that secures most internet traffic today. In anticipation of this, Vodafone and IBM are testing how to integrate the new post-quantum cryptographic standards into Vodafone’s existing Secure Net service, which already protects millions of users from threats like phishing and malware.
IBM’s cryptography experts co-developed two of the algorithms, CRYSTALS-Kyber and CRYSTALS-Dilithium, included in the US National Institute of Standards and Technology’s first post-quantum cryptography standards. This collaboration, supported by Akamai Technologies, aims to make Vodafone’s services more resilient against future quantum computing risks. Vodafone’s Head of R&D, Luke Ibbetson, stressed the importance of future-proofing digital security so that customers can continue to enjoy safe internet experiences.
Although the PoC is still in its feasibility phase, Vodafone hopes to implement quantum-safe cryptography across its networks and products soon, ensuring stronger protection for both business and consumer users.
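The usual migration approach during such trials is to run classical and post-quantum key exchange side by side and combine both shared secrets, so the connection stays secure as long as either scheme holds. The sketch below illustrates that idea with a single-block HKDF-SHA256; it is a minimal, hypothetical example, not Vodafone's or IBM's actual design, and the function name and context label are invented for illustration.

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-handshake-demo") -> bytes:
    """Derive one 32-byte session key from two independent shared secrets
    using a single-block HKDF-SHA256 (extract, then expand).

    An attacker must break BOTH inputs to recover the key, which is the
    point of running classical and post-quantum key exchange in parallel
    during the migration period.
    """
    # HKDF-Extract: concatenate both secrets into the input keying material.
    salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, classical_secret + pq_secret, hashlib.sha256).digest()
    # HKDF-Expand (first block only): bind the derived key to a context label.
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Stand-in secrets; in practice these would come from, e.g., ECDH and a
# post-quantum KEM such as ML-KEM.
key = hybrid_session_key(b"\x11" * 32, b"\x22" * 32)
print(len(key))  # 32
```

Because the two secrets are concatenated before extraction, the derived key changes if either input changes, so compromising only the classical half yields nothing.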
For more information on these topics, visit diplomacy.edu.
Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.
The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.
Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur, meanwhile, have not yet responded to requests for comment.
The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.
Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share a general ‘age range’ with app developers, putting them in control of what data is shared.
This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concern about collecting sensitive personal data for such checks, arguing that blanket verification would force users to hand over sensitive details even to apps that have no need for them.
The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.
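The data-minimisation idea behind this can be shown in a few lines: the platform holds the exact age, while an app receives only a coarse bucket. The sketch below is purely illustrative; the function name and bucket boundaries are hypothetical and do not reflect Apple's actual API.

```python
def declared_age_range(age: int) -> str:
    """Map an exact age to a coarse bucket so an app learns only the range,
    never the birthdate. Boundaries here are illustrative only."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return "under-13"
    if age < 16:
        return "13-15"
    if age < 18:
        return "16-17"
    return "18+"

print(declared_age_range(14))  # 13-15
```

An app asking "is this user a teenager?" gets an answer from the bucket alone, which is exactly the trade-off Apple is promoting over handing out birthdays or government IDs.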
Europol announced on Friday that 25 people have been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. The operation is one of the first of its kind, highlighting concerns over the use of AI to create illegal content. Europol noted that national legislation addressing AI-generated child abuse material is currently lacking.
The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world paid a ‘symbolic online payment’ to access the material. The platform has raised significant concerns about the potential misuse of AI tools for such criminal purposes.
The ongoing operation, which involves authorities from 19 countries, resulted in 25 arrests, with most occurring simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.
Instagram is considering launching a separate app for its Reels feature, which focuses on short-form videos, according to remarks made by Instagram chief Adam Mosseri this week. The potential move is seen as an effort to capitalise on the uncertain future of TikTok in the US, aiming to offer a similar video-scrolling experience. Meta, the parent company of Instagram, has yet to comment on the report.
This comes just months after Meta introduced a new video-editing app, Edits, in January, which appears to target users of CapCut, a popular video editor owned by TikTok’s parent company, ByteDance. Meta’s previous attempt to launch a standalone video-sharing app, Lasso, in 2018 failed to gain traction and was eventually discontinued.
By exploring a dedicated app for Reels, Instagram hopes to strengthen its position in the competitive short-form video market, where TikTok currently dominates.
Estonia has launched a new initiative aimed at preparing students and teachers for the age of AI. The ‘AI Leap’ programme will provide access to popular AI chatbots, including an educational version of ChatGPT, to help build digital skills. Starting in September 2025, the programme will involve 20,000 high school students and 3,000 teachers, with plans to expand to vocational schools and an additional 38,000 students and 3,000 teachers in 2026.
Education Minister Kristina Kallas emphasised that Estonia’s economic competitiveness depends on how well the country adapts to AI, ensuring young people are equipped for the future. As part of the initiative, Estonia will also invest in teacher training to support the integration of AI in classrooms.
The programme is a public-private partnership, with negotiations underway with major AI companies, including OpenAI and Anthropic. OpenAI has expressed its pride in collaborating with Estonia to bring ChatGPT Edu to the education system, aiming to better prepare students for the workforce. Estonia’s use of AI in education is seen as a model that other countries may follow as the EU pushes to increase digital skills across Europe by 2030.
British universities have been urged to reassess their assessment methods after new research revealed a significant rise in students using generative AI (genAI) for their projects. A survey of 1,000 undergraduates found that 88% of students used AI tools like ChatGPT for assessments in 2025, up from 53% in 2024. Overall, 92% of students now use some form of AI, marking a substantial shift in academic behaviours in just a year.
The report, by the Higher Education Policy Institute and Kortext, highlights how AI is being used for tasks such as summarising articles, explaining concepts, and suggesting research ideas. While AI can enhance the quality of work and save time, some students admitted to directly including AI-generated content in their assignments, raising concerns about academic misconduct.
The research also found that concerns over AI’s potential impact on academic integrity vary across demographics. Women, wealthier students, and those studying STEM subjects were more likely to embrace AI, while others expressed fears about getting caught or receiving biased results. Despite these concerns, students generally feel that universities are addressing the issue of academic integrity, with many believing their institutions have clear policies on AI use.
Experts argue that universities need to adapt quickly to the changing landscape, with some suggesting that AI should be integrated into teaching rather than being seen solely as a threat to academic integrity. As AI tools become an essential part of education, institutions must find a balance between leveraging the technology and maintaining academic standards.
Canada’s telecommunications regulator, the CRTC, announced on Wednesday that it will impose a fee on Google to cover the costs of enforcing the Online News Act, which requires large tech platforms to pay for news content shared on their sites. The levy, which will be implemented from April 1, will vary each year and has no upper limit. This move comes amid rising tensions between Canada and the US over issues like trade and a digital services tax on American tech firms.
The CRTC stated that most of its operations are funded by fees from the companies it regulates, and the new charge aims to recover costs related to the law. Google, which had previously raised concerns about the fairness of such a rule, had argued that it was unreasonable to impose 100% of the costs on one company. Despite this, Google has agreed to pay C$100 million annually to Canadian publishers in a deal that ensures its search results continue to feature news content.
The law, which is part of a global trend to make internet giants pay for news, was introduced last year in response to concerns that tech firms were crowding out news businesses in the online advertising market. While both Google and Meta were identified as major platforms required to make payments, Meta chose to block news from its platforms in Canada instead. Google, however, has continued to negotiate with the Canadian government, although it has yet to comment further on the CRTC’s decision.
For over a century, the Rorschach inkblot test has been used to explore human psychology by revealing the hidden facets of the mind through personal interpretations of ambiguous shapes. The test leverages a phenomenon known as pareidolia, where individuals perceive patterns, such as animals or faces, in random inkblots. Now, thanks to the advances in artificial intelligence, this test has been used to explore how AI interprets these same images.
In an intriguing experiment, ChatGPT was shown five common inkblots to see how it would respond. Unlike humans, who often project their emotions or personal experiences onto the images, the AI offered more literal interpretations, identifying symmetrical shapes or common visual features. However, these responses were based purely on patterns it had been trained to recognise, rather than any true emotional connection to the inkblots.
The AI’s responses were consistent with what it had learned from vast datasets of human interpretations. But while humans might see a butterfly or a skull, the AI merely recognised a shape, demonstrating a key difference between human cognition and machine processing. This experiment highlights the unique human ability to attach emotional or symbolic meaning to abstract visuals, something AI is not equipped to replicate.
Europe’s top court has ruled that Google’s decision to block an Enel e-mobility app from Android Auto could be considered an abuse of market power. The judgment reinforces competition rules and may push major tech firms to allow easier access for rival apps.
The case stemmed from a €102 million fine imposed by Italy’s antitrust authority in 2021 for restricting access to Enel’s JuicePass app.
Google challenged the penalty, arguing security concerns and the absence of a specific app template. However, the Court of Justice of the European Union backed the Italian regulator, stating that dominant companies must ensure interoperability unless valid security risks exist.
The court clarified that companies should develop necessary templates within a reasonable timeframe.
Although Google has since introduced the requested feature, the ruling may set a precedent for similar cases. Legal experts see it as aligning with EU competition law, citing past decisions against IBM and Microsoft.
The ruling also supports the objectives of the Digital Markets Act, which aims to regulate dominant digital platforms.
The court’s judgment is final and cannot be appealed; the Italian Council of State must now decide Google’s pending appeal against the fine in line with the court’s findings.