Teachers in Stoke-on-Trent gathered for a full-day event to discuss the role of AI in education. Organised by the Good Future Foundation, the session saw more than 40 educators, including Stoke-on-Trent South MP Allison Gardner, explore how AI can enhance teaching and learning. Gardner emphasised the government’s belief that AI represents a ‘generational opportunity’ for education in the UK.
The event highlighted both the promise and the challenges of integrating AI into UK schools. Attendees shared ideas on using AI to improve communication, particularly with families who speak English as an additional language, and to streamline access to school resources through automated chatbots. While the potential benefits are clear, many teachers expressed concerns about the risks associated with new technology.
Daniel Emmerson, executive director of the Good Future Foundation, stressed the importance of supporting educators in understanding and implementing AI. He explained that AI can help prepare students for a future shaped by the technology. Meanwhile, schools like Belgrave St Bartholomew’s Academy are already leading the way, using AI to improve lessons and prepare students for the opportunities it will bring.
Google has announced the addition of its HD voice model, Chirp 3, to its Vertex AI platform, marking a significant step in its push into voice AI. Starting next week, developers will be able to use the platform to build applications such as voice assistants, audiobooks, and video voice-overs with eight new voices available in 31 languages.
The launch comes at a time when other companies, including startups like Sesame, are also advancing in the field of realistic-sounding AI voices. Amid this growing competition, Google remains cautious about potential misuse, with Google Cloud CEO Thomas Kurian noting that the company is working closely with its safety team to establish proper usage guidelines for Chirp 3.
Chirp 3 joins the other tools on the Vertex AI platform, which already offers machine learning and generative AI services such as the Gemini and Imagen models. As AI voice applications rapidly gain traction, it will be interesting to see how Google expands its offerings to stay competitive in this evolving space.
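For developers, the snippet below is a minimal sketch of what a Chirp 3 call might look like through the Google Cloud Text-to-Speech Python client, which fronts these voices; the specific voice name is an assumption for illustration, so check Google’s documentation for the voices actually available.

```python
# Minimal sketch: synthesising speech with a Chirp 3 HD voice via the
# Google Cloud Text-to-Speech client.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Welcome to chapter one."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Chirp3-HD-Aoede",  # hypothetical HD voice name
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16
    ),
)

with open("narration.wav", "wb") as f:
    f.write(response.audio_content)
```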
Brave Software has filed a lawsuit against News Corp in a bid to preempt legal action over the indexing of copyrighted articles from publications such as The Wall Street Journal and the New York Post.
The legal dispute stems from a cease-and-desist letter issued by News Corp, which accused Brave of ‘scraping’ its websites and misappropriating content. Brave argues that indexing is standard practice for search engines and falls under ‘fair use.’
The lawsuit also raises concerns about the impact of such legal challenges on generative AI. Brave claims that search indexing is essential for AI models like ChatGPT and Google’s Gemini, which rely on search engine results to ground their responses.
The company, which holds less than 1% of the search market compared to Google’s 90%, accuses News Corp of attempting to stifle competition and raise barriers for smaller search providers.
News Corp has rejected Brave’s arguments, with CEO Robert Thomson calling the company’s practices ‘parasitical’ and accusing it of unauthorised content scraping.
The dispute is part of a broader conflict between publishers and tech firms over the use of copyrighted material in AI training. News Corp previously sued AI startup Perplexity AI for allegedly copying its content without permission.
Brave is seeking a court declaration that its indexing practices do not constitute copyright infringement.
Dapr, the open-source microservices runtime introduced by Microsoft in 2019, has added new capabilities to support AI agents, broadening its appeal to developers creating scalable distributed applications.
Initially designed to simplify microservice-based app development, Dapr’s new functionality builds on its existing concept of virtual actors, making it easier to incorporate AI agents into systems.
The newly launched Dapr Agents give developers a framework for running stateful AI agents efficiently at scale, making the runtime well suited to applications built on large language models (LLMs).
The update also allows seamless integration with popular AI providers, such as AWS Bedrock, OpenAI, and Hugging Face. Developers further benefit from Dapr’s orchestration and resource-efficient model, which ensures agents can spin up quickly when needed and retain state after tasks are completed.
Dapr Agents currently support Python, with plans for .NET and other languages like Java and Go coming soon.
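The sketch below shows roughly what defining and running such an agent could look like in Python. The class, decorator, and parameter names are assumptions modelled on the project’s early examples rather than a verified API surface, so consult the Dapr Agents documentation before relying on them.

```python
# Illustrative sketch only: the dapr_agents names below are assumptions,
# not a verified API; check the Dapr Agents docs for the real surface.
import asyncio

from dapr_agents import Agent, tool

@tool
def order_status(order_id: str) -> str:
    """Look up an order's status (stubbed here for illustration)."""
    return f"Order {order_id} has shipped."

# Agents build on Dapr's virtual actors: they spin up on demand and
# retain their state after a task completes.
support_agent = Agent(
    name="SupportAgent",
    role="Customer support assistant",
    instructions=["Answer order-status questions using the order_status tool."],
    tools=[order_status],
)

if __name__ == "__main__":
    asyncio.run(support_agent.run("Where is order 42?"))
```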
Technology Secretary Peter Kyle has been using ChatGPT to seek advice on media appearances and to define technical terms related to his role.
His records, obtained by New Scientist through freedom of information laws, reveal that he asked the AI tool for recommendations on which podcasts to feature and for explanations of terms like ‘digital inclusion’ and ‘anti-matter.’
ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists due to their broad reach and scientific focus.
Kyle also inquired why small and medium-sized businesses in the UK have been slow to adopt AI. The chatbot pointed to factors such as a lack of awareness about government initiatives, funding limitations, and concerns over data protection regulations like GDPR.
While AI adoption remains a challenge, Prime Minister Sir Keir Starmer has praised its potential, arguing that the UK government should embrace AI more widely to improve efficiency.
Despite Kyle’s enthusiasm for AI, he has faced criticism for allegedly prioritising the interests of Big Tech over Britain’s creative industries. Concerns have been raised over a proposed policy that could allow tech firms to train AI on copyrighted material without permission unless creators opt out.
His department defended his use of AI, stating that while he uses the tool, it does not replace expert advice from officials.
Google has announced an update to its Gemini AI assistant, enhancing personalisation to better anticipate user needs and deliver responses that feel more like those of a personal assistant.
The feature, initially available on desktop before rolling out to mobile, allows Gemini to offer tailored recommendations, such as travel ideas, based on search history and, in the future, data from apps like Photos and YouTube.
Users can opt in to the new personalisation features, sharing details like dietary preferences or past conversations to refine responses further.
Google says users must explicitly grant permission for Gemini to access their search history and other services, and that they can disconnect it at any time.
At the same time, this level of contextual awareness could give Google an advantage over competitors like ChatGPT by leveraging its vast ecosystem of user data.
The update signals a shift in how users interact with AI, bringing it closer to traditional search while raising questions for publishers and SEO professionals.
As Gemini increasingly provides direct, personalised answers, it may reduce the need for users to visit external websites. While currently experimental, the potential for Google to push broader adoption of AI-driven personalisation could reshape digital content discovery and search behaviour in the future.
The National Institute of Standards and Technology (NIST) has introduced HQC, a backup encryption algorithm designed to protect sensitive data from potential threats posed by future quantum computers.
As part of its ongoing efforts to strengthen cybersecurity, the agency selected HQC to complement the existing post-quantum cryptography (PQC) standard, ML-KEM, in case quantum advancements compromise current encryption methods.
HQC relies on error-correcting codes, a mathematical approach used in data protection for decades, including in NASA missions.
The algorithm uses larger keys and ciphertexts than ML-KEM and requires more computing power, but experts judged it a secure and reliable alternative. A draft standard for HQC is expected within a year, with final approval anticipated by 2027.
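Both ML-KEM and HQC are key-encapsulation mechanisms, so they expose the same three operations to applications. The sketch below shows only that generic interface and its data flow; it is not an implementation of either algorithm.

```python
# Generic KEM interface shared by ML-KEM and HQC: illustrative only,
# showing the key-establishment data flow, not a secure implementation.
from typing import Protocol, Tuple

class KEM(Protocol):
    def keygen(self) -> Tuple[bytes, bytes]:
        """Return (public_key, secret_key); run by the receiver."""
        ...

    def encapsulate(self, public_key: bytes) -> Tuple[bytes, bytes]:
        """Return (ciphertext, shared_secret); run by the sender."""
        ...

    def decapsulate(self, secret_key: bytes, ciphertext: bytes) -> bytes:
        """Recover the shared secret from the ciphertext; run by the receiver."""
        ...

def establish_key(kem: KEM) -> bytes:
    pk, sk = kem.keygen()                    # receiver publishes pk
    ct, sender_secret = kem.encapsulate(pk)  # sender derives a secret, sends ct
    receiver_secret = kem.decapsulate(sk, ct)
    assert sender_secret == receiver_secret  # both sides now share a key
    return receiver_secret
```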
NIST has been working to prepare for the so-called ‘Q day,’ when quantum computers could break conventional encryption. Three PQC algorithms were finalised in 2024, including ML-KEM and two digital signature standards.
In addition to announcing HQC, NIST is preparing to release a draft standard for the FALCON algorithm, further strengthening protections against future cyber threats.
OpenAI has unveiled new tools to help developers and businesses build AI agents, which are automated systems that can independently perform tasks. These tools are part of OpenAI’s new Responses API, allowing enterprises to create custom AI agents that can search the web, navigate websites, and scan company files, similar to OpenAI’s existing Operator product. The company plans to phase out its older Assistants API by 2026, replacing it with the new capabilities.
The Responses API provides developers with access to powerful AI models, such as GPT-4o search and GPT-4o mini search, which are designed for high factual accuracy. OpenAI claims these models can offer more reliable answers than previous versions, with GPT-4o search achieving a 90% accuracy rate. Additionally, the platform includes a file search feature to help companies quickly retrieve information from their databases. The Computer-Using Agent (CUA) model, which automates tasks like data entry, is also available, allowing developers to automate workflows with more precision.
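As a rough sketch, a web-search-enabled call through the Responses API in OpenAI’s Python SDK looks like the snippet below; the model and tool identifiers shown follow OpenAI’s announcement and may change, so verify them against current documentation.

```python
# Minimal sketch: a Responses API call with the built-in web-search tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # tool name per early docs; may change
    input="Summarise this week's developments in post-quantum cryptography.",
)

print(response.output_text)  # the assembled text of the model's answer
```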
Despite its promise, OpenAI acknowledges that there are still challenges to address, such as AI hallucinations and occasional errors in task automation. However, the company continues to improve its models, and the introduction of the Agents SDK gives developers the tools they need to build, debug, and optimise AI agents. OpenAI’s goal is to move beyond demos and create impactful tools that will shape the future of AI in enterprise applications.
Spain’s government has approved a bill imposing heavy fines on companies that fail to label AI-generated content, aiming to combat the spread of deepfakes.
The legislation, which aligns with the European Union’s AI Act, classifies non-compliance as a serious offence, with penalties reaching up to €35 million or 7% of a company’s global revenue.
Digital Transformation Minister Oscar Lopez stressed that AI can be a force for good but also a tool for misinformation and threats to democracy.
The bill also bans manipulative AI techniques, such as subliminal messaging targeting vulnerable groups, and restricts the use of AI-driven biometric profiling, except in cases of national security.
Spain is one of the first EU nations to implement these strict AI regulations, going beyond the looser US approach, which relies on voluntary compliance.
A newly established AI supervisory agency, AESIA, will oversee enforcement, alongside sector-specific regulators handling privacy, financial markets, and law enforcement concerns.
The Trump administration has cut funding for two key cybersecurity initiatives, including one supporting election security, sparking concerns over potential vulnerabilities in future US elections.
The Cybersecurity and Infrastructure Security Agency (CISA) announced it would end around $10 million in annual funding to the non-profit Center for Internet Security, which manages election-related cybersecurity programmes.
The move comes as part of a broader review of CISA’s election-related work, during which over a dozen staff members were placed on administrative leave.
The decision follows another controversial step by the administration to dismantle an FBI task force that investigated foreign influence in US elections.
Critics warn that reducing government involvement in election security weakens safeguards against interference, with Larry Norden from the Brennan Center for Justice calling the cuts a serious risk for state and local election officials.
The National Association of Secretaries of State is now seeking clarification on CISA’s decision and its wider implications.
CISA has faced Republican criticism in recent years for its role in countering misinformation related to the 2020 election and the coronavirus pandemic. However, previous leadership maintained that the agency’s work was limited to assisting states in identifying and addressing misinformation.
While CISA argues the funding cuts will streamline its focus on critical security areas, concerns remain over the potential impact on election integrity and cybersecurity protections across local and state governments.