UK users lose access to Imgur amid watchdog probe

Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.

Visitors to the platform since 30 September have been met with a notice saying that content is unavailable in their region, with embedded Imgur images on other sites also no longer visible.

The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.

The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.

Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.

However, both the ICO and Ofcom said the withdrawal was a commercial decision by Imgur. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT explores AI solutions to reduce emissions

Rapid growth in AI data centres is driving up global energy use and emissions, prompting MIT scientists to explore ways to cut the carbon footprint through smarter computing, greater efficiency, and improved data centre design.

Innovations include trimming energy-heavy training, using optimised or lower-power processors, and improving algorithms so results are achieved with fewer computations. These avoided operations, known as ‘negaflops’, can dramatically lower energy consumption without compromising AI performance.

Adjusting workloads to coincide with periods of higher renewable energy availability also helps cut emissions.
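This kind of carbon-aware scheduling can be as simple as deferring flexible jobs to the hours with the cleanest forecast grid mix. A minimal sketch, assuming a hypothetical hourly forecast of grid carbon intensity (the figures and function names are illustrative, not from the article):

```python
# Hypothetical hourly forecast of grid carbon intensity in gCO2/kWh.
# Lower values typically indicate a higher share of renewable generation.
forecast = {0: 420, 3: 380, 6: 300, 9: 180, 12: 120, 15: 150, 18: 310, 21: 400}

def greenest_hours(forecast: dict[int, int], n_jobs: int) -> list[int]:
    """Return the n_jobs start hours with the lowest forecast carbon intensity."""
    # Rank hours by intensity, take the cleanest n_jobs, then sort chronologically.
    return sorted(sorted(forecast, key=forecast.get)[:n_jobs])

print(greenest_hours(forecast, 3))  # → [9, 12, 15]
```

Real deployments would pull live intensity data from a grid operator or a service such as Electricity Maps, but the core idea is the same: move deferrable compute into the cleanest hours.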

Location and infrastructure play a significant role in reducing carbon impact. Data centres in cooler climates, flexible multi-user facilities, and long-duration energy storage systems can all decrease reliance on fossil fuels.

Meanwhile, AI is being applied to accelerate renewable energy deployment, optimise solar and wind generation, and support predictive maintenance for green infrastructure.

Experts stress that effective solutions require collaboration among academia, industry, and regulators. Combining more efficient AI, smarter energy use, and clean power could cut emissions while supporting generative AI’s rapid growth.

Anthropic unveils Claude Sonnet 4.5 as the best AI coding model yet

Anthropic has released Claude Sonnet 4.5, its most advanced AI model yet, claiming state-of-the-art results in coding benchmarks. The company says the model can build production-ready applications, rather than limited prototypes, making it more reliable than earlier versions.

Claude Sonnet 4.5 is available through the Claude API and chatbot at the same price as its predecessor: $3 per million input tokens and $15 per million output tokens.
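At those rates, the cost of a call scales linearly with token counts. A quick sketch using the article’s published rates (the token counts in the example are hypothetical):

```python
# Claude Sonnet 4.5 rates per the article: $3 per 1M input tokens,
# $15 per 1M output tokens.
INPUT_RATE = 3.00 / 1_000_000
OUTPUT_RATE = 15.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt producing a 500-token reply
print(f"${request_cost(2_000, 500):.4f}")  # → $0.0135
```

Output tokens dominate the bill at these rates, so long generations cost roughly five times as much per token as the prompt that triggered them.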

Early enterprise tests suggest the model can autonomously code for extended periods, integrate databases, secure domains, and perform compliance checks such as SOC 2 audits.

Industry leaders have endorsed the launch, with Cursor and Windsurf calling it a new generation of AI coding models. Anthropic also emphasises stronger alignment, noting reduced risks of deception and sycophancy, and improved resistance to prompt injection attacks.

Alongside the model, the company has introduced a Claude Agent SDK to let developers build customised agents, and launched ‘Imagine with Claude’, a research preview showing real-time code generation.

The release highlights the intense competition in AI, with Anthropic pushing frequent updates to keep pace with rivals such as OpenAI, which recently gained ground on coding performance with GPT-5.

Claude Sonnet 4.5 follows just weeks after Anthropic’s Claude Opus 4.1, underlining the rapid development cycles driving the sector.

Lufthansa turns to automation and AI for efficiency

Lufthansa Group has unveiled a transformation strategy that places digitalisation and AI at the centre of its future operations. At Capital Markets Day, the company said efficiency will come from automation and streamlined processes.

Around 4,000 administrative roles are set to be cut by 2030, mainly in Germany, as Lufthansa consolidates functions and reduces duplication of work. Executives stressed that the focus will be on non-operational roles, with staff reductions to be conducted in consultation with social partners.

The airline group also confirmed continued investment in fleet renewal, with more than 230 new aircraft expected by 2030. Digital transformation and AI aim to cut costs, accelerate decisions, and boost competitiveness across the group’s airlines, cargo, and technical services.

By 2030, Lufthansa aims for an 8-10 percent EBIT margin, 15-20 percent return on capital, and over €2.5 billion in annual free cash flow. The company said these measures will ensure long-term resilience in a changing industry.

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

New Facebook tools help creators boost fan engagement

Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.

Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.

Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.

Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.

To complement these features, Facebook is adding new metrics showing the number of Top Fans on a page. These insights help creators measure their engagement efforts and reward their most dedicated followers.

The tools are now available to eligible creators worldwide.

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory, image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Kazakhstan launches Alem Crypto Fund for digital assets

Kazakhstan has launched the Alem Crypto Fund to strengthen its presence in digital finance. The state-backed fund, created by the Ministry of Artificial Intelligence and Digital Development, will focus on long-term investments in digital assets and forming strategic reserves.

The initiative is managed by Qazaqstan Venture Group and registered within the Astana International Financial Centre (AIFC), a hub for financial innovation. Officials have suggested the fund could evolve into a tool for state-level savings, enhancing the country’s economic resilience.

Binance Kazakhstan, the locally licensed arm of the global exchange, has been named the fund’s strategic partner. The fund made its first investment in BNB, the native token of the BNB Chain, which has a market capitalisation of over $138 billion.

Government representatives and Binance Kazakhstan described the collaboration as a milestone for institutional recognition of cryptocurrencies in Kazakhstan. It signals a move toward a more transparent and secure digital asset market integrated with global technologies.

UK’s Stockton secures £100m AI data centre to strengthen local economy

A £100m AI data centre has been approved for construction on the outskirts of Stockton, with developers Latos Data Centres pledging up to 150 new jobs.

The Preston Farms Industrial Estate site will feature two commercial units, plant, substations and offices, designed to support the growing demands of AI and advanced computing.

Work on the Neural Data Centre is set to begin at the end of the year, with full operations expected by 2028. The project has been welcomed by Industry Minister and Stockton North MP Chris McDonald, who described it as a significant investment in skills and opportunities for the future.

Latos managing director Andy Collin said the facility was intended to be ‘future proof’, calling it a purpose-built factory for the modern digital economy. Local leaders hope the investment will help regenerate Teesside’s industrial base, positioning the region as a hub for cutting-edge infrastructure.

The announcement follows the UK government’s decision to create an AI growth zone in the North East, covering sites in Northumberland and Tyneside. Teesworks in Redcar was not included in the initial allocation, but ministers said further proposals from Teesside were still under review.

New Jersey proposes bill to uncover data centre energy and water use

New Jersey legislators have introduced a bill requiring data centre operators in the state to disclose their annual energy and water usage publicly. The measure seeks to inject transparency into operations that are notorious for high resource consumption.

The proposed law emerges amidst broader scrutiny of data centres’ environmental footprint. Researchers in the Great Lakes region estimate hyper-scale facilities could use up to 365 million gallons of water yearly for cooling and related systems.

Supporters say the disclosures can help policymakers and communities understand strain on the electric grid and water supplies, especially as data centre growth accelerates. Critics warn that requiring such detailed reporting might discourage investment or create competitive disadvantages.
