Tesla moves to enter the British household electricity market

Tesla has applied for a licence that would allow it to supply electricity directly to households and businesses across Great Britain.

The application was submitted to the national energy regulator Ofgem, which oversees energy suppliers in England, Scotland and Wales.

Approval would enable the company to enter the retail electricity market as early as next year. The service is expected to operate under the brand ‘Tesla Electric’, extending the company’s strategy of combining electric vehicles, battery storage and energy supply into a single ecosystem.

Tesla’s UK energy subsidiary, Tesla Energy Ventures, filed the application through its Manchester-based operation. Regulatory review may take several months, as Ofgem typically requires up to nine months to evaluate electricity supplier licences.

A future electricity offer could primarily target households that already use Tesla technologies, including home batteries and electric vehicle charging systems.

Tesla already sells its Powerwall storage batteries in the UK; these allow homeowners to store electricity generated by solar panels or purchased during off-peak hours.

Such systems also allow surplus energy stored in batteries to be sold back to the grid.

Similar services are already available in the US, where Tesla launched a residential electricity supply programme in Texas in 2022.

The expansion into the energy supply market comes amid pressure on Tesla’s automotive business in Europe. Sales of Tesla vehicles in the UK declined significantly during 2025, reducing the company’s share of the national car market.

Diversifying into energy services could therefore represent a broader strategic shift for the company led by Elon Musk. Integrating electricity supply with electric vehicles and home energy systems could allow Tesla to build a more comprehensive energy platform for consumers.

If approved, the initiative would position Tesla as both a technology manufacturer and a direct energy supplier in the British market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU approves signature of global AI framework

The European Parliament has approved the Council of Europe Framework Convention on Artificial Intelligence, the first international legally binding treaty on AI governance.

With 455 votes in favour, 101 against, and 74 abstentions, Parliament endorsed the EU’s signature to embed existing AI legislation in a global framework. The move reinforces the safe and rights-respecting deployment of AI across the EU and worldwide.

The convention sets standards for transparency, documentation, risk management, and oversight, applying to both public authorities and private actors acting on their behalf.

It establishes a global baseline for AI governance while allowing the EU to maintain higher protections under the AI Act, GDPR, and other EU legislation covering product safety, liability, and non-discrimination.

The EU co-rapporteurs highlighted that the agreement demonstrates the EU’s commitment to human-centric AI. By prioritising democracy, accountability, and fundamental rights, the framework aims to ensure AI strengthens open societies while supporting stable economic growth.

Negotiations on the convention began in 2022 with participation from the EU member states, international partners, civil society, academia, and industry. Current signatories include the EU, the UK, Ukraine, Canada, Israel, and the United States, with the convention open to additional global partners.

Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

AI forecasts heart failure progression up to a year in advance

Researchers from MIT, Mass General Brigham, and Harvard Medical School developed a deep-learning model that predicts the risk of decline among heart-failure patients within a year. Known as PULSE-HF, the model forecasts changes in left ventricular ejection fraction (LVEF) using ECG data.

PULSE-HF can process both 12-lead and single-lead ECGs, making it suitable for low-resource settings, including rural clinics without specialised cardiac staff. The model achieved AUROCs of 0.87–0.91 across three patient cohorts, showing high accuracy in identifying patients at risk of severe heart failure.

By predicting future LVEF decline, clinicians can prioritise follow-up care for high-risk patients while reducing hospital visits for those at lower risk. Researchers faced challenges with data cleaning and labelling, but the model remained robust with imperfect real-world inputs.

The team plans to conduct prospective trials in real patients to further validate PULSE-HF in clinical practice. Researchers stressed that AI forecasting of heart failure could greatly improve patient outcomes and healthcare resource allocation.

DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations. Adding AI-specific requirements could create unnecessary administrative burdens for companies that are already heavily regulated. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks. These existing frameworks could be adapted rather than introducing additional layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

EU lawmakers move forward on AI Act changes

Members of the European Parliament have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act. The compromise will be reviewed by parliamentary committees before a scheduled vote in Brussels.

Lawmakers in the EU agreed to extend compliance deadlines for some high-risk AI systems. The changes aim to give companies and regulators more time to prepare technical standards and enforcement frameworks.

The proposed amendments also include a ban on AI systems that create non-consensual explicit deepfakes. Officials in the EU say the measure aims to strengthen consumer protection and improve online safety for children.

Industry groups in the EU have raised concerns about compliance burdens linked to the revised rules. Policymakers in the EU continue negotiations as the legislation moves toward committee approval.

Civil society urges stronger EU digital fairness rules

More than 200 civil society organisations have urged the European Commission to deliver strong consumer protections through the upcoming Digital Fairness Act. Advocacy groups in the EU say the proposal must address risks created by modern online platforms.

Campaigners argue that many existing EU consumer laws were designed decades ago and no longer reflect the realities of the digital market. The coalition warned policymakers in the EU not to treat regulatory simplification as a path toward deregulation.

Advocates are pushing for binding rules targeting deceptive design practices and addictive digital features. Survey responses across the EU show broad public support for stronger protections against dark patterns and unfair personalisation.

The European Commission is expected to present the Digital Fairness Act later this year. Officials in the EU are also considering expanding enforcement powers to strengthen consumer safeguards online.

Telegram faces global outages as Russia slows service

Users of the messaging app Telegram have experienced outages in multiple regions over the past 24 hours, with the largest volume of complaints coming from Russia. Reports from the US, UK, Germany, the Netherlands, and Norway suggest the issues could be global.

Difficulties primarily affected the mobile app, with users reporting login issues, messaging delays, and limited access to features. In Russia, the outages resulted from deliberate traffic slowdowns imposed by the regulator Roskomnadzor, with similar restrictions also affecting WhatsApp.

Telegram’s founder, Pavel Durov, has criticised the Russian government’s actions, arguing that authorities aim to push citizens towards a state-controlled alternative, the ‘Max’ messenger.

Although Telegram has overtaken WhatsApp in Russia with over 95 million active users, Max has now surpassed 100 million users, underscoring the Kremlin’s growing influence over digital communications.

Russian authorities have stated that Telegram must comply with local laws, moderate content, and consider data localisation to avoid further restrictions. Durov has reaffirmed the platform’s commitment to protecting user privacy and upholding freedom of speech.

EU charts roadmap for tokenised financial markets

The European Central Bank (ECB) has unveiled Appia, a strategic roadmap for developing Europe’s tokenised financial ecosystem anchored in central bank money. The initiative aims to guide the shift from traditional finance to tokenised markets while ensuring stability and interoperability.

A key component of Appia is Pontes, the Eurosystem’s distributed ledger technology (DLT) settlement solution. Pontes, set for Q3 2026 pilots, will enable central bank money transactions and connect DLT infrastructures with the Eurosystem’s TARGET2, T2S, and TIPS services.

The ECB has opened a public consultation inviting feedback and proposals from both public and private sector stakeholders. Respondents’ input will help refine the roadmap and shape the long-term blueprint for Europe’s tokenised financial system.

Appia also complements ongoing work on the digital euro, with payment service provider selection planned for 2026 and a 12-month pilot trial in the second half of 2027.

The initiative highlights the ECB’s commitment to integrating emerging technologies while preserving financial stability.

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.
