OpenAI says ChatGPT advertisements remain limited to the US

Despite speculation that the feature was expanding internationally, OpenAI has clarified that advertisements in ChatGPT are currently available only to users in the US.

Questions about a broader rollout emerged after references to advertisements appeared in the platform’s updated privacy policy. Some users interpreted the language as evidence that advertising would soon be introduced globally.

OpenAI said the policy update does not signal an immediate expansion. According to the company, advertising features are still being tested within the US as part of a gradual deployment strategy.

ChatGPT advertisements were introduced in February 2026 and appear below responses generated by the chatbot. The ads are shown only to logged-in users on free subscription tiers and are not displayed to users under eighteen.

Company representatives stated that advertising systems operate independently from the AI model that generates responses. According to OpenAI, advertisers cannot influence or modify the content produced by ChatGPT.

The company also said it does not share user conversations or personal chat histories with advertisers. However, advertisements may still be personalised based on user queries, which has prompted discussions about how conversational interfaces could shape consumer decisions.

OpenAI indicated that it is adopting a cautious, phased approach before considering any wider rollout of ChatGPT advertising features in other markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI plans to integrate Sora video generation into ChatGPT

According to reports, OpenAI is preparing to integrate its AI video generator Sora directly into ChatGPT, a move that could expand the platform’s capabilities beyond text and image generation.

Sora currently operates as a standalone application and web service. Integrating the tool into ChatGPT could dramatically increase its visibility and usage, particularly given the chatbot’s massive global user base.

The company released an updated version of the model in 2025 that allows users to create, remix and even appear inside AI-generated videos. Bringing those features into ChatGPT would represent a major step toward making video generation a mainstream function within conversational AI systems.

Competition in the generative video market is intensifying. Companies including Google are developing similar technologies, with Google's Gemini platform offering video creation powered by its Veo system. Other developers are also launching text-to-video models as the field rapidly expands.

Despite the potential growth, integrating video generation into ChatGPT may significantly increase operating costs. Running large AI systems requires vast computing resources and energy, and the chatbot already costs billions of dollars annually to operate.

Although OpenAI earns revenue from subscriptions, the majority of ChatGPT users currently use the free version. The company is therefore exploring additional monetisation strategies, including advertising and new premium services.

Integrating Sora into ChatGPT could therefore serve both strategic and financial goals, strengthening the platform’s position in the competitive generative AI market while expanding the types of content users can create.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing subtle signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.
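
As a simplified illustration of the environmental-indicator approach, the Python sketch below flags images whose EXIF metadata is absent or mentions editing software. The file name is hypothetical and the checks are deliberately naive; production systems combine many such signals with trained detection models.

```python
# Illustrative only: a naive metadata consistency check, not a production
# deepfake detector. Real tools weigh many signals with trained models.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_red_flags(path: str) -> list[str]:
    """Return simple warning signs found in an image's EXIF metadata."""
    flags = []
    with Image.open(path) as img:
        exif = img.getexif()
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    # AI-generated or re-encoded images often carry no camera metadata.
    if not tags:
        flags.append("no EXIF metadata at all")
    if "Make" not in tags or "Model" not in tags:
        flags.append("missing camera make/model")
    # Editing software sometimes records itself in the Software tag.
    software = str(tags.get("Software", ""))
    if any(s in software.lower() for s in ("photoshop", "gimp", "stable")):
        flags.append(f"edited with: {software}")
    return flags

if __name__ == "__main__":
    print(metadata_red_flags("suspect.jpg"))  # hypothetical file
```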

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
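
The following Python sketch illustrates the signed-provenance idea in a deliberately simplified form. It is not the actual C2PA manifest format, which embeds structured, certificate-backed claims inside the media file itself; here a publisher simply signs a hash of the file, so any later modification breaks verification. The file name is hypothetical.

```python
# A minimal sketch of the signed-provenance idea behind standards like C2PA.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_asset(path: str, key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the asset's hash at creation time."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return key.sign(digest)

def verify_asset(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Consumer side: any later edit changes the hash and breaks the check."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
sig = sign_asset("video.mp4", key)                   # hypothetical file
print(verify_asset("video.mp4", sig, key.public_key()))
```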

Recent experiments highlight how challenging these threats can be. In February, cybersecurity company Reality Defender conducted an exercise with NATO, introducing deepfake media into a simulated military scenario. The findings showed that even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.
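
As a minimal illustration of the public-key principle behind end-to-end encryption, the Python sketch below uses the PyNaCl library: a message encrypted to the recipient's public key cannot be read by any server relaying it. The real Signal and WhatsApp protocols add key ratcheting and other protections well beyond this basic scheme.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# A relay server handling the ciphertext never sees the plaintext.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key; only ciphertext crosses the network.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 10:00")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 10:00'
```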

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tesla moves to enter the British household electricity market

Tesla has applied for a licence that would allow it to supply electricity directly to households and businesses across Great Britain.

The application was submitted to the national energy regulator Ofgem, which oversees energy suppliers in England, Scotland and Wales.

Approval would enable the company to enter the retail electricity market as early as next year. The service is expected to operate under the brand ‘Tesla Electric’, extending the company’s strategy of combining electric vehicles, battery storage and energy supply into a single ecosystem.

Tesla’s UK energy subsidiary, Tesla Energy Ventures, filed the application through its Manchester-based operation. Regulatory review may take several months, as Ofgem typically requires up to nine months to evaluate electricity supplier licences.

A future electricity offer could primarily target households that already use Tesla technologies, including home batteries and electric vehicle charging systems.

The company sells Powerwall storage batteries in the UK, which allow homeowners to store electricity generated by solar panels or purchased during off-peak hours.

Such systems also allow surplus energy stored in batteries to be sold back to the grid.

Similar services are already available in the US, where Tesla launched a residential electricity supply programme in Texas in 2022.

The expansion into the energy supply market comes amid pressure on Tesla’s automotive business in Europe. Sales of Tesla vehicles in the UK declined significantly during 2025, reducing the company’s share of the national car market.

Diversifying into energy services could therefore represent a broader strategic shift for the company led by Elon Musk. Integrating electricity supply with electric vehicles and home energy systems could allow Tesla to build a more comprehensive energy platform for consumers.

If approved, the initiative would position Tesla as both a technology manufacturer and a direct energy supplier in the British market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations. Adding AI-specific requirements could create unnecessary administrative burdens for companies that are already heavily regulated. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks, and that these frameworks could be adapted rather than supplemented with additional layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Civil society urges stronger EU digital fairness rules

More than 200 civil society organisations have urged the European Commission to deliver strong consumer protections through the upcoming Digital Fairness Act. Advocacy groups in the EU say the proposal must address risks created by modern online platforms.

Campaigners argue that many existing EU consumer laws were designed decades ago and no longer reflect the realities of the digital market. The coalition warned policymakers in the EU not to treat regulatory simplification as a path toward deregulation.

Advocates are pushing for binding rules targeting deceptive design practices and addictive digital features. Survey responses across the EU show broad public support for stronger protections against dark patterns and unfair personalisation.

The European Commission is expected to present the Digital Fairness Act later this year. Officials in the EU are also considering expanding enforcement powers to strengthen consumer safeguards online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered Copilot Health platform introduced by Microsoft

Microsoft has introduced Copilot Health, a new feature that uses AI to help users interpret personal health data and prepare for medical consultations.

The tool will operate as a separate and secure environment within Microsoft’s Copilot ecosystem, allowing users to combine health records, wearable data, and medical history into a single profile. The system then uses AI to analyse patterns and generate personalised insights intended to support conversations with healthcare professionals.

Microsoft said the feature aims to help people better understand existing medical information rather than replace clinical care. Users can review trends such as sleep patterns, activity levels, and vital signs gathered from wearable devices, alongside test results and visit summaries from healthcare providers.

Copilot Health can integrate data from more than 50 wearable devices, including systems connected through platforms such as Apple Health, Fitbit, and Oura. The platform can also access health records from over 50,000 US hospitals and provider organisations through HealthEx, as well as laboratory test results from Function.

According to Microsoft, the system builds on ongoing research into medical AI systems, including work on the Microsoft AI Diagnostic Orchestrator (MAI-DxO). The company said future publications will explore how such systems could assist in analysing complex medical cases.

Privacy and security are central elements of the design. Microsoft stated that Copilot Health data and conversations are stored separately from standard Copilot interactions and protected through encryption and access controls. The company also noted that health information used in the service will not be used to train AI models.

Development of the system involves Microsoft's internal clinical team and an external advisory group of more than 230 physicians from 24 countries. The company said Copilot Health has also achieved ISO/IEC 42001 certification, an international standard for AI management systems.

The feature is being introduced through a phased rollout, beginning with a waitlist for early users who will help shape the service as it develops.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!