Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for tying Facebook Marketplace to its main social network. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments about reduced investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple accused of blocking real browser competition on iOS

Developers and open web advocates say Apple continues to restrict rival browser engines on iOS, despite obligations under the EU’s Digital Markets Act. While Apple claims to allow competition, groups like Open Web Advocacy argue that technical and logistical hurdles still block real implementation.

The controversy centres on Apple’s refusal to let developers release region-specific browser versions or test new engines outside the EU. Developers must either abandon their global apps or persuade users to switch manually to new EU-only versions, creating friction and reducing reach.

Apple insists it upholds security and privacy standards built over 18 years and claims its new framework enables third-party browsers. However, critics say those browsers cannot be tested or deployed realistically without access for developers outside the EU.

The EU held a DMA compliance workshop in Brussels in June, during which tensions surfaced between Apple’s legal team and advocates. Apple says it is still transitioning and working with firms like Mozilla and Google on limited testing updates, but has offered no timeline for broader changes.

Google strengthens position as Perplexity and OpenAI launch browsers

OpenAI is reportedly preparing to launch an AI-powered web browser in the coming weeks, aiming to compete with Alphabet’s dominant Chrome browser, according to sources cited by Reuters.

The forthcoming browser seeks to leverage AI to reshape how users interact with the internet, while potentially granting OpenAI deeper access to valuable user data—a key driver behind Google’s advertising empire.

If adopted by ChatGPT’s 500 million weekly active users, the browser could pose a significant challenge to Chrome, which currently underpins much of Alphabet’s ad targeting and search traffic infrastructure.

The browser is expected to feature a native chat interface, reducing the need for users to click through traditional websites. These features align with OpenAI’s broader strategy to embed its services more seamlessly into users’ daily routines.

While the company declined to comment on the development, anonymous sources noted that the browser is likely to support AI agent capabilities, such as booking reservations or completing web forms on behalf of users.

The move comes as OpenAI faces intensifying competition from Google, Anthropic, and Perplexity.

In May, OpenAI acquired the AI hardware start-up io for $6.5 billion, in a deal linked to Apple design veteran Jony Ive. The acquisition signals a strategic push beyond software and into integrated consumer tools.

Despite Chrome’s grip on over two-thirds of the global browser market, OpenAI appears undeterred. Its browser will be built on Chromium—the open-source framework powering Chrome, Microsoft Edge, and other major browsers. Notably, OpenAI hired two former Google executives last year who had previously worked on Chrome.

The decision to build a standalone browser—rather than rely on third-party plug-ins—was reportedly driven by OpenAI’s desire for greater control over both data collection and core functionality.

That control could prove vital as regulatory scrutiny of Google’s dominance in search and advertising intensifies. The United States Department of Justice is currently pushing for Chrome’s divestiture as part of its broader antitrust actions against Alphabet.

Other players are already exploring the AI browser space. Perplexity recently launched its own AI browser, Comet, while The Browser Company and Brave have introduced AI-enhanced browsing features.

As the AI race accelerates, OpenAI’s entry into the browser market could redefine how users navigate and engage with the web—potentially transforming search, advertising, and digital privacy in the process.

Malaysia enforces trade controls on AI chips with US origin

Malaysia’s trade ministry announced new restrictions on the export, transshipment and transit of high-performance AI chips of US origin. Effective immediately, individuals and companies must obtain a trade permit and notify authorities at least 30 days in advance for such activities.

The restrictions apply to items not explicitly listed in Malaysia’s strategic items list, which is currently under review to include relevant AI chips. The move aims to close regulatory gaps while Malaysia updates its export control framework to match emerging technologies.

‘Malaysia stands firm against any attempt to circumvent export controls or engage in illicit trade activities,’ the ministry stated on Monday. Violations will result in strict legal action, with authorities emphasising a zero-tolerance approach to export control breaches.

The announcement follows increasing pressure from the United States to curb the flow of advanced chips to China. In March, the Financial Times reported that Washington had asked allies including Malaysia to tighten semiconductor export rules.

Malaysia is also investigating a shipment of servers linked to a Singapore-based fraud case that may have included restricted AI chips. Authorities are assessing whether local laws were breached and whether any controlled items were transferred without proper authorisation.

CISA 2015 expiry threatens private sector threat sharing

Congress has fewer than 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avoid a regulatory setback. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated due to antitrust and data privacy concerns. CISA removed ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, two failure modes are likely: the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Italy’s Piracy Shield sparks EU scrutiny over digital rights

Italy’s new anti-piracy system, Piracy Shield, has come under scrutiny from the European Commission over potential breaches of the Digital Services Act.

The tool, launched by the Italian communications regulator AGCOM, allows authorities to block suspicious websites within 30 minutes — a feature praised by sports rights holders for minimising illegal streaming losses.

However, its speed and lack of judicial oversight have raised legal concerns. Critics argue that individuals are denied the right to defend themselves before action is taken.

A recent glitch linked to Google’s CDN disrupted access to platforms like YouTube and Google Drive, deepening public unease.

Another point of contention is Piracy Shield’s governance: the system is managed by SP Tech, a company owned by Lega Serie A, the league that directly benefits from anti-piracy enforcement.

This arrangement prompted the Computer & Communications Industry Association to file a complaint, citing a conflict of interest and calling for greater transparency.

While AGCOM Commissioner Massimiliano Capitanio insists the tool places Italy at the forefront of the fight against illegal streaming, growing pressure from digital rights groups and EU regulators suggests a clash between national enforcement and European law.

Capgemini invests in AI-driven operations with WNS

Capgemini has announced it will acquire Indian IT firm WNS for $3.3 billion to accelerate its leadership in agentic AI. The acquisition will significantly enhance Capgemini’s business process services (BPS) by integrating advanced AI capabilities into core operations.

The boards of both companies have approved the deal, which offers WNS shareholders a 28% premium over the 90-day average share price. Completion is expected by the end of 2025, pending regulatory approvals.

The company sees strong potential in embedding AI into enterprise operations, with BPS becoming a key showcase. The integration will strengthen Capgemini’s US presence and unlock cross-selling opportunities across the combined client networks.

Both firms emphasised a shared vision of intelligent operations powered by agentic AI, aiming to help clients shift from automation to AI-driven autonomy. Capgemini’s existing partnerships with tech giants like Microsoft, Google and NVIDIA will support this vision.

EU rejects delay for AI Act rollout

The EU has confirmed it will enforce its originally scheduled AI Act, despite growing calls from American and European tech firms to delay the rollout.

Major companies, including Alphabet, Meta, ASML and Mistral, have urged the European Commission to push back the timeline by several years, citing concerns over compliance costs.

Rejecting the pressure, a Commission spokesperson clarified there would be no pause or grace period. The legislation’s deadlines remain, with general-purpose AI rules taking effect this August and stricter requirements for high-risk systems following in August 2026.

The AI Act represents the EU’s effort to regulate AI across various sectors, aiming to balance innovation and public safety. While tech giants argue that the rules are too demanding, the EU insists legal certainty is vital and the framework must move forward as planned.

The Commission intends to simplify the process later this year, for example by easing reporting demands for smaller businesses. Yet the core structure and deadlines of the AI Act will not be altered.
