Siri AI delays lead to $250 million Apple settlement

Apple has agreed to pay $250 million to settle a class action lawsuit alleging that it misled consumers about the readiness and availability of AI-powered Siri features promoted ahead of the iPhone 16 launch. Under the proposed agreement, eligible US customers who bought supported iPhone models between 10 June 2024 and 29 March 2025 may receive between $25 and $95 per device, depending on the number of claims. Apple denied wrongdoing and settled the case without admitting liability.

The complaint argued that consumers who purchased supported iPhone 15 and iPhone 16 models expected advanced Apple Intelligence features and a significantly upgraded Siri experience that were not available at the time of sale. Plaintiffs said Apple’s marketing created the impression that the new capabilities would arrive sooner and with broader functionality than users ultimately received.

The settlement comes shortly before Apple’s annual Worldwide Developers Conference, where the company is widely expected to present further updates to Siri and its wider AI strategy.

Why does it matter?

The case shows how AI product marketing is becoming a legal and regulatory risk, not just a branding issue. As technology companies use generative AI features to drive device sales and platform adoption, courts and consumers are paying closer attention to whether those capabilities are actually available when products reach the market. The Apple settlement suggests that overstating AI readiness can create liability even before regulators step in, making transparency around launch claims increasingly important across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission publishes first Digital Markets Act review

The European Commission has published its first formal review of the Digital Markets Act, assessing how the regulation is affecting the behaviour of large online platforms in the EU digital economy. According to the review, the law has produced visible changes in some areas, while also exposing continuing problems in implementation and enforcement.

The review points to changes in user choice since the DMA entered into force in March 2024. These include support for third-party app stores and prompts on devices to select browsers or search engines, alongside reported increases in usage and downloads of alternative services.

Enforcement action is also a central part of the assessment. In April 2025, Apple was fined €500 million for blocking developers from directing users to cheaper purchasing options, while Meta was fined €200 million over its ‘consent or pay’ model. Both companies are appealing the decisions.

At the same time, the review identifies clear implementation challenges. It says investigations are taking around twice as long as the 12-month target, while legal procedures are being used to slow compliance. It also raises broader questions about whether fast-growing areas such as AI tools and cloud platforms should eventually be brought within the scope of the regulation.

The Digital Markets Act is therefore presented less as a completed intervention than as an ongoing regulatory process. The review suggests that its long-term impact will depend not only on the rules already in force, but also on how consistently they are enforced and how the EU responds to changes in digital markets.

Why does it matter?

The review matters because it shows that the real test of the Digital Markets Act is no longer whether the EU can write rules for large platforms, but whether it can enforce them quickly and adapt them to new market realities. Early changes in user choice suggest the law is starting to affect platform behaviour. However, delays in investigations and questions around AI and cloud services show that the regulatory contest is still evolving.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia expands collaboration efforts in key science and technology areas

The Australian Government Department of Industry, Science and Resources has announced $6.2 million in funding for nine international projects under round two of the Global Science and Technology Diplomacy Fund (GSTDF).

The programme supports collaboration, innovation and commercialisation in priority technology areas. The selected projects focus on AI, advanced manufacturing, quantum technologies and hydrogen, with several initiatives applying AI to areas such as robotics, satellite networks and ocean forecasting.

According to the department, Australian researchers will work with international partners across Asia-Pacific, with projects spanning fields from healthcare to environmental monitoring and space technologies.

The funding reflects a broader effort to deepen international cooperation and advance strategic technologies, with partners in Singapore, Vietnam, Japan, Malaysia, New Zealand and South Korea supporting Australian-linked innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta explores agentic AI assistants

Meta is developing an advanced ‘agentic’ AI assistant designed to perform complex, multi-step tasks for consumers. The initiative reflects the company’s broader push to expand its AI capabilities beyond basic chat functions.

The planned assistant is intended to act more autonomously, helping users complete actions such as organising activities or managing digital tasks. Powered by a new internal model called Muse Spark, the assistant is still under development, and its rollout timeline depends on internal testing.

Meta’s strategy focuses on embedding these tools across its platforms, aiming to deepen user engagement and create more personalised digital experiences.

This marks a shift towards AI systems that can anticipate needs rather than simply respond to prompts. The move also signals intensifying competition among major technology companies in consumer AI.

The report highlights that Meta is positioning AI as central to its future growth, with a focus on making assistants more proactive and capable within everyday digital environments in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why DeepSeek V4 is changing the AI model race

DeepSeek has again placed itself at the centre of the global AI race. After drawing worldwide attention with its R1 reasoning model in early 2025, the Chinese company has recently released DeepSeek V4, a new model designed to compete not only on performance, but also on price, openness and efficiency.

The hype around DeepSeek V4 is not based on a single feature. The model comes with a 1 million-token context window, open weights, two versions for different use cases and a strong focus on agentic workflows such as coding, research, document analysis and long-running tasks. In a market still dominated by expensive closed models, DeepSeek is trying to prove that powerful AI does not need to remain locked behind proprietary systems.

A model built for long memory

The most immediate difference between DeepSeek V4 and other models is context length. Both DeepSeek-V4-Pro and DeepSeek-V4-Flash support a 1-million-token context window, meaning they can process inputs far longer than those of older generations of mainstream models. According to DeepSeek’s official release, one million tokens is now the default across all official DeepSeek services.

For ordinary users, that may sound technical. In practice, it matters because a longer context allows models to work with large documents, long conversations, full codebases, legal materials, research archives or complex project histories without losing track as quickly.

That is why DeepSeek V4 is not just another chatbot release. It is aimed at the next stage of AI use, where models are expected to act less like question-answering tools and more like assistants that can follow long processes over time.

Two models for two different needs

DeepSeek V4 comes in two main versions. DeepSeek-V4-Pro is a larger and more capable model, with 1.6 trillion total parameters and 49 billion active parameters. DeepSeek-V4-Flash is a smaller model, with 284 billion total parameters and 13 billion active parameters, designed for faster and more cost-effective workloads.

That distinction is important. Not every user needs the strongest model for every task. A company summarising documents, routing queries or running basic support may choose Flash. A developer working on complex coding tasks, long-context agents or advanced reasoning may prefer Pro.

DeepSeek’s release reflects a broader trend in AI. The best model is no longer always the biggest one. Cost, speed, context size and deployment flexibility are now as important as raw benchmark performance.

Why the price matters

One reason DeepSeek attracts so much attention is its aggressive pricing. DeepSeek’s API page lists V4-Flash at USD 0.14 per 1 million input tokens on a cache miss and USD 0.28 per 1 million output tokens. V4-Pro is listed at USD 1.74 per 1 million input tokens and USD 3.48 per 1 million output tokens before the temporary 75% discount.
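To put those list prices in context, a back-of-the-envelope cost estimate for a single long-context request might look like the sketch below. The prices are the list figures quoted above; the token counts are purely illustrative, not real workload data.

```python
# Rough per-request cost estimate using the list prices quoted above
# (USD per 1 million tokens). Token counts below are hypothetical examples.

PRICES = {
    "V4-Flash": {"input": 0.14, "output": 0.28},  # cache-miss input / output
    "V4-Pro":   {"input": 1.74, "output": 3.48},  # before temporary discount
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the per-million-token list prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 500,000-token document summarised into a 2,000-token answer.
flash_cost = request_cost("V4-Flash", 500_000, 2_000)
pro_cost = request_cost("V4-Pro", 500_000, 2_000)
print(f"Flash: ${flash_cost:.4f}  Pro: ${pro_cost:.4f}")
```

Even at the Pro tier, a half-million-token prompt comes in under a dollar at list price, which is the kind of arithmetic that makes long-context workloads economically plausible at scale.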

For developers and companies, that changes the calculation. High-performing AI models are useful only if they can be deployed at scale. If every long document, coding session or agentic workflow becomes too expensive, adoption slows down.

DeepSeek’s challenge to the market is therefore not only technical. It is economic. The company is pushing the idea that frontier-level AI should be cheaper to run, easier to access and less dependent on closed ecosystems.

The architecture behind the hype

DeepSeek V4 uses a mixture-of-experts approach, meaning only part of the model is active during each response. That helps explain why the model can be very large on paper, yet still more efficient to run than a dense model of similar overall size.
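The routing idea behind mixture-of-experts can be sketched in a few lines. This is a toy illustration of the general technique only, not DeepSeek's actual architecture: a gate scores the available experts and only the top-k run for each token, which is why compute tracks active parameters rather than total size.

```python
# Toy mixture-of-experts routing sketch: a gate picks the top-k experts per
# token, so only a fraction of the model's parameters do work on each step.
# Illustrative only; this is NOT DeepSeek's actual architecture.

def route(gate_scores: dict[str, float], k: int = 2) -> list[str]:
    """Select the k highest-scoring experts for the current token."""
    return sorted(gate_scores, key=gate_scores.get, reverse=True)[:k]

def moe_layer(token: str, experts: dict, gate_scores: dict[str, float]) -> float:
    """Run only the routed experts and combine their gate-weighted outputs."""
    active = route(gate_scores)
    return sum(experts[name](token) * gate_scores[name] for name in active)

# Four experts exist, but each token activates only two of them.
experts = {f"e{i}": (lambda t, i=i: len(t) * i) for i in range(4)}
scores = {"e0": 0.1, "e1": 0.4, "e2": 0.3, "e3": 0.2}
result = moe_layer("hello", experts, scores)  # only e1 and e2 execute
```

In a real model the experts are large feed-forward networks and the gate is learned, but the economics are the same: total parameters set capacity, while active parameters set the per-token compute bill.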

The more interesting part is how DeepSeek handles long context. NVIDIA’s technical overview explains that DeepSeek V4 uses hybrid attention, combining compression and selective attention techniques to reduce the cost of processing very long prompts. NVIDIA says these changes are designed to cut per-token inference FLOPs by 73% and reduce KV cache memory burden by 90% compared with DeepSeek-V3.2.

For a non-technical audience, the point is simple. DeepSeek V4 is trying to solve one of the biggest problems in modern AI: how to make models remember and process much more information without becoming too slow or too expensive.

That is where much of the hype comes from. The model is not merely larger. It is designed around the economics of long-context AI.

Why NVIDIA is still in the picture


NVIDIA’s role in the DeepSeek V4 story is especially interesting. DeepSeek is often discussed as part of China’s effort to build a more independent AI ecosystem, but NVIDIA has also been quick to move forward to support developers who want to build with the model.

In its technical blog, NVIDIA describes DeepSeek V4 as a model family designed for efficient inference of million-token contexts. The company says DeepSeek-V4-Pro and V4-Flash are available through NVIDIA GPU-accelerated endpoints, while developers can also use NVIDIA Blackwell, NIM containers, SGLang and vLLM deployment options.

NVIDIA also reports that early tests of DeepSeek-V4-Pro on the GB200 NVL72 platform showed more than 150 tokens per second per user. That matters because long-context models place heavy pressure on memory, as well as on compute and networking infrastructure. The model may be efficient by design, but serving it at scale still requires serious hardware.

So, DeepSeek V4 does not remove NVIDIA from the story – it complicates it. The model is part of a broader push towards more efficient AI, but the infrastructure race remains central.

The chip question behind the model

DeepSeek V4 also arrives at a time when AI infrastructure is becoming just as important as model performance. MIT Technology Review frames the release partly through that lens, noting that DeepSeek’s new model reflects China’s broader attempt to reduce reliance on foreign AI hardware and build a more self-sufficient technology stack.

That detail matters because the AI race is no longer only about who builds the most capable model. It is also about who controls the chips, software frameworks and data centres needed to run it.

Replacing NVIDIA, however, remains difficult. Its advantage lies not just in its chips, but also in the software ecosystem developers have built around its platforms over many years. Moving to alternative hardware means adapting code, rebuilding tools and proving that the new systems are stable enough for serious use.

DeepSeek V4 thus sits between two realities. It points towards China’s ambition to build a more independent AI stack, while NVIDIA’s rapid support for the model shows that frontier AI still depends heavily on established infrastructure.

Open weights as a strategic move

DeepSeek V4 is also important because the model weights are available through Hugging Face under the MIT License. That gives developers more freedom to inspect, adapt and deploy the model than they would have with a fully closed commercial system.

Open-weight models are becoming a major pressure point in the AI race. Closed models may still lead in some areas, especially in polished consumer products, enterprise support and safety layers. However, open models offer something different: flexibility.

For universities, start-ups, smaller companies and developers outside the largest AI ecosystems, that flexibility matters. It means advanced AI can be tested, modified and integrated without relying entirely on a handful of dominant providers.

Benchmarks need caution

DeepSeek presents V4-Pro as highly competitive across reasoning, coding, long-context and agentic benchmarks. Hugging Face lists results including 80.6 on SWE-bench Verified, 90.1 on GPQA Diamond and 87.5 on MMLU-Pro for DeepSeek-V4-Pro.

Those numbers are impressive, but they should not be treated as the full story. Benchmarks are useful, but they rarely capture every real-world use case. A model can score well on coding tests and still struggle with reliability, factual accuracy, safety or complex multi-step workflows in production.

That caution is important. The AI industry often turns benchmarks into headlines, while real performance depends on deployment, prompting, safety controls and the specific task at hand.

More than just another model release

DeepSeek V4 matters because it combines several trends into one release: long context, lower prices, open weights, agentic workflows and geopolitical competition. It also shows that the AI race is no longer fought only in labs, benchmarks and data centres. Visibility now matters too. Tools such as Diplo’s Digital Footprints show how digital presence shapes the way technology actors and media narratives are discovered, ranked and understood. At this stage, the competition is not only about who has the smartest model. It is also about who can make intelligence cheaper, more available and easier to deploy.

That does not mean DeepSeek has solved every problem. Questions remain around independent benchmarking, safety, data governance, infrastructure and the broader political context of Chinese AI development. Still, the release does show where the market is heading.

The next phase of AI may not be defined solely by the most powerful model. It may be defined by the model that is powerful enough, affordable enough and open enough to change how people build products, services and tools with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MoneyGram and Kraken connect crypto and cash globally

Kraken has entered a strategic partnership with MoneyGram to enable crypto-to-cash withdrawals in more than 100 countries. The integration links digital asset infrastructure with MoneyGram’s global network, allowing users to convert crypto into hundreds of fiat currencies through physical and digital payout channels.

The service is intended to address one of the main barriers to crypto adoption by improving access to reliable off-ramps. Users will be able to transfer funds to their accounts and receive near-instant cash payouts through MoneyGram’s retail network and regulated payment infrastructure.

Both companies highlighted the importance of interoperability between traditional finance and digital assets in driving practical adoption.

Kraken stressed the value of connecting liquidity and compliance systems with established payment rails, while MoneyGram presented its global distribution network as a bridge between digital value and everyday financial use.

The rollout will begin across the United States, Europe, Latin America, Africa, and parts of Asia-Pacific, with plans to expand further into local bank deposits and additional payment services as the partnership develops.

Why does it matter?

The partnership addresses one of the main friction points in crypto adoption: converting digital assets into usable cash at scale. By linking crypto infrastructure with a global payout network, it strengthens the practical use of digital assets beyond trading and speculation.

More broadly, it reflects a gradual convergence between traditional financial rails and crypto-native systems, with interoperability becoming increasingly important to how value moves across borders.

It may also support financial inclusion by expanding access to cash-out services in regions where banking infrastructure remains limited or uneven.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta taps blockchain networks for faster creator payments

Meta has introduced USDC payouts for selected Facebook creators in Colombia and the Philippines, marking another step towards using blockchain-based payment rails for creator earnings. The programme allows eligible users to receive funds directly into crypto wallets using Polygon or Solana as settlement networks.

Creators receiving USDC on Polygon can move funds through supported wallets or exchanges and convert them into local currency where off-ramp services are available. The model reduces reliance on traditional cross-border payment channels and is intended to give creators faster and more flexible access to dollar-denominated earnings.

Polygon has been included alongside Solana as part of the payout infrastructure, with Polygon arguing that its network already handles a large share of global USDC transfer activity. Low transaction costs and broad wallet and exchange support are presented as key reasons stablecoin rails are becoming more attractive for recurring digital payouts.

Why does it matter?

The significance of the move lies less in crypto branding than in payment infrastructure. Meta is testing whether stablecoin rails can make creator payouts faster, more flexible, and less dependent on the frictions of traditional cross-border transfers. If this model scales, it would suggest that blockchain networks are becoming useful not only for trading or speculation, but for mainstream platform payments where speed, settlement, and access to dollar-denominated value matter.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK AI sector survey to map growth trends and policy direction

The UK government is stepping up efforts to better understand the structure and growth of its AI sector through an updated national survey led by the Department for Science, Innovation and Technology.

The research, conducted by Ipsos and supported by Perspective Economics, aims to gather direct insights from businesses operating in the UK AI ecosystem. The findings are expected to inform future government policy on AI and sector development.

Participation is voluntary and confidential. Respondents are drawn from senior leadership roles, including chief executives, chief technology officers, company directors, and senior members of AI or data science teams. The survey focuses on business activity, products and services, and longer-term growth plans across the sector.

Fieldwork is taking place between late April and the end of May 2026 using online questionnaires and telephone interviews. Each session is expected to last around 15 to 20 minutes, allowing businesses to contribute structured input without significant disruption to normal operations.

The initiative reflects a wider UK policy priority: ensuring that government strategy keeps pace with developments in AI innovation and commercial growth. By drawing on direct industry evidence rather than relying only on secondary analysis, policymakers are trying to build a more accurate picture of the country’s evolving AI landscape.

Why does it matter?

AI policy is much easier to design in theory than in a market that is changing quickly and unevenly. If the government lacks current information on how AI firms are growing, what products they are developing, and where the main constraints lie, it risks shaping policy based on outdated assumptions. Direct input from businesses gives policymakers a stronger basis for decisions on support, regulation, skills, and investment, especially at a time when the UK is trying to turn AI ambition into measurable economic capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study examines trust and fraud prevention in AI-enabled banking in Bangladesh

A new non-peer-reviewed preprint examines how AI is shaping e-banking in Bangladesh, focusing on consumer decision-making, ethical trust, and fraud prevention.

The paper links AI adoption in digital banking to customer experience, risk management, process automation, financial inclusion and regulatory compliance, arguing that these factors are increasingly important as Bangladesh’s financial sector becomes more digital.

The study uses a narrative literature review of recent research from 2024 and 2025 and builds its conceptual model on the UTAUT2 framework, which is commonly used to explain technology adoption.

The authors extend the model by adding ethical trust and fraud prevention as mediating mechanisms, arguing that consumers are more likely to use AI-enabled banking services when they see them as useful, secure, transparent and fair.

Ethical trust is treated as a central part of adoption. The paper identifies transparency, algorithmic fairness, data privacy, reliability, accountability and digital inclusion as key factors shaping how users respond to AI in banking.

It also notes that explainable AI tools and localised interfaces, including Bengali-language systems, could help reduce uncertainty for users with lower digital literacy.

Fraud prevention is presented as a critical enabler of consumer confidence. The authors point to real-time monitoring, anomaly detection, secure authentication, biometric e-KYC and explainable fraud alerts as tools that can reduce perceived risk.

Additionally, they argue that AI systems should not only detect fraud effectively, but also explain decisions clearly enough for users to trust them.

The paper also highlights Bangladesh-specific issues, including Islamic banking, Shariah-compliant AI models, rural and urban digital access gaps, and the need for inclusive design. However, the study remains conceptual and has not yet been peer reviewed.

The authors recommend future empirical research with Bangladeshi banking users to test the model across income levels, regions, generations and gender groups.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Powerful Gemini update turns simple prompts into ready-to-use results

Gemini can now generate downloadable and ready-to-share files directly in chat across a wide range of formats, including PDF, Microsoft Word, Excel, Google Docs, Sheets, and Slides.

The new feature is meant to remove the extra steps that often follow AI-assisted brainstorming, such as copying content into other applications and reformatting it manually. Instead, users can ask Gemini to create a structured file that is already formatted and ready to download or export to Google Drive.

Supported formats include Google Workspace files, PDF, DOCX, XLSX, CSV, LaTeX, TXT, RTF, and Markdown. The company says the feature is now available globally to all Gemini app users.

Possible uses include turning budget plans into spreadsheets, organising rough ideas into structured documents, converting long discussions into concise reports, and generating PDF study guides from uploaded lecture notes.

Why does it matter?

What changes here is not simply that Gemini can create more file types, but that it moves AI one step closer to replacing part of the software workflow itself. Instead of using AI to generate rough text and then finishing the task manually in Word, Excel, or Google Docs, users can now get output in a format that is already structured for immediate use.

That may reduce friction between prompting and execution, making AI more useful in everyday work, study, and administration. In practical terms, the update pushes Gemini further from being just a conversational assistant towards becoming a tool that can produce finished digital outputs people can actually work with.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!