UK and US freeze assets of Southeast Asian online scam network

The UK and US governments have jointly sanctioned a transnational network operating illegal scam centres across Southeast Asia. These centres use sophisticated methods, including fake romantic relationships, to defraud victims worldwide.

Many of the individuals forced to conduct these scams are trafficked foreign nationals, coerced under threat of torture. Authorities have frozen a £12 million North London mansion, along with a £100 million City office and several London flats.

Network leader Chen Zhi and his associates used corporate proxies and overseas companies to launder proceeds from their scams through London’s property market.

The sanctioned entities include the Prince Group, Jin Bei Group, Golden Fortune Resorts World Ltd., and Byex Exchange. Scam operations trap foreign nationals with fake job adverts, forcing them to commit online fraud, often through fake cryptocurrency schemes.

Proceeds are then laundered through a complex system of front businesses and gambling platforms.

Foreign Secretary Yvette Cooper and Fraud Minister Lord Hanson said the action protects human rights and UK citizens, and blocks criminals from storing illicit funds in the UK. Coordination with the US ensures these sanctions disrupt the network’s international operations and financial access.


Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.


Wider AI applications take centre stage at Japan’s CEATEC electronics show

At this year’s CEATEC exhibition in Japan, more companies and research institutions are promoting AI applications that stretch well beyond traditional factory or industrial automation.

Innovations on display suggest an increasing emphasis on ‘AI as companion’ systems: tools that help, advise, or augment human abilities in everyday settings.

Fujitsu’s showcase is a strong example. The company is using AI skeleton recognition and agent-based analysis to help people improve movement, whether for sports performance (such as refining a golf swing) or in healthcare settings. These systems coach form, give live feedback, and offer suggestions in real time.
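
As a rough illustration of how skeleton-based coaching can work in principle, the toy sketch below computes a single joint angle from pose keypoints and turns it into a coaching cue. The keypoints, the 150-degree threshold, and the cue wording are all invented for illustration; this is not Fujitsu’s actual method.

```python
# Toy sketch of skeleton-based form feedback: given three keypoints from a
# pose estimator, compute the angle at the middle joint and emit a cue.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at keypoint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Hypothetical (x, y) keypoints for shoulder, elbow, wrist from one video frame.
shoulder = np.array([0.0, 1.6])
elbow = np.array([0.3, 1.3])
wrist = np.array([0.6, 1.5])

angle = joint_angle(shoulder, elbow, wrist)
# Illustrative coaching rule: flag an over-bent lead arm mid-swing.
if angle < 150:
    print(f"Lead arm bent to {angle:.0f} degrees - try keeping it straighter.")
else:
    print(f"Lead arm at {angle:.0f} degrees - good extension.")
```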

Other exhibits combine sensor tech, vision, and AI in consumer-friendly ways: smart fridge compartments that monitor produce, earbuds or glasses that recognise real-world context (a flyer in a shop, say) and suggest recipes, or wearable systems that adapt to your motion.

These are not lab demos; they are meant for direct, everyday interaction. Rising numbers of startups and university groups at CEATEC underscore Japan’s push toward embedding AI deeply in daily life.

The ‘AI for All’ theme and ‘Partner Parks’ at the show reflect a movement toward socially oriented technologies built around guidance, health, convenience, and personalisation. Japan seems to be leaning into AI not just for productivity gains but for lifestyle and well-being enhancements.


OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations,’ advise on guardrails, and explore what constitutes well-being in human-AI interaction. For example, the council already influenced how parental controls and user-teen distress notifications were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.


US seizes $15 billion crypto from Cambodia fraud ring

US federal prosecutors have seized $15 billion in cryptocurrency tied to a large-scale ‘pig butchering’ investment scam linked to forced labour compounds in Cambodia. Officials said it marks the biggest crypto forfeiture in Justice Department history.

Authorities charged Chinese-born businessman Chen Zhi, founder of the Prince Group, with money laundering and wire fraud. Chen allegedly used the conglomerate as cover for criminal operations that laundered billions through fake crypto investments. He remains at large.

Investigators say Chen and his associates operated at least ten forced labour sites in Cambodia where victims, many coerced workers, managed thousands of fake social media accounts to lure targets into fraudulent investment schemes.

The US Treasury also imposed sanctions on dozens of Prince Group affiliates, calling them transnational criminal organisations. FBI officials said the scam is part of a wider wave of crypto fraud across Southeast Asia, urging anyone targeted by online investment offers to contact authorities immediately.


An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, anchors the company’s consumer-first defence strategy. The award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.


New AI predicts future knee X-rays for osteoarthritis patients

In the UK, an AI system developed at the University of Surrey can predict what a patient’s knee X-ray will look like a year in the future, offering a visual forecast alongside a risk score for osteoarthritis progression.

The technology is designed to help both patients and doctors better understand how the condition may develop, allowing earlier and more informed treatment decisions.

Trained on nearly 50,000 knee X-rays from almost 5,000 patients, the system delivers faster and more accurate predictions than existing AI tools.

It uses a generative diffusion model to produce a future X-ray and highlights 16 key points in the joint, giving clinicians transparency and confidence in the areas monitored. Patients can compare their current and predicted X-rays, which can encourage adherence to treatment plans and lifestyle changes.
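
To make the mechanism concrete, here is a minimal, self-contained sketch of the reverse (denoising) loop used by DDPM-style diffusion models, with a stand-in noise predictor. The Surrey system’s actual network architecture, training procedure, and conditioning scheme are not described here; this only illustrates the general sampling mechanics such models share.

```python
# Toy reverse-diffusion loop (DDPM-style). A real system would replace
# predict_noise with a trained network conditioned on the current X-ray.
import numpy as np

T = 50                                # diffusion steps (toy value)
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t, condition):
    """Stand-in for a trained noise-prediction network; it would be
    conditioned on the patient's current X-ray (`condition`)."""
    return np.zeros_like(x_t)

def sample(condition, shape=(64, 64), rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)    # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = predict_noise(x, t, condition)
        # Standard DDPM posterior mean for x_{t-1} given x_t.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                     # add noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x                          # the generated "future" image

current_xray = np.zeros((64, 64))     # placeholder conditioning image
future_xray = sample(current_xray)
print(future_xray.shape)
```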

Researchers hope the technology could be adapted for other chronic conditions, including lung disease in smokers or heart disease progression, providing similar visual insights.

The team is seeking partnerships to integrate the system into real-world clinical settings, potentially transforming how millions of people manage long-term health conditions.


New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The initiative will initially launch in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises that the platform is committed to helping teens access trustworthy resources, while supporting their wellbeing in a digital environment increasingly filled with misinformation.


The AI gold rush where the miners are broke

The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.

Yet despite this flood of capital, none of the leading players has managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?

Where the AI money flows

Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.

The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.

The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised $100 million or more, showing the depth of available capital and investor appetite.

OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly $40 billion at a $300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.

Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. OpenAI’s plan to deploy up to 6 gigawatts of AMD accelerators, beginning in 2026, shows how funding moves into capacity rather than near-term earnings.

Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested $4 billion in Anthropic, Google has pledged up to $2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.

AI makes money – it’s just not enough (yet)

Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.

No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.

To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.

OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around $4.3 billion in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.
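
A quick back-of-envelope calculation using only the figures above shows why scale alone does not guarantee profit. Annualising the first-half revenue and dividing by weekly users gives a rough gauge of revenue per user; actual revenue is not spread evenly across users or months.

```python
# Rough arithmetic from the reported figures: H1 2025 revenue of $4.3B and
# ~800 million weekly ChatGPT users (October 2025).
h1_revenue_usd = 4.3e9
weekly_users = 800e6

annual_run_rate = h1_revenue_usd * 2           # naive annualisation
revenue_per_user = annual_run_rate / weekly_users

print(f"Implied annual run rate: ${annual_run_rate / 1e9:.1f}B")
print(f"Implied revenue per weekly user: ${revenue_per_user:.2f}/year")
# -> roughly $8.6B/year and about $10.75 per weekly user per year.
```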

Microsoft has also benefited from the surge in AI adoption. Azure grew 39 percent year-over-year in Q4 FY2025, reaching $29.9 billion. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.

NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of $46.7 billion. Even so, profits across the wider industry still lag far behind investment, weighed down by operating and maintenance costs and the gap between capital spending and earnings.

Why AI projects crash and burn

While the major AI players earn enough to offset some of their costs, more than two-fifths of AI initiatives end up on the scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.

GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around $30,000 per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
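
A rough, assumption-laden estimate makes the point. The $30,000 unit price comes from the text above; the power draw (~700 W), the $0.10/kWh electricity price, the datacentre overhead multiplier (PUE 1.3), and the three-year amortisation window are illustrative assumptions, not quoted figures, and facility and security costs are excluded.

```python
# Back-of-envelope annual cost of owning and running one H100-class GPU.
unit_price = 30_000        # USD, per the article
power_kw = 0.7             # assumed board power
pue = 1.3                  # assumed datacentre overhead multiplier
price_per_kwh = 0.10       # assumed electricity price, USD
hours_per_year = 24 * 365
amortisation_years = 3

energy_cost = power_kw * pue * price_per_kwh * hours_per_year
annual_cost = unit_price / amortisation_years + energy_cost

print(f"Electricity per GPU-year: ${energy_cost:,.0f}")   # ~$800
print(f"All-in per GPU-year:      ${annual_cost:,.0f}")   # ~$10,800
```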

Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.

The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.

Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.

All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers great opportunity, but storms form quickly and can sink even the most confident expeditions.

How AI can become profitable

While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.

Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
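
The sketch below illustrates two of these techniques, caching repeated queries and routing simple ones to a cheaper model. The model names, per-token prices, and the length-based difficulty heuristic are all invented for illustration; production routers typically use trained classifiers or confidence scores rather than query length.

```python
# Toy sketch: cache repeated queries and route easy ones to a cheap model.
from functools import lru_cache

# Assumed illustrative prices, USD per 1K tokens.
COST_PER_1K_TOKENS = {"small-model": 0.0002, "large-model": 0.01}

def pick_model(query: str) -> str:
    # Toy heuristic: short queries go to the cheap model.
    return "small-model" if len(query.split()) < 20 else "large-model"

# Exact-match cache; real systems often use semantic (embedding-based) caches.
@lru_cache(maxsize=10_000)
def answer(query: str) -> str:
    model = pick_model(query)
    return f"[{model}] response to: {query}"

print(answer("What is the capital of France?"))  # routed to small-model
print(answer("What is the capital of France?"))  # served from cache at no cost
```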

Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.

Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.

Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
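
A worked example shows why cache hit rates matter so much: the effective cost per request falls linearly as the hit rate rises. The per-request costs below are assumed figures for illustration only.

```python
# Effective cost per request as a function of cache hit rate.
model_cost = 0.004    # assumed cost of a full model call, USD
cache_cost = 0.0001   # assumed cost of serving a cached answer, USD

for hit_rate in (0.0, 0.3, 0.6, 0.9):
    effective = hit_rate * cache_cost + (1 - hit_rate) * model_cost
    print(f"hit rate {hit_rate:.0%}: ${effective:.5f}/request")
```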

Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.

Charting the future of AI profitability

The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.

That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.


A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As holder of the EU Council presidency, Copenhagen is prioritising child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five member states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. The goal is seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.
