Network failure hits EE, BT and affects other UK providers

Thousands of EE and BT customers across the UK encountered widespread network failures on 24 July, primarily affecting voice services.

The outage, lasting over 24 hours, disrupted mobile and landline calls. Over 2,600 EE users reported issues on Downdetector at the peak, around 2:15 p.m. BST. Despite repair efforts, residual outages were still being logged the following day.

Although Vodafone and Three initially confirmed their networks were stable, users who had recently switched carriers or ported numbers from EE experienced failures when making or receiving calls. This suggests cross-network routing issues triggered by EE’s technical fault.

Emergency services were briefly impacted; some users could not reach 999 before voice functionality resumed. BT and EE apologised and said they were working urgently to restore reliable service.

Given statutory obligations around service resilience, Ofcom has opened inquiries into the scale and causes of the outage. MVNOs using EE’s infrastructure, such as 1pMobile, also reported customer disruptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.


AI and quantum tech reshape global business

AI and quantum computing are reshaping global industries as investment surges and innovation accelerates across sectors like finance, healthcare and logistics. Microsoft and Amazon are driving a major shift in AI infrastructure, transforming cloud services into profitable platforms.

Quantum computing is moving beyond theory, with real-world applications emerging in pharmaceuticals and e-commerce. Google’s development of quantum-inspired algorithms for virtual shopping and faster analytics demonstrates the technology’s potential to revolutionise decision-making.

Sustainability is also gaining ground, with companies adopting AI-powered solutions for renewable energy and eco-friendly manufacturing. At the same time, digital banks are integrating AI to challenge legacy finance systems, offering personalised, accessible services.

Despite rapid progress, ethical concerns and regulatory challenges are mounting. Data privacy, AI bias, and antitrust issues highlight the need for responsible innovation, with industry leaders urged to balance risk and growth for long-term societal benefit.


AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.
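As a rough, illustrative calculation (ours, not the report’s), compounding those two growth rates shows how sharply the trajectories diverge over 15 years:

```python
# Illustrative only: compound an index of 100 at the two growth rates
# cited above (29% for AI-related postings, 11% for overall postings).
def compound(base, rate, years):
    """Grow `base` at `rate` per year for `years` years."""
    return base * (1 + rate) ** years

ai_index = compound(100, 0.29, 15)       # roughly 45x growth
overall_index = compound(100, 0.11, 15)  # roughly 4.8x growth
print(round(ai_index), round(overall_index))
```

At those rates, AI-related postings multiply roughly 45-fold over 15 years versus about 5-fold for the broader market, which is why a once-niche category can become conspicuous so quickly.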

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but growth is spreading to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’


Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Inviting a broader demographic into AI governance, Roberts urged more people—especially those outside tech hubs like Silicon Valley—to shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’


Crypto hacks soar in 2025 as security gaps widen

According to Hacken’s latest research, the crypto sector recorded more than $3.1 billion in losses during the first half of 2025. That figure already exceeds the total for all of 2024, with access control flaws, phishing, and AI-driven exploits the main culprits.

Access control remains the most significant weakness, responsible for almost 60% of recorded losses. The most severe breach was the Bybit attack, where North Korean hackers exploited a wallet signer vulnerability to steal $1.46 billion.

Other incidents include UPCX’s $70 million loss, a price-oracle manipulation exploit on KiloEx, and insider fraud involving the Roar staking contract.

Phishing and social engineering continue to evolve, accounting for nearly $600 million in stolen funds. One victim reportedly lost $330 million in Bitcoin, while fake Coinbase support calls drained over $100 million from user wallets.

Meanwhile, AI-related hacks have exploded in volume, increasing by more than 1,000% compared to last year. Most of these incidents stem from insecure APIs and flaws in large language model integrations.

Experts warn that smarter attackers and Web3’s fragmented security practices demand a stronger approach. Hacken advises combining blockchain standards with off-chain protections and better training to stay ahead of threats.


Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.


Nigeria opens door to stablecoin businesses

Nigeria’s top markets regulator is open to stablecoin ventures to revive the digital asset sector after last year’s Binance crackdown. At the Nigeria Stablecoin Summit, SEC Director-General Emomotimi Agama highlighted support for firms following evolving regulations.

He envisions Nigeria becoming a stablecoin hub in the global south, powering cross-border trade across Africa within five years.

The SEC has already onboarded stablecoin-focused companies through its regulatory sandbox, reflecting a broader strategy to lead digital innovation. Agama acknowledged stablecoins as vital to the crypto ecosystem but warned of national security risks, underscoring the need for careful regulation.

His comments follow the high-profile arrest and release of Binance executive Tigran Gambaryan during Nigeria’s previous crypto crackdown.

While the new stance suggests a regulatory easing, experts caution that rebuilding trust with global firms will take time. Industry leaders stress the need for clear legal frameworks, reliable fiat access, and consistent enforcement to attract investment and restore liquidity.

Nigeria’s path to becoming a stablecoin hub hinges on sustained policy stability and meaningful re-engagement with crypto players.


Democratising inheritance: AI tool handles estate administration

Lauren Kolodny, an early backer of Chime and a Forbes Midas list investor, is leading a $20 million Series A funding round into Alix, a San Francisco-based startup using AI to revolutionise estate settlement. Founder Alexandra Mysoor conceived the idea after spending nearly 1,000 hours over 18 months managing a friend’s family estate, highlighting a widespread, emotionally taxing administrative gap.

Using AI agents, Alix automates tedious elements of the estate process, including scanning documents, extracting data, pre-populating legal forms, and liaising with financial institutions. This contrasts sharply with the traditional, costly probate system. The startup’s pricing model charges around 1% of estate value, translating to approximately $9,000–$12,000 for smaller estates.
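Taken at face value, the quoted range implies estates of roughly $900,000 to $1.2 million; a minimal sketch of the percentage-fee arithmetic (the function is a hypothetical illustration, not Alix’s actual pricing engine):

```python
# Hypothetical helper reproducing the ~1%-of-estate-value pricing
# described in the article; the function name and shape are ours.
def estate_fee(estate_value, rate=0.01):
    """Return the settlement fee as a flat percentage of estate value."""
    return estate_value * rate

print(estate_fee(900_000))    # 9000.0  — lower end of the quoted range
print(estate_fee(1_200_000))  # 12000.0 — upper end
```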

Kolodny sees Alix as part of a new wave of startups harnessing AI to democratise services once accessible only to high-net-worth individuals. As trillions of dollars transfer to millennials and Gen Z in the coming decades, Alix aims to simplify one of the most complex and emotionally fraught administrative tasks.


Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in Starlink’s core internal software services. The issue affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s most extended and severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.
