Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts are set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and quantum tech reshape global business

AI and quantum computing are reshaping global industries as investment surges and innovation accelerates across sectors like finance, healthcare and logistics. Microsoft and Amazon are driving a major shift in AI infrastructure, transforming cloud services into profitable platforms.

Quantum computing is moving beyond theory, with real-world applications emerging in pharmaceuticals and e-commerce. Google’s development of quantum-inspired algorithms for virtual shopping and faster analytics demonstrates the technology’s potential to revolutionise decision-making.

Sustainability is also gaining ground, with companies adopting AI-powered solutions for renewable energy and eco-friendly manufacturing. At the same time, digital banks are integrating AI to challenge legacy finance systems, offering personalised, accessible services.

Despite rapid progress, ethical concerns and regulatory challenges are mounting. Data privacy, AI bias, and antitrust issues highlight the need for responsible innovation, with industry leaders urged to balance risk and growth for long-term societal benefit.

AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but growth is also spreading to regions such as the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now falling outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Roberts also urged a broader demographic—especially people outside tech hubs like Silicon Valley—to take part in AI governance and help shape the technology’s future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’

Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in Starlink’s core internal software services. The issue affected one of the world’s most resilient satellite systems, sparking speculation over whether a botched update or a cyberattack was responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First launched in May 2023 and widely available in the US by mid-2024, the feature has rapidly expanded across more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users are attempting to sidestep the restrictions by masking their location. VPN services make it appear that a device is browsing from another country, placing it beyond the reach of local age-check rules.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Amazon exit highlights deepening AI divide between US and China

Amazon’s quiet wind-down of its Shanghai AI lab underscores a broader shift in global research dynamics, as escalating tensions between the US and China reshape how tech giants operate across borders.

Instead of expanding innovation hubs in China, major American firms are increasingly dismantling them.

The AWS lab, once central to Amazon’s AI research, produced tools said to have generated nearly $1bn in revenue, along with more than 100 academic papers.

Yet its dissolution reflects a growing push from Washington to curb China’s access to cutting-edge technology, including restrictions on advanced chips and cloud services.

With IBM and Microsoft also scaling back operations or relocating talent away from mainland China, a pattern is emerging: strategic retreat. Rather than risk compliance issues or regulatory scrutiny, US tech companies are choosing to restructure globally and reduce their presence in China altogether.

With Amazon already having exited its Chinese ebook and ecommerce markets, the shuttering of its AI lab signals more than a single closure — it reflects a retreat from joint innovation and a widening technological divide that may shape the future of AI competition.

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect throughout 2024 and 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

The Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to use 580 TWh annually by 2028—about 12% of national consumption. AI-specific data centres will be responsible for much of this increase.

Despite this growth, clean energy deployment is lagging. Around two terawatts of proposed generation capacity remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus. Programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.
