Alphabet hits US$3 trillion valuation on AI optimism

Google’s parent company, Alphabet, has become the fourth company to reach a market value above US$3 trillion, fuelled by investor confidence in AI and relief over a favourable antitrust ruling.

Its shares jumped 4.3 percent to close at US$251.76 on 15 September, lifting the firm’s valuation to US$3.05 trillion.

The rally has added about US$1.2 trillion in value since April, with Alphabet joining Apple and Microsoft in the elite group while Nvidia remains the most valuable at US$4.25 trillion.

Investor optimism has been strengthened by expectations of a US Federal Reserve rate cut and surging demand for AI-related products.

Alphabet’s communications services unit has risen more than 26 percent in 2025, outpacing all other major sectors. Strong growth in its cloud division, new AI investments, and the Gemini model have reinforced the company’s momentum.

Analysts note that, while search continues to dominate revenues, Alphabet is increasingly viewed as a diversified technology powerhouse with YouTube, Waymo, and AI research at its core.

By avoiding a forced breakup of Chrome and Android, the antitrust ruling also removed a significant threat to its business model.

Market strategists suggest Alphabet now combines the strength of its legacy platforms with the credibility of emerging technologies, securing its place at the centre of Wall Street’s AI-driven rally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate convincing phishing emails targeting older people. Researchers tested six major chatbots, including Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini, by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined harmful requests, but minor adjustments, such as stating the task was for research purposes, easily circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google lays off over 200 AI contractors amid union tensions

The US tech giant, Google, has dismissed over 200 contractors working on its Gemini chatbot and AI Overviews tool. The move has sparked criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems, yet were abruptly laid off.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PDGrapher AI tool aims to speed up precision medicine development

Harvard Medical School researchers have developed an AI tool that could transform drug discovery by identifying multiple drivers of disease and suggesting treatments to restore cells to a healthy state.

The model, called PDGrapher, utilises graph neural networks to map the relationships between genes, proteins, and cellular pathways, thereby predicting the most effective targets for reversing disease. Unlike traditional approaches that focus on a single protein, it considers multiple factors at once.
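
To make the graph-neural-network idea concrete, here is a minimal, illustrative sketch (not PDGrapher itself) of one message-passing step on a toy gene-interaction graph. The gene names and feature values are hypothetical examples chosen only to show how information from neighbouring genes is aggregated.

```python
# Toy gene-interaction graph: each node lists its neighbours.
# Gene names and values are illustrative, not from the study.
edges = {
    "TP53": ["MDM2", "CDKN1A"],
    "MDM2": ["TP53"],
    "CDKN1A": ["TP53"],
}

# One scalar feature per gene, e.g. expression change after a perturbation.
features = {"TP53": 1.0, "MDM2": -0.5, "CDKN1A": 0.3}

def message_pass(feats, graph):
    """One round of mean-aggregation message passing: each node's new
    value mixes its own feature with the mean of its neighbours'."""
    out = {}
    for node, neighbours in graph.items():
        mean_nb = sum(feats[n] for n in neighbours) / len(neighbours)
        out[node] = 0.5 * feats[node] + 0.5 * mean_nb
    return out

updated = message_pass(features, edges)
```

Repeating such steps lets each gene’s representation absorb signals from progressively wider neighbourhoods of the pathway graph, which is what allows a model like PDGrapher to reason about multiple interacting targets rather than a single protein.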

Trained on datasets of diseased cells before and after treatment, PDGrapher correctly predicted known drug targets and identified new candidates supported by emerging research. The model ranked potential targets up to 35% higher and worked 25 times faster than comparable tools.

Researchers are now applying PDGrapher to complex diseases such as Parkinson’s, Alzheimer’s, and various cancers, where single-target therapies often fail. By identifying combinations of targets, the tool can help overcome drug resistance and expedite treatment design.

Senior author Marinka Zitnik said the ultimate goal is to create a cellular ‘roadmap’ to guide therapy development and enable personalised treatments for patients. After further validation, PDGrapher could become a cornerstone in precision medicine.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Telecom industry outlines vision for secure 6G

Telecom experts say 6G must be secure by design as planning for the next generation of mobile networks accelerates.

Industry leaders warn that 6G will vastly expand the attack surface, with autonomous vehicles, drones, industrial robots and AR systems all reliant on ultra-low latency connections. AI will be embedded at every layer, creating opportunities for optimisation but also new risks such as model poisoning.

Quantum threats are also on the horizon, with adversaries potentially able to decrypt sensitive data. Quantum-resistant cryptography is expected to be a cornerstone of 6G defences.

With standards due by 2029, experts stress cooperation among regulators, equipment vendors and operators. Security, they argue, must be as fundamental to 6G as speed and sustainability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lumex chips bring advanced AI to mobile devices

Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.

The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.

Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.

The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reach framework deal on TikTok

The United States and China have reached a tentative ‘framework’ deal on the future of TikTok’s American operations, US Treasury Secretary Scott Bessent confirmed during trade talks in Madrid. The agreement, which still requires the approval of Presidents Donald Trump and Xi Jinping, is aimed at resolving a looming deadline that could see the video-sharing app banned in the US unless its Chinese owner ByteDance sells its American division.

US officials say the framework addresses national security concerns by paving the way for US ownership of TikTok’s operations, while China insists any final deal must not undermine its companies’ interests. Washington has long argued that the app’s access to US user data poses significant risks, while ByteDance maintains its American arm operates independently and respects user privacy.

The law mandating a sale or ban, upheld by the Supreme Court earlier this year, is due to take effect on 17 September. Although the framework marks progress, key details remain unresolved, particularly over whether TikTok’s recommendation algorithm and user data will be fully transferred, stored, and protected in the US.

Experts warn that unless strict safeguards are included, the deal may solve ownership issues without closing potential ‘backdoors’ for Beijing. Concerns also remain over how much influence China retains, with negotiators linking TikTok’s fate to wider tariff discussions between the two powers.

If fully implemented, the agreement could represent a breakthrough in both trade relations and tech governance. But with ByteDance among China’s most powerful AI firms, the stakes go far beyond social media, touching on questions of global competition, national security, and digital sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNDP publishes digital participation guide to empower civic action

A newly published guide by People Powered and UNDP aims to connect people in their communities through inclusive, locally relevant digital participation platforms. Designed with local governments, civic groups, and organisations in mind, it highlights digital platforms that enable inclusive, action-oriented civic engagement.

According to the UNDP, ‘the guide covers the latest trends, including the integration of AI features, and addresses challenges such as digital inclusion, data privacy, accessibility, and sustainability.’

The guide focuses on actively maintained, publicly available platforms, typically offered through cloud-based software (SaaS) models, and prioritises flexible, multi-purpose tools over single-use options. While recognising the dominance of platforms from wealthier countries, it makes a deliberate effort to feature case studies and tools from the Global Majority.

Political advocacy platforms, internal government tools, and issue-reporting apps are excluded to keep the focus on technologies that drive meaningful public participation. Lastly, the guide emphasises the importance of local context and community empowerment, encouraging a shift from passive input to meaningful public influence in governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
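
As a minimal illustration of the underlying idea (not Google’s implementation), the classic Laplace mechanism releases an aggregate statistic with calibrated noise; the epsilon parameter trades privacy against accuracy:

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1 (one person
    changes the count by at most 1), so the noise scale is 1/epsilon."""
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise via the inverse CDF.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
noisy = dp_count(1000, epsilon=0.5)
```

Training an LLM under differential privacy applies the same principle at vastly larger scale, typically by clipping and noising gradients, which is why doing so without degrading model quality has historically been hard.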

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!