Gmail accounts targeted in phishing wave after Google data leak

Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.

Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.

With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.

Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky shuts down in Mississippi over new age law

Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.

The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.

The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.

US Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench dominance by larger tech firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could democratise higher education if implemented responsibly

Professor Orla Sheils of Trinity College Dublin calls on universities to embrace AI as a tool for educational equity rather than fear. She notes that AI is already ubiquitous in higher education, with students, lecturers, and researchers using it daily.

AI can help universities fulfil the democratic ideals of the Bologna Process and Ireland’s National AI Strategy by expanding lifelong learning, making education more accessible and supporting personalised student experiences.

Initiatives such as AI-driven tutoring, automated transcription and translation, streamlined timetabling and grading tools can free staff time while supporting learners with challenging schedules or disabilities.

Trinity’s AI Accountability Lab, led by Dr Abeba Birhane, exemplifies how institutions can blend innovation with ethics. Sheils warns that overreliance on AI risks academic integrity and privacy unless governed carefully. AI must serve educators, not replace them, preserving the human qualities of creativity and judgement in learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes staff to embrace AI to stay ahead

Google is urging its workforce to adopt AI in everyday tasks instead of relying solely on traditional methods.

CEO Sundar Pichai has warned that falling behind in AI could risk the company’s competitive edge, especially as rivals like Microsoft, Amazon and Meta push their staff to embrace similar tools.

Early trials inside Google suggest a significant boost in efficiency, with engineers reporting a 10% increase in weekly productivity after adopting AI.

The company has launched a training initiative called AI Savvy Google to accelerate the shift. The programme provides courses, toolkits and hands-on sessions to help employees integrate AI into their workflows.

One of the standout tools is Cider, an AI-powered coding assistant already used by half of the engineers with access to it.

Executives believe AI will soon become an essential part of software engineering. Brian Saluzzo, a senior leader at Google, told staff that internal AI tools will continue to improve and become deeply embedded in coding work.

The company stresses the importance of using AI to support rather than replace workers, with the training programme designed to upskill employees instead of pushing them aside.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia launches Spectrum-XGS to build global AI factories

American technology company Nvidia has unveiled Spectrum-XGS Ethernet, a new networking technology designed to connect multiple data centres into unified giga-scale AI factories.

With AI demand skyrocketing, single facilities are hitting limits in power and capacity, creating the need for infrastructure that can operate across cities, nations and continents.

Spectrum-XGS extends Nvidia’s Spectrum-X Ethernet platform, introducing what the company calls a ‘scale-across’ approach, alongside scale-up and scale-out models.

Integrating advanced congestion control, latency management, and telemetry nearly doubles the performance of the Nvidia Collective Communications Library, allowing geographically distributed data centres to function as one large AI cluster.

Early adopters like CoreWeave are preparing to link their facilities using the new system. According to Nvidia, the technology offers 1.6 times greater bandwidth density than traditional Ethernet and features Spectrum-X switches and ConnectX-8 SuperNICs, optimised for hyperscale AI operations.

The company argues that the approach will define the next phase of AI infrastructure, enabling super-factories to manage millions of GPUs while improving efficiency and lowering operational costs.

Nvidia CEO Jensen Huang described the development as part of the AI industrial revolution, highlighting that Spectrum-XGS can unify data centres into global networks that act as vast, giga-scale AI super-factories.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dell expands AI innovation hub in Singapore to drive regional growth

Dell Technologies has launched a new Asia Pacific and Japan AI Innovation Hub in Singapore, strengthening its role in advancing AI across the region.

The hub extends the company’s Global Innovation Hub, which has already received more than US$50 million in investment since 2019. Its focus is on driving AI transformation, enablement and leadership, in line with Singapore’s National AI Strategy 2.0.

Instead of offering only infrastructure, the hub delivers end-to-end support, from strategy to deployment, helping enterprises bridge the gap between ambition and practical results. Research shows 62% of Singaporean businesses prefer such holistic partnerships.

Since 2024, the hub has developed about 50 AI prototypes and carried out more than 100 proof-of-concepts, workshops and demonstrations across areas such as generative and predictive AI.

The projects have already influenced multiple sectors. In energy, AI solutions are strengthening infrastructure resilience and enhancing customer engagement with digital humans and chatbots.

In telecommunications, AI is supporting agility and operational efficiency, while in education, cloud-based technologies are empowering research and innovation.

Dell’s AI Centre of Excellence Lab further supports these initiatives by testing solutions for AI PCs and edge computing in collaboration with academic and hardware partners.

A strong emphasis is also placed on skills development. By the end of 2025, the hub aims to train around 10,000 students and mid-career professionals in AI engineering, platform engineering and related fields.

Working with 10 local institutes, Dell is addressing the talent shortage reported by nearly half of Singaporean organisations. Events such as the Dell InnovateFest and the Dell Innovation Challenge provide platforms for students and partners to showcase ideas and create solutions for social good.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches standalone Password Manager app for Android

Google has released its Password Manager as a standalone app for Android, separating the service from Chrome for easier access. The new app allows users to quickly view and manage saved passwords, passkeys and login details directly from their phone.

The app itself does not introduce new features. It functions mainly as a shortcut to the existing Password Manager already built into Android and Chrome.

For users, there is little practical difference between the app and the integrated option, although some may prefer the clarity of having a dedicated tool instead of navigating through browser settings.

For Google, however, the move brings advantages. By listing Password Manager in the Play Store, the company can compete more visibly with rivals like LastPass and 1Password.

Previously, many users were unaware of the built-in feature since it was hidden within Chrome. The Play Store presence also gives Google a direct way to push updates and raise awareness of the service.

The app arrives with Google’s Material 3 design refresh, giving it a cleaner look that aligns with the rest of Android. Functionality remains unchanged for now, but the shift suggests Google may expand the app in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

The legal system of Hong Kong struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Gemini AI for government

Google has introduced a new version of its Gemini AI platform tailored specifically for US government use, called Gemini for Government. The platform combines features such as image generation, enterprise search, and AI agent development, with compliance to standards like Sec4 and FedRAMP.

Gemini includes pre-built AI agents for research and idea generation, while also offering tools to create custom agents. US government customers will pay $0.50 per year for basic access, undercutting rivals OpenAI and Anthropic, who each launched $1 government-focused AI packages earlier this year.

Google emphasised security, privacy, and automation in its pitch, positioning the product as an all-in-one solution for public sector institutions. The launch follows the Trump administration’s AI Action Plan, which seeks to promote AI growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!