Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to deceive IT help desks and hijack multi-factor authentication processes.

Moving beyond older techniques, the hackers now deploy stealthy tools such as the RattyRAT remote access trojan and DragonForce ransomware, with a particular focus on VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, they have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.
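One of those recommendations, monitoring for unusual logins, can be sketched in a few lines. The event format, field names, and ‘new country’ heuristic below are illustrative assumptions, not part of the agencies’ guidance:

```python
# Minimal sketch of one recommended control: flagging unusual logins.
# The event format and the "never seen from this country" rule are
# invented for illustration; real detection uses richer signals.
from collections import defaultdict

def flag_unusual_logins(events):
    """events: iterable of (user, country, hour) tuples.
    Flags a login when the user has a history but has never been
    seen logging in from that country before."""
    seen = defaultdict(set)
    alerts = []
    for user, country, hour in events:
        if seen[user] and country not in seen[user]:
            alerts.append((user, country, hour))
        seen[user].add(country)
    return alerts

events = [
    ("alice", "GB", 9),
    ("alice", "GB", 14),
    ("alice", "RO", 3),   # new country at an odd hour -> flagged
    ("bob", "US", 10),    # first sighting, nothing to compare against
]
print(flag_unusual_logins(events))  # [('alice', 'RO', 3)]
```

In practice this baseline-and-deviation pattern would feed a SIEM rather than a print statement, but the core idea is the same: alert on behaviour that breaks a user’s established pattern.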

More incidents are expected, as the group shows no sign of slowing down and continues to refine its tactics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft’s Cloud and AI strategy lifts revenue beyond expectations

Microsoft has reported better-than-expected results for the fourth quarter of its 2025 fiscal year, attributing much of its success to the continued expansion of its cloud services and the integration of AI.

‘Cloud and AI are the driving force of business transformation across every industry and sector,’ said Satya Nadella, Microsoft’s chairman and chief executive, in a statement on Wednesday.

For the first time, Nadella disclosed annual revenue figures for Microsoft Azure, the company’s cloud computing platform. Azure generated more than $75 billion in the fiscal year ending 30 June, representing a 34 percent increase compared to the previous year.

Nadella noted that this growth was ‘driven by growth across all workloads’, including those powered by AI. On average, Azure contributed approximately $19 billion in revenue per quarter.

While this trails Amazon Web Services (AWS), which posted net sales of $29 billion in the first quarter alone, Azure remains a strong second in the cloud market. Google Cloud, by comparison, has an annual run rate of $50 billion, according to parent company Alphabet’s Q2 2025 earnings report.

‘We continue to lead the AI infrastructure wave and took share each quarter this year,’ Nadella told investors during the company’s earnings call.

However, he did not provide specific figures showing how AI factored into the results, a point of interest for financial analysts given Microsoft’s projected $80 billion in capital expenditures this fiscal year to support AI-related data centre expansion.

During the call, Bernstein Research senior analyst Mark Moerdler asked how businesses might ultimately monetise AI as a software service.

Nadella responded with a broad comparison to the cloud business, suggesting the two were now deeply connected. It was left to CFO Amy Hood to offer a more structured explanation.

‘There’s a per-user logic,’ Hood explained. ‘There are tiers of per-user. Sometimes those tiers relate to consumption. Sometimes there are pure consumption models. I think you’ll continue to see a blending of these, especially as the AI model capability grows.’

In essence, Microsoft intends to monetise AI in a manner similar to its traditional software offerings—charging either per user, by usage tier, or based on consumption.
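Hood’s blended-pricing description can be made concrete with a toy model. All prices, tiers, and token thresholds below are invented for illustration; Microsoft has not published such figures:

```python
# Hypothetical illustration of the three pricing models Hood describes:
# per-user seats, per-user tiers keyed to consumption, and pure
# consumption. Every rate and threshold here is made up.

def per_user_cost(users: int, price_per_user: float = 30.0) -> float:
    """Flat per-seat licensing, like a classic SaaS subscription."""
    return users * price_per_user

def tiered_cost(tokens_used: int) -> float:
    """Per-user tiers tied to consumption: heavier usage moves the
    customer into a higher-priced tier."""
    if tokens_used <= 1_000_000:
        return 100.0      # basic tier
    elif tokens_used <= 10_000_000:
        return 500.0      # standard tier
    return 2_000.0        # premium tier

def consumption_cost(tokens_used: int, rate_per_million: float = 2.0) -> float:
    """Pure pay-as-you-go metering, like cloud compute billing."""
    return tokens_used / 1_000_000 * rate_per_million

def blended_cost(users: int, tokens_used: int,
                 included_per_user: int = 100_000) -> float:
    """The 'blending' Hood anticipates: a seat fee that includes some
    usage, plus metered billing for any overage."""
    included = users * included_per_user
    overage = max(0, tokens_used - included)
    return per_user_cost(users) + consumption_cost(overage)

print(blended_cost(10, 2_000_000))  # 10 seats + 1M overage tokens -> 302.0
```

The blended function shows why the models converge: once a seat bundles a consumption allowance, the line between per-user and pay-as-you-go pricing blurs.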

With AI now embedded across Microsoft’s portfolio of products and services, the company appears to be positioning itself to keep attributing more of its revenue to AI-powered innovation.

The numbers suggest there is plenty of revenue to go around. Microsoft posted $76.4 billion in revenue for the quarter, up 18 percent compared to the same period last year.

Operating income stood at $34.3 billion (up 23 percent), with net income reaching $27.2 billion (up 24 percent). Earnings per share climbed 24 percent to $3.65.

For the full fiscal year, Microsoft reported $281.7 billion in revenue—an increase of 15 percent. Operating income rose to $128.5 billion (up 17 percent), while net income hit $101.8 billion (up 16 percent). Annual earnings per share reached $13.64, also up by 16 percent.

Azure forms part of Microsoft’s Intelligent Cloud division, which generated $29.9 billion in quarterly revenue, a 26 percent year-on-year increase.

The Productivity and Business Processes group, which includes Microsoft 365, LinkedIn, and Dynamics, earned $33.1 billion, up 16 percent. Meanwhile, the More Personal Computing segment, covering Windows, Xbox, and advertising, grew nine percent to $13.5 billion.

Despite some concerns among analysts regarding Microsoft’s significant capital spending and the ambiguous short-term returns on AI investments, investor confidence remains strong.

Microsoft’s share price jumped roughly eight percent after the earnings announcement, pushing its market capitalisation above $4 trillion in after-hours trading. It became only the second company, after Nvidia, to cross that symbolic threshold.

Market observers noted that while questions remain over the precise monetisation of AI, Microsoft’s aggressive positioning in cloud infrastructure and AI services has clearly resonated with shareholders.

With AI now woven into the company’s strategic fabric, Microsoft appears determined to maintain its lead in the next phase of enterprise computing.

Taiwan university launches smart farming lab

A new AI-powered agriculture lab has opened at the National Pingtung University of Science and Technology in southern Taiwan. The facility is equipped with cutting-edge sensors and automation systems designed to advance smart-farming research.

Funded by a donation from Taiwan Hipoint, the lab enables real-time monitoring of crop conditions and automated adjustments to growing environments. The AI system analyses sensor and image data to optimise greenhouse conditions and detect early signs of pests or diseases.

Specialised chambers inside the lab simulate various environmental conditions, helping researchers identify ideal settings for plant growth. University staff say the technology is expected to play a crucial role in making agriculture more precise and resource-efficient.

The university also hosted a hands-on greenhouse training camp and showcased its innovations at a major food expo. Located near key research centres, the university aims to become Taiwan’s leading hub for agricultural technology and innovation.

China demands Nvidia explain security flaws in H20 chips

China’s top internet regulator has summoned Nvidia to explain alleged security concerns linked to its H20 computing chips.

The Cyberspace Administration of China stated that the chips, which are sold domestically, may contain backdoor vulnerabilities that could pose risks to users and systems.

Nvidia has been asked to submit technical documentation and provide a formal response addressing the alleged flaws.

The chips are part of Nvidia’s tailored product line for the Chinese market following US export restrictions on advanced AI processors.

The investigation signals tighter scrutiny from Chinese authorities on foreign technology amid ongoing geopolitical tensions and a global race for semiconductor dominance.

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight percent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Firms are now switching to cloud-first strategies that address endpoint vulnerabilities and prevent misconfigurations, rather than relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study covered by PsyPost found that human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and stronger originality.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

The strategy allowed more time and budget to focus on storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will erode originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands build eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.

Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.

EU AI Act begins as tech firms push back

Europe’s AI crackdown officially begins soon, as the EU starts enforcing its first rules targeting developers of general-purpose AI models such as those behind ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where the re-elected Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should take immediate steps toward compliance rather than counting on future political deals.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.
