AI-made music sparks emotional authenticity debate

The Velvet Sundown, a group generated entirely by AI, has produced retro-style blues-rock and indie pop tracks without any human musicians.

While the music echoes 1970s and 80s sounds, discussions have emerged over whether these digital creations truly capture emotional depth.

Many listeners find the melodies catchy and stylistically accurate, yet some critics argue AI‑generated music lacks the spontaneity, lived experience and nuance that human artists bring.

Skeptics contend that an AI’s technical precision cannot replicate the intangible aspects of musical performance.

The rise of AI in music has opened creative possibilities but also triggered deeper cultural questions. Can technology ever replace genuine emotion, or will it remain a stylistic tool?

Experts agree that human creativity remains vital: AI can enhance, but not fully replace, the soul-stirring impulse of authentic music-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model detects infections from wound photos

Mayo Clinic researchers have developed an AI system capable of detecting surgical site infections from wound photographs submitted by patients. The model was trained on more than 20,000 images from over 6,000 patients across nine hospital locations.

The AI pipeline first identifies whether a photo contains a surgical incision and then evaluates that incision for infection. Built on a Vision Transformer architecture, the model recognises incisions accurately and achieves a high area under the curve (AUC) for infection detection.
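
To make the two-stage design concrete, here is a minimal sketch of how such a pipeline could be wired up with an off-the-shelf Vision Transformer in PyTorch. It is an illustration only, not the Mayo Clinic model: the weights, image size and 0.5 decision threshold are placeholder assumptions.

```python
# Minimal sketch (not the Mayo Clinic model) of a two-stage wound-screening
# pipeline: stage 1 checks whether the photo shows a surgical incision,
# stage 2 scores that incision for infection risk.
import torch
from torchvision.models import vit_b_16

def build_binary_vit() -> torch.nn.Module:
    """Vision Transformer with a single-logit head for binary classification."""
    model = vit_b_16(weights=None)         # in practice: load fine-tuned weights
    model.heads = torch.nn.Linear(768, 1)  # one logit: positive vs negative
    model.eval()
    return model

incision_detector = build_binary_vit()
infection_scorer = build_binary_vit()

@torch.no_grad()
def screen_wound_photo(image: torch.Tensor) -> dict:
    """image: (3, 224, 224) tensor, already resized and normalised."""
    batch = image.unsqueeze(0)
    p_incision = torch.sigmoid(incision_detector(batch)).item()
    if p_incision < 0.5:                   # no incision found, nothing to score
        return {"incision": False, "infection_risk": None}
    p_infection = torch.sigmoid(infection_scorer(batch)).item()
    return {"incision": True, "infection_risk": p_infection}
```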

Medical staff review outpatient wound images manually, which can delay care and burden resources. Automating this process may improve early diagnosis, reduce unnecessary visits, and speed up responses to high-risk cases.

Researchers believe the tool could eventually serve as a frontline screening method, especially helpful in rural or understaffed areas. Consistent performance across diverse patient groups also suggests a lower risk of algorithmic bias, though further validation remains essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.
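
Distillation, in general terms, means training a smaller "student" model to imitate a larger "teacher" model's output distribution. The sketch below shows only the standard soft-label distillation loss; it makes no claim about how DeepSeek's or OpenAI's systems actually work, and the vocabulary size and temperature are arbitrary.

```python
# Generic knowledge-distillation loss: the student is trained to match the
# teacher's softened output distribution via KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t*t to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

# Usage: logits over a shared vocabulary for one batch of prompts.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```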

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects such as its o1 model are now discussed only by approved staff within designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.
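
As a concrete illustration of the "check the sender address" advice, the sketch below flags mail whose From: domain is not on a short allow-list. The domains and example addresses are hypothetical, and a real mail gateway would combine this check with SPF, DKIM and DMARC.

```python
# Sketch of a simple sender-domain check against an allow-list.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-bank.com", "netflix.com", "microsoft.com"}  # hypothetical

def sender_looks_suspicious(from_header: str) -> bool:
    """Flag mail whose sender domain is not on the allow-list."""
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in TRUSTED_DOMAINS

print(sender_looks_suspicious('"Netflix Billing" <support@netf1ix-payments.com>'))  # True
print(sender_looks_suspicious("Netflix <info@netflix.com>"))                        # False
```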

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Groq opens AI data centre in Helsinki

Groq has opened its first European AI data centre in Helsinki, Finland, in collaboration with Equinix. The facility offers European users fast, secure, and low-latency AI inference services, aiming to improve performance and data governance.

The launch follows Groq’s existing partnership with Equinix, which already includes a site in Dallas. The new centre complements Groq’s global network, including facilities in the US, Canada and Saudi Arabia.

Groq CEO Jonathan Ross said the centre gives developers immediate access to infrastructure for building quickly and at scale. Equinix highlighted Finland's reliable power and sustainable energy as key factors in the decision to host capacity there.

The data centre supports GroqCloud, delivering over 20 million tokens per second across its network. European businesses are expected to benefit from improved AI performance and operational efficiency.
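
For developers, using the new capacity looks much like any other hosted inference service. The sketch below assumes GroqCloud's OpenAI-compatible endpoint; the base URL and model name are assumptions and should be checked against Groq's documentation.

```python
# Hedged sketch of a GroqCloud inference call via the OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",               # placeholder model name
    messages=[{"role": "user", "content": "Summarise GDPR in one sentence."}],
)
print(response.choices[0].message.content)
```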

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Training AI sustainably depends on where and how

Organisations are urged to place AI model training in regions powered by renewable energy. For instance, training in Canada rather than Poland could reduce carbon emissions by approximately 85%. Strategic location choices are becoming vital for greener AI.
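
A back-of-the-envelope calculation shows where a figure of that order comes from: operational emissions scale with the carbon intensity of the local grid. The grid-intensity values and training energy below are illustrative assumptions, not measured data.

```python
# Rough check of the location claim: emissions = energy used * grid carbon intensity.
GRID_INTENSITY = {"Canada": 130, "Poland": 750}   # assumed averages, gCO2e/kWh

def training_emissions_kg(energy_kwh: float, region: str) -> float:
    """Operational emissions for a training run on a given regional grid."""
    return energy_kwh * GRID_INTENSITY[region] / 1000  # grams -> kilograms

run_kwh = 100_000  # hypothetical training run
canada = training_emissions_kg(run_kwh, "Canada")   # 13,000 kg CO2e
poland = training_emissions_kg(run_kwh, "Poland")   # 75,000 kg CO2e
print(f"Reduction: {1 - canada / poland:.0%}")      # ~83%, in line with ~85%
```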

Selecting appropriate hardware also plays a pivotal role. Research shows that a high-end NVIDIA H100 GPU carries three times the manufacturing carbon footprint of a more energy-efficient NVIDIA L4. Opting for the proper GPU can deliver performance without undue environmental cost.

Efficiency should be embedded at every stage of the AI process. From hardware procurement and algorithm design to operational deployment, even fractional improvements across the supply chain can significantly reduce overall carbon output, ensuring that today’s progress doesn’t harm tomorrow’s planet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Cyber Command proposes $5M AI initiative for 2026 budget

US Cyber Command is seeking $5 million in its fiscal year 2026 budget to launch a new AI project to advance data integration and operational capabilities.

While the amount represents a small fraction of the command’s $1.3 billion research and development (R&D) portfolio, the effort reflects growing emphasis on incorporating AI into cyber operations.

The initiative follows congressional direction set in the fiscal year (FY) 2023 National Defense Authorization Act, which tasked Cyber Command and the Department of Defense’s Chief Information Officer—working with the Chief Digital and Artificial Intelligence Officer, DARPA, the NSA, and the Undersecretary of Defense for Research and Engineering—to produce a five-year guide and implementation plan for rapid AI adoption.

The resulting roadmap, developed shortly afterwards, identified priorities for deploying AI systems, applications, and supporting data processes across cyber forces.

Cyber Command formed an AI task force within its Cyber National Mission Force (CNMF) to operationalise these priorities. The newly proposed funding would support the task force’s efforts to establish core data standards, curate and tag operational data, and accelerate the integration of AI and machine learning solutions.

Known as Artificial Intelligence for Cyberspace Operations, the project will focus on piloting AI technologies using an agile 90-day cycle. This approach is designed to rapidly assess potential solutions against real-world use cases, enabling quick iteration in response to evolving cyber threats.

Budget documents indicate the CNMF plans to explore how AI can enhance threat detection, automate data analysis, and support decision-making processes. The command’s Cyber Immersion Laboratory will be essential in testing and evaluating these cyber capabilities, with external organisations conducting independent operational assessments.

The AI roadmap identifies five categories for applying AI across Cyber Command’s enterprise: vulnerabilities and exploits; network security, monitoring, and visualisation; modelling and predictive analytics; persona and identity management; and infrastructure and transport systems.

To fund this effort, Cyber Command plans to shift resources from its operations and maintenance account into its R&D budget as part of the transition from FY2025 to FY2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate—one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
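
In schematic form, such a system runs a continuous observe-decide-act loop. The sketch below is purely illustrative: every function is a hypothetical placeholder rather than a real product API, and low-confidence cases are deliberately escalated to a human analyst.

```python
# Schematic observe -> decide -> act loop for an agentic security tool.
# All functions are hypothetical placeholders.
import time

def fetch_alerts() -> list[dict]:
    """Placeholder: pull new alerts from a SIEM or EDR feed."""
    return []

def decide(alert: dict) -> str:
    """Placeholder triage: high-confidence host compromise -> isolate, else escalate."""
    if alert.get("confidence", 0.0) >= 0.9 and alert.get("type") == "host_compromise":
        return "isolate"
    return "escalate"

def act(alert: dict, decision: str) -> None:
    """Placeholder response actions."""
    if decision == "isolate":
        print(f"Quarantining host {alert['host']}")   # e.g. via an EDR API
    else:
        print(f"Escalating alert {alert.get('id')} to a human analyst")

def agent_loop(poll_seconds: int = 30) -> None:
    """Continuous cycle: observe new alerts, decide, act, repeat."""
    while True:
        for alert in fetch_alerts():
            act(alert, decide(alert))
        time.sleep(poll_seconds)
```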

A 2025 Deloitte report says 25% of GenAI-using firms will pilot agentic AI this year. SailPoint found that 98% of organisations will expand AI agent use in the next 12 months. But rapid adoption also raises concern—96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power comes new vulnerabilities—from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US targets Southeast Asia to stop AI chip leaks to China

The US is preparing stricter export controls on high-end Nvidia AI chips destined for Malaysia and Thailand, in a move to block China’s indirect access to advanced GPU hardware.

According to sources cited by Bloomberg, the new restrictions would require exporters to obtain licences before sending AI processors to either country.

The change follows reports that Chinese engineers have hand-carried data to Malaysia for AI training after Singapore began restricting chip re-exports.

Washington suspects Chinese firms are using Southeast Asian intermediaries, including shell companies, to bypass existing export bans on AI chips like Nvidia’s H100.

Although some easing has occurred between the US and China in areas such as ethane and engine components, Washington remains committed to its broader decoupling strategy. The proposed measures will reportedly include safeguards to prevent regional supply chain disruption.

Malaysia’s Trade Minister confirmed earlier this year that the US had requested detailed monitoring of all Nvidia chip shipments into the country.

As the global race for AI dominance intensifies, Washington appears determined to tighten enforcement and limit Beijing’s access to advanced computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware disrupts Ingram Micro’s systems and operations

Ingram Micro has confirmed a ransomware attack that affected internal systems and forced some services offline. The global IT distributor says it acted quickly to contain the incident, implemented mitigation steps, and involved cybersecurity experts.

The company is working with a third-party firm to investigate the breach and has informed law enforcement. Order processing and shipping operations have been disrupted while systems are being restored.

While details remain limited, the attack is reportedly linked to the SafePay ransomware group.

According to BleepingComputer, the gang exploited Ingram’s GlobalProtect VPN to gain access last Thursday.

In response, Ingram Micro shut down multiple platforms, including GlobalProtect VPN and its Xvantage AI platform. Employees were instructed to work remotely as a precaution during the response effort.

SafePay first appeared in late 2024 and has targeted over 220 companies. It often breaches networks using password spraying and compromised credentials, primarily through VPNs.
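
On the defensive side, password spraying leaves a recognisable pattern: one source generating failed logins against many different accounts. The sketch below illustrates that heuristic only; the log field names and the threshold are assumptions about how authentication events are recorded.

```python
# Defensive sketch: flag source IPs whose failed logins span many distinct accounts.
from collections import defaultdict

def spraying_sources(failed_logins: list[dict], min_accounts: int = 10) -> list[str]:
    """Return source IPs that failed against at least `min_accounts` different users."""
    accounts_per_ip: dict[str, set[str]] = defaultdict(set)
    for event in failed_logins:
        accounts_per_ip[event["src_ip"]].add(event["username"])
    return [ip for ip, users in accounts_per_ip.items() if len(users) >= min_accounts]

# Usage with hypothetical log entries:
logs = [{"src_ip": "203.0.113.7", "username": f"user{i}"} for i in range(12)]
print(spraying_sources(logs))   # ['203.0.113.7']
```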

Ingram Micro has not disclosed what data was accessed or the size of the ransom demand.

The company apologised for the disruption and said it is working to restore systems as quickly as possible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!