Popular Python AI library compromised to deliver malware

Security researchers have confirmed that the Ultralytics YOLO library was compromised in a supply-chain attack: attackers injected malicious code into versions 8.3.41 and 8.3.42 published on PyPI. When installed, these versions deployed the XMRig cryptominer.

The compromise stemmed from Ultralytics’ continuous-integration workflow: by exploiting GitHub Actions, the attackers manipulated the automated build process, bypassing review and injecting cryptocurrency mining malware.

The maintainers quickly removed the malicious versions and released a clean build (8.3.43); however, newer reports suggest that further suspicious versions may have appeared.
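For teams that installed the library during the affected window, a quick defensive step is to verify the installed release against the versions named in the reports and pin to a vetted build. A minimal check, using only the version numbers cited above, might look like this:

```python
# Minimal sketch: flag the ultralytics releases reported as compromised
# (8.3.41 and 8.3.42); 8.3.43 is the clean build cited above.
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"8.3.41", "8.3.42"}  # versions named in the security reports

try:
    installed = version("ultralytics")
except PackageNotFoundError:
    installed = None

if installed in COMPROMISED:
    raise SystemExit(f"ultralytics {installed} is a known-compromised release; reinstall a vetted version")
elif installed:
    print(f"ultralytics {installed} is not on the known-bad list")
else:
    print("ultralytics is not installed")
```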

This incident illustrates the growing risk in AI library supply chains. As open-source AI frameworks become more widely used, attackers increasingly target their build systems to deliver malware, particularly cryptominers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Smarter AI processing could lead to cleaner air, say UCR engineers

As AI continues to scale rapidly, the environmental cost of powering massive data centres is becoming increasingly urgent. The servers consume substantial amounts of electricity and rely on large volumes of water for cooling, and a significant share of that electricity still comes from fossil-fuel sources.

Scientists at UC Riverside’s Bourns College of Engineering, led by Professors Mihri and Cengiz Ozkan, have proposed a novel solution called Federated Carbon Intelligence (FCI). Their system doesn’t just prioritise low-carbon energy; it also monitors server health in real time to decide where and when AI tasks should be run.

Using simulations, the team found that FCI could reduce carbon dioxide emissions by up to 45 percent over five years and extend the operational life of hardware by about 1.6 years.

Their model takes into account server temperature, age and physical wear, and dynamically routes computing workloads to optimise both environmental and machine-health outcomes.
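To make the idea concrete, the toy sketch below shows how a scheduler could fold grid carbon intensity and server-health signals into a single routing decision. The fields, weights and numbers are invented for illustration and are not the UCR team’s published FCI algorithm.

```python
# Illustrative only: a toy carbon- and health-aware router in the spirit of FCI.
# All fields, thresholds and weights are hypothetical.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    grid_carbon_g_per_kwh: float  # carbon intensity of the local grid
    temperature_c: float          # current operating temperature
    age_years: float              # hardware age
    wear: float                   # 0.0 (new) to 1.0 (worn out)


def routing_score(s: Server) -> float:
    """Lower is better: combine carbon intensity with health penalties."""
    carbon_term = s.grid_carbon_g_per_kwh / 500.0                      # normalised against a carbon-heavy grid
    health_term = s.temperature_c / 90.0 + s.age_years / 10.0 + s.wear
    return 0.6 * carbon_term + 0.4 * health_term                       # example weights, not tuned


def pick_server(servers: list[Server]) -> Server:
    return min(servers, key=routing_score)


fleet = [
    Server("hydro-cooled-site", 60.0, 55.0, 2.0, 0.2),
    Server("coal-heavy-site", 700.0, 70.0, 4.5, 0.6),
]
print(pick_server(fleet).name)  # routes the job to the cleaner, healthier site
```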

Unlike other approaches that only shift workloads to regions with cleaner energy, FCI also addresses the embodied emissions of manufacturing new servers. Keeping current hardware running longer and more efficiently helps reduce the carbon footprint associated with production.

If adopted by cloud providers, this adaptive system could mark a significant milestone in the sustainable development of AI infrastructure, one that aligns compute demand with both performance and ecological goals. The researchers are now calling for pilots in real data centres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Gemini 3 struggles to accept the year 2025

Google’s new AI model, Gemini 3, was left temporarily confused when it refused to accept that the year was 2025 during early testing by AI researcher Andrej Karpathy.

The model, pre-trained on data only through 2024 and initially disconnected from the internet, accused Karpathy of trickery and gaslighting before finally recognising the correct date.

Once Gemini 3 accessed real-time information, it expressed astonishment and apologised for its previous behaviour, demonstrating the model’s quirky but sophisticated reasoning capabilities. The interaction went viral online, drawing attention to both the humour and unpredictability of advanced AI systems.

Experts note that incidents like this illustrate the limitations of LLMs, which, despite their reasoning power, cannot inherently perceive reality and rely entirely on pre-training data and connected tools.
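In practice, applications work around this by supplying such facts at request time, for example by injecting the current date into the system prompt or exposing a clock tool. A minimal, framework-agnostic illustration:

```python
# Minimal illustration: a model cannot observe today's date by itself, so the
# calling application injects it (or exposes a tool) with every request.
from datetime import date


def build_system_prompt() -> str:
    # Ground the model with a fact it cannot perceive from pre-training data.
    return f"You are a helpful assistant. Today's date is {date.today().isoformat()}."


print(build_system_prompt())
```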

Observers emphasise that AI remains a powerful human aid rather than a replacement, and understanding its quirks is essential for practical use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI teaching leaves Staffordshire students frustrated

Students at the University of Staffordshire in the UK have criticised a coding course after discovering much of the teaching was delivered through AI-generated slides and voiceovers.

Participants in the government-funded apprenticeship programme said they felt deprived of knowledge and frustrated that the course relied heavily on automated materials.

Concerns arose when learners noticed inconsistencies in language, suspicious file names, and abrupt changes in voiceover accents during lessons.

Students reported raising these issues with university staff, but the institution maintained the use of AI, asserting it supported academic standards while remaining ethical and responsible.

Critics argue that AI teaching diminishes engagement and reduces the opportunity to acquire practical skills needed for career development.

Experts suggest students supplement AI-driven courses with hands-on learning and critical thinking to ensure the experience remains valuable and relevant to their professional goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Entry-level jobs vanish as AI rises

Generative AI is reshaping the job market by reducing the need for entry-level positions, particularly in white-collar industries. Analysts warn that young workers are losing the opportunity to acquire skills through traditional on-the-job experience, which has historically paved the way for promotions.

Employers are drawn to AI for its efficiency, as it can complete tasks in a fraction of the time it once took human teams. This shift poses a threat to the traditional career ladder, resulting in a shortage of trained candidates for senior and managerial roles in the years to come.

Young professionals can counter these trends by acquiring practical AI skills, even outside of technology sectors. Combining human strengths, such as strategic thinking, with AI proficiency may help early-career workers remain competitive and adapt to evolving workplace demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New NVIDIA model drives breakthroughs in conservation biology

Researchers have introduced a biology foundation model that can recognise over a million species and understand relationships across the animal and plant kingdoms.

BioCLIP 2 was trained on one of the most extensive biological datasets ever compiled, allowing it to identify traits, cluster organisms and reveal patterns that support conservation efforts.

The model relies on NVIDIA accelerated computing rather than traditional methods and demonstrates what large-scale biological learning can achieve.

Training drew on more than two hundred million images covering hundreds of thousands of taxonomic classes. Without explicit guidance, the AI model learned how species fit within wider biological hierarchies and how traits differ across age, sex and related groups.

It even separated diseased leaves from healthy samples, offering a route to improved monitoring of ecosystems and agricultural resilience.
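As a rough illustration of how CLIP-style biology models are typically queried, the sketch below runs zero-shot species identification with the open_clip library. The Hugging Face Hub identifier and the candidate species list are assumptions for the example, not details confirmed by the article.

```python
# Hedged sketch: zero-shot species identification with a CLIP-style biology model.
# The hub ID below follows the naming of the original BioCLIP checkpoint and is an
# assumption; check the official BioCLIP 2 release for the exact identifier.
import torch
import open_clip
from PIL import Image

HUB_ID = "hf-hub:imageomics/bioclip-2"  # assumed identifier

model, _, preprocess = open_clip.create_model_and_transforms(HUB_ID)
tokenizer = open_clip.get_tokenizer(HUB_ID)
model.eval()

species = ["Panthera leo", "Panthera tigris", "Acinonyx jubatus"]  # example candidates
image = preprocess(Image.open("animal.jpg")).unsqueeze(0)
text = tokenizer([f"a photo of {name}" for name in species])

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(species, probs.squeeze().tolist())))
```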

Scientists now plan to expand the project with wildlife digital twins that simulate ecological systems in controlled environments.

Researchers will be able to study species interactions and test scenarios without disturbing natural habitats. The approach opens possibilities for richer ecological research and could offer the public immersive ways to view biodiversity from the perspective of different animals.

BioCLIP 2 is available as open-source software and has already attracted strong global interest. Its capabilities indicate a shift toward more advanced biological modelling powered by accelerated computing, providing conservationists and educators with new tools to address long-standing knowledge gaps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT‑5 expands research speed and idea generation for scientists

AI technology is increasingly helping scientists accelerate research across fields including biology, mathematics, physics, and computer science. Early GPT‑5 studies show it can synthesise information, propose experiments, and aid in solving long-standing mathematical problems.

Experts note the technology expands the range of ideas researchers can explore and shortens the time to validate results.

Case studies demonstrate tangible benefits: in biology, GPT‑5 helped identify mechanisms in human immune cells within minutes and suggested experiments that later confirmed the findings.

In mathematics, GPT‑5 suggested new approaches, and in optimisation, it identified improved solutions later verified by researchers.

These advances reinforce human-led research rather than replacing it.

OpenAI for Science emphasises collaboration between AI and experts. GPT‑5 excels at conceptual literature review, exploring connections across disciplines, and proposing hypotheses for experimental testing.

Its greatest impact comes when researchers guide the process, breaking down problems, critiquing suggestions, and validating outcomes.
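A minimal sketch of that researcher-in-the-loop pattern, using the OpenAI Python SDK, might look like the following; the prompt is invented for illustration and the workflow is not OpenAI for Science’s own methodology.

```python
# Hedged sketch: asking GPT-5 for candidate hypotheses while the researcher frames
# the problem and keeps final judgement. The prompt is an invented example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = (
    "We observe an unexpected phenotype in cultured human immune cells after a "
    "routine treatment. Propose three mechanistic hypotheses and, for each, one "
    "experiment that could falsify it."
)

response = client.responses.create(model="gpt-5", input=problem)
print(response.output_text)  # candidate hypotheses for the researcher to critique and test
```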

Researchers caution that AI does not replace human expertise. Current models aid speed, idea generation, and breadth, but expert oversight is essential to ensure reliable and meaningful scientific contributions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI data centre boom drives global spike in memory chip prices

The rapid expansion of AI data centres is pushing up memory chip prices and straining an already tight supply chain. DRAM costs are rising as manufacturers prioritise high-bandwidth memory (HBM) for AI systems, leaving fewer components available for consumer devices.

The shift is squeezing supply across sectors that depend on standard DRAM, from PCs and smartphones to cars and medical equipment. Analysts say the imbalance is driving up component prices quickly, with Samsung reportedly raising some memory prices by as much as 60%.

Rising demand for HBM reflects the needs of AI clusters, which rely on vast memory pools alongside GPUs, CPUs and storage. But with only a handful of major suppliers, including Samsung, SK Hynix, and Micron, the surge is pushing prices across the market higher.

Industry researchers warn that rising memory costs will likely be passed on to consumers, especially in lower-priced laptops and embedded systems. Makers may switch to cheaper parts or push suppliers for concessions, but the overall price trend remains upward.

While memory is known for cyclical booms and busts, analysts say the global race to build AI data centres makes it difficult to predict when supply will stabilise. Until then, higher memory prices look set to remain a feature of the market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Nano Banana Pro image model

Google has launched Nano Banana Pro, a new image generation and editing model built on Gemini 3 Pro. The upgrade expands Gemini’s visual capabilities inside the Gemini app, Google Ads, Google AI Studio, Vertex AI and Workspace tools.

Nano Banana Pro focuses on cleaner text rendering, richer world knowledge and tighter control over style and layout. Creators can produce infographics, diagrams and character-consistent scenes, and refine lighting, camera angle or composition with detailed prompts.

The AI model supports higher resolution visuals, localised text in multiple languages and more accurate handling of complex scripts. Google highlights uses in marketing materials, business presentations and professional design workflows, as partners such as Adobe integrate the model into Firefly and Photoshop.
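For developers, access through Google AI Studio and Vertex AI follows the usual google-genai SDK pattern sketched below; the model identifier is an assumption for Nano Banana Pro and should be checked against Google’s current model list.

```python
# Hedged sketch: generating an image through the google-genai Python SDK.
# The model ID is an assumption for Nano Banana Pro, not confirmed by the article.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # uses the API key configured in the environment

MODEL_ID = "gemini-3-pro-image-preview"  # assumed identifier

response = client.models.generate_content(
    model=MODEL_ID,
    contents="An infographic explaining the water cycle, with clearly legible English labels",
)

# Image output is returned as inline data alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("infographic.png")
```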

Users can try Nano Banana Pro through Gemini with usage limits, while paying customers and enterprises gain extended access. Google embeds watermarking and C2PA-style metadata to help identify AI-generated images, foregrounding safety and transparency around synthetic content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Smart glasses by Meta transform life for disabled users

Meta has presented a new generation of AI glasses designed to increase independence for people with disabilities. The devices support hands-free calls, messages and translations while offering voice-activated photography and video capture.

Users can rely on spoken prompts instead of phones when they want to explore their surroundings or capture important moments.

The glasses help blind and low-vision individuals identify objects, read documents and understand scenes through detailed AI descriptions. Meta partnered with the Blinded Veterans Association to produce a training guide that explains how to activate voice commands and manage daily tasks more easily.

Veterans Affairs rehabilitation centres have adopted the glasses to support people who need greater autonomy in unfamiliar environments.

Creators and athletes describe how the technology influences their work and daily activities. A filmmaker uses first-person recording and AI-assisted scene guidance to streamline production. A Paralympic sprinter relies on real-time updates to track workouts without pausing to check a phone.

Other users highlight how hands-free photography and environmental awareness allow them to stay engaged instead of becoming distracted by screens.

Meta emphasises its collaboration with disabled communities to shape features that reflect diverse needs. The company views AI glasses as a route to improved participation, stronger confidence and wider digital access.

The approach signals a long-term commitment to wearable technology that supports inclusion in everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!