Neptune RAT malware targeting Windows users

A highly advanced remote access trojan (RAT) known as Neptune RAT is making waves in the cybersecurity world, posing a major threat to Windows users. Labelled by experts as the ‘most advanced RAT ever’, it is capable of hijacking systems, stealing cryptocurrency, extracting passwords, and even launching ransomware attacks.

According to cybersecurity firm CYFIRMA, Neptune RAT is being distributed via platforms like GitHub, Telegram and YouTube, and is available as malware-as-a-service, allowing virtually anyone to deploy it for a fee.

Neptune RAT’s feature set is alarmingly broad. It includes a crypto clipper that silently redirects cryptocurrency transactions by replacing wallet addresses with those controlled by the attackers.
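The clipper technique can be sketched from the defender’s side. The following minimal Python check is purely illustrative (the regex and helper are hypothetical, not taken from Neptune RAT’s code): it flags when a wallet address in pasted text no longer matches the one the user originally copied, the tell-tale sign of a clipboard clipper.

```python
import re

# Regex for a Bitcoin-style Base58 address (P2PKH/P2SH) — illustrative only;
# real clippers also target Ethereum, Monero and other address formats.
BTC_ADDRESS = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def address_was_swapped(copied: str, pasted: str) -> bool:
    """Return True if the wallet addresses found in the pasted text
    differ from those in the originally copied text."""
    return BTC_ADDRESS.findall(copied) != BTC_ADDRESS.findall(pasted)

# A clipper silently rewrites the clipboard between copy and paste:
original = "Send to 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
tampered = "Send to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"
```

Comparing the address actually pasted against the one copied is exactly the manual check users are advised to perform before confirming any cryptocurrency transfer.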

It also comes with a password-stealing tool that can extract credentials from over 270 applications, including popular browsers like Chrome. Beyond theft, the malware can spy on users in real time, disable antivirus tools including Windows Defender, and encrypt files for ransom, making it a formidable threat.

Cybersecurity experts are urging users to avoid clicking on unknown links or downloading suspicious files from platforms where the malware is circulating. In extreme cases, Neptune RAT even includes a data-wiping feature, allowing attackers to destroy all data on a compromised system.

Users are advised to stay cautious online and consider identity-theft protection services that offer financial recovery and insurance in case a compromised system needs to be replaced.

For more information on these topics, visit diplomacy.edu.

Dangerous WhatsApp desktop bug prompts update

A critical vulnerability has been discovered in WhatsApp Desktop for Windows, potentially allowing attackers to execute malicious code through deceptive file attachments.

Tracked as CVE-2025-30401, the flaw affects all versions prior to 2.2450.6 and poses a high security risk. The issue arises from a mismatch between how WhatsApp displays attachments and how the system opens them, enabling attackers to disguise executable files as harmless media.

When a user opens an attachment from within WhatsApp, the app displays the file based on its MIME type, presenting it as, say, an image. However, Windows opens the file according to its filename extension, so an attachment shown as a harmless photo could actually carry an executable extension such as .exe.
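The mismatch can be illustrated with a short Python sketch. The helper below is hypothetical (it is not WhatsApp’s actual code): it simply compares an attachment’s declared MIME type against the type implied by its filename extension, which is the consistency check whose absence CVE-2025-30401 describes.

```python
import mimetypes

def mime_matches_extension(declared_mime: str, filename: str) -> bool:
    """Illustrative check: does the MIME type declared for an attachment
    agree with the type the OS will infer from the file's extension?"""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed == declared_mime

# The app rendered the attachment by its declared MIME type ('image/jpeg'),
# while Windows launched it by its extension:
mime_matches_extension("image/jpeg", "holiday_photo.jpg")  # consistent
mime_matches_extension("image/jpeg", "holiday_photo.exe")  # mismatch
```

A file named to look like media but carrying an executable extension fails this check, which is why the patched client no longer trusts the declared type alone.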

The inconsistency could lead users to unknowingly launch harmful programs by trusting the attachment’s appearance. Security experts warn the exploit is especially dangerous in group chats, where a single malicious file could target several people at once.

Meta, WhatsApp’s parent company, has released version 2.2450.6 to fix the issue and is urging all users to update immediately.

Security researchers have likened the threat to previous vulnerabilities in the app, including one in 2024 that allowed silent execution of scripts. Given the high severity rating and ease of exploitation, users are advised not to delay updating their software.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

AI-generated Ghibli-style images have taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, and tagging and mentions are likewise limited.

These settings require parental approval to change: teens under 16 cannot alter key safety features without consent.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also rolling out screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety becomes an increasingly prominent priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

Southampton Airport launches AI assistant to support passengers

Southampton Airport has launched an advanced AI-powered digital assistant to enhance passenger experience and accessibility throughout its terminal. The technology, developed in collaboration with Hello Lamp Post, offers real-time flight updates, personalised navigation assistance, and tailored support, especially for those requiring special assistance.

Following a successful trial at Glasgow Airport with Connected Places Catapult, the AI platform demonstrated a 50% reduction in customer service queries and supported over 12,000 additional passengers annually. Passenger satisfaction during the pilot reached 86%, prompting Southampton to expand the tool for all travellers. The assistant is accessible via QR codes placed throughout the terminal, effectively acting as a virtual concierge.

The initiative forms part of the airport’s broader commitment to inclusive and efficient travel. Southampton Airport recently received the Civil Aviation Authority’s top ‘Very Good’ rating for accessibility. Airport Managing Director Gavin Williams praised the new tool’s ability to enhance customer journeys, while Hello Lamp Post’s CEO, Tiernan Mines, highlighted the value in easing pressure on staff by handling routine queries.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuse. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in the film, tech, or education sectors.

AI tool boosts accuracy of cancer treatment predictions

A Slovenian-US biotech company, Genialis, is harnessing AI to revolutionise cancer treatment by tackling a major obstacle: the lack of reliable biomarkers to predict how patients will respond to therapy. Using an AI-driven model developed from over a million global samples, the company aims to personalise treatment with far greater accuracy.

Founded nine years ago as a spin-off from the University of Ljubljana, Genialis is now headquartered in Boston but maintains strong ties to Slovenia, employing 22 local experts. Initially focused on tools for biologists, the firm shifted towards personalised medicine six years ago, now offering diagnostic insights that predict whether a patient is likely to respond to a specific cancer drug or treatment.

Genialis’ proprietary ‘Supermodel’ analyses RNA data from a diverse range of patients using machine learning, boosting the likelihood of treatment success from 20–30% to as high as 65% when paired with its biomarkers. While the software is already used in research settings, the ultimate goal is to integrate it into routine clinical care. Despite the promise, challenges remain, including securing quality data and investment. Co-founders Rafael Rosengarten and Miha Štajdohar remain optimistic, believing AI-powered precision medicine is the future of effective cancer therapy.

Digital Morocco 2030 strategy focuses on tech-driven transformation

Morocco has set ambitious goals to boost its economy through investment in emerging technologies, aiming for a 10% increase in GDP by 2030. As part of its Digital Morocco 2030 strategy, the government is committing over 11 billion dirhams ($1.1 billion) by 2026 to drive digital transformation, create more than 240,000 jobs, and train 100,000 young people annually in digital skills.

The roadmap prioritises digitising government services through a Unified Administrative Services Portal, with the long-term goal of placing Morocco among the world’s top 50 tech nations. Blockchain plays a central role in this vision, being adopted to improve transparency and efficiency in public services, and already undergoing trials in private sectors like healthcare and finance.

Despite an ongoing official ban, digital asset ownership has surged: more than six million Moroccans now hold such assets, representing over 15% of the population. In parallel, the country is rapidly expanding its use of AI. Notably, Morocco has introduced AI into its judiciary, launched an AI-powered university learning system, and trained over 1,000 small- and medium-sized businesses in AI adoption through partnerships with LinkedIn and the European Bank for Reconstruction and Development.

Metro Bank teams up with Ask Silver to fight fraud

Metro Bank has introduced an AI-powered scam detection tool, becoming the first UK bank to offer customers instant scam checks through a simple WhatsApp service.

Developed in partnership with Ask Silver, the Scam Checker allows users to upload images or screenshots of suspicious emails, websites, or documents for rapid analysis and safety advice.

The tool is free for personal and business customers, who receive alerts if the communication is flagged as fraudulent. Ask Silver’s technology not only identifies potential scams but also automatically reports them to relevant authorities.

The company was founded after one of the co-founders’ family members lost £150,000 to a scam, fuelling its mission to prevent similar crimes.

The launch comes amid a surge in impersonation scams across the United Kingdom, with over £1 billion lost to fraud in 2023. Metro Bank’s head of fraud, Baz Thompson, said the tool helps counter tactics that rely on urgency and pressure.

Customers are also reminded that the bank will never request sensitive information or press them to act quickly via emails or texts.

Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, emphasising maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.
