Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of April 15 approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder,’ which contains a malicious QR code. Once scanned, it leads users through fake bot protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that the email addresses in these fake login pages are already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for maximum profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the April 15 deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email and never to request sensitive information through links or attachments, so any such message should be treated with suspicion instead of trust.

To stay safe, users are urged to remain vigilant and avoid clicking on links or scanning codes from unsolicited emails. Instead of relying on emails for tax updates or returns, go directly to official websites.

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.

For more information on these topics, visit diplomacy.edu.

Victims of AI-driven sex crimes in Korea continue to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing portion of victims, including children under 10, were targeted due to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.


ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems gained access to the formatting of official documents, with accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.


DeepSeek highlights the risk of data misuse

The launch of DeepSeek, a Chinese-developed LLM, has reignited long-standing concerns about AI, national security, and industrial espionage.

While issues like data usage and bias remain central to AI discourse, DeepSeek’s origins in China have introduced deeper geopolitical anxieties. Echoing the scrutiny faced by TikTok, the model has raised fears of potential links to the Chinese state and its history of alleged cyber espionage.

With China and the US locked in a high-stakes AI race, every new model is now a strategic asset. DeepSeek’s emergence underscores the need for heightened vigilance around data protection, especially regarding sensitive business information and intellectual property.

Security experts warn that AI models may increasingly be trained using data acquired through dubious or illicit means, such as large-scale scraping or state-sponsored hacks.

The practice of data hoarding further complicates matters, as encrypted data today could be exploited in the future as decryption methods evolve.

Cybersecurity leaders are being urged to adapt to this evolving threat landscape. Beyond basic data visibility and access controls, there is growing emphasis on adopting privacy-enhancing technologies and encryption standards that can withstand future quantum threats.

Businesses must also recognise the strategic value of their data in an era where the lines between innovation, competition, and geopolitics have become dangerously blurred.


Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI scare threatened his career, screenwriter Ed Bennett-Coles, together with songwriter Jamie Hartman, has developed ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI content to online meat delivery: efficient but soulless. Human artistry, by contrast, resembles a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.


New AI firm Deep Cogito launches versatile open models

A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.

These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.

The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.

Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.

Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.

The company’s ambitious goal is to develop ‘general superintelligence,’ AI that surpasses human capabilities, rather than merely matching them. For now, the team says they’ve only scratched the surface of their scaling potential.


Dutch researchers to face new security screenings

The Dutch government has proposed new legislation requiring background checks for thousands of researchers working with sensitive technologies. The plan, announced by Education Minister Eppo Bruins, aims to block foreign intelligence from accessing high-risk scientific work.

Around 8,000 people a year, including Dutch citizens, would undergo screenings involving criminal records, work history, and possible links to hostile regimes.

Intelligence services would support the process, which targets sectors like AI, quantum computing, and biotech.

Universities worry the checks may deter global talent due to delays and bureaucracy. Critics also highlight a loophole: screenings occur only once, meaning researchers could still be approached by foreign governments after being cleared.

While other countries are introducing similar measures, the Netherlands will attempt to avoid unnecessary delays. Officials admit, however, that no system can eliminate all risks.


Man uses AI avatar in New York court

A 74-year-old man representing himself in a New York State appeal has apologised after using an AI-generated avatar during court proceedings.

Jerome Dewald submitted a video featuring a youthful digital figure to deliver part of his legal argument, prompting confusion and criticism from the judges. One justice described the move as misleading, expressing frustration over the lack of prior disclosure.

Dewald later explained he intended to ease his courtroom anxiety and present his case more clearly, not to deceive.

In a letter to the judges, he acknowledged that transparency should have taken priority and accepted responsibility for the confusion caused. His case, a contract dispute with a former employer, remains under review by the appellate court.

The incident has reignited debate over the role of AI in legal settings. Recent years have seen several high-profile cases where AI-generated content introduced errors or false information, highlighting the risks of using generative technology without proper oversight.

Legal experts say such incidents are becoming increasingly common as AI tools become more accessible.


OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.
