AI voice hacks put fake Musk and Zuckerberg at crosswalks

Crosswalk buttons in several Californian cities have been hacked to play AI-generated voices impersonating tech moguls Elon Musk and Mark Zuckerberg, delivering bizarre and satirical messages to pedestrians.

The spoof messages, which mock the CEOs with lines like ‘Can we be friends?’ and ‘Cooking our grandparents’ brains with AI slop,’ have been heard in Palo Alto, Redwood City, and Menlo Park.

Palo Alto officials confirmed that 12 intersections were affected and that the audio systems have since been disabled.

While the crosswalk signals themselves remain operational, authorities are investigating how the hack was carried out. Similar issues are being addressed in nearby cities, with local governments moving quickly to secure the compromised systems.

The prank, which uses AI voice cloning, appears to layer these spoofed messages on top of the usual accessibility features rather than replacing them entirely.

Though clearly comedic in intent, the incident has raised concerns about the growing ease with which public systems can be manipulated using generative technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of April 15 approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder,’ which contains a malicious QR code. Once scanned, it leads users through fake bot protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that these fake login pages come with the victim’s email address already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for maximum profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the April 15 deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email and never to request sensitive information through links or attachments, so any such message should be treated with suspicion instead of trust.

To stay safe, users are urged to remain vigilant and avoid clicking on links or scanning codes from unsolicited emails. Instead of relying on emails for tax updates or returns, go directly to official websites.
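The ‘go directly to official websites’ advice can be made concrete. Below is a minimal, hypothetical sketch (the allowlist is illustrative, not an official IRS or Microsoft tool) of checking whether a link decoded from an email or QR code actually points to an expected official domain rather than a lookalike:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration only; a real check would be broader.
OFFICIAL_DOMAINS = {"irs.gov", "microsoft.com", "login.microsoftonline.com"}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is an allowlisted domain
    or a genuine subdomain of one (e.g. www.irs.gov)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://www.irs.gov/refunds"))        # True
print(looks_official("https://irs.gov.tax-refund.example")) # False: lookalike host
```

Note how the lookalike URL embeds ‘irs.gov’ at the start of its hostname, which fools a naive substring check but not a suffix check on the real registered domain.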

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.

AI could be Geneva’s lifeline in times of crisis

International Geneva is at a crossroads. With mounting budget cuts, declining trust in multilateralism, and growing geopolitical tensions, the city’s role as a hub for global cooperation is under threat.

In his thought-provoking blog, ‘Don’t waste the crisis: How AI can help reinvent International Geneva’, Jovan Kurbalija, Executive Director of Diplo, argues that AI could offer a way forward—not as a mere technological upgrade but as a strategic tool for transforming the city’s institutions and reviving its humanitarian spirit. Kurbalija envisions AI as a means to re-skill Geneva’s workforce, modernise its organisations, and preserve its vast yet fragmented knowledge base.

With professions such as translators, lawyers, and social scientists potentially playing pivotal roles in shaping AI tools, the city can harness its multilingual, highly educated population for a new kind of innovation. A bottom-up approach is key: practical steps like AI apprenticeships, micro-learning platforms, and ‘AI sandboxes’ would help institutions adapt at their own pace while avoiding the pitfalls of top-down tech imposition.

Organisations must also rethink how they operate. AI offers the chance to cut red tape, lighten the administrative burden on NGOs, and flatten outdated hierarchies in favour of more agile, data-driven decision-making.

At the same time, Geneva can lead by example in ethical AI governance—by ensuring accountability, protecting human rights and knowledge, and defending what Kurbalija calls our ‘right to imperfection’ in an increasingly optimised world. Ultimately, Geneva’s challenge is not technological—it’s organisational.

As AI tools become cheaper and more accessible, the real work lies in how institutions and communities embrace change. Kurbalija proposes a dedicated Geneva AI Fund to support apprenticeships, ethical projects, and local initiatives. He argues that this crisis could be Geneva’s opportunity to reinvent itself for survival and to inspire a global model of human-centred AI governance.

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.

For more information on these topics, visit diplomacy.edu.

AI transforms global healthcare with major growth ahead

The healthcare sector is poised for significant growth as AI continues to revolutionise the industry. A new report from Avant Technologies predicts an influx of AI-powered solutions in healthcare, with key technology giants leading the charge.

Avant Technologies and Ainnova, in their joint venture, plan to showcase their Vision AI platform at the 2025 Mexico Healthcare Innovation Summit.

The platform, aimed at early disease detection, is nearing approval from the US Food and Drug Administration (FDA) and is already in clinical trials in Southeast Asia and South America.

Apple and Amazon are also entering the AI healthcare space, with Apple launching an AI-powered health coach to guide users on diet and exercise, while Amazon is expanding its AI solutions with a healthcare chatbot.

Meanwhile, GE Healthcare has seen success with its AI-driven cardiac imaging, which has garnered FDA approval. The World Health Organization (WHO) supports AI integration in healthcare, particularly for outpatient care and early diagnosis, though it has urged regulators to be cautious of potential risks.

AI in healthcare is expected to grow exponentially, reaching a market valuation of $613 billion by 2034. The sector’s rapid expansion is driven by increasing adoption rates, particularly for early disease detection, administrative efficiency, and personalised medicine.

Despite data privacy concerns, the adoption of AI tools in fields like dermatology, oncology, and cardiovascular health is expected to surge. North America is predicted to lead the market, followed by Europe and South Asia, as more healthcare institutions embrace AI technologies.

Hackers leak data from Indian software firm in major breach

A major cybersecurity breach has reportedly compromised a software company based in India, with hackers claiming responsibility for stealing nearly 1.6 million rows of sensitive data on 19 December 2024.

A hacker identified as @303 is said to have accessed and exposed customer information and internal credentials, with the dataset later appearing on a dark web forum via a user known as ‘frog’.

The leaked data includes email addresses linked to major Indian insurance providers, contact numbers, and possible administrative access credentials.

Analysts found that the sample files feature information tied to employees of companies such as HDFC Ergo, Bajaj Allianz, and ICICI Lombard, suggesting widespread exposure across the sector.

Despite the firm’s stated dedication to safeguarding data, the incident raises doubts about its cybersecurity protocols.

The breach also comes as India’s insurance regulator, IRDAI, has begun enforcing stricter cyber measures. In March 2025, it instructed insurers to appoint forensic auditors in advance and perform full IT audits instead of waiting for threats to surface.

The breach follows a string of high-profile incidents, including the Star Health Insurance leak affecting 31 million customers.

With cyberattacks in India up by 261% in early 2024 and the average cost of a breach now ₹19.5 crore, experts warn that insurance firms must adopt stronger protections instead of relying on outdated defences.

AI site faces backlash for copying Southern Oregon news

A major publishing organisation has issued a formal warning to Good Daily News, an AI-powered news aggregator, demanding it cease the unauthorised scraping of content from local news outlets across Southern Oregon and beyond. The News Media Alliance, which represents 2,200 publishers, sent the letter on 25 March, urging the national operator to respect publishers’ rights and stop reproducing material without permission.

Good Daily runs over 350 online ‘local’ news websites across 47 US states, including Daily Medford and Daily Salem in Oregon. Though the platforms appear locally based, they are developed using AI and managed by one individual, Matt Henderson, who has registered mailing addresses in both Ashland, Oregon, and Austin, Texas. Content is reportedly scraped from legitimate local news sites, rewritten by AI, and shared in newsletters, sometimes with source links, but often without permission.

News Media Alliance president Danielle Coffey said such practices undermine the time, resources, and revenue of local journalism. Many publishers use digital tools to block automated scrapers, though this comes at a financial cost. The organisation is working with the Oregon Newspaper Publishers Association and exploring legal options. Others in the industry, including Heidi Wright of the Fund for Oregon Rural Journalism, have voiced strong support for the warning, calling for greater action to defend the integrity of local news.

WooCommerce responds to alleged data breach claim

A hacker going by the alias ‘Satanic’ recently claimed responsibility for a significant data breach affecting websites that use WooCommerce, a leading eCommerce platform. The attacker alleged that over 4.4 million customer records were compromised, including personal and corporate data such as email addresses, phone numbers, physical addresses, and social media profiles, as well as company revenues, staff sizes, and tech stacks.

The original announcement was made on Breach Forums, a known cybercrime forum, where the hacker stated that the data was available for sale via private messages or Telegram. While initial reports—including one by HackRead—linked the breach to WooCommerce-based stores, WooCommerce has since issued an official statement denying that its systems were involved in the incident.

‘We can confirm that no WooCommerce data has been involved in the breach described in these articles. Our team quickly investigated the data samples and compared them against our own records. We determined that the data was not obtained through a breach of WooCommerce.com or any other Automattic services.’ — Jay Walsh, Director of Communications, WooCommerce.

The company believes that the leaked data originated from a third-party service that aggregates publicly available information about e-commerce sites. It is unclear whether the data was accessed legally or obtained through other means.

The attacker claimed the breach was achieved by exploiting vulnerabilities in third-party systems integrated with WooCommerce-powered websites—such as CRMs or marketing platforms—rather than through WooCommerce itself. However, no technical evidence has been shared to substantiate this claim.

The incident follows previous breach claims by the same hacker involving platforms like Magento and Twilio’s SendGrid, the latter of which was also denied by the company.

WooCommerce, owned by Automattic, powers a large share of global online shops. While the platform remains secure according to its developers, the case highlights ongoing concerns about the security of third-party tools and integrations.

EU prepares new data strategy for AI growth

The European Commission will soon launch a consultation on its upcoming Data Union Strategy, a key part of efforts to boost Europe’s leadership in AI.

The strategy, set to be published by the end of the year, aims to make it easier for businesses and public bodies to share data securely and efficiently across the EU.

The initiative supports the broader AI Continent Action Plan, expected to be unveiled this week, which seeks to encourage faster adoption of AI technologies by European companies.

Instead of relying on fragmented systems, the Commission wants to improve data access, digital infrastructure, and cloud capabilities while investing in talent and streamlining complex processes.

The plan includes the creation of AI factories where companies can train models using EU-based resources, and a separate Cloud and AI Development Act later this year will promote energy-efficient investments to support these goals.

Public feedback on the Data Union Strategy will be gathered from April to June as part of the consultation process.

Despite the ambition, the Commission acknowledges ongoing concerns such as uncertainty around international data flows and challenges accessing suitable data for generative AI.

Strict privacy laws such as the GDPR, rather than enabling wider AI training, have constrained it, and major tech firms have voiced frustration over regulatory delays in Europe.

Meta to block livestreaming for under 16s without parental permission

Meta will soon prevent children under 16 from livestreaming on Instagram unless their parents explicitly approve.

The new safety rule is part of broader efforts to protect young users online and will first be introduced in the UK, US, Canada and Australia, before being extended to the rest of Europe and beyond in the coming months.

The company explained that teenagers under 16 will also need parental permission to disable a feature that automatically blurs images suspected of containing nudity in direct messages.

These updates build on Meta’s teen supervision programme introduced last September, which gives parents more control over how their children use Instagram.

Instead of limiting the changes to Instagram alone, Meta is now extending similar protections to Facebook and Messenger.

Teen accounts on those platforms will be set to private by default, and will automatically block messages from strangers, reduce exposure to violent or sensitive content, and include reminders to take breaks after an hour of use. Notifications will also pause during usual bedtime hours.

Meta said these safety tools are already being used across at least 54 million teen accounts. The company claims the new measures will better support teenagers and parents alike in making social media use safer and more intentional, instead of leaving young users unprotected or unsupervised online.
