Phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.
The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.
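The substitution is mechanical enough to check for programmatically. A minimal sketch of the idea, in which a candidate domain is normalised by collapsing the 'rn' pair into 'm' and compared against trusted names (the brand list and function names here are illustrative, not from the report):

```python
# Trusted domains to compare against (illustrative examples from the article).
TRUSTED_DOMAINS = {"microsoft.com", "marriott.com"}

def normalise(domain: str) -> str:
    """Collapse the 'rn' lookalike pair into 'm' for comparison.
    A simplification: a trusted name legitimately containing 'rn'
    would need special handling."""
    return domain.lower().replace("rn", "m")

def is_lookalike(domain: str) -> bool:
    """True if the domain is not itself trusted but normalises to a trusted one."""
    return domain.lower() not in TRUSTED_DOMAINS and normalise(domain) in TRUSTED_DOMAINS

print(is_lookalike("rnicrosoft.com"))  # the lookalike domain seen in the campaign
print(is_lookalike("microsoft.com"))   # the genuine domain
```

The same normalisation approach extends to other confusable pairs (such as 'vv' for 'w') or to Unicode homoglyphs, though production tooling typically relies on fuller confusables tables.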
Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.
Security specialists also continue to recommend passkeys, strong and unique passwords, and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A consortium of 10 central European banks has established a new company, Qivalis, to develop and issue a euro-pegged stablecoin, targeting a launch in the second half of 2026, subject to regulatory approval.
The initiative seeks to offer a European alternative to US dollar-dominated digital payment systems and strengthen the region’s strategic autonomy in digital finance.
The participating banks include BNP Paribas, ING, UniCredit, KBC, Danske Bank, SEB, Caixabank, DekaBank, Banca Sella, and Raiffeisen Bank International, with BNP Paribas joining after the initial announcement.
Former Coinbase Germany chief executive Jan-Oliver Sell will lead Qivalis as CEO, while former NatWest chair Howard Davies has been appointed chair. The Amsterdam-based company plans to build a workforce of up to 50 employees over the next two years.
Initial use cases will focus on crypto trading, enabling fast, low-cost payments and settlements, with broader applications planned later. The project emerges as the stablecoin market grows rapidly, led by dollar-backed tokens, while the scarcity of euro-denominated alternatives is driving regulatory interest and engagement from the ECB.
Oklahoma lawmakers have introduced Senate Bill 2064, proposing a legal framework that allows businesses, state employees, and residents to receive payments in Bitcoin without designating it as legal tender.
The bill recognises Bitcoin as a financial instrument, aligning with constitutional limits while enabling its voluntary use across payroll, procurement, and private transactions.
Under the proposal, state employees could opt to receive wages in Bitcoin, US dollars, or a combination of both at the start of each pay period. Payments would be settled at prevailing market rates and deposited into either self-hosted wallets or approved custodial accounts.
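Settlement at prevailing market rates amounts to a simple conversion at pay time. A hypothetical sketch of the split (the wage, share, and price figures are illustrative, not from the bill):

```python
def split_paycheck(wage_usd: float, btc_share: float, btc_price_usd: float):
    """Split a paycheck between USD and BTC at the prevailing market rate.

    btc_share is the fraction of the wage the employee opted to take in Bitcoin.
    """
    btc_portion_usd = wage_usd * btc_share
    usd_payout = wage_usd - btc_portion_usd
    btc_payout = btc_portion_usd / btc_price_usd  # converted at market rate
    return usd_payout, btc_payout

# Example: a $5,000 paycheck, half in BTC, at a hypothetical $100,000/BTC rate.
usd, btc = split_paycheck(wage_usd=5_000, btc_share=0.5, btc_price_usd=100_000)
print(usd, btc)
```

The resulting Bitcoin amount would then be deposited to the employee's self-hosted wallet or approved custodial account, per the proposal.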
Vendors contracting with the state could also choose Bitcoin on a per-transaction basis, while crypto-native firms would benefit from reduced regulatory friction.
The legislation instructs the State Treasurer to appoint a payment processor and develop operational rules, with contracts targeted for completion by early 2027.
If approved, the framework would take effect in November 2026, positioning Oklahoma among a small group of US states exploring direct Bitcoin integration into public finance, alongside initiatives already launched in Texas and New Hampshire.
Indonesia is promoting blended finance as a key mechanism to meet the growing investment needs of AI and digital infrastructure. By combining public and private funding, the government aims to accelerate the development of scalable digital systems while aligning investments with sustainability goals and local capacity-building.
The rapid global expansion of AI is driving a sharp rise in demand for computing power and data centres. The government views this trend as both a strategic economic opportunity and a challenge that requires sound financial governance and well-designed policies to ensure long-term national benefits.
International financial institutions and global investors are increasingly supportive of public–private financing models. Such partnerships are seen as essential for mobilising large-scale, long-term capital and supporting the sustainable development of AI-related infrastructure in developing economies.
To attract sustained investment, the government is improving the overall investment climate through regulatory simplification, licensing reforms, integration of the Online Single Submission system, and incentives such as tax allowances and tax holidays. These measures are intended to support advanced technology sectors that require significant and continuous capital outlays.
A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.
The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.
Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.
At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.
As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.
More than 800 creatives in the US have signed an anti-AI campaign accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.
The campaign was launched by the Human Artistry Campaign, a coalition of creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.
Actors and filmmakers in the US warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.
The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.
A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver malware via DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy in order to bypass conventional security controls.
Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.
Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.
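Sideloading works because the Windows loader resolves some DLL names from the application's own directory before the system paths, so a copy of a commonly abused DLL sitting next to an executable is a useful hunting signal. A minimal, hypothetical triage sketch (the DLL watchlist and function name are assumptions for illustration, not from the researchers' report):

```python
import os

# Hypothetical watchlist of DLL names frequently abused for sideloading.
COMMONLY_SIDELOADED = {"version.dll", "profapi.dll", "userenv.dll"}

def find_sideloading_candidates(root: str) -> list[str]:
    """Return paths of watchlisted DLLs that sit beside an .exe,
    where the loader would pick them up before the system copy."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        names = {f.lower() for f in files}
        if any(n.endswith(".exe") for n in names):
            for dll in sorted(names & COMMONLY_SIDELOADED):
                hits.append(os.path.join(dirpath, dll))
    return hits
```

A static filename scan like this only surfaces candidates for review; actual detection of the memory-resident stage described here requires monitoring DLL load events and in-memory behaviour, as the researchers advise.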
The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.
Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.
The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.
Researchers based their estimates on a random sample of 20,000 images, extrapolating the rates observed in the sample to the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.
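The extrapolation itself is simple proportion arithmetic. A sketch with hypothetical within-sample counts, chosen only so the outputs land near the reported totals (the actual sample counts were not published in this summary):

```python
SAMPLE_SIZE = 20_000          # images reviewed by automated tools and by hand
TOTAL_GENERATED = 4_600_000   # images generated during the study period

# Hypothetical counts within the sample, for illustration only.
sexualised_in_sample = 13_000
minors_in_sample = 100

def extrapolate(count_in_sample: int) -> int:
    """Scale a sample count up to the full population (integer arithmetic)."""
    return count_in_sample * TOTAL_GENERATED // SAMPLE_SIZE

print(extrapolate(sexualised_in_sample))  # close to the ~3 million estimate
print(extrapolate(minors_in_sample))      # matches the ~23,000 figure
```

Point estimates like these carry sampling error, which is why studies of this kind usually report them with confidence intervals rather than as exact counts.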
Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.
Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.
The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.
European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.
Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.
The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.
The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.
Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.
Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.