Young investors warned on crypto and AI advice

Australia’s financial regulator has warned young investors to be cautious with social media influencers and AI chatbots. A survey by the Australian Securities and Investments Commission found one in four Gen Z Australians invest in crypto, often guided by online content.

The survey of 1,127 participants aged 18 to 28 showed 63% use social media for financial information, 18% rely on AI platforms, and 30% consult YouTube. AI was the most trusted source at 64%, but over half still trust influencers and social media despite possible misinformation.

ASIC previously issued warnings to 18 influencers suspected of promoting high-risk products without a licence. Commissioner Alan Kirkland said some social media marketing promotes crypto scams or risky super switches that threaten young people’s key assets.

The regulator is also watching AI financial guidance. Personalised advice from unlicensed sources is illegal, and young investors should carefully check sources, especially as crypto exchanges increasingly use AI bots for trading guidance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Switzerland is at the centre of a quiet rebellion in chip design

A Swiss-based open-source technology is quietly challenging the semiconductor industry’s concentration of power, in which most of the world’s digital devices depend on instruction set architectures licensed by just two companies: Intel in the US and ARM in the UK.

The RISC-V International Association, headquartered in Zurich since 2020, maintains an open-source alternative that allows chip designers to build without paying licensing fees or seeking permission from governments that control proprietary architectures.

The appeal has grown considerably. The association now counts more than 4,500 members, including US heavyweights such as Nvidia, Microsoft, and Google alongside Chinese giants Huawei, Tencent, and Alibaba, with Nvidia alone shipping over a billion RISC-V cores in 2024.

Switzerland’s political neutrality has been central to the association’s appeal, with its CEO Andrea Gallo describing the Zurich base as ‘a testament to our neutrality across all time zones, geographies and cultures.’

However, experts caution that RISC-V still faces a steep climb before it can challenge industry leaders. Frank Gürkaynak of ETH Zurich noted that the real challenge is not building a processor but assembling the entire software ecosystem around it, a task requiring hundreds of person-years of work.

The association is now collaborating with Linux to create an open-source trio of software, architecture, and hardware, with ambitions for RISC-V to become the global ISA of choice.


AI tools encourage exploration in creative tasks

AI is often associated with automation and job replacement, yet new research from Swansea University suggests a different role. Findings indicate that AI can serve as a creative collaborator, encouraging exploration and deeper engagement during design tasks.

Researchers from the university’s Computer Science Department ran an experiment with over 800 participants using an AI-supported system to design virtual cars.

Rather than optimising results, the system generated galleries with varied design ideas, including effective, unusual, and intentionally flawed concepts.

According to lead researcher Sean Walton, exposure to AI-generated suggestions increased participants’ involvement. Many users spent longer working on the task and produced stronger designs after interacting with the system’s diverse proposals.

The study in ACM Transactions on Interactive Intelligent Systems argues that traditional methods for evaluating AI tools are too narrow. Researchers believe broader assessments are needed to measure how AI affects human thinking, emotions, and creative exploration.


AI-powered browsing arrives in Chrome for India, New Zealand and Canada

Chrome is bringing its advanced AI features to users in India, New Zealand and Canada, aiming to simplify daily browsing tasks and provide instant support. The updates include the integration of Gemini in Chrome and support for over 50 languages.

Users can now interact with a personalised browsing assistant without switching tabs, receiving instant answers, summaries or creative suggestions. Gemini in Chrome allows multitasking and remembers previously visited pages for easier navigation.

Integrations with Google apps such as Gmail, Maps and YouTube enhance productivity directly from the browser. Users can draft emails, schedule meetings, or extract key points from videos without leaving their current page.

Chrome’s AI can also consolidate information from multiple open tabs, streamlining tasks like research or shopping. Nano Banana 2 allows users to transform images on the web in real time, without uploading files or switching windows.

Security remains a priority, with Chrome designed to detect threats and require confirmations for sensitive actions. Gemini in Chrome benefits from automated testing and updates to maintain robust protection as users explore new AI features.


AI technology set to reshape farming and rural life in South Korea

South Korea has launched a national agenda to expand AI across agriculture, aiming to boost productivity and improve living standards in rural communities. Officials from the Ministry of Agriculture, Food and Rural Affairs and the Ministry of Science and ICT presented the strategy as part of a wider digital transformation effort.

Plans include expanding smart farm models that reduce labour-intensive tasks and allow more farmers to benefit from automated technologies. Shared machinery centres and autonomous farming tools such as drones will be developed with support from the Rural Development Administration.

Authorities also intend to apply AI to agricultural distribution through smart logistics facilities that manage receiving, sorting and shipping processes. Around 300 smart Agricultural Products Processing Centres are expected to operate nationwide by 2030.

Livestock grading systems using AI will be introduced to improve accuracy and consumer trust across pork and beef processing facilities. Officials aim to raise the share of AI-graded meat from 19.4 percent in 2025 to 70 percent by 2030.

Beyond production, the programme seeks to expand ‘smart rural communities’ offering AI-based services such as transport, daily living support and farming assistance. Policymakers believe that a stronger digital infrastructure will help rural regions respond to climate pressures and an ageing population.


Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing slight signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
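The provenance idea can be illustrated with a minimal sketch. This is a simplified stand-in using an HMAC over a JSON claim, not the actual C2PA manifest format (which relies on certificate-based signatures and structured claims embedded in the media file); the key and field names are hypothetical:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key; C2PA uses certificate-based signing


def sign_asset(content: bytes, metadata: dict) -> dict:
    """Attach a provenance record binding the metadata to a hash of the content."""
    claim = {"content_sha256": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_asset(content: bytes, record: dict) -> bool:
    """Re-derive the signature and content hash; any edit to either breaks verification."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # metadata was tampered with
    return record["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()


original = b"frame data of a video"
record = sign_asset(original, {"creator": "Example Newsroom"})
print(verify_asset(original, record))         # untouched media verifies
print(verify_asset(b"altered frame", record))  # any modification is detectable
```

The point of the design is the binding: because the signature covers a hash of the content itself, a deepfake substituted for the original cannot carry the original's valid provenance record.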

Recent experiments highlight how challenging these threats can be. In February, cybersecurity company Reality Defender conducted an exercise with NATO by introducing deepfake media into a simulated military scenario. The findings showed how even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.


Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.


When AI preserves human memory

Technology companies are exploring a controversial new frontier: the digital afterlife. A recent patent granted to Meta Platforms proposes AI systems capable of keeping the accounts of deceased users active on social media, generating posts and responses that mimic their tone, humour and online behaviour.

Developers describe the concept as a way to soften the emotional impact of a person’s disappearance from online communities. Critics, however, warn that the technology risks transforming grief into a commercial feature while raising difficult questions about consent, identity and memory.

The idea falls within a broader trend often referred to as ‘grief tech’. Several companies already offer services that recreate the voices or personalities of the deceased through AI models trained on messages, recordings and personal data. A similar concept was patented by Microsoft in 2021, while digital companions and AI chatbots are increasingly marketed as tools to preserve a person’s legacy.

Supporters argue that AI can play a positive role in preserving history and memory. Cultural institutions are already using AI to restore photographs, digitise fragile archives and revive endangered languages. Projects such as the interactive testimonies created by the USC Shoah Foundation allow future generations to ask questions of recorded Holocaust survivors through AI-driven simulations.

Yet applying similar technologies to personal memory has proved far more controversial. AI systems can replicate patterns in speech or writing, but they rely entirely on existing data. A digital recreation of a person may therefore reflect only fragments of their life or amplify certain narratives while ignoring others.

Concerns also extend to manipulation and misinformation. Advances in generative AI and deepfake technology make it increasingly possible to fabricate convincing messages, audio or video, potentially distorting both personal and collective memory.

Psychologists warn that interacting with AI versions of deceased loved ones could also complicate the grieving process by blurring the boundary between remembrance and artificial presence.

Legal and ethical questions remain largely unresolved. European data protection rules, including the ‘right to be forgotten’, give individuals some control over their digital footprints. However, AI systems capable of recreating behaviour or generating new content raise new challenges for privacy law and consent after death.

International organisations are beginning to examine the issue. Ethical guidelines from UNESCO stress transparency, accountability and respect for human rights in the development of AI, while European regulators are assessing how emerging technologies might fit within broader AI governance frameworks.

Debate over digital resurrection highlights a deeper philosophical question about technology’s role in human life. AI may help preserve stories and cultural heritage, yet the ability to replicate a person’s voice or personality forces society to reconsider the meaning of memory, identity and loss in the digital age.


EU approves signature of global AI framework

The European Parliament has approved the Council of Europe Framework Convention on Artificial Intelligence, the first international legally binding treaty on AI governance.

With 455 votes in favour, 101 against, and 74 abstentions, Parliament endorsed the EU’s signature of the treaty, embedding existing AI legislation in a global framework. The move reinforces the safe and rights-respecting deployment of AI across the EU and worldwide.

The convention sets standards for transparency, documentation, risk management, and oversight, applying to both public authorities and private actors acting on their behalf.

It establishes a global baseline for AI governance while allowing the EU to maintain higher protections under the AI Act, GDPR, and other EU legislation covering product safety, liability, and non-discrimination.

The EU co-rapporteurs highlighted that the agreement demonstrates the EU’s commitment to human-centric AI. By prioritising democracy, accountability, and fundamental rights, the framework aims to ensure AI strengthens open societies while supporting stable economic growth.

Negotiations on the convention began in 2022 with participation from EU member states, international partners, civil society, academia, and industry. Current signatories include the EU, the UK, Ukraine, Canada, Israel, and the United States, with the convention open to additional global partners.


Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.
