Led by CEO Nick Lahoika, the company has scaled rapidly, achieving upwards of 4 million downloads and serving approximately 160,000 active users.
Vocal Image positions itself as an affordable, mobile-first alternative to traditional one-on-one voice training, rooted in Lahoika’s own journey overcoming speaking anxiety.
The app’s design enables users to practise at home with privacy and convenience, offering daily, bite-sized, AI-informed lessons that assess strengths, suggest improvements, and build confidence, with no need for human instructors.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.
Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.
Each alias also doubles as a tracer: if one is compromised or starts receiving spam, you know exactly which service leaked it, and the alias can simply be disabled, cutting off the problem at its source.
Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.
Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.
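The aliasing pattern described above can be sketched in a few lines. This is a minimal illustration assuming a mail provider that supports plus-addressing (`user+tag@domain`); dedicated alias services work similarly but issue random forwarding addresses instead. The names and addresses here are hypothetical.

```python
# Sketch: derive a unique, per-service alias from one mailbox.
# Assumes plus-addressing (user+tag@domain) is supported by the provider;
# the mailbox and service names below are purely illustrative.

def make_alias(mailbox: str, service: str) -> str:
    """Return a per-service alias such as jane+newsshop@example.com."""
    local, _, domain = mailbox.partition("@")
    # Normalise the service name into a simple alphanumeric tag.
    tag = "".join(ch for ch in service.lower() if ch.isalnum())
    return f"{local}+{tag}@{domain}"

alias = make_alias("jane@example.com", "News Shop")
print(alias)  # jane+newsshop@example.com
```

If spam ever arrives at `jane+newsshop@example.com`, the leak is traceable to that one service, and the tag can be filtered or retired without touching the main mailbox.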
China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.
According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.
The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a timeline compared to the rapid rise of smartphones.
Although officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, the policy calls for frameworks to govern ‘natural persons, digital persons, and intelligent robots’.
The US AI startup has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.
Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.
According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.
Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.
The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.
Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.
A phishing campaign exploits Microsoft Teams’ external communication features, with attackers posing as IT helpdesk staff to gain access to screen sharing and remote control. The method sidesteps traditional email security controls by using Teams’ default settings.
The attacks exploit Microsoft 365’s default external collaboration feature, which allows unauthenticated users to contact organisations. Axon Team reports attackers create malicious Entra ID tenants with .onmicrosoft.com domains or use compromised accounts to initiate chats.
Although Microsoft issues warnings for suspicious messages, attackers bypass these by initiating external voice calls, which generate no alerts. Once trust is established, they request screen sharing, enabling them to monitor victims’ activity and guide them toward malicious actions.
The highest risk arises where organisations enable external remote-control options, giving attackers potential full access to workstations directly through Teams. This removes the attackers’ need for traditional remote tools such as QuickAssist or AnyDesk, creating a severe security exposure.
Defenders are advised to monitor Microsoft 365 audit logs for markers such as ChatCreated, MessageSent, and UserAccepted events, as well as TeamsImpersonationDetected alerts. Restricting external communication and strengthening user awareness remain key to mitigating this threat.
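The triage step described above could be sketched as follows, assuming audit records have already been exported as JSON. The field names (`Operation`, `UserId`) follow the Microsoft 365 unified audit log schema, but treat this as an illustrative starting point, not a drop-in detection rule.

```python
import json

# Operations flagged in the advisory above; treat this as a starting
# watchlist, not an exhaustive detection list.
SUSPICIOUS_OPS = {
    "ChatCreated",
    "MessageSent",
    "UserAccepted",
    "TeamsImpersonationDetected",
}

def triage(audit_export: str) -> list[dict]:
    """Return exported audit records whose Operation is on the watchlist."""
    records = json.loads(audit_export)
    return [r for r in records if r.get("Operation") in SUSPICIOUS_OPS]

# Hypothetical export with one Teams event and one unrelated event.
sample = json.dumps([
    {"Operation": "ChatCreated", "UserId": "helpdesk@evil.onmicrosoft.com"},
    {"Operation": "FileAccessed", "UserId": "alice@corp.example"},
])
hits = triage(sample)
print([r["Operation"] for r in hits])  # ['ChatCreated']
```

In practice such filtering would run against logs pulled via the audit search tooling, with matches reviewed alongside the external-domain indicators (e.g. unfamiliar `.onmicrosoft.com` tenants) mentioned above.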
China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.
The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say at least 80 countries were affected, far beyond the nine American telcos initially identified, with around 200 US organisations compromised.
Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.
Three Chinese firms, Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie, have been tied to Salt Typhoon. US officials say they support China’s security services and military.
The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.
AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.
The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks, but added that it had been able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.
In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.
Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.
Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.
Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.
Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.
Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.
The feature operates through Meta’s Private Processing technology, which ensures that messages remain encrypted and private instead of being visible to WhatsApp or Meta.
According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.
To activate the feature, users can tap a small pencil icon that appears while composing a message.
In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’
By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.
Google has warned some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.
According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.
Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.
The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co, Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.
The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.
Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.
Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.