Oakley unveils smart glasses featuring Meta technology

Meta has partnered with Oakley to launch a new line of smart glasses designed for active lifestyles. The flagship model, Oakley Meta HSTN, will be available for preorder from 11 July for $499.

Additional Oakley models featuring Meta’s innovative technology are set to launch later in the summer, starting at $399.

https://twitter.com/1Kapisch/status/1936045567626617315

The glasses include a front-facing camera, open-ear speakers, and microphones embedded in the frame, much like the Meta Ray-Bans. When paired with a smartphone, users can listen to music, take calls, and interact with Meta AI.

With built-in cameras and microphones, Meta AI can also describe surroundings, answer visual questions, and translate languages.

With their sleek, sports-ready design and IPX4 water resistance, the glasses are geared toward athletes. They offer 8 hours of battery life—twice that of the Meta Ray-Bans—and come with a charging case that extends usage to 48 hours. Video capture quality has also improved, now supporting 3K resolution.


Customers can choose from five frame and lens combinations, with prescription lenses available at an added cost. Colours include warm grey, black, brown smoke, and clear, while lens options include Oakley’s PRIZM and Transitions lenses.

The $499 limited-edition version features gold accents and gold PRIZM lenses. Sales will cover major markets across North America, Europe, and Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!


Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There they can restrict who sees AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.


North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.


Massive data leak exposes 16 billion login credentials from Google, Facebook, and more

One of the largest-ever leaks of stolen login data has come to light, exposing more than 16 billion records across widely used services, including Facebook, Google, Telegram, and GitHub. The breach, uncovered by researchers at Cybernews, highlights a growing threat to individuals and organisations.

The exposed data reportedly originated from info stealer malware, previous leaks, and credential-stuffing tools. A total of 30 separate datasets were identified, some containing over 3.5 billion entries.

These were briefly available online due to unsecured cloud storage before being removed. Despite the swift takedown, the data had already been collected and analysed.

Experts have warned that the breach could lead to identity theft, phishing, and account takeovers. Smaller websites and users with poor cybersecurity practices are especially vulnerable. Many users continue to reuse passwords or minor variations of them, increasing the risk of exploitation.

While the leak is severe, users employing two-factor authentication (2FA), password managers, or passkeys are less likely to be affected.
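As an aside not drawn from the report: the standard safe way to check whether a password has surfaced in a known breach is a k-anonymity lookup, the scheme used by the public Pwned Passwords service. Only the first five characters of the password’s SHA-1 digest are sent to the server; matching against the returned candidate suffixes happens locally. A minimal sketch of the client-side split:

```python
import hashlib

def k_anonymity_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-character prefix that
    would be sent to a breach-range API and the suffix kept for local
    matching against the API's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# Only `prefix` ever leaves the machine; the service returns every known
# breached suffix for that prefix, and the client checks `suffix` locally.
prefix, suffix = k_anonymity_parts("correct horse battery staple")
```

The network call is deliberately omitted here; the point is that neither the password nor its full hash needs to leave the device.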

Passkeys, increasingly adopted by companies like Google and Apple, offer a phishing-resistant login method that bypasses the need for passwords altogether.


Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers, including Sharp Healthcare and Sharp Community Medical Group, who have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses and test results were exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.


UBS employee data leaked after Chain IQ ransomware attack

UBS Group AG has confirmed a serious data breach affecting around 130,000 of its employees, following a cyberattack on its third-party supplier, Chain IQ Group AG.

The exposed information included employee names, emails, phone numbers, roles, office locations, and preferred languages. No client data has been impacted, according to UBS.

Chain IQ, a procurement services firm spun off from UBS in 2013, was reportedly targeted by the cybercrime group World Leaks, previously known as Hunters International.

Unlike traditional ransomware operators, World Leaks avoids encryption and instead steals data, threatening public release if ransoms are not paid.

While Chain IQ has acknowledged the breach, it has not disclosed the extent of the stolen data or named all affected clients. Notably, companies such as Swiss Life, AXA, FedEx, IBM, KPMG, Swisscom, and Pictet are among its clients—only Pictet has confirmed it was impacted.

Cybersecurity experts warn that the breach may have long-term implications for the Swiss banking sector. Leaked employee data could be exploited for impersonation, fraud, phishing scams, or even blackmail.

The increasing availability of generative AI may further amplify the risks through voice and video impersonation, potentially aiding in money laundering and social engineering attacks.


Amazon restructures around AI, cuts expected

Amazon CEO Andy Jassy has signalled that more job cuts are likely as the company embraces AI to streamline its operations. In a letter to staff, he said the adoption of generative AI is driving major shifts in roles, especially within corporate functions.

Jassy described generative AI as a once-in-a-lifetime technology and highlighted its growing role across Amazon services, including Alexa+, shopping tools and logistics. He pointed to smarter assistants and improved fulfilment systems as early benefits of AI investments.

While praising the efficiency gains AI delivers, Jassy admitted some roles will no longer be needed, and others will be redefined. The long-term outcome remains uncertain, but fewer corporate roles are expected as AI adoption continues.

He encouraged staff to embrace the technology by learning, experimenting and contributing to AI-related innovations. Workshops and team brainstorming were recommended as Amazon looks to reinvent itself with leaner, more agile teams.


ChatGPT loses chess match to Atari 2600

ChatGPT was trounced in a chess match by a 1979 video game running on an Atari 2600 emulator. Citrix engineer Robert Caruso set up the match using Video Chess to test how the AI would perform against vintage gaming software.

The result was unexpectedly lopsided. ChatGPT confused rooks for bishops, forgot piece positions and made repeated beginner mistakes, eventually asking for the match to be restarted. Even when standard chess notation was used, its performance failed to improve.

Caruso described the 90-minute session as full of basic blunders, saying the AI would have been laughed out of a primary school chess club. His post highlighted the limitations of ChatGPT’s architecture, which is built for language understanding, not strategic board gameplay.

While the experiment doesn’t mean ChatGPT is entirely useless at chess, it suggests users are better off discussing the game with the bot than challenging it. OpenAI has not yet responded to the light-hearted but telling critique.


China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, accounting for 30 percent of the global total.
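The headline growth rates are consistent with the start and end market sizes quoted above; a quick arithmetic check (ours, not Morgan Stanley’s) of the implied compound annual growth rates:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

overall = cagr(47, 108, 4)    # US$47bn (2024) -> US$108bn (2028): ~23 percent
humanoid = cagr(0.3, 3.4, 5)  # US$0.3bn (2025) -> US$3.4bn (2030): ~63 percent
```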

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality rather than simply replacing workers, and setting the stage for a brighter industrial future.


Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new laws such as the Take It Down Act and Florida’s Brooke’s Law require platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still make no mention of synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.
