India’s Gen Z founders go viral with AI and robotics ‘Hacker House’ in Bengaluru

A viral video has captured the imagination of tech enthusiasts by offering a rare look inside a ‘Hacker House’ in Bengaluru’s HSR Layout, where a group of Gen Z Indian founders are quietly shaping the future of AI and robotics.

Spearheaded by Localhost, the initiative provides young developers aged 16 to 22 with funding, workspace, and a collaborative environment to rapidly build real-world tech products — no media hype, just raw innovation.

The video, shared by Canadian entrepreneur Caleb Friesen, shows teenage coders intensely focused on their projects. From AI-powered noise-cancelling systems and assistive robots to innovative real estate and podcasting tools, each room in the shared house hums with creativity.

The youngest, 16-year-old Harish, stands out for his deep focus, while Suhas Sumukh, who leads the Bengaluru chapter, acts as both a guide and mentor.

Rather than pitch decks and polished PR, what resonated online was the authenticity and dedication. Caleb’s walk-through showed residents too engrossed in their work to acknowledge his arrival.

Viewers responded with admiration, calling it a rare glimpse into ‘the real future of Indian tech’. The video has since crossed 1.4 million views, sparking global curiosity.

At the heart of the movement is Localhost, founded by Kei Hayashi, which helps young developers build fast and learn faster.

As demand grows for similar hacker houses in Mumbai, Delhi, and Hyderabad, the initiative may start a new chapter for India’s startup ecosystem — fuelled by focus, snacks, and a poster of Steve Jobs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. They can restrict who sees AI prompts there, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI, aligned with the EU AI Act. The company supports customers through initiatives such as AI Service Cards, its Responsible AI Guide, and Amazon Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.

IBM combines watsonx and Guardium to tackle AI compliance

IBM has unveiled new software capabilities that integrate AI security and governance, claiming the industry’s first unified solution to manage the risks of agentic AI.

The enhancements merge IBM’s watsonx.governance platform—which supports oversight, transparency, and lifecycle management of AI systems—with Guardium AI Security, a tool built to protect AI models, data, and operational usage.

By unifying these tools, IBM’s solution offers enterprises the ability to oversee both governance and security across AI deployments from a single interface. It also supports compliance with 12 major frameworks, including the EU AI Act and ISO 42001.

The launch aims to address growing concerns around AI safety, regulation, and accountability as businesses scale AI-driven operations.

Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers, including Sharp Healthcare and Sharp Community Medical Group, who have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses and test results were exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

Google launches AI voice chat in Search app for Android and iOS

Google has started rolling out its new ‘Search Live in AI Mode’ for the Google app on Android and iOS, letting users hold seamless voice conversations with Search.

Currently available only in the US for those signed up to the AI Mode experiment in Labs, the feature was previewed at last month’s Google I/O conference.

The tool uses a specially adapted version of Google’s Gemini AI model, fine-tuned to deliver smarter voice interactions. It combines the model’s capabilities with Google Search’s information infrastructure to provide real-time spoken responses.

Using a technique called ‘query fan-out’, the system retrieves a wide range of web content, helping users discover more varied and relevant information.
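Google has not published implementation details for query fan-out, but the idea can be sketched as follows: expand one user query into several narrower subqueries, retrieve results for each, then merge them while removing duplicates. All function names and the toy index below are hypothetical, purely for illustration.

```python
# Illustrative sketch of a "query fan-out" retrieval step.
# Function names and the toy index are hypothetical; this is not
# how Google Search implements the technique internally.

def fan_out(query: str) -> list[str]:
    """Expand one user query into several narrower subqueries."""
    # A production system would use a query-rewriting model here.
    templates = ["{q}", "{q} tips", "{q} step by step", "best way to {q}"]
    return [t.format(q=query) for t in templates]

def retrieve(subquery: str, index: dict[str, list[str]]) -> list[str]:
    """Look up documents for a subquery in a toy in-memory index."""
    return index.get(subquery, [])

def fan_out_search(query: str, index: dict[str, list[str]]) -> list[str]:
    """Run every subquery, then merge results without duplicates."""
    seen, merged = set(), []
    for sq in fan_out(query):
        for doc in retrieve(sq, index):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

if __name__ == "__main__":
    toy_index = {
        "pack a suitcase": ["doc-rolling", "doc-folding"],
        "pack a suitcase tips": ["doc-folding", "doc-cubes"],
    }
    print(fan_out_search("pack a suitcase", toy_index))
    # Each document appears once, even when several subqueries return it.
```

The merge step is what surfaces the "more varied" results: documents matched by any subquery are included, while overlap between subqueries is collapsed.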

The new mode is particularly useful when multitasking or on the go. Users can tap a ‘Live’ icon in the Google app and ask spoken queries like how to keep clothes from wrinkling in a suitcase.

Follow-up questions are handled just as naturally, and related links are displayed on-screen, letting users read more without breaking their flow.

To use the feature, users can tap a sparkle-shaped waveform icon under the Search bar or next to the search field. Once activated, a full-screen interface appears with voice control options and a scrolling list of relevant links.

Even with the phone locked or other apps open, the feature keeps running. A mute button, transcript view, and voice style settings—named Cassini, Cosmo, Neso, and Terra—offer additional control over the experience.

UBS employee data leaked after Chain IQ ransomware attack

UBS Group AG has confirmed a serious data breach affecting around 130,000 of its employees, following a cyberattack on its third-party supplier, Chain IQ Group AG.

The exposed information included employee names, emails, phone numbers, roles, office locations, and preferred languages. No client data has been impacted, according to UBS.

Chain IQ, a procurement services firm spun off from UBS in 2013, was reportedly targeted by the cybercrime group World Leaks, previously known as Hunters International.

Unlike traditional ransomware operators, World Leaks avoids encryption and instead steals data, threatening public release if ransoms are not paid.

While Chain IQ has acknowledged the breach, it has not disclosed the extent of the stolen data or named all affected clients. Its customers include Swiss Life, AXA, FedEx, IBM, KPMG, Swisscom, and Pictet, though only Pictet has confirmed it was affected.

Cybersecurity experts warn that the breach may have long-term implications for the Swiss banking sector. Leaked employee data could be exploited for impersonation, fraud, phishing scams, or even blackmail.

The increasing availability of generative AI may further amplify the risks through voice and video impersonation, potentially aiding in money laundering and social engineering attacks.

Ryuk ransomware hacker extradited to US after arrest in Ukraine

A key member of the infamous Ryuk ransomware gang has been extradited to the US after his arrest in Kyiv, Ukraine.

The 33-year-old man was detained in April 2025 at the request of the FBI and arrived in the US on 18 June to face multiple charges.

The suspect played a critical role within Ryuk by gaining initial access to corporate networks, which he then passed on to accomplices who stole data and launched ransomware attacks.

Ukrainian authorities identified him during a larger investigation into ransomware groups like LockerGoga, Dharma, Hive, and MegaCortex that targeted companies across Europe and North America.

According to Ukraine’s National Police, forensic analysis confirmed that the man was responsible for identifying security flaws in enterprise networks.

Information gathered by the hacker allowed others in the gang to infiltrate systems, steal data, and deploy ransomware payloads that disrupted various industries, including healthcare, during the COVID pandemic.

Ryuk operated from 2018 until mid-2020 before rebranding as the notorious Conti gang, which later fractured into several smaller but still active groups. Researchers estimate that Ryuk alone collected over $150 million in ransom payments before shutting down.

AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore in 2024, while Google Play Protect stopped nearly 6 crore high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.
