Perplexity has vowed to contest the copyright infringement claims filed by Dow Jones and the New York Post. The California-based AI company denied the accusations in a blog post, calling them misleading. News Corp, owner of both media entities, launched the lawsuit on Monday, accusing Perplexity of extensive illegal copying of its content.
The conflict began after the two publishers allegedly contacted Perplexity in July with concerns over unauthorised use of their work, proposing a licensing agreement. According to Perplexity, the startup replied the same day, but the media companies decided to move forward with legal action instead of continuing discussions.
CEO Aravind Srinivas expressed his surprise over the lawsuit at the WSJ Tech Live event on Wednesday, noting the company had hoped for dialogue instead. He emphasised Perplexity’s commitment to defending itself against what it considers an unwarranted attack.
Perplexity is challenging Google’s dominance in the search engine market by providing summarised information from trusted sources directly through its platform. The case reflects ongoing tensions between publishers and tech firms over the use of copyrighted content for AI development.
An Indian court has instructed insurer Star Health to assist Telegram in identifying chatbots responsible for leaking sensitive customer data through the messaging app. Star Health, the country’s largest health insurer, sought the directive after a report revealed that a hacker had leaked private information, including medical and tax documents, via Telegram chatbots.
Justice K Kumaresh Babu of the Madras High Court ordered Star Health to provide details on the chatbots so Telegram could delete them. Telegram’s legal representative, Thriyambak Kannan, stated that while the app can’t independently track data leaks, it will remove the chatbots if the insurer supplies specific information.
Star Health is facing a $68,000 ransom demand and has launched an investigation into the leak, including allegations that its chief security officer was involved. So far, however, the insurer has found no evidence implicating the officer.
A cyberattack on Change Healthcare, the tech unit of UnitedHealth, exposed the personal information of 100 million people. The breach, reported in February, is now officially recognised as the largest healthcare data breach in US history. The hackers, identified as the ALPHV ransomware group, disrupted claims processing, affecting patients and providers nationwide.
UnitedHealth started notifying affected individuals in June, warning that the breach may have compromised member IDs, diagnoses, treatment data, social security numbers, and billing codes. The company is still investigating the full impact and working to contact those affected promptly.
The hack surpasses the 2015 breach at health insurer Anthem, previously the sector’s largest, which compromised nearly 79 million records. UnitedHealth’s business is forecast to take a hit of $705 million this year due to payment disruptions and customer notifications.
The US healthcare giant provided loans to help providers cope with financial strain caused by the incident. Despite ongoing recovery efforts, the breach continues to highlight the sector’s vulnerabilities to ransomware attacks.
LinkedIn has been fined 310 million euros by European Union regulators for breaching the bloc’s strict data privacy rules. The penalty targets the Microsoft-owned platform for improperly using personal data to target users with ads.
Ireland’s Data Protection Commission (DPC) issued the fine, criticising LinkedIn for failing to handle user data lawfully, fairly, and transparently. As LinkedIn’s European headquarters is in Dublin, the DPC acts as the platform’s lead privacy regulator across the EU.
The investigation found LinkedIn lacked a lawful basis to collect personal information for advertising, violating the General Data Protection Regulation (GDPR). Regulators have ordered the company to align its practices with GDPR standards.
LinkedIn maintains it was operating within the rules but confirmed it is adjusting its advertising practices to meet compliance requirements. Deputy Commissioner Graham Doyle stressed that processing data without legal grounds undermines the fundamental right to privacy.
The United States Justice Department introduced new rules on Monday to safeguard federal and personal data from foreign adversaries such as China, Russia, and Iran. The regulations aim to limit certain business transactions that could transfer sensitive American data to these countries.
The proposal implements an executive order from President Biden and seeks to prevent the misuse of American financial, health, and genomic data by foreign governments for purposes like espionage and cyber attacks. Countries such as Venezuela, Cuba, and North Korea are also included in the list of nations targeted by the rule.
Among the data types restricted from transfer are human genomic data on more than 100 individuals, and financial or health data on over 10,000 people. Geolocation data on more than 1,000 US devices will also be restricted under the new rule.
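The bulk thresholds above lend themselves to a simple illustration. The sketch below is not drawn from the rule’s actual text; all names and structures are hypothetical, and it merely encodes the three thresholds as reported:

```python
# Illustrative sketch of the bulk-data thresholds described above.
# Threshold values mirror the reporting; all names are hypothetical.
BULK_THRESHOLDS = {
    "human_genomic": 100,   # genomic data on more than 100 individuals
    "financial": 10_000,    # financial data on more than 10,000 people
    "health": 10_000,       # health data on more than 10,000 people
    "geolocation": 1_000,   # location data on more than 1,000 US devices
}

def transfer_restricted(data_type: str, record_count: int) -> bool:
    """Return True if a proposed transfer exceeds the bulk threshold."""
    threshold = BULK_THRESHOLDS.get(data_type)
    if threshold is None:
        return False  # data type not covered by the rule
    return record_count > threshold

print(transfer_restricted("human_genomic", 150))  # True
print(transfer_restricted("health", 5_000))       # False
```

Note that the thresholds are exceeded strictly: geolocation data on exactly 1,000 devices would fall under the limit, while 1,001 would not.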
The Justice Department plans to enforce compliance through both civil and criminal penalties. Apps like TikTok could potentially violate the new regulations if they transfer sensitive data to their Chinese parent companies.
Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.
Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.
Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational rather than legally binding. The dismissal, issued with prejudice, prevents Eisner from filing the same case again.
Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.
A Londoner who had his phone stolen while walking near the Science Museum believes Google’s new AI security update would have made a big difference. Tyler, whose phone was snatched by a thief on a bike, struggled to lock it remotely as he couldn’t remember his password. The update, which uses AI and sensors to detect when a phone is stolen, would automatically lock the screen to prevent thieves from accessing data.
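Google has not published how its detection model works; purely as an illustration of the sensor-based idea, a toy heuristic might flag a sharp jerk followed by fast sustained movement. All names and thresholds below are invented:

```python
# Toy heuristic loosely inspired by the theft-detection feature described
# above. Google's actual system is an unpublished ML model; the thresholds
# and class names here are invented purely for illustration.
from dataclasses import dataclass

SNATCH_SPIKE_G = 3.0    # sudden jerk, in g-force (hypothetical threshold)
ESCAPE_SPEED_MS = 4.0   # sustained speed after the jerk, in m/s (hypothetical)

@dataclass
class SensorWindow:
    peak_acceleration_g: float  # strongest acceleration spike in the window
    mean_speed_ms: float        # average speed after the spike

def looks_like_theft(window: SensorWindow) -> bool:
    """Flag a snatch: a sharp jerk followed by fast sustained movement."""
    return (window.peak_acceleration_g >= SNATCH_SPIKE_G
            and window.mean_speed_ms >= ESCAPE_SPEED_MS)

def on_sensor_window(window: SensorWindow) -> str:
    return "LOCK_SCREEN" if looks_like_theft(window) else "NO_ACTION"

print(on_sensor_window(SensorWindow(5.2, 6.1)))  # LOCK_SCREEN
print(on_sensor_window(SensorWindow(1.1, 0.5)))  # NO_ACTION
```

Requiring both signals together is the point: a dropped phone produces a spike without sustained movement, and a jog produces movement without a spike, so neither alone triggers the lock.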
Google’s new feature allows users to remotely lock a stolen device using just their phone number, a measure welcomed by Tyler as he believes it would have helped him secure his device in moments of panic. The initiative is part of a broader effort to combat phone theft, with mobile phones now accounting for 69% of all thefts in London. Last year, over 11,800 robberies involved phone thefts.
Sadiq Khan, the Mayor of London, also supports the update, having previously lobbied phone companies to make their devices less attractive to criminals. Tech experts say the update’s AI-driven security, combined with the Offline Device Lock feature, will make it harder for thieves to access stolen phones.
Tyler hopes the new technology will deter criminals from stealing phones altogether, as locked devices would become worthless. Without resale value, he believes, phone theft will become a waste of time for criminals.
AI tools were introduced at Everest PR to streamline tasks, but the results were not as expected. Founder Anurag Garg noticed that instead of boosting efficiency, the technology created additional stress. His team reported that using AI tools like ChatGPT was time-consuming and added new complexities, leading to frustration and burnout.
Garg’s team struggled to keep up with frequent software updates and found that juggling multiple AI platforms made their work harder. The sentiment is echoed in surveys showing many workers feel AI tools increase their workloads rather than reduce them: one study found that 61% of workers believe AI will raise their chances of burnout, a figure that climbs to 87% among younger workers.
Even legal professionals are feeling overwhelmed by AI’s impact on their workloads. Leah Steele, a coach for lawyers, explained that tech-driven environments often lead to reduced job satisfaction and fear of redundancy. The Law Society also highlights the challenges of implementing AI, emphasising that learning new tools requires time and effort, which can add pressure rather than alleviate it.
While some argue that AI can empower small firms by enhancing productivity, others stress the need for proper usage to prevent overwhelm. Garg has now reduced his team’s reliance on AI, finding that a more selective approach has improved employee well-being and reconnected them with their work.
Zoom has announced a partnership with Suki, a leading AI medical scribe provider, to offer doctors on its platform an AI-powered tool that automates note-taking during telehealth visits. With Zoom accounting for over a third of telehealth appointments in the US, this move aims to help clinicians reduce time spent on paperwork, improving efficiency during virtual consultations.
The partnership marks Zoom’s shift from being solely a video-conferencing company to integrating AI tools designed for workplace efficiency, a vision championed by its CEO, Eric Yuan. Suki was selected after Zoom evaluated several other AI medical scribe startups, a further boost for the company after it raised $70M in funding earlier this month.
This development highlights a broader trend in healthcare, with companies like Amazon’s One Medical and Microsoft’s Nuance also leveraging AI for medical note-taking, helping providers manage documentation more effectively. Despite growing competition, investors believe there is still room for specialised AI solutions in both large healthcare systems and smaller medical practices.
Nexus and Utimaco have joined forces to enhance security for mobile identities, IoT devices, and critical infrastructure. The strategic partnership reflects a commitment to addressing escalating cybersecurity threats, especially as organisations increasingly adopt mobile-first environments and connected devices.
At the core of this collaboration are integrated security solutions that combine Nexus’ Public Key Infrastructure (PKI) platform with Utimaco’s Hardware Security Module (HSM) and encryption technologies. These capabilities enable organisations to issue PKI-based mobile identities for secure, passwordless access and authentication, while also allowing manufacturers to assign trusted identities to IoT devices during production.
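As a rough illustration of what issuing a PKI-based device identity involves, the sketch below uses the open-source Python `cryptography` library, not Nexus or Utimaco tooling; in the setup described above, the CA key would be generated and kept inside the HSM rather than in application memory:

```python
# Illustrative sketch of PKI-based device identity issuance, as described
# above. Uses the open-source `cryptography` library; in production the CA
# private key would live inside an HSM, never in process memory.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Factory issuing CA key pair (held in the HSM in a real deployment).
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Factory Issuing CA")])

# Each device gets its own key pair; the certificate binds its identity.
device_key = ec.generate_private_key(ec.SECP256R1())
device_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-0001")])

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(device_name)
    .issuer_name(ca_name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365 * 10))
    .sign(ca_key, hashes.SHA256())  # in practice, signing happens in the HSM
)

print(cert.subject.rfc4514_string())  # CN=device-0001
```

Because the device’s private key never leaves the device and the CA’s never leaves the HSM, a verifier can later authenticate the device by checking the certificate chain, which is what makes counterfeit hardware detectable.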
Furthermore, the solutions support compliance with regulations such as VS-NfD and the EU Cyber Resilience Act (CRA), ensuring that sensitive information is protected and mitigating risks associated with counterfeit products and unauthorised access. A practical application of these integrated solutions is already evident in a major European telecommunications provider, which has successfully secured the provisioning and communication of its IoT devices, significantly reducing risks and maintaining regulatory compliance.
The partnership represents a proactive approach to cybersecurity, giving organisations the tools to navigate the complexities of digital identity management and the secure deployment of connected devices. By combining their expertise, Nexus and Utimaco aim to deliver robust solutions that strengthen security without sacrificing user convenience, paving the way for a more secure digital landscape as threats evolve.