A lawsuit filed by Elon Musk’s company X against Media Matters, scheduled for trial in April 2025, marks the latest development in a contentious legal battle. The US District Court for the Northern District of Texas set this date following allegations from X that Media Matters misrepresented the prevalence of hate speech on social media platforms, specifically targeting content on X’s platform.
Media Matters, a nonprofit watchdog group, has been accused by X of distorting data and exaggerating the likelihood of encountering extremist content. X claims that Media Matters’ methodology does not accurately reflect real user experiences, prompting a legal challenge that has garnered significant attention.
In response to Thursday’s court order, neither X nor Media Matters provided immediate comments. However, Media Matters President Angelo Carusone previously denounced the lawsuit as baseless and an attempt to stifle criticism of Elon Musk. Motions for summary judgment are expected by December, with a decision potentially influencing the case’s outcome before it reaches trial.
The lawsuit is part of a broader pattern for Musk, who has faced legal setbacks in similar cases aiming to challenge watchdog groups. Earlier this year, a federal judge in California dismissed a lawsuit by X against the Center for Countering Digital Hate, criticising it as retaliatory rather than protective of platform integrity. The outcome of these legal battles could affect how social media platforms and watchdog organisations navigate issues of content moderation and free speech moving forward.
Microsoft has decided to delay the rollout of its AI-powered ‘Recall’ feature, which tracks and stores computer usage histories, citing privacy concerns. Initially planned to launch widely with new Copilot+ PCs on 18 June, Recall will instead undergo a preview phase within the Windows Insider Program (WIP) in the coming weeks.
The Recall feature, designed to record everything from web browsing to voice chats for later retrieval, aims to help users remember past activities even months later. Microsoft emphasised that the delay is part of their commitment to ensuring a trusted and secure customer experience, seeking additional feedback before a broader release.
Copilot+ PCs, introduced in May, integrate AI capabilities and were set to include Recall as a key feature. The WIP, which allows enthusiastic users to test upcoming Windows features, will play a crucial role in gathering feedback on Recall before its eventual wider availability.
Privacy concerns surfaced swiftly after Recall’s announcement, with critics suggesting potential misuse for surveillance purposes. Elon Musk likened the feature to a scenario from the dystopian TV series ‘Black Mirror’, reflecting broader anxieties about the implications of pervasive technology on personal privacy and security.
OpenAI has announced the appointment of retired US Army General Paul M. Nakasone, former head of the National Security Agency (NSA), to its board of directors. Nakasone, who led the NSA from 2018 until earlier this year, will join OpenAI’s Safety and Security Committee. This committee, prioritised by CEO Sam Altman, focuses on enhancing the company’s understanding of how AI can be leveraged to improve cybersecurity by swiftly identifying and countering threats.
The addition of Nakasone follows notable departures from OpenAI related to safety concerns, including co-founder Ilya Sutskever and Jan Leike. Sutskever was involved in the controversial firing and reinstatement of CEO Sam Altman, while Leike has publicly criticised the company’s current focus on product development over safety measures.
OpenAI board chair Bret Taylor emphasised the importance of securely developing and deploying AI to realise its potential benefits for humanity. He highlighted Nakasone’s extensive experience in cybersecurity as a valuable asset in guiding the organisation toward this goal.
The current OpenAI board comprises Nakasone, Altman, Adam D’Angelo, Larry Summers, Bret Taylor, Dr Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo, with Microsoft’s Dee Templeton holding a non-voting observer position.
Lawmakers criticised Microsoft for failing to prevent cyberattacks that exposed federal networks to significant risk, questioning the company’s president, Brad Smith, at a congressional hearing. They highlighted a report by the Cyber Safety Review Board (CSRB) that condemned Microsoft for a lack of transparency regarding the China hack, labelling it preventable. Smith acknowledged the report’s findings and stated that Microsoft had acted on most of its recommendations. He emphasised the growing threat posed by nations such as China, Russia, North Korea, and Iran, which are increasingly sophisticated and aggressive in their cyberattacks.
During the hearing, Smith defended Microsoft’s role, saying that the US State Department’s discovery of the hack demonstrated the collaborative nature of cybersecurity. However, Congressman Bennie Thompson expressed dissatisfaction, stressing that Microsoft itself is responsible for detecting such breaches. Panel members also inquired about Microsoft’s operations in China, given its substantial investments there. Smith noted that the company earns around 1.5% of its revenue from China and is working to reduce its engineering presence in the country.
Despite facing significant criticism over the past year, some panel members, including Republican Congresswoman Marjorie Taylor Greene, commended Smith for accepting responsibility. In response to the CSRB’s findings, Microsoft has pledged to prioritise security above all else, launching a new cybersecurity initiative in November to bolster its defences and ensure greater transparency moving forward.
Japan has passed a new law requiring tech giants like Google and Apple to allow access to third-party smartphone apps and payment systems on their platforms, threatening substantial fines for non-compliance. Like the EU’s Digital Markets Act, this legislation mandates fair access to operating systems, browsers, and search engines, with fines reaching up to 30% of revenue for continued anti-competitive behaviour.
The law was approved by Japan’s National Diet without amendments and aims to align Japan’s digital market regulations with those of the United States and Europe. The move is intended to foster fair competition and improve the competitive environment for software such as app stores, while ensuring consumer security. The law is set to take effect by the end of 2025.
Japan’s Fair Trade Commission highlighted the necessity for this new legal framework to address the dominance of major tech companies. Although the law does not explicitly name companies, it targets those like Google and Apple, often seen as a ‘duopoly’ in the smartphone app market. The EU’s similar regulatory efforts, particularly the Digital Markets Act, have faced criticism from Apple regarding potential risks to user privacy and security.
Apple has declined to award a bug bounty to the cybersecurity company Kaspersky after it disclosed four zero-day vulnerabilities in iPhone software. These vulnerabilities were reportedly exploited to spy on Kaspersky employees and Russian diplomats. A Kaspersky spokesperson said its research team believed the findings were eligible for Apple’s Bug Bounty rewards; however, upon inquiry, Apple’s security team declined, citing company policy.
Bug bounties serve as incentives for researchers to disclose vulnerabilities to companies, rather than selling them to malicious actors. Kaspersky’s disclosure last year revealed a highly sophisticated spying campaign dubbed ‘Operation Triangulation.’ Eugene Kaspersky, the company’s CEO, described it as ‘an extremely complex, professionally targeted cyberattack’ affecting several dozen iPhones of top and middle-management employees.
The campaign, suspected to be state-sponsored due to its sophistication and intelligence-focused targeting, chained the four zero-day vulnerabilities in its attack. Around the same time, Russia’s Federal Security Service (FSB) accused the United States and Apple of collaborating to spy on Russian diplomats.
The FSB’s allegations aligned with a claim by Russia’s computer security agency that both campaigns shared the same indicators of compromise. A particular concern was a vulnerability known as CVE-2023-38606, which affected an obscure hardware feature not used by iOS firmware; Kaspersky suggested the feature may have been intended for debugging or testing purposes. Apple denied collaborating with any government to insert backdoors into its products, emphasising its commitment to user privacy and security.
Japanese Prime Minister Fumio Kishida has directed his government to expedite the drafting of legislation to establish an active cyber defense system, enabling pre-emptive measures against cyberattacks. Addressing the inaugural meeting of an expert panel convened at the prime minister’s office, Kishida emphasised the pressing need to bolster the country’s cyber response capabilities.
The government of Japan aims to present the proposed legislation during the upcoming extraordinary parliamentary session scheduled for autumn. During the meeting, Digital Transformation Minister Taro Kono outlined three critical areas for discussion – enhancing information sharing between the public and private sectors, identifying servers involved in cyberattacks, and determining the extent of governmental authority.
Kono urged the panel of 17 experts, including cybersecurity specialists and lawyers, to provide progress reports on these issues within the coming months, underlining the urgency of addressing cybersecurity challenges. He also highlighted the importance of establishing a system on par with those of the United States and European nations, while safeguarding the rights and interests of the people.
Amidst an onslaught of cybersecurity incidents affecting hospitals in the United States, Microsoft and Google have pledged their support to help operators better safeguard patient data. The ongoing risks have sent hospital staff back to the dark ages of paper filing of patient records, and this regression has begun to affect staff’s ability to manage other aspects of patient care, given the exponential increase in data collected and stored by the healthcare sector over the last five years.
The sector collects data at a rate that outpaces many top-performing industries, including manufacturing, finance, and media and entertainment, and healthcare data is expected to grow at a compound annual growth rate of roughly 36% through 2025. Nurses at the recently hit Ascension network of hospitals have bemoaned the heightened insecurity that cyberthreats now bring; in 2023, the sector faced a 128% increase in such threats. In an effort to address the problem facing some 1,800 community hospitals in the US, White House National Security Council officials have solicited the support of the two top tech firms.
Other measures the current administration has proposed include minimum cybersecurity requirements for hospitals, a stance some claim could further penalise patients.
Why does it matter?
Cyberattacks on hospitals not only threaten patients and staff at those hospitals but also ripple across the entire economy. It is estimated that about one-third of Americans have had their data stolen, and given the trend of hospitals paying ransoms, hackers are becoming increasingly emboldened.
Apple is integrating OpenAI’s ChatGPT into Siri, as announced at its WWDC 2024 keynote. The partnership will allow iOS 18 and macOS Sequoia users to access ChatGPT for free, with privacy measures ensuring that queries aren’t logged. Additionally, paid ChatGPT subscribers can link their accounts to access premium features on Apple devices.
Apple had been negotiating with Google and OpenAI to enhance its AI capabilities, ultimately partnering with OpenAI. The enhanced feature will utilise OpenAI’s GPT-4o model, which will power ChatGPT in Apple’s upcoming operating systems.
OpenAI CEO Sam Altman expressed enthusiasm for the partnership, highlighting shared commitments to safety and innovation. However, Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk labelled the move an ‘unacceptable security violation’ and stated that visitors would be required to leave their Apple devices in a Faraday cage at the entrance to his facilities.
Why does it matter?
The partnership aims to significantly enhance Siri’s capabilities with advanced AI features. ChatGPT will be seamlessly integrated into Apple’s systemwide writing tools, enriching the user experience across Apple devices.
Central to this integration is a robust consent mechanism that requires users’ permission before sending any questions, documents, or photos to ChatGPT. Siri will present the responses directly, emphasising Apple’s commitment to user privacy and transparent data handling practices.
Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced on Monday that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk called the move an ‘unacceptable security violation’ and declared that visitors would have to leave their Apple devices in a Faraday cage at the entrance to his facilities.
The statement followed Apple’s announcement of new AI features across its apps and operating platforms, including a partnership with OpenAI to incorporate ChatGPT technology into its devices. Apple emphasised that these AI features are designed with privacy at their core, using both on-device processing and cloud computing to ensure data security. Musk, however, expressed scepticism, arguing that Apple’s reliance on OpenAI undermines its ability to protect user privacy and security effectively.
‘If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,’ Musk wrote on X.
Industry experts, such as Ben Bajarin, CEO of Creative Strategies, believe that Musk’s stance is unlikely to gain widespread support. Bajarin noted that Apple aims to reassure users that its private cloud services are as secure as on-device data storage. He explained that Apple anonymises and firewalls user data, ensuring that Apple itself does not access it.
Musk’s criticism of OpenAI is not new; he co-founded the organisation in 2015 but sued it earlier this year, alleging it strayed from its mission to develop AI for the benefit of humanity. Musk has since launched his own AI startup, xAI, valued at $24 billion after a recent funding round, to compete directly with OpenAI and develop alternatives to its popular ChatGPT.