According to research conducted by Proofpoint, the volume of mobile political spam ahead of the 2024 election has tripled compared to the 2022 midterms. The study indicates a growing trend among US voters to seek information through digital platforms, which can increase their vulnerability to cybercriminal activities.
With 60% of American adults favouring digital media for news consumption and 86% using smartphones, tablets, or computers, there is a notable reliance on digital channels. Nearly all US voters (97%) have access to mobile messaging services. Despite the widespread trust in mobile messaging, Proofpoint warns that the surge in smishing, impersonation, and unwanted spam messages is now eroding this confidence.
While many voters are cautious about fake news on social media, fewer recognise the significant risks associated with mobile messaging and email impersonation tactics. Notably, incidents of election-related smishing attacks have risen by over 7% in the past nine months compared to the previous period.
The increase in mobile political messaging, commonly used by campaigns and interest groups, has coincided with a rise in malicious activities. For instance, following former President Donald J. Trump’s guilty verdict in his ‘hush money’ trial, there was a notable 240% increase in unwanted political messaging within 48 hours, with reported volumes reaching tens of millions.
Why does it matter?
Proofpoint emphasised the importance of voters proactively defending themselves against impersonation attacks during this election season. They advise voters to be cautious with unsolicited messages, particularly those urging immediate action. The company also called on mobile operators to prioritise the protection of their users. Maintaining a healthy level of scepticism is crucial for all parties involved.
To mitigate the risks associated with malicious mobile messaging, voters are advised to refrain from opening attachments or clicking on links in such messages. Instead, they should type known URLs directly into their web browsers. Thoroughly scrutinising all election-related digital communications is essential to verify their authenticity.
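The caution above about links comes down to checking where a link actually points, not where it claims to point. A minimal sketch of that check, assuming a hypothetical allow-list of trusted domains (the domain names below are illustrative placeholders):

```python
from urllib.parse import urlparse

# Hypothetical allow-list of sites the user already knows and trusts.
# In practice, the safest habit is typing these addresses into the
# browser directly rather than following a link at all.
KNOWN_DOMAINS = {"vote.gov", "usa.gov"}

def is_known_domain(url: str) -> bool:
    """Return True only if the link's hostname exactly matches a trusted domain.

    An exact match matters: smishing links often bury a familiar name
    inside a longer, attacker-controlled hostname, so a substring or
    prefix check would be fooled.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in KNOWN_DOMAINS

# The familiar name here is only a subdomain of an unrelated host,
# so the check correctly rejects it.
print(is_known_domain("https://vote.gov.example.com/ballot"))  # False
print(is_known_domain("https://vote.gov/register"))            # True
```

The key design point is comparing the full hostname, since `"vote.gov" in url` would accept the deceptive first link.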
Last year, a cyberattack on Infosys McCamish Systems affected over six million customers, as revealed in a new filing with data protection authorities. The breach, first reported in February, was traced back to November 2023, with unauthorised activity occurring between 29 October and 2 November 2023.
The compromised data includes Social Security Numbers, birth dates, medical records, biometric data, email addresses, usernames and passwords, driver’s license or state ID numbers, financial account details, payment card information, passport numbers, tribal ID numbers, and US military ID numbers.
Infosys McCamish Systems, an outsourcing service provider for financial and insurance companies, began notifying affected customers on 27 June, several months after the incident. With the help of third-party eDiscovery experts, the company conducted a thorough review to identify the compromised personal information and its owners.
The company has informed impacted organisations and is offering 24 months of credit monitoring to affected individuals, although there is as yet no evidence of the stolen information being used fraudulently. The LockBit ransomware group is believed to be behind the attack, which encrypted over 2,000 computers. The stolen data is expected to be used for phishing and identity fraud.
Key internet technical bodies, including the Internet Engineering Task Force, World Wide Web Consortium, Internet Research Task Force, and the Internet Society’s Board of Trustees, have signed an open letter to the UN opposing centralised governance of the internet, which they argue is being proposed in the UN’s Global Digital Compact (GDC). The letter states that some of the proposals in the latest version of the GDC, released on 26 June 2024, can be interpreted as mandating more centralised internet governance, which the technical bodies believe would be detrimental to the internet and to global economies and societies.
The GDC aims to create international consensus on principles for an ‘inclusive, open, sustainable, fair, safe and secure digital future’. However, the technical bodies argue that the GDC is being developed through a multilateral process between states, with very limited engagement of the open, inclusive, and consensus-driven methods used to develop the internet and web to date.
Specifically, the GDC proposes the establishment of an international scientific panel on AI to conduct risk assessments, an office to facilitate follow-ups on the compact, and calls on the UN to play a key role in promoting cooperation and harmonisation of data governance initiatives. The technical bodies view these proposals as steps towards more centralised internet governance, which they believe would be detrimental.
The University Hospital Centre in Zagreb, Croatia, was hit by a cyberattack on 27 June, claimed by the LockBit ransomware group. The attack crippled the hospital’s networks, forcing emergency patients to be redirected to other facilities. Despite the disruption, hospital officials assured that patient safety was never compromised. Over 100 experts worked tirelessly to restore the IT systems, bringing the hospital back online within 24 hours.
LockBit, a Russian-affiliated ransomware group, posted on its dark leak site that it had stolen a large cache of sensitive data from the hospital in Croatia, including medical records and employee information. The hospital has not confirmed the specifics of the stolen data but has involved the authorities, and a criminal investigation is underway. LockBit, operating since 2019, has been linked to over 1,400 attacks globally and continues to evade law enforcement despite setbacks like the FBI and Interpol’s Operation Cronos.
The attack on KBC Zagreb coincided with multiple cyberattacks on Croatian government agencies by another Russian-linked group, NoName057(16). Known for targeting the critical infrastructure of nations supporting Ukraine, NoName denied responsibility for the hospital attack, emphasising their principle of not targeting medical facilities. NoName has been responsible for numerous cyberattacks across Europe, affecting several countries’ banking systems and critical infrastructure.
Wise, a well-known money transfer and fintech company, stated that the personal data of some customers had been compromised in the recent Evolve Bank and Trust data breach. There is uncertainty about the extent of the breach and its impact on third-party companies, their customers, and users, as an increasing number of companies have come forward in recent days to disclose that they have been affected.
In an official statement, Wise said it had worked with Evolve from 2020 to 2023 and had shared USD account details with the bank. This personal data included names, addresses, dates of birth, contact information, and Social Security numbers or Employer Identification Numbers. The statement suggests that due to the breach, there is a potential risk that customers’ personal information might be exposed. The extent of the impact on Wise customers remains undisclosed as the company continues its investigation, but the company assured that affected customers would be notified via email. Despite the breach at Evolve, Wise said its own systems remained secure and that customers could continue to access their accounts safely.
Evolve highlighted its ongoing efforts to address the cybersecurity incident following the ransomware attack by the LockBit cybercrime group, noting that data loss was limited and operational disruption minimal thanks to available backups. Evolve said it would individually notify all persons affected by the breach. Affirm, EarnIn, Marqeta, Melio, and Mercury, among other Evolve partners, are investigating the impact on their customers.
The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology after a legal settlement was reached with Robert Williams, who was wrongfully arrested based on the technology in 2020. Williams was detained for over 30 hours after facial recognition software matched him to surveillance footage of another Black man stealing watches. With the support of the American Civil Liberties Union of Michigan, he submitted a complaint in 2020 and then sued in 2021.
So far, Detroit police are responsible for three of the seven reported instances when the use of facial recognition has led to a wrongful arrest. Detroit’s police chief, James White, has blamed ‘human error’, and not the software, saying his officers relied too much on the technology.
What does this change concretely?
To combat human error, Detroit police officers will now be trained in the risks of facial recognition in policing. Another change requires that suspects identified by the technology be linked to the crime by other evidence before their photos can be used in lineups. Along with other policy changes, the police department will have to launch an audit of facial recognition searches conducted since 2017, when it first started using the technology.
Despite these incidents, police say facial recognition technology is too useful a tool to be abandoned entirely. According to the head of informatics with Detroit’s crime intelligence unit, Stephen Lamoreaux, the Police Department remains ‘very keen to use technology in a meaningful way for public safety.’ However, some cities like San Francisco have banned its use because of concerns about privacy and racial bias. Microsoft has also said it would not provide its facial recognition software to US police until a national framework grounded in human rights governs the technology’s use.
Meta asserts that its model complies with a ruling from the EU’s top court and is aligned with the DMA, expressing a willingness to engage with the Commission to resolve the issue. However, if found guilty, Meta could face fines of up to 10% of its global annual turnover. The Commission aims to conclude its investigation by March next year.
The charge follows a recent DMA-related charge against Apple for similar non-compliance, highlighting the EU’s efforts to regulate Big Tech and empower users to control their data.
Cambodia recently launched its messaging app, CoolApp, which is supported by former Prime Minister Hun Sen. He has emphasised that the app is crucial for national security, aiming to protect Cambodian information from foreign interference. Hun Sen’s endorsement of CoolApp aligns with his long-standing approach of maintaining tight control over the country’s communication channels, especially in the face of external influences. He compared the app to other national messaging services like China’s WeChat and Russia’s Telegram, indicating a desire for Cambodia to have a secure, homegrown platform.
However, the introduction of CoolApp has raised significant concerns among critics and opposition leaders. They argue that the app could be a tool for government surveillance, potentially used to monitor and suppress political discourse. Mu Sochua, an exiled opposition leader, warned that CoolApp represents a new method for mass surveillance and control of public discourse, reminiscent of practices seen in China. Another opposition figure, Sam Rainsy, called for a boycott of the app, suggesting that its true purpose is to strengthen the repressive tools available to the Cambodian regime. These concerns are amplified by Cambodia’s recent history of internet censorship, media blackouts, and persecution of government critics.
CoolApp’s founder and CEO, Lim Cheavutha, claims the app uses end-to-end encryption to ensure user privacy and has reached 150,000 downloads, with expectations to reach up to 1 million. However, these assurances do little to alleviate fears of government surveillance, given Cambodia’s history of using technology to control dissent.
The app’s launch comes amid broader security challenges in Cambodia, including online scams by Chinese gangs and close ties with China’s surveillance-heavy regime. This situation highlights the ongoing tension between Cambodia’s national security priorities and civil liberties.
Audi will integrate ChatGPT into its vehicles’ infotainment systems starting July, leveraging Microsoft Azure OpenAI Service. This integration will cover approximately two million Audi models equipped with the MIB 3 system since 2021. Drivers can interact with their cars using natural language, benefiting from voice control over infotainment, navigation, and climate systems, alongside accessing general knowledge.
Marcus Keith, Audi’s Vice President of Interior, Infotainment, and Connectivity Development, highlighted the seamless merging of ChatGPT’s capabilities with Audi’s voice control, promising customers an enhanced in-cabin experience with secure AI-based knowledge access.
.@Audi is taking in-car technology to the next level with ChatGPT, powered by @Microsoft Azure OpenAI Service. Come July, nearly two million Audi owners will reap the benefits of enhanced communication with their vehicles. https://t.co/PMyLkvNlr1 pic.twitter.com/CpPLXfV9tl
— Microsoft News and Stories (@MSFTnews) June 27, 2024
This move follows Mercedes-Benz’s introduction of ChatGPT into its MBUX Voice Assistant in 2023, expanding AI usage across its US vehicle lineup. Volkswagen Group also showcased Cerence Inc.’s Chat Pro at CES 2024, extending AI integration via cloud updates in European models. Similarly, Škoda Auto announced ChatGPT integration into its Laura voice assistant for selected vehicle platforms, prioritising data security alongside enhanced AI functionalities.
Why does it matter?
These developments underscore the automotive industry’s commitment to integrating advanced AI technologies into vehicles, aiming to elevate user experience through intuitive and informative in-car interactions.
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
OpenAI has launched CriticGPT, a new model based on GPT-4, designed to identify and critique errors in ChatGPT’s outputs. The tool aims to enhance human trainers’ effectiveness by assisting them in providing feedback on the chatbot’s performance.
Similar to ChatGPT’s training process, CriticGPT learns through human feedback, focusing on identifying intentionally inserted errors in ChatGPT’s code outputs. Evaluations showed that CriticGPT’s critiques were preferred over ChatGPT’s in 63% of cases involving naturally occurring bugs, highlighting its ability to minimise irrelevant feedback.
OpenAI plans to further develop CriticGPT’s capabilities, aiming to integrate advanced methods to improve human-generated feedback for GPT-4. The initiative underscores the ongoing role of human oversight in refining AI technologies despite their increasing automation capabilities.