Hong Kong restricts apps like WhatsApp and WeChat for civil servants

The Hong Kong government has banned most civil servants from using popular apps, including WhatsApp, WeChat, and Google Drive, on work computers to reduce security risks. The Digital Policy Office’s updated IT security guidelines still allow government workers to access these services on personal devices at work, and managers can grant exceptions to the ban if required.

Cybersecurity experts support the policy, pointing to similar restrictions imposed by other governments, including the United States and China, amid increasing concerns over data leaks and hacking threats. Sun Dong, Secretary for Innovation, Technology and Industry, noted that stricter controls were essential given the growing complexity of cybersecurity challenges.

The ban is intended to minimise potential breaches by preventing malware from bypassing security measures through encrypted messages, according to Francis Fong, the honorary president of the Hong Kong Information Technology Federation. Anthony Lai, director of VX Research Limited, called the decision prudent, citing low cybersecurity awareness among some staff and limited monitoring of internal systems.

Data breaches have previously compromised tens of thousands of Hong Kong citizens’ personal information, raising public concern about government cybersecurity protocols. The updated guidelines aim to address these vulnerabilities while increasing overall data security.

Missouri Attorney General accuses Google of censoring conservatives

Missouri’s Attorney General Andrew Bailey announced an investigation into Google on Thursday, accusing the tech giant of censoring conservative speech. In a statement shared on social media platform X, Bailey described Google as “the biggest search engine in America” and alleged that it has engaged in bias during what he called “the most consequential election in our nation’s history.” Bailey did not cite specific examples of censorship, prompting a swift dismissal from Google, which labelled the claims “totally false” and maintained its commitment to showing “useful information to everyone—no matter what their political beliefs are.”

Republicans have long contended that major social media platforms and search engines demonstrate an anti-conservative bias, though tech firms like Google have repeatedly denied these allegations. Concerns around this issue have intensified during the 2024 election campaign, especially as social media and online search are seen as significant factors influencing public opinion. Bailey’s investigation is part of a larger wave of Republican-led inquiries into potential online censorship, often focused on claims that conservative voices and views are suppressed.

Adding to these concerns, Donald Trump, the Republican presidential candidate, recently pledged that if he wins the upcoming election, he would push for the prosecution of Google, alleging that its search algorithm unfairly targets him by prioritising negative news stories. Trump has not offered evidence for these claims, and Google has previously stated its search results are generated based on relevance and quality to serve users impartially. As the November 5 election draws near, this investigation highlights the growing tension between Republican officials and major tech platforms, raising questions about how online content may shape future political campaigns.

Meta partners with Reuters for AI news content

Meta Platforms announced a new partnership with Reuters on Friday, allowing its AI chatbot to give users real-time answers about news and current events using Reuters content. The agreement marks Meta’s return to licensed news distribution after it scaled back news content amid ongoing disputes with regulators and publishers over misinformation and revenue sharing. Meta and Reuters parent Thomson Reuters have chosen to keep the financial terms of the deal confidential.

Meta’s AI chatbot, available on platforms like Facebook, WhatsApp, and Instagram, will now offer users summaries of and links to Reuters articles when they ask news-related questions. Although Meta hasn’t clarified whether Reuters content will also be used to train its language models, the company has assured that Reuters will be compensated under a multi-year agreement, as reported by Axios.

Reuters, known for its fact-based journalism, has confirmed that it licenses its content to multiple tech providers for AI use, without detailing specific deals.

Why does it matter?

The partnership reflects a growing trend in tech, with companies like OpenAI and Perplexity also forming agreements with media outlets to enhance their AI responses with verified information from trusted news sources. Reuters has already collaborated with Meta on fact-checking initiatives, a partnership that began in 2020. This latest agreement aims to improve the reliability of Meta AI’s responses to real-time questions, potentially addressing ongoing concerns around misinformation and helping to balance the distribution of accurate, trustworthy news on social media platforms.

Indian court orders Star Health to help stop data leak

An Indian court has instructed insurer Star Health to assist Telegram in identifying chatbots responsible for leaking sensitive customer data through the messaging app. Star Health, the country’s largest health insurer, sought the directive after a report revealed that a hacker had leaked private information, including medical and tax documents, via Telegram chatbots.

Justice K Kumaresh Babu of the Madras High Court ordered Star Health to provide details on the chatbots so Telegram could delete them. Telegram’s legal representative, Thriyambak Kannan, stated that while the app can’t independently track data leaks, it will remove the chatbots if the insurer supplies specific information.

Star Health is facing a $68,000 ransom demand and has launched an investigation into the leak, including claims that its chief security officer may have been involved. So far, however, the insurer has found no evidence implicating the officer.

Massive data breach hits UnitedHealth tech unit

A cyberattack on Change Healthcare, the technology unit of UnitedHealth, exposed the personal information of 100 million people. The breach, reported in February, is now officially recognised as the largest healthcare data breach in US history. The hackers, identified as the ALPHV ransomware group (also known as BlackCat), disrupted claims processing, impacting patients and providers nationwide.

UnitedHealth began notifying affected individuals in June, warning that the breach may have compromised member IDs, diagnoses, treatment data, Social Security numbers, and billing codes. The company is still investigating the full impact and working to contact those affected promptly.

The hack mirrors the scale of a 2015 breach at health insurer Anthem, which compromised nearly 79 million records. UnitedHealth’s business is forecast to take a hit of $705 million this year due to payment disruptions and customer notifications.

The US healthcare giant provided loans to help providers cope with financial strain caused by the incident. Despite ongoing recovery efforts, the breach continues to highlight the sector’s vulnerabilities to ransomware attacks.

Musk’s America PAC prioritises Facebook and Instagram over X in strategic ad campaign to support Trump

Elon Musk’s political action committee, America PAC, established to support former US President Donald Trump, has concentrated its ad spending on platforms like Facebook, Instagram, and YouTube, paying comparatively little attention to X, formerly Twitter, despite Musk’s ownership. Between July and October, America PAC allocated $3 million to Facebook and Instagram ads and $1.5 million to Google and YouTube, overshadowing the $201,000 spent on X.

America PAC’s advertising strategy notably emphasises geographic targeting of pivotal swing states, such as Pennsylvania, Georgia, Michigan, Nevada, Arizona, and Wisconsin. Pennsylvania, in particular, receives significant attention through both digital ads and Musk’s in-person campaign efforts. While the PAC’s ads on X have generated around 32 million impressions, their reach on Facebook is harder to gauge because Meta releases only limited aggregate data, although engagement with individual ads is notably high.

Musk is also offering financial incentives to engage voters beyond conventional social media advertising. He has pledged daily $1 million giveaways until the election to encourage voter sign-ups for America PAC’s petition, further amplifying engagement through high-profile events in Pennsylvania.

Why does it matter?

Musk’s choice suggests a tactical decision to leverage platforms with broader reach and more advanced ad-targeting capabilities. Advertising remains a vital component of X’s revenue strategy, yet America PAC’s spending favours well-established networks over Musk’s own platform, deploying its budget where it can exert maximum political influence and optimise outreach in swing states.

Google unveils open-source watermark for AI text

Google has released SynthID Text, a watermarking tool designed to help developers identify AI-generated content. Available for free on platforms like Hugging Face and Google’s Responsible GenAI Toolkit, this open-source technology aims to improve transparency around AI-written text. It works by embedding subtle patterns into the token distribution of text generated by AI models without affecting the quality or speed of the output.
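
For readers curious how a watermark embedded in a model’s token distribution can be invisible to humans yet detectable by software, the sketch below illustrates the general idea with a toy “green-list” scheme: a hash of the preceding token pseudo-randomly partitions the vocabulary, favoured tokens get a small logit boost during sampling, and a detector simply measures how often favoured tokens appear. Everything here (the toy vocabulary, the bias value, the hashing scheme) is an illustrative assumption, not SynthID’s actual algorithm, which differs in its details.

```python
# Minimal sketch of token-distribution watermarking (illustrative only;
# SynthID Text's real scheme differs). Standard library only.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5  # share of the vocabulary favoured at each step
BIAS = 2.0            # hypothetical logit boost for "green" tokens

def green_list(prev_token: str) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_watermarked(prev_token: str, logits: dict) -> str:
    """Boost green-list logits, then sample from the softmax as usual."""
    greens = green_list(prev_token)
    adjusted = {t: l + (BIAS if t in greens else 0.0) for t, l in logits.items()}
    total = sum(math.exp(l) for l in adjusted.values())
    r, acc = random.random() * total, 0.0
    for token, logit in adjusted.items():
        acc += math.exp(logit)
        if acc >= r:
            return token
    return token  # numerical edge case: fall back to the last token

def green_score(tokens: list) -> float:
    """Detector: fraction of tokens on their context's green list.
    Expect ~GREEN_FRACTION for unwatermarked text, noticeably more otherwise."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

# Demo with flat logits standing in for a language model.
uniform = {t: 0.0 for t in VOCAB}
text = ["tok0"]
for _ in range(300):
    text.append(sample_watermarked(text[-1], uniform))
print(f"green rate, watermarked: {green_score(text):.2f}")  # well above 0.5
print(f"green rate, random: {green_score(random.choices(VOCAB, k=300)):.2f}")  # ~0.5
```

Because detection is purely statistical, confidence grows with the length of the passage, which is consistent with the weakness on short texts that Google acknowledges below.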

SynthID Text has been integrated with Google’s Gemini models since earlier this year. While it can detect text that has been paraphrased or modified, the tool does have limitations, particularly with shorter text, factual responses, and content translated from other languages. Google acknowledges that its watermarking technique may struggle with these formats but emphasises the tool’s overall benefits.

As the volume of AI-generated content grows, so does the need for reliable detection methods. Countries like China already mandate watermarking of AI-produced material, and similar regulations are under consideration in the US state of California. The urgency is clear, with predictions that AI-generated content could account for 90% of online text by 2026, creating new challenges in combating misinformation and fraud.

Meta prevails in shareholder child safety lawsuit

Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.

Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.

Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.

Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.

AI Policy Summit 2024

RegHorizon and the ETH Zurich Center for Law and Economics are organising the fifth AI Policy Summit, which will be held on 1–2 November 2024.

The AI Policy Summit offers a platform for policymakers, business leaders, civil society, and academia to converge, exchange ideas, and collaboratively shape the future of AI policies. The Summit is an opportunity to be at the forefront of AI policy-making, ensuring that the technology benefits all of humanity while addressing ethical, social, and legal considerations.

More information, the agenda, and registration details are available on the Summit webpage.

US FTC bans fake online reviews to ensure marketplace integrity

The United States Federal Trade Commission (FTC) has introduced a rule banning the creation, purchase, and dissemination of fake online reviews, ensuring that testimonials are genuine and trustworthy. The ban covers reviews attributed to people who don’t exist, reviews generated by AI, and reviews from individuals with no real experience with the product or service.

The rule empowers the FTC to impose civil penalties on businesses and individuals knowingly engaging in such deceptive practices, holding violators accountable. By cracking down on fake reviews, the FTC protects consumers from being misled and ensures they can make informed purchasing decisions.

The initiative also promotes fair competition by penalising dishonest companies and supporting those operating with integrity, fostering a transparent and competitive marketplace. Additionally, the FTC’s rule goes beyond fake reviews by prohibiting businesses from using manipulative tactics such as unfounded legal threats, physical intimidation, or false accusations to influence their online reputation.

These measures prevent companies from using unethical strategies to control public perception, ensuring that business reputations are based on genuine consumer feedback, not coercion or deceit. The FTC aims to create a market environment that values honesty and fairness through this comprehensive approach.