Australia has introduced the Privacy and Other Legislation Amendment Bill 2024, marking a pivotal advancement in addressing privacy concerns within the digital landscape. The landmark legislation establishes stringent penalties for privacy breaches, including new doxxing offences punishable by up to six years in prison, rising to seven years where the doxxing targets people based on protected characteristics.
Furthermore, the bill enhances the enforcement powers of the Australian Information Commissioner, enabling swift action against non-compliance with privacy laws. Restoring the Australian Privacy Commissioner as a standalone position further strengthens the oversight needed to uphold privacy standards nationwide.
In its commitment to modernising privacy laws for the digital age, Australia views the Privacy and Other Legislation Amendment Bill 2024 as the initial phase of a comprehensive strategy to safeguard citizens’ privacy. The government demonstrates its resolve to hold companies and individuals accountable by significantly increasing maximum penalties for serious privacy breaches.
Additionally, recognising the importance of collaboration, the government will continue to engage with key stakeholders—including industry representatives, small businesses, consumer groups, and the media—to ensure that the approach to privacy protection is equitable and beneficial for both individuals and society.
Slack is undergoing a major transformation as it integrates AI features into its platform, aiming to evolve from a simple messaging service to a ‘work operating system.’ CEO Denise Dresser said Slack will now serve as a hub for AI applications from companies like Salesforce, Adobe, and Anthropic. New, pricier features include AI-generated summaries of conversations and the ability to interact with AI agents for tasks such as data analysis, web searches, and image generation.
This shift follows Salesforce’s 2021 acquisition of Slack and its broader move toward AI-driven solutions. Slack’s AI integration seeks to enhance productivity by offering tools to catch up on team discussions, analyse business data, and create branded content, all within the chat environment. However, questions remain about whether users will embrace and pay for these premium features and how this change aligns with Slack’s core identity as a workplace communication tool.
Concerns around data privacy have also surfaced as Slack leans further into AI. The company faced criticism earlier this year over its handling of customer data, which was used for training purposes, but it maintains that it does not use user messages to train its AI models. As Slack continues integrating AI, it must address growing scepticism about how data is managed and safeguarded.
It has not been that long since Elon Musk was harshly criticised by the British government for spreading extremist content while advocating for freedom of speech on his platform. Freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles as they try to comply with the demands of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating social media. While governments and like-minded observers may argue that such actions are necessary to curb violent extremism and to prevent critical situations from escalating, others see them as a dangerous encroachment on, and undermining of, free speech.
The line between expressing controversial opinions and inciting violence or allowing crime on social media platforms is often blurred, and the consequences of crossing it can be severe. However, let us look at a situation where someone is arrested for allegedly turning a blind eye to organised crime activities on his platform, as in the case of Telegram’s CEO.
Namely, Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications, alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities allegedly detained Durov based on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has nearly 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes due to insufficient moderation and a lack of cooperation with law enforcement. The charges against him, which include allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities, are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.
In 2011, Durov said the Russian government asked him to delete the accounts of anti-government people on his social media platform. He refused. After the 2014 coup in Ukraine, Durov refused to provide the Russian government with information about users involved in the event.
Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. In the meantime, he had launched Telegram in 2013, a messaging app focused on privacy and encryption that has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.
In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.
Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.
Pavel Durov left Russia when the government tried to control his social media company, Telegram. But in the end, it wasn’t Putin who arrested him for allowing the public to exercise free speech. It was a western country, a Biden administration ally and enthusiastic NATO member,… https://t.co/F83E9GbNHC
The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He recently told Tucker Carlson in an interview that the FBI approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.
Given such an intriguing history of controversial tech entrepreneurship, Durov’s arrest certainly gives us reasons for speculation. At the same time, it seems to be not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspace. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security.
Accordingly, Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. Additionally, his multiple citizenships, including Russian (since the dissolution of the Soviet Union in 1991, having been a Soviet citizen from birth), Saint Kitts and Nevis (since 2013), French (since 2021), and UAE (since 2021), are only escalating tensions, with concerned governments pressing French President Emmanuel Macron for clarification on the matter. Even Elon Musk confronted Emmanuel Macron by responding directly to his post on X, stating that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.
It would be helpful to the global public to understand more details about why he was arrested
Despite the unclear circumstances and the vague official evidence justifying the arrest and court process, Durov will undoubtedly face the probe and have to answer the accusations under the laws that govern the case. It is therefore worth looking at the relevant laws and clarifying which legal measures actually apply.
The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet,’ is the best-known reference point for platform liability and a natural starting place for assessing cases like this one. The law, in its essence, protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, this immunity is not absolute. Section 230 does not protect against federal criminal liability, which means that if Telegram were found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, Durov could indeed be held liable.
In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the ‘very large online platforms’ (VLOP) category that would subject it to the most stringent DSA requirements, it would probably still be obligated to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement—a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.
True, Brazil is controlled by a tyrannical dictator masquerading as a judge https://t.co/kkPfNRrBOh
Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. On the other hand, Zuckerberg’s recent admission in an official letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning the abuse of legal measures to stifle freedom of speech and excessive content moderation driven by government officials. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining a platform that allows for open dialogue and complying with legal requirements to prevent the spread of harmful content.
When you say you are committed to freedom of expression, you are lying. We have a letter from France that proves this, without a doubt.
We had to shutdown Rumble in France because you have NO committment to freedom of expression.
The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the EU’s increasing regulatory pressure. Pavlovski’s departure can be seen as a pre-emptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.
All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. However, the solution is not straightforward. Overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.
Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we find the balance in the digital world we live in to optimally combine privacy and security in our society?
These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world—the domain of bits, bytes, and endless data streams—existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; rather, it was an extension, a layer added to the world we already knew. Therefore, the concept of punishment in the digital world has always been, and still is, rooted in the physical world. Those who commit crimes online, or are held responsible for them, are not confined to a virtual jail; they are subject to real-world legal systems, courts, and prisons.
A US appeals court has reinstated a lawsuit against Google, allowing Chrome users to pursue claims that the company collected their data without permission. The case centres on users who chose not to synchronise their Chrome browsers with their Google accounts yet allege that Google still gathered their information.
The 9th US Circuit Court of Appeals in San Francisco determined that a lower court had prematurely dismissed the case without adequately considering whether users had consented to the data collection. The decision follows a previous settlement where Google agreed to destroy billions of records in a similar lawsuit, which accused the company of tracking users who believed they were browsing privately in Chrome’s ‘Incognito’ mode.
Google has expressed disagreement with the ruling, asserting confidence in its privacy controls and the benefits of Chrome Sync, which helps users maintain a consistent experience across devices. However, the plaintiffs’ lawyer welcomed the court’s decision and is preparing for a trial.
Why does this matter?
Initially dismissed in December 2022, the lawsuit has now been sent back to the district court for further proceedings. The case could affect thousands of Chrome users who have used the browser since July 2016 without enabling the sync function, raising broader questions about the clarity and transparency of Google’s privacy policies.
The Irish Data Protection Commission (DPC) is seeking a court order to stop or limit X’s processing of user data for training its AI systems, expressing concerns that this could violate the European Union’s General Data Protection Regulation (GDPR). The case may be referred to the European Data Protection Board for further review.
The legal dispute is part of a broader conflict between Big Tech companies and regulators over using personal data to develop AI technologies. Consumer organisations have accused X of breaching GDPR, a claim the company has vehemently denied, calling the DPC’s actions unwarranted and overly broad.
The Irish DPC has an important role in overseeing X’s compliance with EU data protection laws, since the platform’s EU operations are managed from Dublin. The current legal proceedings could significantly shift how Ireland enforces the GDPR against large tech firms.
The DPC is also concerned about X’s plans to launch a new version of Grok, which is reportedly being trained using data from the EU and European Economic Area users. The privacy watchdog argues that this could worsen existing issues with data processing.
Despite X implementing some mitigation measures, such as offering users an opt-out option, these steps were not in place when the data processing began, leading to further scrutiny from the DPC. X has resisted the DPC’s requests to halt data processing or delay the release of the new Grok version, leading to an ongoing court battle.
The outcome of this case could set a precedent for how AI and data protection issues are handled across Europe.
The Federal Trade Commission (FTC), supported by the Department of Justice (DOJ), has filed a lawsuit against TikTok and its parent company ByteDance for violating children’s privacy laws. The lawsuit claims that TikTok breached the Children’s Online Privacy Protection Act (COPPA) by failing to notify and obtain parental consent before collecting data from children under 13. The case also alleges that TikTok did not adhere to a 2019 FTC consent order regarding the same issue.
According to the complaint, TikTok collected personal data from underage users without proper parental consent, using this information to target ads and build user profiles. Despite knowing these practices violated COPPA, ByteDance and TikTok allowed children to use the platform by bypassing age restrictions. Even when parents requested account deletions, TikTok made the process difficult and often did not comply.
FTC Chair Lina M. Khan stated that TikTok’s actions jeopardised the safety of millions of children, and the FTC is determined to protect kids from such violations. The DOJ emphasised the importance of upholding parental rights to safeguard children’s privacy.
The lawsuit seeks civil penalties against ByteDance and TikTok and a permanent injunction to prevent future COPPA violations. The case will be reviewed by the US District Court for the Central District of California.
The decision by Brazil’s National Data Protection Authority (ANPD) arose from concerns over Meta’s use of personal data to train its AI systems without users’ explicit consent. The agency warned of ‘serious and irreparable damage’ to the rights of data subjects and imposed a daily fine of 50,000 reais for non-compliance. Meta expressed disappointment, stating that the decision is a setback for innovation and AI development in Brazil.
The controversy in Brazil reflects broader global challenges for tech companies navigating stringent data privacy laws. In regions like the European Union, similar regulatory hurdles have forced Meta and other tech giants to pause their AI tool rollouts. Human Rights Watch highlighted risks associated with personal data in AI training, noting how personal photos, including those of Brazilian children, have been misused in image datasets, raising significant privacy and ethical concerns.
Meta’s response aligns with its recent actions in Europe, where it withheld its AI models due to regulatory uncertainties. This situation underscores the tension between advancing AI technologies and adhering to evolving data protection regulations.
The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology after a legal settlement was reached with Robert Williams, who was wrongfully arrested based on the technology in 2020. Williams was detained for over 30 hours after facial recognition software matched him to surveillance video of another Black man stealing watches. With the support of the American Civil Liberties Union of Michigan, he filed a complaint in 2020 and then sued in 2021.
So far, Detroit police are responsible for three of the seven reported instances when the use of facial recognition has led to a wrongful arrest. Detroit’s police chief, James White, has blamed ‘human error’, and not the software, saying his officers relied too much on the technology.
What does this change concretely?
To combat human error, Detroit police officers will now be trained on the risks of using facial recognition in policing. Another change requires that suspects identified by the technology be linked to the crime by other evidence before their photos can be used in lineups. Along with other policy changes, the police department will have to launch an audit of the facial recognition searches it has conducted since 2017, when it first started using the technology.
In spite of this incident, police say facial recognition technology is too useful a tool to be abandoned entirely. According to the head of informatics with Detroit’s crime intelligence unit, Stephen Lamoreaux, the Police Department remains ‘very keen to use technology in a meaningful way for public safety.’ However, some cities, such as San Francisco, have banned its use because of concerns about privacy and racial bias. Microsoft has also said it will not provide its facial recognition software to US police until a national framework for the use of facial recognition, grounded in human rights, is put in place.
Meta asserts that its model complies with a ruling from the EU’s top court and is aligned with the Digital Markets Act (DMA), expressing a willingness to engage with the Commission to resolve the issue. However, if found in breach, Meta could face fines of up to 10% of its global annual turnover. The Commission aims to conclude its investigation by March next year.
The charge follows a recent DMA-related charge against Apple for similar non-compliance, highlighting the EU’s efforts to regulate Big Tech and empower users to control their data.
In recent days, the landscape of AI integration on Apple’s devices has become a topic of discussion. Initially, it was reported that Apple might partner with Meta to integrate Meta’s AI services. However, ‘people with knowledge of the matter’ told Bloomberg this is not the case, explaining that Apple had explored a potential partnership in March of this year before settling on OpenAI for part of the recently announced Apple Intelligence services. Reportedly, the Meta partnership was abandoned due to Apple’s privacy concerns. Apple has repeatedly criticised Meta’s privacy practices, making a collaboration between the two tech giants potentially damaging to Apple’s image as a privacy-focused company.
The timing of these discussions coincides with Meta facing privacy concerns over its new AI tools in the European Union. Despite this, Meta recently rolled out these same tools in India.
Earlier this month, Apple unveiled its own suite of AI features under the Apple Intelligence brand, including integration with Siri. Apple partnered with OpenAI to allow iPhone users to utilise ChatGPT for specific queries. The company says Siri will always ask for your permission before connecting to ChatGPT and give you the choice of providing it with data, such as a photo, if needed for your query. ‘From a privacy point of view, you’re always in control and have total transparency,’ said Apple senior vice president Craig Federighi. That stance underpins Apple’s strategy as it differentiates itself in the world of AI integration, balancing innovation with its core principle of user privacy.
Apple is not relying exclusively on one AI provider, though. At the Worldwide Developers Conference (WWDC), it announced its willingness to work with Google to integrate the Gemini AI model into its ecosystem; the two companies have already partnered on training Apple’s AI. The extent of this integration remains to be seen, but it indicates Apple’s strategy of diversifying its AI partnerships.