An Australian transgender woman has won a significant legal battle against a female-only social networking app, Giggle for Girls, after being removed from the platform. The Federal Court ruled that the app’s decision to revoke Roxanne Tickle’s account amounted to indirect gender identity discrimination, awarding her A$10,000 in damages plus legal costs.
The court’s decision marks the first ruling on gender identity discrimination since the country amended the Sex Discrimination Act in 2013. The judge, Robert Bromwich, highlighted that Giggle for Girls only recognised sex assigned at birth as a valid basis for identifying as a man or woman. Tickle had undergone gender-affirming surgery and had her birth certificate updated.
Experts view the ruling as a victory for transgender rights in Australia, with Professor Paula Gerber from Monash University stating that the case sends a clear message against treating transgender women differently from cisgender women. The app, which was marketed as a safe space for women, had previously suspended operations but is expected to relaunch soon.
Tickle expressed relief at the verdict, calling it ‘healing’ after facing online abuse. Giggle for Girls’ founder, Sally Grover, acknowledged the judgement and affirmed that the fight for women’s rights would continue.
Nepal lifted its ban on the Chinese-owned app TikTok more than nine months after blocking the platform due to concerns that it disrupted social harmony. The decision came after TikTok’s parent company, ByteDance, agreed to collaborate with Nepalese authorities to address cybercrime issues and regulate content on the app.
The ban, initially imposed in November by Nepal’s previous government, was a response to the rising misuse of TikTok, with over 1,600 cases of TikTok-related cybercrime reported in the country. The ban sparked protests from users who argued that it cut off a source of income and a platform for free expression, affecting the app’s 2.2 million users in Nepal.
To secure the app’s reinstatement, TikTok committed to establishing a dedicated unit to work with Nepal’s Cyber Bureau to monitor and address inappropriate content and criminal activities. This collaboration aims to enable real-time identification of offenders, which authorities hope will curb the misuse of the platform.
D-ID has recently launched an innovative AI video translation tool that allows creators to automatically translate their videos into multiple languages while simultaneously cloning the speaker’s voice and synchronising lip movements to match the translated audio. This groundbreaking feature enhances video content accessibility for a global audience, making it easier for creators to connect with viewers across language barriers.
The tool supports translations into 30 languages, including widely spoken languages such as Arabic, Mandarin, Japanese, Hindi, Spanish, and French, enabling creators to reach diverse audiences and expand their global footprint effectively. By automating the translation and dubbing process, D-ID aims to reduce localisation costs for businesses and content creators, facilitating the scaling of video marketing and communication strategies worldwide.
Additionally, the tool enters a competitive landscape where other companies, such as YouTube and Vimeo, are improving video translation capabilities as video continues to dominate digital communication. D-ID’s technology targets individual creators and enterprise customers looking to enhance global outreach through effective video localisation strategies.
By combining voice cloning and lip-syncing, D-ID’s AI Video Translate creates a seamless multilingual viewing experience, positioning the company as a key player in the future of AI-driven content creation.
Flo Crivello, founder of Lindy, recently faced an unusual issue when a client was Rickrolled by one of the company’s AI assistants. Instead of providing a tutorial video, the AI sent the famous Rick Astley music video, highlighting the quirks of large language models.
The incident was traced back to the way the AI predicted the most likely sequence of text, leading it to send the prank video. Although only two such cases occurred, Crivello acted quickly, implementing a prompt to prevent further Rickrolling.
Crivello described the incident on X: ‘A customer reached out asking for video tutorials. We obviously have a Lindy handling this, and I was delighted to see that she sent a video. But then I remembered we don't have a video tutorial and realized Lindy is literally fucking rickrolling our customers.’
This incident underscores how deeply internet culture can influence AI models. Similar problems have surfaced in other AI systems, like Google’s, which have also struggled with the content they are trained on.
Despite these challenges, advancements in AI technology are making it easier to patch such errors. Lindy has since corrected the issue, ensuring clients receive the correct content without unwelcome surprises.
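Lindy has not published its actual fix, but the general approach described here, a prompt instruction combined with a safety check on the model’s output, can be sketched as follows. All names, the guard wording, and the fallback message below are hypothetical illustrations, not Lindy’s real implementation:

```python
# Known YouTube ID of the Rick Astley video used in Rickrolls
RICKROLL_ID = "dQw4w9WgXcQ"

# Hypothetical system-prompt rule steering the model away from the prank
SYSTEM_PROMPT_GUARD = (
    "Never send links to music videos or memes. "
    "If no tutorial video exists, say so instead of guessing a link."
)

def filter_reply(reply: str) -> str:
    """Post-process an assistant reply: block the known prank link."""
    if RICKROLL_ID in reply:
        # Replace the whole reply rather than just stripping the URL,
        # since the surrounding text ("here's your tutorial") is also wrong.
        return "Sorry, we don't have a video tutorial for that yet."
    return reply
```

A deterministic output filter like this is a common belt-and-braces companion to a prompt rule, since prompt instructions alone are probabilistic and can be ignored by the model.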
Tech platforms are under increasing pressure from Sweden and Denmark to address the rising issue of gang recruitment ads targeting young Swedes. These ads, often found on platforms like Telegram and TikTok, are being used to recruit individuals for violent crimes across the Nordic region. Concerns have grown as Swedish gang violence has begun spilling over into neighbouring countries, with incidents of Swedish gang members being hired for violent acts in Denmark.
The justice ministers of both countries announced their plans to summon tech companies to discuss their role in enabling these activities. They will demand that the platforms take greater responsibility and implement stronger measures to prevent gang-related content. If the responses from these companies are deemed insufficient, further action may be considered to increase pressure on them.
Danish Minister of Justice Peter Hummelgaard highlighted the challenges posed by encrypted services and social media, which are often used to facilitate criminal activities. Although current legal frameworks do not allow for geoblocking or shutting down such platforms, efforts are being made to explore new avenues to curb their misuse.
Sweden, which has the highest rate of gun violence in the European Union, recently announced plans to strengthen police cooperation across the Nordic region. The country is also increasing security measures at its borders with Denmark to prevent further cross-border gang activity. The growing concern over gang-related violence underscores the urgent need for coordinated efforts between governments and tech platforms.
‘2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts,’ said Dr Jovan Kurbalija in an interview at the start of the year, forecasting what 2024 would bring.
Judging by developments in the social media realm, 2024 does indeed appear to be a year of change, especially in the legal field, with long-running disputes over, and implementations of, newly adopted digital policies. Dr Kurbalija’s prediction connects us to some of the main topics Diplo and its Digital Watch Observatory are following, such as content moderation and freedom of speech in the social media world.
This dichotomy between change and continuity brings to mind how, in the dimly lit corridors of power, where influence and control intertwine like the strands of a spider’s web, social media has become a double-edged sword. On the one hand, platforms like 𝕏 stand as bastions of free speech, allowing voices to be heard that might otherwise be silenced. On the other, they are powerful instruments in the hands of those who control them, with the potential to shape public discourse, influence public opinion, and even ignite conflicts. That is why the scrutiny 𝕏 faces for hosting extremist content raises essential questions about whether it is merely a censorship-free network or a tool wielded by its enigmatic owner, Elon Musk, to further his own agenda.
The story begins with the digital revolution, when the internet was hailed as the great equaliser, giving everyone a voice. Social media platforms emerged as the town squares of the 21st century, where ideas could be exchanged freely, unfiltered by traditional gatekeepers like governments or mainstream media. Under Musk’s ownership, 𝕏 has taken this principle to its extreme, often resisting calls for tighter content moderation to protect free speech. But as with all freedoms, this one also comes with a price.
The platform’s hands-off approach to content moderation has led to widespread concerns about its role in amplifying extremist content. The issue here is not just about spreading harmful material; it touches on the core of digital governance. Governments around the world are increasingly alarmed by the potential for social media platforms to become breeding grounds for radicalisation and violence. The recent scrutiny of 𝕏 is just the latest chapter in an ongoing struggle between the need for free expression and the imperative to maintain public safety.
The balance between these two forces is incredibly delicate in countries like Türkiye, for example, where the government has a history of cracking down on dissent. The Turkish government’s decision to block Instagram for nine days in August 2024, after the platform failed to comply with local laws and sensitivities, is a stark reminder of the power dynamics at play. In this context, 𝕏’s refusal to bow to similar pressures can be seen as both a defiant stand for free speech and a dangerous gamble that could have far-reaching consequences.
But the story does not end there. The influence of social media extends far beyond any one country’s borders. In the UK, the recent riots have highlighted the role of platforms like 𝕏 and Meta in both facilitating and exacerbating social unrest. While Meta has taken a more proactive approach to content moderation, removing inflammatory material and attempting to prevent the spread of misinformation, 𝕏’s more relaxed policies have allowed a wider range of content to circulate. That content has included not just legitimate protest organising but also harmful rhetoric that has fuelled violence and division.
The contrast between the two platforms is stark. Meta, with its more stringent content policies, has been criticised for stifling free speech and suppressing dissenting voices. Yet, in the context of the British riots, its approach may have helped prevent the situation from escalating further. 𝕏, on the other hand, has been lauded for its commitment to free expression, but this freedom comes at a price. The platform’s role in the riots has drawn sharp criticism, with some accusing it of enabling the very violence it claims to oppose. Government officials have vowed action against tech platforms, even though Britain’s Online Safety Act will not be fully in force until next year. Meanwhile, the EU’s Digital Services Act, which no longer covers Britain, is already in effect and will allegedly serve as a backstop in similar disputes.
The British riots also serve as a cautionary tale about the power of social media to shape public discourse. In an age where information spreads at lightning speed, the ability of platforms like 𝕏 and Meta to influence events in real time is unprecedented. Such leverage is not only a challenge for governments but also a powerful tool for achieving political ends. For Musk, acquiring 𝕏 represents not just a business opportunity but a chance to shape global discourse in ways that align with his vision of the future.
Musk did not even hesitate to accuse the European Commission of attempting to pull off what he describes as an ‘illegal secret deal’ with 𝕏. In one of his posts, he claimed the EU, with its stringent new regulations aimed at curbing online extremist content and misinformation, allegedly tried to coax 𝕏 into quietly censoring content to sidestep hefty fines. Other tech giants, according to Musk, nodded in agreement, but not 𝕏. The platform stood its ground, placing its unwavering belief in free speech above all else.
‘The European Commission offered 𝕏 an illegal secret deal: if we quietly censored speech without telling anyone, they would not fine us,’ Musk wrote on 𝕏.
While the European Commission fired back, accusing 𝕏 of violating parts of the EU’s Digital Services Act, Musk’s bold stance has ignited a fiery debate. And here, it is not just about rules and fines anymore—it is a battle over the very soul of digital discourse. How far should governmental oversight go? And at what point does it start to choke the free exchange of ideas? Musk’s narrative paints 𝕏 as a lone warrior, holding the line against mounting pressure, and in doing so, forces us to confront the delicate dance between regulation and the freedom to speak openly in today’s digital world.
Adding to the controversy is Musk’s close contact with, and support for, US presidential candidate Donald Trump, which has raised further doubts about the concentration of power among social media owners, tech giants, and their allies. In an interview with Trump, Musk openly endorsed his candidacy, discussing, among other things, regulatory policy and the judicial system, thus fuelling speculation about his platform 𝕏 as a powerful oligarchic lever of power.
At this point, it is already crystal clear that governments are grappling with how to regulate these platforms and facing difficult choices in doing so. On the one hand, there is a clear need for greater oversight to prevent the spread of extremist content and protect public safety. On the other hand, too much regulation risks stifling the very freedoms that social media platforms were created to protect. This delicate dichotomy is at the heart of the ongoing debate about the role of tech giants in a modern, digital society.
The story of 𝕏 and its role in hosting extremist content is about more than the platform itself. It is about the power of technology to shape our world, for better or worse. As the digital landscape continues to evolve, the questions raised by 𝕏’s approach to content moderation will only become more urgent. And in the corridors of power, where decisions that shape our future are made, answers to those questions will determine the fate of the internet itself.
Three authors have filed a class-action lawsuit against the AI company Anthropic in a California federal court, accusing the firm of illegally using their books and hundreds of thousands of others to train its AI chatbot, Claude. The lawsuit, initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic utilised pirated versions of their works to develop the chatbot’s ability to respond to human prompts.
Anthropic, which has received financial backing from major companies like Amazon and Google, acknowledged the lawsuit but declined to comment further due to the ongoing litigation. The legal action against Anthropic is part of a broader trend, with other content creators, including visual artists and news outlets, also suing tech companies over using their copyrighted material in training AI models.
This is not the first time Anthropic has faced such accusations. Music publishers previously sued the company for allegedly misusing copyrighted song lyrics to train Claude. The authors involved in the current case argue that Anthropic has built a multibillion-dollar business by exploiting their intellectual property without permission.
The lawsuit demands financial compensation for the authors and a court order to permanently prevent Anthropic from using their work unlawfully. As the case progresses, it highlights the growing tension between content creators and AI companies over using copyrighted material in developing AI technologies.
Partnerships between OpenAI and news publishers are crucial for training AI models but have sparked controversy. Some media organisations, like The New York Times, have taken legal action against OpenAI, citing copyright concerns over the use of their content. OpenAI’s COO, Brad Lightcap, emphasised the company’s commitment to maintaining accuracy and integrity in news delivery as AI becomes increasingly integral to the process.
Roger Lynch, CEO of Condé Nast, highlighted the financial pressures news and digital media have faced in recent years, attributing them to tech companies undermining publishers’ ability to monetise content. He sees the partnership with OpenAI as a step toward reclaiming some of that lost revenue.
OpenAI’s introduction of SearchGPT in July, a search engine with real-time internet access, marks a significant move into territory traditionally dominated by Google. The company is actively collaborating with its news partners to gather feedback and refine the performance of SearchGPT, aiming to enhance its role in the evolving landscape of digital news consumption.
In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.
Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor’, highlights the physical toll and skill required in motion capture work. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. The concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.
Why does this matter?
AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.
Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Performers like Dalal argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, impacting the future of video game development and beyond.
OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and deactivated a cluster of ChatGPT accounts linked to an Iranian covert influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites but failed to gain significant engagement or reach a broad audience.
According to a recent Reuters report:
The US has accused Iran of launching cyber and influence operations aimed at the campaigns of US presidential candidates and sowing political discord among the American public. A joint statement from the FBI, the Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency highlighted increasingly aggressive Iranian activity during the election cycle. The statement follows earlier allegations from Donald Trump’s campaign regarding an Iranian hack on one of its websites. Iran has denied the accusations, describing them as ‘unsubstantiated and devoid of any standing.’ The US intelligence community remains confident in its assessment, citing attempts to access individuals within the presidential campaigns and activities intended to influence the election process.
The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite their efforts, the operation saw minimal interaction, with most posts receiving little to no attention.
OpenAI’s investigation into this operation was bolstered by information from Microsoft, and it revealed that the influence campaign was largely ineffective, scoring low on a scale assessing the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.
OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.