Romania's telecoms regulator is set to initiate steps to suspend TikTok, citing potential interference in the recent presidential election. Pavel Popescu, the regulator's deputy head, announced plans to begin the suspension process on Thursday. The action will remain in place until state authorities conclude their investigation into allegations of electoral manipulation linked to the platform.
The scrutiny comes after TikTok's role in Sunday's election raised concerns about misinformation and influence. Officials are prioritising transparency and security during the ongoing electoral process.
The decision underscores the increasing global attention on social media platforms’ influence on democratic processes.
Google is appealing a court order mandating significant changes to its Play app store, arguing to the US 9th Circuit Court of Appeals that legal errors during the trial unfairly favoured Epic Games. The tech giant contends that the San Francisco jury should not have been allowed to rule on Epic’s claims and that the trial judge overstepped by issuing a nationwide injunction.
Epic, known for creating “Fortnite,” accused Google of monopolising app distribution and payment systems on Android devices. A jury sided with Epic last year, leading US District Judge James Donato to require Google to permit rival app stores on Android and allow competitors access to Play’s app catalogue. This injunction, set to last three years, is on hold pending the appeal.
Google warns the mandated changes would disrupt app developers and users, framing the judge’s order as excessive intervention. Epic, meanwhile, dismissed Google’s appeal as baseless and a refusal to honour the jury’s unanimous decision. The appeals court is set to hear arguments in February, with a decision expected later in 2025.
British police forces are retreating from using X, formerly known as Twitter, citing concerns over violent content and misinformation. A Reuters survey found significant reductions in posting activity from several forces, with some nearly halting use of the platform entirely. Critics argue the platform fosters hate speech under Elon Musk’s leadership, a claim he disputes, emphasising his commitment to free speech.
West Midlands Police, which serves Birmingham, reduced posts by 95% compared to last year. Lancashire Police cut its use by three-quarters, while Derbyshire Police has only responded to queries since August. North Wales Police became the first force to fully withdraw, stating the platform no longer served as an effective communication tool.
Some forces, however, continue limited use of X for urgent updates like road closures, while increasingly favouring Facebook and Instagram to engage with communities. Platforms such as Threads and Bluesky are also emerging alternatives, though X remains more widely used in Britain despite a 19% drop in app users over the past year.
The shift reflects broader discontent with X among organisations, including media outlets and non-profits, driven by concerns over Musk's influence and the spread of extremist content on the platform. A government source confirmed its preference for other social media platforms for advertising while maintaining limited unpaid use of X.
A new survey by UNESCO reveals that over 60% of online influencers fail to fact-check the content they share with their followers. The study, conducted by researchers at Bowling Green State University, surveyed 500 influencers across 45 countries about their content-sharing practices. It found that many influencers struggle to assess the reliability of information, with 42% relying on the number of likes and shares a post receives as a measure of credibility.
The survey also highlighted that only 37% of content creators use mainstream media as a source, with personal experiences and their own research being the top sources for content. While many influencers are aware of the challenge of misinformation, 73% expressed interest in training to better handle disinformation and online hate speech.
UNESCO is responding to this need by launching a month-long training program designed to equip influencers with tools to combat disinformation. The course will teach content creators how to verify information, source from diverse outlets, and debunk false narratives, aiming to improve the overall quality of online information.
Margrethe Vestager, the European Union's outgoing competition chief, is stepping down after a decade of high-profile confrontations with tech giants like Apple and Google. In an exit interview, she expressed regret over not being more aggressive in regulating Big Tech, acknowledging the continued dominance of major platforms despite billions in fines. She described her tenure as "partly successful," noting the slow pace of change in the tech landscape.
Vestager was instrumental in shaping the EU’s regulatory framework, pushing for initiatives like the Digital Markets Act (DMA) to curb monopolistic behaviour. However, she conceded that the full impact of these measures may take years to be felt. She emphasised the importance of stronger enforcement and deterrence, advocating for a bolder approach to regulating tech firms globally.
Her reflections also highlighted the role of the Digital Services Act (DSA) in overseeing social media platforms and addressing harmful content. Platforms like X and Telegram, which face criticism for inadequate content moderation, were pointed out as examples of why robust regulation is necessary. Vestager stressed that platforms undermining democracy must comply with the EU’s stringent laws.
As she prepares to transition to academia, Vestager’s departure marks the end of an era. While her legacy includes significant strides in holding tech companies accountable, the ongoing influence of these firms signals that the battle for better regulation is far from over. Teresa Ribera Rodríguez will succeed her, tasked with continuing this critical work.
Australia’s House of Representatives passed a groundbreaking bill on Wednesday aiming to ban social media use for children under 16. The bill, supported by Prime Minister Anthony Albanese’s Labor government and the opposition, introduces strict measures requiring platforms to implement age-verification systems. Companies could face fines of up to A$49.5 million ($32 million) for breaches. The Senate will debate the bill next, with Albanese pushing for its approval before the year ends.
The law follows an emotional inquiry that highlighted cyberbullying’s devastating effects, including testimony from parents of children who self-harmed. While advocates argue the ban will protect young people’s mental health, critics, including youth groups and human rights organisations, warn it risks cutting off teens from vital social connections. Tech giants like Google, Meta, and TikTok have urged the government to delay the legislation until a proposed age-verification trial concludes in 2025.
Despite these concerns, public opinion overwhelmingly supports the ban, with recent polls showing 77% approval. Parent advocacy groups have praised the initiative as a critical step in addressing the negative impacts of social media on children. However, critics within parliament and civil rights groups have called for more nuanced solutions, emphasising the importance of balancing protection with privacy and self-expression rights.
If passed, Australia will become a global leader in stringent social media regulations, but the debate over how best to safeguard young users while respecting their freedoms is far from over.
Google and Meta are urging the Australian government to delay a proposed law that would prohibit social media use for children under 16, citing insufficient time to evaluate its potential effects. Prime Minister Anthony Albanese’s government aims to pass the bill, which includes some of the strictest child social media controls globally, before the parliamentary year ends on Thursday. However, critics argue the rushed timeline undermines thorough debate and expert input.
The bill mandates social media platforms, not parents or children, to implement age-verification systems, potentially involving biometrics or government IDs. Platforms failing to comply could face fines of up to A$49.5 million ($32 million). While the Liberal opposition is likely to support the legislation, some independents and tech companies like TikTok and Elon Musk's X have raised concerns about its clarity and impact on human rights, including freedom of expression and access to information.
Tech companies argue the government should wait for the results of an age-verification trial before proceeding. TikTok called the bill rushed and poorly consulted, while Meta described it as “inconsistent and ineffective.” Meanwhile, Elon Musk criticised the bill as a potential tool for broader internet control, amplifying debates over balancing child safety with digital freedoms.
As a Senate committee prepares a report on the legislation, the controversy underscores the global challenge of regulating children’s online activity without infringing on broader rights.
A United States federal appeals court is set to rule by 6 December on whether ByteDance, TikTok's Chinese parent company, must divest its US operations or face a ban. The ruling will address national security concerns raised by the Justice Department, which alleges that TikTok's Chinese ownership poses risks due to access to vast American user data. ByteDance has challenged the law as unconstitutional, arguing it unfairly targets TikTok and violates free speech.
The three-judge panel could uphold the law, leading to a likely appeal by ByteDance. Alternatively, the court might allow the law but criticise its fairness, requiring further certification of TikTok as a security risk. A ruling deeming the law unconstitutional could halt efforts to force ByteDance to sell TikTok’s US assets. Any outcome may result in further legal battles, including an appeal to the Supreme Court.
The case underscores tensions between US national security priorities and free market principles, with over 170 million Americans actively using TikTok. The final decision could shape the future of tech regulation and US-China relations.
Meta has proposed a unified system for age verification and safety standards across the EU to better protect teenagers online. The plan includes requiring parental approval for app downloads by users under 16, with app stores notifying parents for consent. Meta also advocates for consistent age-appropriate content guidelines and supervision tools for teens that parents can manage.
The proposal follows calls from incoming EU technology commissioner Henna Virkkunen, who emphasised protecting minors as a priority. Meta’s global head of safety, Antigone Davis, highlighted the fragmented nature of current European regulations, urging the adoption of uniform rules to ensure better protections for teens.
Although some EU frameworks like the Digital Services Act and Audiovisual Media Services Directive touch on youth safety, the lack of EU-wide standards leaves much to member states. Meta’s proposal aligns with ongoing discussions around the Child Sexual Abuse Material regulation, which aims to enhance online protections for minors.
IMAX is adopting AI technology to bring its original content to more global audiences. The company has partnered with Dubai-based Camb.ai to use advanced speech and translation models for content localisation. With non-English content growing in popularity, including in English-speaking markets, the initiative aims to increase accessibility and reduce costs.
Camb.ai’s AI platform, DubStudio, supports over 140 languages, including lesser-known ones. Its specialised models, Boli and Mars, ensure accurate text-to-speech translations while preserving nuances like background audio and tone. The startup’s technology has been previously deployed for live events like the Australian Open and Eurovision Sport, showcasing its ability to handle high-pressure scenarios.
IMAX plans a phased rollout of the AI localisation, starting with widely spoken languages. Early tests of Camb.ai’s technology on IMAX’s original documentaries proved promising. The company expects the collaboration to reduce translation expenses while boosting the global appeal of its immersive experiences.
Camb.ai, founded by former Apple engineer Akshat Prakash and his father, recently raised $4 million and is securing additional funding to expand its team and operations. The startup avoids controversial data scraping methods, relying instead on ethically licensed datasets and input from early partners, positioning itself as a reliable choice for AI-driven content solutions.