US Supreme Court to decide TikTok’s fate amid ban fears

The future of TikTok in the United States hangs in the balance as the Supreme Court prepares to hear arguments on 10 January over a law that could force the app to sever ties with its Chinese parent company, ByteDance, or face a ban. The case centres on whether the law violates the First Amendment, with TikTok and its creators arguing that it does, while the US government maintains that national security concerns justify the measure. If the government wins, TikTok has stated it would shut down its US operations by 19 January.

Creators who rely on TikTok for income are bracing for uncertainty. Many have taken to the platform to express their frustrations, fearing disruption to their businesses and online communities. Some are already diversifying their presence on other platforms like Instagram and YouTube, though they acknowledge TikTok’s unique algorithm has provided visibility and opportunities not found elsewhere. Industry experts believe many creators are adopting a wait-and-see approach, avoiding drastic moves until the Supreme Court reaches a decision.

The Biden administration has pushed for a resolution without success, while President-elect Donald Trump has asked the court to delay the ban so he can weigh in once in office. If the ban proceeds, app stores and internet providers will be required to stop supporting TikTok, ultimately rendering it unusable. TikTok has warned that even a temporary shutdown could lead to a sharp decline in users, potentially causing lasting damage to the platform. A ruling from the Supreme Court is expected in the coming weeks.

TikTok faces new allegations of child exploitation

TikTok is under heightened scrutiny following newly unsealed allegations from a Utah lawsuit claiming the platform knowingly allowed harmful activities, including child exploitation and sexual misconduct, to persist on its livestreaming feature, TikTok Live. According to the lawsuit, TikTok disregarded the issue because it ‘profited significantly’ from these livestreams. The revelations come as the app faces a potential nationwide ban in the US unless its parent company, ByteDance, divests ownership.

The complaint, filed by Utah’s Division of Consumer Protection in June, accuses TikTok Live of functioning as a ‘virtual strip club,’ connecting minors with adult predators in real time. Internal documents and investigations, including the company’s own Project Meramec and Project Jupiter probes, reveal that TikTok was aware of the dangers. The findings indicate that hundreds of thousands of minors bypassed age restrictions and were allegedly groomed by adults to perform explicit acts in exchange for virtual gifts. The probes also uncovered criminal activities such as money laundering and drug sales facilitated through TikTok Live.

TikTok has defended itself, claiming it prioritises user safety and accusing the lawsuit of distorting facts by selectively quoting outdated internal documents. A spokesperson emphasised the platform’s ‘proactive measures’ to support community safety and dismissed the allegations as misleading. However, the unsealed material from the case, released by Utah Judge Coral Sanchez, paints a stark picture of TikTok Live’s risks to minors.

This lawsuit is not an isolated case. In October, 13 US states and Washington, D.C., filed a bipartisan lawsuit accusing TikTok of exploiting children and fostering addiction to the app. Utah Attorney General Sean Reyes called social media a pervasive tool for exploiting America’s youth and welcomed the disclosure of TikTok’s internal communications as critical evidence for demonstrating the platform’s culpability.

Why does it matter?

The controversy unfolds amid ongoing national security concerns about TikTok’s ties to China. President Joe Biden signed legislation authorising a TikTok ban last April, citing risks that the app could share sensitive data with the Chinese government. The US Supreme Court is set to hear arguments over the law on 10 January, with a decision expected shortly thereafter. The case underscores the intensifying debate over social media’s role in safeguarding users while balancing innovation and accountability.

Meta appoints Joel Kaplan as chief global affairs officer in strategic leadership shift

Meta Platforms has announced Joel Kaplan as its new chief global affairs officer, succeeding Nick Clegg in a significant leadership transition. Kaplan, a prominent Republican and former deputy chief of staff for policy under George W. Bush, has been with Meta since 2011 and previously reported to Clegg.

The reshuffle comes as the company navigates a delicate political landscape ahead of US President-elect Donald Trump’s inauguration and works to ease past tensions with Trump over its content policies. Nick Clegg, who joined Meta in 2018 after serving as the UK’s deputy prime minister, announced his decision to step down, describing the timing as ‘right’ for the transition.

Clegg has been instrumental in shaping Meta’s policies on contentious issues like election integrity and content moderation, including the creation of its independent Oversight Board. He praised Kaplan as the ideal choice to guide Meta through evolving societal and political expectations for technology.

Kaplan’s tenure at Meta has not been without controversy. He has faced accusations of prioritising conservative agendas under the guise of neutrality, which Meta denied. Notably, Kaplan attended a 2018 Senate hearing on sexual assault allegations against then-Supreme Court nominee Brett Kavanaugh, sparking internal dissent at the company. Despite these challenges, Kaplan’s appointment underscores Meta’s intent to strengthen ties with Republican leadership.

The leadership change aligns with Meta’s broader efforts to mend its relationship with Trump and his administration. The company’s $1 million donation to Trump’s inaugural fund and CEO Mark Zuckerberg’s gestures to appease conservative concerns reflect this shift. The move marks a significant chapter in Meta’s ongoing balancing act between political pressures and its role as a global tech powerhouse.

OpenAI delays Media Manager amid creator backlash

In May, OpenAI announced plans for ‘Media Manager,’ a tool to allow creators to control how their content is used in AI training, aiming to address intellectual property (IP) concerns. The project remains unfinished seven months later, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with no updates since August and missed deadlines.
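To make the intended mechanism concrete, below is a minimal, hypothetical Python sketch of how a creator opt-out registry could filter works out of a training corpus. The names (OptOutRegistry, fingerprint, filter_training_corpus) and the exact-hash matching are illustrative assumptions, not OpenAI’s actual design, which has not been published.

```python
# Hypothetical sketch only: a creator opt-out registry filtering a training
# corpus by content fingerprint. Not OpenAI's Media Manager, whose design
# has not been made public.
import hashlib


def fingerprint(content: bytes) -> str:
    """Return a stable identifier for a piece of content."""
    return hashlib.sha256(content).hexdigest()


class OptOutRegistry:
    """Stores fingerprints of works whose creators opted out of AI training."""

    def __init__(self) -> None:
        self._excluded: set[str] = set()

    def opt_out(self, content: bytes) -> None:
        self._excluded.add(fingerprint(content))

    def is_excluded(self, content: bytes) -> bool:
        return fingerprint(content) in self._excluded


def filter_training_corpus(corpus: list[bytes], registry: OptOutRegistry) -> list[bytes]:
    """Drop every sample whose fingerprint matches an opt-out record."""
    return [sample for sample in corpus if not registry.is_excluded(sample)]


if __name__ == "__main__":
    registry = OptOutRegistry()
    registry.opt_out(b"an artist's illustration")  # the creator opts out
    corpus = [b"a public blog post", b"an artist's illustration"]
    print(len(filter_training_corpus(corpus, registry)))  # 1 sample remains
```

Even this toy version hints at the difficulty: exact fingerprints miss crops, re-encodes, and near-duplicates, one reason observers question whether any such tool can settle the underlying disputes.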

The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.

Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.

While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves unresolved questions about compliance and compensation. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager—if it ever launches—remain uncertain in the face of an evolving IP landscape.

Elon Musk’s regulatory challenges and potential influence under Trump’s presidency

In the final days of Joe Biden’s US presidency, the SEC pressured Elon Musk to settle allegations of securities violations related to his 2022 Twitter takeover or face civil charges. Musk’s response, shared via social media, included a legal letter accusing the SEC of an ‘improperly motivated’ ultimatum and demanding to know if the White House had influenced the action. Both the SEC and White House declined to comment.

As Donald Trump prepares to take office, questions arise about how Musk’s ties to the incoming administration could impact ongoing federal investigations into his business ventures, including Tesla, SpaceX, and Neuralink. Sources reveal at least 20 active probes into issues ranging from Tesla’s driver-assistance systems to SpaceX’s environmental practices.

Critics warn that Trump’s administration might scale back regulatory scrutiny, while legal experts argue that evidence-based cases could still proceed regardless of Musk’s political connections. Musk’s proximity to Trump has intensified since the election, with Musk participating in high-profile meetings and being appointed to co-lead a government efficiency initiative.

Musk has openly discussed using his position to push policies that could benefit his businesses, such as easing driverless-vehicle regulations. Meanwhile, ongoing investigations, including those by the DOJ and the National Highway Traffic Safety Administration, face uncertainties over enforcement under the new administration.

Musk’s business dealings, including contacts with foreign leaders and regulatory disputes, continue to draw attention. Despite allegations of political interference, agencies like the EPA and NASA have emphasised their commitment to legal responsibilities. However, critics fear that Musk’s influence could undermine the integrity of federal oversight during Trump’s second term.

Albania’s TikTok ban: Balancing youth protection with free speech and economic impact

In Tirana, Albania, Ergus Katiaj, a small business owner who relies on TikTok to market his nighttime delivery service for snacks, cigarettes, and alcohol, faces an uncertain future. The Albanian government has announced a year-long ban on the social media platform, a move aimed at curbing youth violence.

The ban follows a tragic incident in November where a 14-year-old boy was fatally stabbed, reportedly after an online clash with a peer. Prime Minister Edi Rama said the decision, announced on 21 December, is intended to protect young people, but critics argue it threatens free speech and commerce ahead of the May elections.

The ban aligns Albania with a growing list of countries imposing restrictions on TikTok due to concerns over harmful content and its ties to China-based parent company ByteDance. However, business owners like Katiaj fear significant financial losses, as TikTok has been a vital tool for free marketing.

Rights groups and opposition leaders, such as Arlind Qori of the Bashke party, worry the ban sets a troubling precedent for political censorship, particularly in a country where protests against the jailing of political opponents were met with harsh government responses last year.

TikTok has called for urgent clarification from the Albanian government, asserting that reports indicate the videos linked to the tragic incident were uploaded to another platform. Meanwhile, the debate continues, with some viewing the ban as a protective measure for youth and others as an overreach limiting commerce and dissent.

For many, like Katiaj, the ban underscores the broader challenges of balancing public safety with democratic freedoms in Albania.

Malaysia tightens social media oversight with new licensing law

Malaysia’s communications regulator has granted licenses to Tencent’s WeChat and ByteDance’s TikTok under a new social media law designed to combat rising cybercrime. The law, effective from 1 January, mandates that platforms and messaging services with over 8 million users in Malaysia must obtain a license or face legal consequences.

While messaging app Telegram is close to completing the licensing process, Meta Platforms, the owner of Facebook, Instagram, and WhatsApp, has only just begun its compliance steps. Other major platforms also face scrutiny under the law. X, formerly known as Twitter, claims its user base in Malaysia falls below the 8 million threshold, a claim currently under review by the authorities.

Alphabet’s YouTube has not applied for a license, citing concerns about how the law applies to its video-sharing features. The regulator emphasised that non-compliance could lead to investigations and regulatory actions.

The move follows a surge in harmful online content earlier this year, prompting Malaysian authorities to urge tighter monitoring from social media companies. Content related to online scams, child exploitation, cyberbullying, and sensitive topics such as race, religion, and royalty is classified as harmful.

Platforms like TikTok, Facebook, and YouTube reportedly have millions of active users in Malaysia. TikTok alone counts more than 28 million users aged 18 and above, underscoring the high stakes of regulatory compliance in the country.

California’s ban on addictive feeds for minors upheld

A federal judge has upheld California’s law, SB 976, which restricts companies from serving addictive content feeds to minors. The decision allows the legislation to take effect, marking a significant shift in how social media platforms operate in the state.

Companies must now ensure that addictive feeds, defined as algorithms recommending content based on user behaviour rather than explicit preferences, are not shown to minors without parental consent. By 2027, businesses will also need to implement age assurance techniques, such as age estimation models, to identify underage users and tailor their feeds accordingly.
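As a rough illustration of the compliance logic the law implies, the hedged Python sketch below gates which feed a signed-in user receives. The User fields and feed functions are hypothetical placeholders, not any platform’s real API, and real systems would hinge on how age and consent are actually verified.

```python
# Hypothetical sketch only: gating feed types for minors under a rule like
# SB 976. The User fields and feed functions are illustrative, not any
# platform's real system.
from dataclasses import dataclass


@dataclass
class User:
    age: int                # e.g. taken from an age-assurance estimate
    parental_consent: bool  # verifiable parental consent on file


def behavioural_feed() -> str:
    """Feed ranked on inferred engagement signals ('addictive' under the law)."""
    return "engagement-ranked recommendations"


def explicit_preference_feed() -> str:
    """Feed limited to accounts and topics the user explicitly chose."""
    return "chronological posts from followed accounts"


def select_feed(user: User) -> str:
    """Serve minors the behavioural feed only with parental consent."""
    if user.age < 18 and not user.parental_consent:
        return explicit_preference_feed()
    return behavioural_feed()


if __name__ == "__main__":
    print(select_feed(User(age=15, parental_consent=False)))  # explicit feed
    print(select_feed(User(age=34, parental_consent=False)))  # behavioural feed
```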

The tech industry group NetChoice, representing firms like Meta, Google, and X, attempted to block the law, citing First Amendment concerns. While the judge dismissed their challenge to the addictive feeds provision, certain aspects of the law, such as limits on nighttime notifications for minors, were blocked.

This ruling marks a notable step in California’s efforts to regulate the digital landscape and protect younger users from potentially harmful online content.

TikTok fined in Russia for legal violations

A Moscow court has fined TikTok three million roubles (around $28,930) for failing to meet Russian legal requirements. The court’s press service confirmed the verdict but did not elaborate on the specific violation.

The social media platform, owned by ByteDance, has been facing increasing scrutiny worldwide. Allegations of non-compliance with legal frameworks and security concerns have made headlines in multiple countries.

TikTok encountered further setbacks recently, including a year-long ban in Albania last December. Canadian authorities also ordered the company to halt operations, citing national security threats.

The fine in Russia reflects the mounting regulatory challenges for TikTok as it navigates stricter oversight in various regions.

AI model Aitana takes social media by storm

In Barcelona, a pink-haired 25-year-old named Aitana captivates social media with her stunning images and relatable personality. But Aitana isn’t a real person—she’s an AI model created by The Clueless Agency. Launched during a challenging period for the agency, Aitana was designed as a solution to the unpredictability of working with human influencers. The virtual model has proven successful, earning up to €10,000 monthly by featuring in advertisements and modelling campaigns.

Aitana has already amassed over 343,000 Instagram followers, with some celebrities unknowingly messaging her for dates. Her creators, Rubén Cruz and Diana Núñez, maintain her appeal by crafting a detailed ‘life,’ including fictional trips and hobbies, to connect with her audience. Unlike traditional models, Aitana has a defined personality, presented as a fitness enthusiast with a determined yet caring demeanour. This strategic design, rooted in current trends, has made her a relatable and marketable figure.

The success of Aitana has sparked a new wave of AI influencers. The Clueless Agency has developed additional virtual models, including a more introverted character named Maia. Brands increasingly seek these customisable AI creations for their campaigns, citing cost efficiency and the elimination of human unpredictability. However, critics warn that the hypersexualised and digitally perfected imagery promoted by such models may negatively influence societal beauty standards and young audiences.

Despite these concerns, Aitana represents a broader shift in advertising and social media. By democratising access to influencer marketing, AI models like her offer new opportunities for smaller businesses while challenging traditional notions of authenticity and influence in the digital age.