Public invited to comment on FTC’s Big Tech probe

The US Federal Trade Commission has launched an inquiry into tech companies’ content moderation policies and decisions to ban users. FTC Chairman Andrew Ferguson stated that such actions could amount to censorship and potentially breach the law.

Concerns have been raised about whether platforms misled users or unfairly suppressed conservative voices. Ferguson previously suggested that advertisers may have coordinated to withdraw spending from sites like Elon Musk’s X due to content concerns.

Unclear moderation policies could violate laws against deceptive business practices or stem from anticompetitive behaviour. The FTC is now seeking public input, with online comments open until 21 May.

For more information on these topics, visit diplomacy.edu.

Telstra faces penalties after broadband speed ruling

Australia’s Federal Court has ruled that telecom giant Telstra misled customers over the downgrading of upload speeds on its broadband plans. The Australian Competition & Consumer Commission (ACCC) initiated legal action in December 2022, accusing Telstra of downgrading the upload speeds for nearly 9,000 customers in 2020 without informing them or adjusting charges accordingly.

The ACCC argued that Telstra’s failure to notify customers deprived them of the chance to decide whether the altered service met their needs. The regulator is seeking penalties, compensation for affected customers, and other measures, with a final decision to be made by the court later.

Telstra expressed disappointment in the ruling but acknowledged the court’s decision. A spokesperson said the company would review the judgment before deciding on further steps.

Brazil slaps X with $1.42 million fine for noncompliance

Brazil’s Supreme Court Justice Alexandre de Moraes has fined social media platform X, owned by Elon Musk, 8.1 million reais ($1.42 million) for failing to comply with judicial orders. The ruling, made public on Thursday, stems from a 2023 legal case in which the court instructed X to remove a profile spreading misinformation and to provide the user’s registration data.

X’s failure to meet these demands resulted in a daily fine of 100,000 reais, and the company’s local legal representative faced potential criminal liability. The court order required the immediate payment of the fine, citing the platform’s noncompliance. X’s legal team in Brazil has not commented on the matter.

In 2024, X faced a month-long suspension in Brazil for not adhering to court orders related to hate speech moderation and for failing to designate a legal representative in the country, as mandated by law.

Australian kids overlook social media age checks

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms. The findings come ahead of a government ban, set to take effect at the end of 2025, that will prevent children under the age of 16 from using these platforms. The report highlights data from a national survey on social media use among 8- to 15-year-olds and feedback from eight major services, including YouTube, Facebook, and TikTok.

The report shows that 80% of Australian children aged 8 to 12 were using social media in 2024, with YouTube, TikTok, Instagram, and Snapchat being the most popular platforms. While most platforms, except Reddit, require users to enter their date of birth during sign-up, the report indicates that these systems rely on self-declaration, which can be easily manipulated. Reflecting these weak checks, 95% of teens under 16 were found to be active on at least one of the platforms surveyed.

While some platforms, such as TikTok, Twitch, and YouTube, have introduced tools to proactively detect underage users, others have not fully implemented age verification technologies. YouTube remains exempt from the upcoming ban, allowing children under 13 to use the platform with parental supervision. However, eSafety Commissioner Julie Inman Grant stressed that there is still significant work needed to enforce the government’s minimum age legislation effectively.

The report also noted that most of the services surveyed had conducted research to improve their age verification processes. However, as the law approaches, there are increasing calls for app stores to take greater responsibility for enforcing age restrictions.

Stricter rules for WhatsApp after EU designation

WhatsApp has officially met the threshold set by the EU’s Digital Services Act (DSA), marking its designation as a Very Large Online Platform.

The messaging app, owned by Meta Platforms, reported an average of 46.8 million monthly users in the EU during the latter half of 2024, surpassing the 45-million-user threshold established by the DSA.

The new classification requires WhatsApp to strengthen efforts in tackling illegal and harmful online content.

The platform must assess systemic risks relating to public security, fundamental rights, and the protection of minors within four months to comply with the DSA. Violations could result in fines reaching up to 6% of global annual revenue.

Meta’s Instagram and Facebook are already subject to the same rules. While complying with the stricter regulations, Meta leadership, including Mark Zuckerberg, has expressed concerns about the growing impact of EU tech laws.

iPhone 16e features Apple-designed C1 subsystem

Apple has introduced its first custom-designed modem chip, marking a significant step towards reducing reliance on Qualcomm. The new chip, a part of Apple’s C1 subsystem, debuts in the $599 iPhone 16e and will eventually be integrated across other products.

The C1 subsystem includes advanced components like processors and memory, offering better battery life and enhanced artificial intelligence features.

Apple has ensured the modem is globally compatible, testing it with 180 carriers in 55 countries. Executives highlight its ability to prioritise network traffic for smoother performance, setting it apart from competitors.

Modem development is highly complex, with few companies achieving global compatibility. Apple previously relied on Qualcomm but resolved to design its own platform after legal disputes and challenges with alternative suppliers.

The C1 subsystem represents Apple’s strategy to tightly integrate modem technology with its processors for long-term product differentiation.

Apple’s senior hardware executives described the C1 as their most complex creation, combining cutting-edge chipmaking techniques. The new platform underscores Apple’s focus on control and innovation in core technologies.

Trump discusses TikTok sale with China

President Donald Trump confirmed on Wednesday that he was in active discussions with China over the future of TikTok, as the US seeks to broker a sale of the popular app. Speaking to reporters aboard Air Force One, Trump revealed that talks were ongoing, underscoring the US government’s desire to address national security concerns tied to the app’s ownership by the Chinese company ByteDance. The move comes amid growing scrutiny over TikTok’s data security practices and potential links to the Chinese government.

The Trump administration has expressed concerns that TikTok could be used to collect sensitive data on US users, raising fears about national security risks. As a result, the US has been pushing for ByteDance to sell TikTok’s US operations to an American company. This would be part of an effort to reduce any potential influence from the Chinese government over the app’s data and operations. However, the process has faced complexities, with discussions involving multiple stakeholders, including potential buyers.

While the negotiations continue, the future of TikTok remains uncertain. If a sale is not agreed upon, the US has indicated that it could pursue further action, including a potential ban of the app. The outcome could have significant implications for TikTok’s millions of American users and its business operations in the US, with both sides seeking a solution that addresses the security concerns while allowing the app to continue operating.

Two charged after pensioner loses over £100,000 in cryptocurrency fraud

Two men have been charged in connection with a cryptocurrency fraud that saw a 75-year-old man from Aberdeenshire lose more than £100,000. The case, reported to police in July, led to an extensive investigation by officers from the North East Division CID.

Following inquiries, officers travelled to Coventry and Mexborough on Tuesday, working alongside colleagues from West Midlands Police and South Yorkshire Police.

The coordinated operation resulted in the arrests of two men, aged 36 and 54, who have now been charged in relation to the fraud allegations.

Police have not yet disclosed details of how the scam was carried out, but cryptocurrency frauds often involve fake investment schemes, phishing scams, or fraudulent trading platforms that lure victims into handing over money with promises of high returns.

Many scams also exploit a lack of regulation in the digital currency sector, making it difficult for victims to recover lost funds.

Authorities have urged the public to remain vigilant and report any suspicious financial activity, particularly scams involving cryptocurrencies.

Lawyers warned about AI misuse in court filings

Warnings about AI misuse have intensified after lawyers from Morgan & Morgan faced potential sanctions for using fake case citations in a lawsuit against Walmart.

The firm’s urgent email to over 1,000 attorneys highlighted the dangers of relying on AI tools, which can fabricate legal precedents and jeopardise professional credibility. A lawyer in the Walmart case admitted to unintentionally including AI-generated errors in court filings.

Courts have seen a rise in similar incidents, with at least seven cases in recent years in which lawyers faced disciplinary action for submitting false AI-generated information. Prominent examples include fines and mandatory training for lawyers in Texas and New York who cited fictitious cases in legal disputes.

Legal experts warn that while AI tools can speed up legal work, they require rigorous oversight to avoid costly mistakes.

Ethics rules demand lawyers verify all case filings, regardless of AI involvement. Generative AI, such as ChatGPT, creates risks by producing fabricated data confidently, sometimes referred to as ‘hallucinations’. Experts point to a lack of AI literacy in the legal profession as the root cause, not the technology itself.

Advances in AI continue to reshape the legal landscape, with many firms adopting the technology for research and drafting. However, mistakes caused by unchecked AI use underscore the importance of understanding its limitations.

Acknowledging this issue, law schools and organisations are urging lawyers to approach AI cautiously to maintain professional standards.

EU delays AI liability directive due to stalled negotiations

The European Commission has removed the AI Liability Directive from its 2025 work program due to stalled negotiations, though lawmakers in the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) have voted to continue working on the proposal. A spokesperson confirmed that IMCO coordinators will push to keep the directive on the political agenda, despite the Commission’s plans to withdraw it. The Legal Affairs committee has yet to make a decision on the matter.

The AI Liability Directive, proposed in 2022 alongside the EU’s AI Act, aimed to address the potential risks AI systems pose to society. While some lawmakers, such as German MEP Axel Voss, criticised the Commission’s move as a ‘strategic mistake’, others, like Andreas Schwab, called for more time to assess the impact of the AI Act before introducing separate liability rules.

The proposal’s withdrawal has sparked mixed reactions within the European Parliament. Some lawmakers, like Marc Angel and Kim van Sparrentak, emphasised the need for harmonised liability rules to ensure fairness and accountability, while others expressed concern that such rules might not be needed until the AI Act is fully operational. Consumer groups welcomed the proposed legislation, while tech industry representatives argued that liability issues were already addressed under the revamped Product Liability Directive.
