EU moves to resolve dispute with India over ICT tariffs

The European Union is addressing a trade dispute with India over tariffs on ICT goods. India has effectively blocked the World Trade Organization (WTO) process by appealing a panel report favourable to the EU to the non-functional WTO Appellate Body, stalling the resolution. India has also rejected alternative dispute resolution methods, such as ad hoc appeal arbitration or a mutually agreed solution.

In response, the EU is turning to its Enforcement Regulation, which allows it to enforce international trade obligations when dispute settlement is blocked, ensuring that WTO rules are respected. The EU has launched a consultation for concerned entities, with responses due by 10 February 2025, to guide decisions on potential commercial policy measures should a mutually satisfactory solution not be reached.

At the same time, the EU continues to seek a resolution through alternative means, inviting India to join the Multi-Party Interim Appeal Arrangement (MPIA) or agree to ad hoc appeal arbitration. The dispute began in 2014, when India imposed customs duties of up to 20% on various ICT products, duties the EU argues violate India’s WTO commitment to apply a zero-duty rate.

In 2019, the EU initiated WTO dispute settlement proceedings, and in April 2023 a WTO panel ruled in favour of the EU, confirming that India’s tariffs were inconsistent with WTO rules. India appealed the ruling in December 2023, prolonging the dispute.

Global stakeholders chart the course for digital governance at the IGF in Riyadh

At the Internet Governance Forum (IGF) in Riyadh, Saudi Arabia, global digital governance was the focus of a key discussion moderated by Timea Suto, which gathered experts to tackle challenges in AI, data management, and internet governance. Speakers emphasised balancing innovation with regulatory consistency, while highlighting the need for inclusive frameworks that address societal biases and underrepresented voices.

Thomas Schneider of Switzerland’s Federal Office of Communications (OFCOM) underscored the Council of Europe’s AI convention as a promising standard for global interoperability. Meta’s Flavia Alves advocated for open-source AI to drive global collaboration and safer products. Meanwhile, Yoichi Iida from Japan’s Ministry of Internal Affairs and Communications outlined the G7 Hiroshima AI code of conduct as an international step forward, while audience members raised concerns about dataset biases.

Data governance discussions focused on privacy and trust in cross-border flows. Maarit Palovirta of Connect Europe called for harmonised regulations to protect privacy while fostering innovation. Yoichi Iida highlighted OECD initiatives on trusted data sharing, with Amr Hashem of the GSMA stressing the need to develop infrastructure alongside governance, particularly in underserved regions.

The future of internet governance also featured prominently, with Irina Soeffky from Germany’s Digital Ministry reinforcing the multi-stakeholder model amid calls to update WSIS structures. Audience member Bertrand de La Chapelle proposed reforming the Internet Governance Forum to reflect current challenges. Jacques Beglinger of EuroDIG stressed the importance of grassroots inclusion, while Desiree Milosevic-Evans highlighted gender representation gaps in governance.

Canada’s Larisa Galadza framed the coming year as critical for advancing the Global Digital Compact, with priorities on AI governance under Canada’s G7 presidency. Maria Fernanda Garza of the International Chamber of Commerce (ICC) called for alignment in governance while maintaining flexibility for local needs amid ongoing multilateral challenges.

Speakers concluded that collaboration, inclusivity, and clear mandates are key to shaping effective digital governance. As technological change accelerates, the dialogue reinforced the need for adaptable, action-oriented strategies to ensure equity and innovation globally.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Social media fine plan dropped in Australia

Australia’s government has abandoned a proposal to fine social media platforms up to 5% of their global revenue for failing to curb online misinformation. The decision follows resistance from various political parties, making the legislation unlikely to pass the Senate.

Communications Minister Michelle Rowland stated the proposal aimed to enhance transparency and hold tech companies accountable for limiting harmful misinformation online. Despite broad public support for tackling misinformation, opposition from conservative and crossbench politicians stalled the plan.

The centre-left Labor government, currently lagging in polls, faces criticism for its approach. Greens senator Sarah Hanson-Young described the proposed law as a ‘half-baked option,’ adding to calls for more robust measures against misinformation.

Industry group DIGI, whose members include Meta, argued that the proposal merely reinforced an existing code. Australia’s tech regulation efforts form part of broader concerns about foreign platforms undermining national sovereignty.

Social media blamed for fuelling UK unrest, Ofcom finds

Ofcom has linked the violent unrest in England and Northern Ireland during the summer to the rapid spread of harmful content on social media platforms. The media regulator found that disinformation and illegal posts circulated widely online following the Southport stabbings in July, which sparked the disorder.

While some platforms acted swiftly to remove inflammatory content, others were criticised for uneven responses. Experts highlighted the significant influence of social media in driving divisive narratives during the crisis, with some calling for platforms to be held accountable for unchecked dangerous content.

Ofcom, which has faced criticism for its handling of the situation, argued that its enhanced powers under the Online Safety Act were not yet in force at the time. Once implemented, the legislation will introduce stricter responsibilities for tech firms in tackling harmful content and disinformation.

The unrest, the worst seen in the United Kingdom in a decade, resulted in arrests and public scrutiny of tech platforms. A high-profile row erupted between the Prime Minister and Elon Musk after the billionaire suggested that civil war was inevitable following the disorder, a claim strongly rebuked by Sir Keir Starmer.

Google DeepMind’s AI may ease culture war tensions, say researchers

A new AI tool created by Google DeepMind, called the ‘Habermas Machine,’ could help reduce culture war divides by mediating between different viewpoints. The system takes individual opinions and generates group statements that reflect both majority and minority perspectives, aiming to foster greater agreement.

Developed by researchers including Professor Chris Summerfield of the University of Oxford, the system has been tested in the United Kingdom with more than 5,000 participants. The AI-generated statements were often rated higher in clarity and quality than those written by human mediators, and they increased group consensus by eight percentage points on average.

The Habermas Machine was also used in a virtual citizens’ assembly on topics such as Brexit and universal childcare. It was able to produce group statements that acknowledged minority views without marginalising them, but the AI approach does have its critics.

Some researchers argue that AI-mediated discussions don’t always promote empathy or give smaller minorities enough influence in shaping the final statements. Despite these concerns, AI-assisted mediation remains a promising avenue for resolving social disagreements.

Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a ‘D’ grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause ‘irreparable harm’ to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is widespread support for the view that using AI is not plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

Independent body in Ireland empowers EU social media users to challenge content moderation decisions

A new independent body in Ireland will allow social media users in the European Union to challenge content moderation decisions made by platforms like Facebook, TikTok, and YouTube. Established under the EU Digital Services Act (DSA), this Appeals Centre aims to provide users with an alternative to the courts when disputing content decisions. Supported by Meta’s Oversight Board Trust and certified by Ireland’s media regulator, the centre is expected to begin operations by the end of the year. It will expand to include more platforms over time.

Thomas Hughes, CEO of the Appeals Centre, emphasised the body’s independence from governments and companies, ensuring that social media content policies are applied fairly. The centre’s team of experts will review cases within 90 days to determine if the platforms’ actions align with their stated policies. The European Commission has expressed support for the initiative, with spokesperson Thomas Regnier highlighting the importance of uniform development across the EU to strengthen online user rights.

Located in Dublin, the Appeals Centre will operate on a funding model that charges social media companies a fee for each case, while users pay a nominal fee that is refunded if their appeal succeeds. However, platforms are not obligated to participate, as the centre lacks the power to enforce binding settlements. The centre will be governed by a board of seven non-executive directors.

HP seeks billions in damages from Mike Lynch’s estate in UK court case

Hewlett Packard announced it will continue legal proceedings in the United Kingdom to claim up to $4 billion in damages from the estate of British billionaire Mike Lynch. The case stems from HP’s 2011 acquisition of British software company Autonomy, a deal later marred by allegations of fraud. Lynch, who co-founded Autonomy, had been accused of inflating the company’s value ahead of the $11.1 billion deal but denied any wrongdoing.

In 2022, HP won a civil case against Lynch, though a UK High Court judge ruled that damages would be less than the $5 billion HP initially sought. Despite Lynch’s death in August, when his yacht sank off Sicily, the company remains committed to pursuing the legal process. HP maintains that the fraud caused substantial financial losses and is seeking compensation from Lynch’s estate.

Lynch’s family has not issued a statement following HP’s latest announcement. HP had originally filed the lawsuit against both Lynch and Sushovan Hussain, Autonomy’s former chief financial officer, over the accounting scandal discovered in 2012.

The legal battle is one of the largest corporate fraud cases in the UK, centring on HP’s claim that it was misled during one of the country’s biggest tech deals. HP is determined to see the case through to its conclusion.

Chinese AI companies react to OpenAI block with SenseNova 5.5

At the recent World AI Conference in Shanghai, SenseTime introduced its latest model, SenseNova 5.5, showcasing capabilities comparable to OpenAI’s GPT-4o. This unveiling coincided with OpenAI’s decision to block its services in China, leaving developers scrambling for alternatives.

OpenAI’s move, effective from 9 July, blocks API access from regions where it does not officially support its service, impacting Chinese developers who had relied on its tools via virtual private networks. The decision, taken amid US-China technology tensions, underscores broader concerns about global access to AI technologies.

The ban has prompted Chinese AI companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to offer incentives, including free tokens and migration services, to lure former OpenAI users. Analysts suggest this could accelerate China’s AI development, challenging US dominance in generative AI technologies.

The development has sparked mixed reactions in China, with some viewing it as a spur to domestic AI independence amid geopolitical pressures. However, it also highlights challenges in China’s AI industry, such as reliance on US semiconductors, which constrains the capabilities of models like Kuaishou’s.

ACCC accepts Telstra and Optus commitments amid Google search investigation

The Australian Competition and Consumer Commission (ACCC) has reached agreements with Telstra and Optus regarding Google’s search services, following an investigation into potential anticompetitive practices. The ACCC found that Google has had arrangements with Telstra and Optus since at least 2017 ensuring its search services were pre-installed as the default on Android devices supplied by these telecom companies. Such agreements restrict competition by limiting the visibility of rival search engines.

Telstra and Optus have cooperated with the ACCC and agreed that, as of 30 June 2024, they will not renew or enter into any new agreements with Google that mandate its search services as the exclusive default option on devices they distribute. These undertakings aim to promote competition and consumer choice in Australia’s digital market.

ACCC Commissioner Liza Carver emphasised the importance of these undertakings in enhancing consumer choice and fostering competition in digital platforms. She noted that practices such as exclusivity agreements can stifle innovation and limit options for consumers, highlighting the need for digital platforms to adhere to Australia’s competition laws.

The ACCC’s broader investigation into Google’s practices continues, focusing on potential competition concerns raised by these agreements and their impact on the digital economy. The commission plans to submit further reports on its findings, including recommendations for regulatory reforms aimed at promoting fair competition among digital platforms in Australia.