European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the Parliament’s position would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour MPs press Starmer to consider UK under-16s social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.

The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.


Finnish data breach exposed therapy records of thousands of patients

A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.

Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.

The breach triggered the largest criminal investigation in Finnish history and prompted emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.

Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.


Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (Regulation 2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how such regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.


Brazil exempted from WhatsApp ban on rival AI chatbots

WhatsApp has exempted Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, during which chatbot developers must wind down their services, halt responses and notify users that the services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on infrastructure designed for business messaging, not for serving as an open distribution platform for AI services.


Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.


Qalb brings Urdu-language AI to Pakistan

Pakistan has launched its own Urdu-focused generative AI model, Qalb, trained on 1.97 billion tokens and evaluated across more than seven international benchmarking frameworks. The developers say the model outperforms existing Urdu-language systems on key real-world performance indicators.

With Urdu spoken by over 230 million people worldwide, Qalb aims to expand access to advanced AI tools in Pakistan’s national language. The model is designed to support local businesses, startups, education platforms, digital services, and voice-based AI agents.

Qalb was developed by a small team led by Taimoor Hassan, a serial entrepreneur who has launched and exited multiple startups and previously won the Microsoft Cup. He completed his undergraduate studies in computer science in Pakistan and is currently pursuing postgraduate education in the United States.

‘I had the opportunity to contribute in a small way to a much bigger mission for the country,’ Hassan said, noting that the project was built with his former university teammates Jawad Ahmed and Muhammad Awais. The group plans to continue refining localised AI models for specific industries.

The launch of Qalb highlights how smaller teams can develop advanced AI tools outside major technology hubs. Supporters say Urdu-first models could help drive innovation across Pakistan’s digital economy.


Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.


Internet access suspended in Uganda before presidential vote

Uganda’s communications regulator has ordered a nationwide internet shutdown ahead of Thursday’s general election. The move is intended to prevent misinformation, electoral fraud, and incitement to violence.

The shutdown was due to begin at 18:00 local time on Tuesday, with no end date specified. Mobile data users in Uganda reported losing access, while some business networks, including hotels, remained connected. Voice calls and basic SMS services were expected to continue operating.

The regulator said it was acting on recommendations from security agencies, including the army and police. In a letter to operators, it described the suspension as a precautionary measure to protect national stability during what it called a sensitive national exercise.

Uganda imposed a similar internet blackout during the 2021 election, which was followed by protests in which dozens of people were killed. Earlier this month, the commission had dismissed reports of another shutdown as rumours, saying it aimed to guarantee uninterrupted connectivity.

President Yoweri Museveni, 81, is seeking a seventh term against opposition challenger Bobi Wine, 43, whose real name is Robert Kyagulanyi. Wine criticised the internet suspension and urged supporters to use Bluetooth-based messaging apps, though authorities warned those could also be restricted.


X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.
