India has introduced strict new rules for social media platforms in an effort to curb the spread of AI-generated and deepfake material.
Platforms must clearly label synthetic content and remove flagged posts within three hours. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.
Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.
The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.
Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.
Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Slovenia’s Prime Minister Robert Golob outlined the country’s AI ambitions in a presentation at the Jožef Stefan Institute, highlighting Slovenia’s growing role in scientific research and technological innovation.
He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.
He called for deeper cooperation between engineering and the natural sciences rather than isolated efforts, while stressing that the social sciences and humanities must also be involved to secure balanced development.
Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.
Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.
Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.
Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.
Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.
Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US AI company Anthropic’s expansion into India has triggered a legal dispute with a Bengaluru-based software firm that claims it has used the name ‘Anthropic’ since 2017. The Indian company argues that the US AI firm’s market entry has caused customer confusion. It is seeking recognition of prior use and damages of ₹10 million.
A commercial court in Karnataka has issued notice and suit summons to Anthropic but declined to grant an interim injunction. Further hearings are scheduled. The local firm says it prefers coexistence but turned to litigation due to growing marketplace confusion.
The dispute comes as India becomes a key growth market for global AI companies. Anthropic recently announced local leadership and expanded operations in the country. India’s large digital economy and upcoming AI industry events reinforce its strategic importance.
The case also highlights broader challenges linked to the rapid global expansion of AI firms. Trademark protection, brand due diligence, and regulatory clarity are increasingly central to cross-border digital market entry.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly before the European courts. Judges confirmed that firms may seek annulment when a decision affects them directly, rather than relying solely on national procedures.
The ruling reshapes how companies defend their interests under the GDPR framework.
The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.
European regulators had argued that only national authorities were the formal recipients of such decisions, but the court found that companies should be granted standing when their commercial rights are at stake.
By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.
Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.
Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.
The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.
The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.
The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Dutch MPs have renewed calls for companies and public services in the Netherlands to reduce reliance on US-based cloud servers. The move reflects growing concern over data security and foreign access to Dutch data.
Research by NOS found that two-thirds of essential service providers in the country rely on at least one US cloud server, leaving local councils, health insurers and hospitals heavily exposed.
Concerns intensified following a proposed sale of Solvinity, which manages the DigiD system used across the Netherlands. A sale to a US firm could place Dutch data under the US Cloud Act.
Parties including D66, VVD and CDA say critical infrastructure data should be prioritised for protection, while Dutch cloud providers argue that European firms could handle most services if procurement rules changed.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has begun testing advertisements inside ChatGPT for some adult users in the US, marking a major shift for the widely used AI service.
The ads appear only on the Free and Go tiers, while paid plans remain ad-free. OpenAI says responses are unaffected, though critics warn that commercial messaging could blur boundaries over time.
Ads are selected based on conversation topics and prior interactions, prompting concern among privacy advocates. OpenAI says advertisers receive only aggregated data and cannot view conversations.
Industry analysts say the move reflects growing pressure to monetise costly AI infrastructure. Regulators and researchers continue to debate whether advertising can coexist with trust in AI systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.
The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.
The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.
The trial may feature testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.
Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is shaped by many factors, such as academic pressure, socioeconomic challenges and substance use, rather than by social media alone.
Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.
Similar trials are unfolding across the country: New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while a court in Oakland will hear cases brought by school districts.
More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The EU’s ambition to streamline telecom rules faces fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators rather than easing their workload.
A plan intended to simplify long-standing procedures risks becoming more complex as officials examine its impact on oversight bodies.
Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.
Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.
The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.
Warnings point to a rising risk that manipulated content could reduce vaccine uptake and undermine informed public debate.
Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.
Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.
EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!