Indonesia fines Platform X for pornographic content violations

Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.

The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a Singapore-based representative to complete the process.

The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.

All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.

Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

The researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment-design advice, while others discourage the use of AI-detection tools, citing concerns about reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Conduit revolutionises neuro-language research with 10,000-hour dataset

The San Francisco start-up Conduit has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the era of generative AI.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President-elect Donald Trump signals potential attempts to limit state-level AI regulation.

OpenAI launches GPT‑5.2 for professional knowledge work

OpenAI has introduced GPT‑5.2, its most advanced model series to date, designed to enhance professional knowledge work. Users report significant time savings, with daily reductions of 40-60 minutes and more than 10 hours per week for heavy users.

The new model excels at generating spreadsheets, presentations, and code, while also handling complex, multi-step projects with improved speed and accuracy.

Performance benchmarks show GPT‑5.2 surpasses industry professionals on GDPval tasks across 44 occupations, producing outputs over eleven times faster and at a fraction of the cost.

Coding abilities have also reached a new standard, encompassing debugging, refactoring, front-end UI work, and multi-language software engineering tasks, providing engineers with a more reliable daily assistant.

GPT‑5.2 Thinking improves long-context reasoning, vision, and tool-calling capabilities. It accurately interprets long documents, charts, and graphical interfaces while coordinating multi-agent workflows.

The model also demonstrates enhanced factual accuracy and fewer hallucinations, making it more dependable for research, analysis, and decision-making.

The rollout includes ChatGPT Instant, Thinking, and Pro plans, as well as API access for developers. Early tests show GPT‑5.2 accelerates research, solves complex problems, and improves professional workflows, setting a new benchmark for real-world AI tasks.

EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half find that AI brings both opportunities and risks for classrooms, which they believe should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all member states, reflects a growing expectation that digital education should become a central pillar of Europe’s teaching systems, rather than an optional enhancement.

Users gain new control with Instagram feed algorithm

Instagram has unveiled a new AI-powered feature called ‘Your Algorithm’, giving users control over the topics shown in their Reels feed. The tool analyses viewing history and allows users to indicate which subjects they want to see more or less of.

The feature displays a summary of each user’s top interests and allows typing in specific topics to fine-tune recommendations in real time. Instagram plans to expand the tool beyond Reels to Explore and other areas of the app.

The launch began in the US, with a global rollout in English expected soon. The initiative comes amid growing calls for social media platforms to provide greater transparency over algorithmic content and avoid echo chambers.

By enabling users to adjust their feeds directly, Instagram aims to offer more personalised experiences while responding to regulatory pressures and societal concerns over harmful content.

Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot; about three in ten (≈ 30%) report daily use. Among those teens, the leading chatbot is ChatGPT (used by 59%), followed by Gemini (23%) and Meta AI (20%).

The widespread adoption raises growing safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation, or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed, with the families of at least two minors suing AI companies over allegedly harmful advice from chatbots.

Demographic patterns reveal that Black and Hispanic teens report higher daily usage rates (around 33-35%) compared to their White peers (≈ 22%). Daily use is also more common among older teens (15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

Australian families receive eSafety support as the social media age limit takes effect

Australia this week introduced a minimum age requirement of 16 for social media accounts, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to utilise the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.
