Character.AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UAE strengthens digital transformation with Sharjah’s new integration committee

Sharjah is advancing its digital transformation efforts following the issuance of a new decree that established the Higher Committee for Digital Integration. The Crown Prince formed the body to strengthen oversight and guide government entities as the emirate seeks more coordinated progress.

The committee will report directly to the Executive Council and will be led by Sheikh Saud bin Sultan Al Qasimi from the Sharjah Digital Department.

Senior officials from several departments in the UAE will join him to enhance cooperation across the government, rather than leaving agencies to pursue separate digital plans.

Their combined expertise is expected to support stronger governance and reduce risks linked to large-scale transformation.

The committee's mandate covers strategic oversight, approval of key policies, alignment with national objectives and careful monitoring of digital projects.

The members will intervene when challenges arise, oversee investments and help resolve disputes so the emirate can maintain momentum instead of facing delays caused by fragmented decision-making.

Membership runs for two years, with the option of extension. The committee will continue its work until a successor group is formed and will provide regular reports on progress, challenges and proposed solutions to the Executive Council.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI development by Chinese companies shifts abroad

Leading Chinese technology companies are increasingly training their latest AI models outside the country to maintain access to Nvidia’s high-performance chips, according to a report by the Financial Times. Firms such as Alibaba and ByteDance are shifting parts of their AI development to data centres in Southeast Asia, a move that comes as the United States tightens restrictions on advanced chip exports to China.

The trend reportedly accelerated after Washington imposed new limits in April on the sale of Nvidia’s H20 chips, a key component for developing sophisticated large language models. By relying on leased server space operated by non-Chinese companies abroad, tech firms are able to bypass some of the effects of US export controls while continuing to train next-generation AI systems.

One notable exception is DeepSeek, which had already stockpiled a significant number of Nvidia chips before the export restrictions took effect. The company continues to train its models domestically and is now collaborating with Chinese chipmakers led by Huawei to develop and optimise homegrown alternatives to US hardware.

None of Alibaba, ByteDance, Nvidia, DeepSeek or Huawei has commented publicly on the report, and Reuters said it could not independently verify the claims. However, the developments underscore the increasing complexity of global AI competition and the lengths to which companies may go to maintain technological momentum amid geopolitical pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New alliance between Samsung and SK Telecom accelerates 6G innovation

Samsung Electronics and SK Telecom have taken a significant step toward shaping next-generation connectivity after signing an agreement to develop essential 6G technologies.

Their partnership centres on AI-based radio access networks, with both companies aiming to secure an early lead as global competition intensifies.

Research teams from Samsung and SK Telecom will build and test key components, including AI-based channel estimation, distributed MIMO and AI-driven schedulers.

AI models will refine signals in real time to improve accuracy, rather than relying on conventional estimation methods. Meanwhile, distributed MIMO will enable multiple antennas to cooperate for reliable, high-speed communication across diverse environments.
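
Neither company has published implementation details, but the core idea behind AI-based channel estimation can be shown with a toy example: rather than using a conventional estimate as-is, a model is fitted to simulated data and learns to refine it. The sketch below is purely illustrative (a learned linear shrinkage of a least-squares estimate in NumPy) and makes no claim about Samsung's or SK Telecom's actual methods.

```python
# Illustrative sketch only: a data-driven refinement of a conventional
# least-squares (LS) channel estimate, loosely analogous to the idea of
# AI-based channel estimation. Not based on Samsung or SK Telecom internals.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, snr_db=5.0, pilot=1 + 0j):
    """Simulate n flat-fading channels observed through one known pilot symbol."""
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = h * pilot + noise
    h_ls = y / pilot          # conventional least-squares estimate
    return h, h_ls

# "Training": learn a complex weight w that pulls noisy LS estimates toward
# the true channel, i.e. a data-driven linear MMSE-style shrinkage.
h_train, h_ls_train = simulate(100_000)
w = np.vdot(h_ls_train, h_train) / np.vdot(h_ls_train, h_ls_train)

# Evaluation on fresh data: the learned refinement reduces mean-squared error.
h_test, h_ls_test = simulate(100_000)
mse_ls = np.mean(np.abs(h_ls_test - h_test) ** 2)
mse_learned = np.mean(np.abs(w * h_ls_test - h_test) ** 2)
print(f"LS estimator MSE:     {mse_ls:.4f}")
print(f"Learned-refined MSE:  {mse_learned:.4f}")
```

In practice, production systems would replace this scalar shrinkage with trained neural models operating on full pilot patterns, but the principle of learning a refinement from data is the same.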

The companies believe that AI-enabled schedulers and core networks will manage data flows more efficiently as the number of devices continues to rise.

Their collaboration also extends into the AI-RAN Alliance, where a jointly proposed channel estimation technology has already been accepted as a formal work item, strengthening their shared role in shaping industry standards.

Samsung continues to promote 6G research through its Advanced Communications Research Centre, and recent demonstrations at major industry events highlight the growing momentum behind AI-RAN technology.

Both organisations expect their work to accelerate the transition toward a hyperconnected 6G future, rather than allowing competing ecosystems to dominate early development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MEPs call for stronger online protection for children

The European Parliament is urging stronger EU-wide measures to protect minors online, calling for a harmonised minimum age of 16 for accessing social media, video-sharing platforms, and AI companions. Under the proposal, children aged 13 to 16 would only be allowed to join such platforms with their parents’ consent.

MEPs say the move responds to growing concerns about the impact of online environments on young people’s mental health, attention span, and exposure to manipulative design practices.

The report, adopted by a large majority of MEPs, also calls for stricter enforcement of existing EU rules and greater accountability from tech companies. Lawmakers seek accurate, privacy-preserving age verification tools, including the forthcoming EU age-verification app and the European digital identity wallet.

They also propose making senior managers personally liable in cases of serious, repeated breaches, especially when platforms fail to implement adequate protections for minors.

Beyond age limits, Parliament is calling for sweeping restrictions on harmful features that fuel digital addiction. That includes banning practices such as infinite scrolling, autoplay, reward loops, and dark patterns for minors, as well as prohibiting non-compliant websites altogether.

MEPs also want engagement-based recommendation systems and randomised gaming mechanics like loot boxes outlawed for children, alongside tighter controls on influencer marketing, targeted ads, and commercial exploitation through so-called ‘kidfluencing.’

The report highlights growing public concern, as most Europeans view protecting children online as an urgent priority amid rising rates of problematic smartphone use among teenagers. Rapporteur Christel Schaldemose said the measures mark a turning point, signalling that platforms can no longer treat children as test subjects.

‘The experiment ends here,’ she said, urging consistent enforcement of the Digital Services Act to ensure safer digital spaces for Europe’s youngest users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists report receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many are trained primarily on language patterns, resulting in inconsistent decisions even when they are judging images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI clarifies position in sensitive lawsuit

A legal case is underway involving OpenAI and the family of a teenager who had extensive interactions with ChatGPT before his death.

OpenAI has filed a response in court that refers to its terms of use and provides additional material for review. The filing also states that more complete records were submitted under seal so the court can assess the situation in full.

The family’s complaint includes concerns about the model’s behaviour and the company’s choices, while OpenAI’s filing outlines its view of the events and the safeguards it has in place. Both sides present different interpretations of the same interactions, which the court will evaluate.

OpenAI has also released a public statement describing its general approach to sensitive cases and the ongoing development of safety features intended to guide users towards appropriate support.

The case has drawn interest because it relates to broader questions about safety measures within conversational AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use by US immigration agents sparks concern

A US federal judge has condemned immigration agents in Chicago for using AI to draft use-of-force reports, warning that the practice undermines credibility. Judge Sara Ellis noted that one agent fed a short description and images into ChatGPT before submitting the report.

Body camera footage cited in the ruling showed discrepancies between the recorded events and the written narrative. Experts say AI-generated accounts risk inaccuracies in situations where courts rely on an officer's personal recollection to assess reasonableness.

Researchers argue that poorly supervised AI use could erode public trust and compromise privacy. Some warn that uploading images into public tools relinquishes control of sensitive material, exposing it to misuse.

Police departments across the US are still developing policies for safe deployment of generative tools. Several states now require officers to label AI-assisted reports, while specialists call for stronger guardrails before the technology is applied in high-stakes legal settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Industrial sectors push private 5G momentum

Private 5G is often dismissed as too complex or too narrow, yet analysts argue it holds strong potential for mission-critical industries rather than consumer-centric markets.

Sectors that depend on high reliability, including manufacturing, logistics, energy and public safety, find public networks and Wi-Fi insufficient for the operational demands they face. The technology aligns with the rise of AI-enabled automation and may provide growth in a sluggish telecom landscape.

Success depends on the maturity of surrounding ecosystems. Devices, edge computing and integration models differ across industrial verticals, slowing adoption instead of enabling rapid deployment.

The increasing presence of physical AI systems, from autonomous drones to industrial vehicles, makes reliable connectivity even more important.

Debate intensified when Nokia considered divesting its private 5G division, raising doubts about commercial viability, yet industry observers maintain that every market involves unique complexity.

Private 5G extends beyond traditional telecom roles by supporting real-economy sectors such as factories, ports and warehouses. The challenge lies in tailoring networks to distinct operational needs instead of expecting a single solution for all industries.

Analysts also note that inflated expectations in 2019 created a perception of underperformance, although private cellular remains a vital piece in a broader ecosystem involving edge computing, device readiness and software integration.

Long-term outlooks remain optimistic. Analysts project an equipment market worth around $30 billion each year by 2040, supported by strong service revenue. Adoption will vary across industries, but private 5G's influence on public RAN markets is expected to grow.

Despite complexity, interest inside the telecom sector stays high, especially as enterprise venues search for reliable connectivity solutions that can support their digital transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!