The AI chatbot service, Character.ai, has announced that teenagers can no longer chat with its AI characters from 25 November.
Under-18s will instead be limited to generating content such as videos, as the platform responds to concerns over risky interactions and lawsuits in the US.
Character.ai has faced criticism after avatars related to sensitive cases were discovered on the site, prompting safety experts and parents to call for stricter measures.
The company cited feedback from regulators and safety specialists, explaining that AI chatbots can pose emotional risks for young users by feigning empathy or providing misleading encouragement.
Character.ai also plans to introduce new age verification systems and fund a research lab focused on AI safety, alongside enhancing role-play and storytelling features that are less likely to place teens in vulnerable situations.
Safety campaigners welcomed the decision but argued that such preventative measures should have been in place from the start.
Experts warn the move reflects a broader shift in the AI industry, where platforms increasingly recognise the importance of child protection in a landscape transitioning from permissionless innovation to more regulated oversight.
Analysts note the challenge for Character.ai will be maintaining teen engagement without encouraging unsafe interactions.
Separating creative play from emotionally sensitive exchanges is key, and the company’s new approach may signal a maturing phase in AI development, where responsible innovation prioritises the protection of young users.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.
Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.
AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.
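The simulation-first training idea above can be caricatured with a tiny reinforcement-learning sketch: an epsilon-greedy agent learning which of three grasp poses succeeds most often. Everything here (the pose count, the success rates, the learning schedule) is invented for illustration; real pipelines such as Nvidia's Isaac-based training rely on high-fidelity physics simulation and digital twins, not a three-armed bandit.

```python
import random

random.seed(0)

# Hypothetical per-pose success probabilities in a toy "simulator".
success_prob = [0.2, 0.8, 0.4]
q = [0.0, 0.0, 0.0]        # running value estimate per pose
counts = [0, 0, 0]

for step in range(2000):
    if random.random() < 0.1:                    # explore 10% of the time
        a = random.randrange(3)
    else:                                        # otherwise exploit best estimate
        a = max(range(3), key=lambda i: q[i])
    reward = 1.0 if random.random() < success_prob[a] else 0.0
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]          # incremental mean update

best = max(range(3), key=lambda i: q[i])
print(best)  # converges to pose 1, the one with the highest success rate
```

The same explore-then-exploit pattern, scaled up enormously, is what lets simulated policies transfer to the line and keep self-tuning there.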
Texas, US, offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.
Job roles will shift as routine tasks automate and oversight becomes data-driven. Human workers focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’s military has moved from policy to practice, using tools from real-time sensor fusion to predictive maintenance to transform how it fights.
The shift has involved institutional change. India’s Defence AI Council and Defence AI Project Agency (established 2019) are steering an ecosystem that includes labs such as the Centre for Artificial Intelligence & Robotics of the Defence Research and Development Organisation (DRDO).
One recent example is Operation Sindoor (May 2025), a cross-border operation in which AI-driven platforms appeared in roles ranging from intelligence analysis to operational coordination.
This effort signals more than just a technological upgrade. It underscores a shift in warfare logic, where systems of systems, connectivity and rapid decision-making matter more than sheer numbers.
India’s incorporation of AI into capabilities such as drone swarming, combat simulation and logistics optimisation aligns with broader trends in defence innovation and digital diplomacy. The country’s strategy now places AI at the heart of its procurement demands and force design.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.
AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the stack, AI-ready datasets, safety standards, and enforcement of compute protection. Joint metrology and standards work links the US Center for AI Standards and Innovation with the AI Safety Institute of South Korea.
Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.
Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.
Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.
Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.
He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.
Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.
However, he stressed that substantial costs and potential errors prevent the site from entirely relying on generative AI, preferring careful testing before integrating new technologies.
Wales concluded that while AI may mislead the public with plausible but fake content, the Wikipedia community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.
Users can dismiss it or continue, helping to prevent excessive screen time while staying flexible. The update also changes how ChatGPT handles high-stakes personal decisions.
ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.
OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.
The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.
Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.
Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.
Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.
DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.
The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.
The partnership also marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.
By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.
T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.
NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.
These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.
Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in such conversations. Critics argue that even small percentages scale to large numbers at ChatGPT’s size.
A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
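To see why critics say small percentages matter at this scale, a quick back-of-envelope calculation helps. The weekly user count below is an assumption for illustration (OpenAI has publicly cited a figure around 800 million weekly users); the article itself gives only the percentages.

```python
# Assumed weekly active users -- not a figure from this article.
weekly_users = 800_000_000

emergency_share = 0.0007   # 0.07%: possible signs of mania, psychosis or suicidal thoughts
planning_share = 0.0015    # 0.15%: explicit indicators of suicidal planning or intent

emergency_cases = round(weekly_users * emergency_share)
planning_cases = round(weekly_users * planning_share)
print(emergency_cases, planning_cases)  # 560000 1200000
```

Under that assumption, fractions of a percent translate into hundreds of thousands of conversations per week, which is the scale OpenAI's clinician-guided safeguards have to handle.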
More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.
External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.
Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Europol calls for a Europe-wide response to caller ID spoofing, which criminals use to impersonate trusted numbers and commit fraud. The practice causes significant harm, with an estimated €850 million lost yearly.
Organised networks now run ‘spoofing as a service’, impersonating banks, authorities or family members, and even staging so-called swatting incidents by making false emergency calls from a victim’s address. Operating across borders, these groups exploit jurisdictional gaps to avoid detection and prosecution.
A Europol survey across 23 countries found major obstacles to implementing anti-spoofing measures, leaving around 400 million people vulnerable to these scams.
Law enforcement said weak cooperation with telecom operators, fragmented rules and limited technical tools to identify and block spoofed traffic hinder an adequate response.
Europol has put forward several priorities, including setting up EU-wide technical standards to verify caller IDs and trace fraudulent calls, stronger cross-border cooperation among authorities and industry, and regulatory convergence to enable lawful tracing.
The proposals, aligned with the ProtectEU strategy, aim to harden networks while anticipating scammers’ evolving tactics, such as SIM-based scams, anonymous prepaid services and smishing (fraud via fake text messages).
Brussels has begun a phishing awareness campaign alongside enforcement to help users spot and report scams.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!