AI-powered features are being added to the long-running Santa Tracker services used by families on Christmas Eve. Platforms run by NORAD and Google let users follow Father Christmas’s journey and now layer interactive, personalised digital experiences on top of the traditional tracking.
NORAD’s Santa Tracker, first launched in 1955, now features games, videos, music, and stories in addition to its live tracking map. This year, the service introduced AI-powered features that generate elf-style avatars, create toy ideas, and produce personalised holiday stories for families.
The Santa Tracker presents Santa’s journey on a 3D globe built using open-source mapping technology and satellite imagery. Users can also watch short videos on Santa Cam, featuring Santa travelling to destinations around the world.
Google’s Santa Tracker offers similar features, including a live map, estimated arrival times, and interactive activities available throughout December. Santa’s Village includes games, animations, and beginner-friendly coding activities designed for children.
Google Assistant adds a voice-based layer to the service, letting users ask about Santa’s location or receive updates from the North Pole. Both platforms aim to blend tradition with digital tools to create a seamless and engaging holiday experience.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the Marine Biological Laboratory in Massachusetts are using AI and advanced visualisation tools to study how memories are formed in the human brain. Their work focuses on understanding how experiences produce lasting biological changes linked to long-term memory.
The project is led by Andre Fenton of New York University and Abhishek Kumar of the University of Wisconsin–Madison. Using NVIDIA RTX GPUs and HP Z workstations, the team analyses large-scale brain imaging data with custom AI tools and the syGlass virtual reality platform.
The research focuses on the hippocampus, a brain structure critical for memory. Scientists are examining specific protein markers in neurons to reveal how memories are encoded, even though these markers represent only a small fraction of the brain’s overall protein landscape.
High-resolution 3D imaging previously created a major data bottleneck. AI-supported workflows now allow researchers to capture, inspect, and store terabytes of volumetric data, enabling more detailed analysis of brain cell structure and function.
Researchers say understanding memory at a molecular level could support earlier insights into neurological and psychiatric conditions. The tools are also being used for education, allowing students to explore brain data interactively while contributing to ongoing research.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The European Union’s new tax-reporting directive for crypto assets, known as DAC8, takes effect on 1 January. The rules require crypto-asset service providers, including exchanges and brokers, to report detailed user and transaction data to national tax authorities.
DAC8 aims to close gaps in crypto tax reporting, giving authorities visibility over holdings and transfers similar to that of bank accounts and securities. Data collected under the directive will be shared across EU member states, enabling a more coordinated approach to enforcement.
Crypto firms have until 1 July to ensure full compliance, including implementing reporting systems, customer due diligence procedures, and internal controls. After that deadline, non-compliance may result in penalties under national law.
For users, the practical effect of DAC8 is stronger enforcement. Authorities can act on tax avoidance or evasion with support from counterparts in other EU countries, including by seizing or freezing crypto assets held abroad.
The directive operates alongside the EU’s Markets in Crypto-Assets (MiCA) regulation, which focuses on licensing, customer protection, and market conduct, while DAC8 ensures the tax trail is monitored.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.
Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.
The inquiry is expected to examine services from global tech firms, including Google, Microsoft, and OpenAI’s ChatGPT, as well as US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.
The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.
The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.
Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.
The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.
Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.
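The reported design has not been made public, so the modular idea can only be sketched. The minimal Python sketch below is purely illustrative: the Skill fields, the select_skills helper and the naive keyword matching are assumptions made for illustration, not OpenAI’s actual Hazelnut API. It shows only how short descriptions could gate the loading of full instructions and optional executable steps, so that the whole instruction set never occupies the context window at once.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Skill:
    """Hypothetical self-contained ability the model could load on demand."""
    name: str
    description: str               # short text used to decide whether the skill is relevant
    instructions_path: str         # full instructions, read only when the skill is selected
    run: Optional[Callable[[str], str]] = None  # optional executable step for logic-heavy work

def select_skills(task: str, skills: list[Skill]) -> list[Skill]:
    """Naive relevance check: pick every skill whose description overlaps the task.

    Only the short descriptions are scanned here; the full instruction files
    would be loaded only for the selected skills, keeping the context window small.
    """
    return [s for s in skills
            if any(word in task.lower() for word in s.description.lower().split())]

# Example: two hypothetical skills that could be combined automatically for one request.
invoice_skill = Skill(
    name="invoice_parser",
    description="extract totals from invoices",
    instructions_path="skills/invoice_parser.md",
    run=lambda text: f"parsed {text.count('EUR')} line items",  # placeholder logic
)
summary_skill = Skill(
    name="summary",
    description="summarise documents",
    instructions_path="skills/summary.md",
)

print([s.name for s in
       select_skills("summarise these invoices and extract totals",
                     [invoice_skill, summary_skill])])
```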
Industry observers note similarities to Anthropic’s Claude, which already benefits from a skill-like structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.
Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
SK hynix has urged the South Korean government to relax fair trade rules so it can create a special-purpose company to raise funds for major investments. The move comes as the semiconductor firm faces high capital demands amid the global AI boom.
Currently, SK hynix, a second-tier subsidiary of SK Group through SK Square, must retain full ownership when establishing third-tier subsidiaries. The government pledged to cut the ownership requirement to 50 percent, giving chipmakers more flexibility in funding projects.
The company highlighted the rising costs of advanced facilities, noting that a cleanroom at the Yongin semiconductor cluster in 2019 required 7.5 trillion won ($5.14 billion), while the new M15X fabrication plant in 2025 cost around 20 trillion won.
The size and long-term nature of modern semiconductor investments increasingly strain existing methods for raising funds.
SK hynix said letting subsidiaries partner with external investors would ease financial pressure and improve corporate health. The company added that regulatory flexibility is crucial for sustaining investment and competitiveness in a sector marked by high volatility.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain, and Poland.
The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.
Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.
Despite this overall growth, detections fell by 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law enforcement measures. Meta’s ad platforms have earned billions from scam advertising, demonstrating the global reach of Nomani-style fraud.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.
The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.
Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.
The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.
A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.
The AI system achieved area under the curve (AUC) scores above 0.8 in both internal and external tests. Saliency maps showed predictions focused near the diaphragm, where part of the liver appears on chest X-rays. The results suggest that a reliable signal can be extracted from routine imaging.
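The study’s exact pipeline is not reproduced here, but the general pattern it describes, training a deep network on frontal radiographs and reporting AUC on a held-out set, can be sketched. The Python snippet below is a minimal illustration with dummy tensors standing in for labelled X-rays; the ResNet-18 backbone, hyperparameters and single training step are assumptions chosen for brevity, not the published models.

```python
# Minimal sketch: binary hepatic-steatosis classifier on frontal chest X-rays.
# Purely illustrative; not the study's actual architecture or training pipeline.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Placeholder batch standing in for labelled radiographs (real inputs would be
# X-rays resized and normalised, with labels from a liver reference standard).
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,)).float()

# Start from a standard backbone and replace the head with a single logit.
model = models.resnet18(weights=None)  # weights="IMAGENET1K_V1" if downloads are allowed
model.fc = nn.Linear(model.fc.in_features, 1)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# One illustrative training step.
model.train()
logits = model(images).squeeze(1)
loss = loss_fn(logits, labels)
loss.backward()
optimiser.step()

# Evaluation reports AUC, the metric quoted in the study (here on dummy data).
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(images).squeeze(1))
print("AUC:", roc_auc_score(labels.numpy(), probs.numpy()))
```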
Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method adds clinical value without increasing imaging costs or radiation exposure.
Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Use of AI chatbots for everyday tasks, from structuring essays to analysing data, has become widespread. Researchers are increasingly examining whether reliance on such tools affects critical thinking and learning. Recent studies suggest a more complex picture than simple decline.
A study by MIT researchers found reduced cognitive activity among participants who used ChatGPT to write essays. Participants also showed weaker recall than those who completed tasks without AI assistance, raising questions about how learning develops when writing is outsourced.
Similar concerns emerged from studies by Carnegie Mellon University and Microsoft. Surveys of white-collar workers linked higher confidence in AI tools with lower levels of critical engagement, prompting warnings about possible overreliance.
Studies involving students present a more nuanced outcome. Research published by Oxford University Press found that many pupils felt AI supported skills such as revision and creativity. At the same time, some reported that tasks became too easy, limiting deeper learning.
Experts emphasise that outcomes depend on how AI tools are used. Educators argue for clearer guidance, transparency, and further research into long-term effects. Used as a tutor rather than a shortcut, AI may support learning rather than weaken it.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!