EU targets addictive gaming features

Video gaming has grown from a niche hobby into one of Europe’s most prominent entertainment industries, with over half the population playing regularly.

As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.

Much of the concern focuses on loot boxes, in which players pay for randomised digital rewards through mechanics that resemble gambling. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.

The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.

Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.

The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. However, as negotiations continue, the debate over how tightly gaming should be regulated is only just beginning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT-IBM researchers improve large language models with PaTH Attention

Researchers at MIT and the MIT-IBM Watson AI Lab have introduced a new attention mechanism designed to enhance the capabilities of large language models (LLMs) in tracking state and reasoning across long texts.

Unlike traditional positional encoding methods, the PaTH Attention system adapts to the content of words, enabling models to follow complex sequences more effectively.

PaTH Attention models sequences through data-dependent transformations, allowing LLMs to track how meaning changes between words instead of relying solely on relative distance.
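
To make the contrast with distance-only encodings concrete, the toy NumPy sketch below illustrates the general idea of content-dependent position transforms. It is not the authors’ implementation: the projection matrices, the Householder-style construction of each transform and the gating function are all invented for the example.

```python
# Minimal NumPy sketch (not the published implementation) contrasting
# distance-only relative position encoding with a data-dependent scheme in
# the spirit of PaTH Attention: each token contributes a transform derived
# from its own content, and the transform applied between a query and a key
# is accumulated from the tokens in between. All shapes, projections and the
# gate `beta` are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                      # sequence length, head dimension

X  = rng.normal(size=(T, d))     # token representations for one head
Wq = rng.normal(size=(d, d)) / np.sqrt(d)
Wk = rng.normal(size=(d, d)) / np.sqrt(d)
Ww = rng.normal(size=(d, d)) / np.sqrt(d)

Q, K = X @ Wq, X @ Wk

def householder(x):
    """Content-dependent Householder-style transform H = I - 2*beta * w w^T."""
    w = x @ Ww
    w = w / (np.linalg.norm(w) + 1e-8)
    beta = 1.0 / (1.0 + np.exp(-x.mean()))   # toy data-dependent gate in (0, 1)
    return np.eye(d) - 2.0 * beta * np.outer(w, w)

H = [householder(X[t]) for t in range(T)]    # one transform per token

# Causal attention logits: the key at position j is carried to the query at
# position i through the product of the transforms of tokens j+1 .. i, so the
# effective "position encoding" depends on what those tokens actually say.
logits = np.full((T, T), -np.inf)
for i in range(T):
    P = np.eye(d)                            # accumulated transform
    for j in range(i, -1, -1):
        logits[i, j] = Q[i] @ P @ K[j] / np.sqrt(d)
        P = P @ H[j]                         # absorb token j before moving left
    # (a RoPE-style scheme would instead apply a fixed rotation that depends
    #  only on the distance i - j, independent of the tokens in between)

attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn = attn / attn.sum(axis=-1, keepdims=True)
print(np.round(attn, 3))
```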

The approach improves performance on long-context reasoning, multi-step recall, and language modelling benchmarks, all while remaining computationally efficient and compatible with GPUs.

Tests demonstrated consistent improvements in perplexity and content awareness compared with conventional methods. The team also combined PaTH Attention with FoX to down-weight less relevant information, further improving reasoning and long-sequence understanding.

According to senior author Yoon Kim, these advances represent the next step in developing general-purpose building blocks for AI, combining expressivity, scalability, and efficiency for broader applications in structured domains such as biology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading compared with slower, human-led processes, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Groq partners with Nvidia to expand inference technology

Groq has signed a non-exclusive licensing agreement with Nvidia to share its inference technology, aiming to make high-performance, cost-efficient AI processing more widely accessible.

Groq founder Jonathan Ross, president Sunny Madra and other team members will join Nvidia to help develop and scale the licensed technology. Despite the collaboration, Groq will remain an independent company, with Simon Edwards taking over as Chief Executive Officer.

Operations of GroqCloud will continue without interruption, ensuring ongoing services for existing customers. The agreement highlights a growing trend of partnerships in the AI sector, combining innovation with broader access to advanced processing capabilities.

The partnership could speed up AI inference adoption, offering companies more scalable and cost-effective options for deploying AI workloads. Analysts suggest such collaborations are likely to drive competition and innovation in the rapidly evolving AI hardware and software market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Visa ban imposed by US on ex-EU commissioner over digital platform rules

The US State Department has imposed a visa ban on former EU Commissioner Thierry Breton and four other individuals, citing its opposition to European regulation of social media platforms. The ban reflects growing tensions between Washington and Brussels over digital governance and free expression.

US officials said the visa ban targets figures linked to organisations involved in content moderation and disinformation research. Those named include representatives from HateAid, the Center for Countering Digital Hate, and the Global Disinformation Index, alongside Breton.

Secretary of State Marco Rubio accused the individuals of pressuring US-based platforms to restrict certain viewpoints. A senior State Department official described Breton as a central figure behind the EU’s Digital Services Act, a law that sets obligations for large online platforms operating in Europe.

Breton rejected the US visa ban, calling it a witch hunt and denying allegations of censorship. European organisations affected by the decision criticised the move as unlawful and authoritarian, while the European Commission said it had sought clarification from US authorities.

France and the European Commission condemned the visa ban and warned of a possible response. EU officials said European digital rules are applied uniformly and are intended to support a safe, competitive online environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots exploited to create nonconsensual bikini deepfakes

Users of popular AI chatbots are generating bikini deepfakes by manipulating photos of fully clothed women, often without consent. Online discussions show how generative AI tools can be misused to create sexually suggestive deepfakes from ordinary images, raising concerns about image-based abuse.

A now-deleted Reddit thread shared prompts for using Google’s Gemini to alter clothing in photographs. One post asked for a woman’s traditional dress to be changed to a bikini. Reddit removed the content and later banned the subreddit over deepfake-related harassment.

Researchers and digital rights advocates warn that nonconsensual deepfakes remain a persistent form of online harassment. Millions of users have visited AI-powered websites designed to undress people in photos. The trend reflects growing harm enabled by increasingly realistic image generation tools.

Most mainstream AI chatbots prohibit the creation of explicit images and apply safeguards to prevent abuse. However, recent advances in image-editing models have made it easier for users to bypass guardrails using simple prompts, according to limited testing and expert assessments.

Technology companies say their policies ban altering a person’s likeness without consent, with penalties including account suspensions. Legal experts argue that deepfakes involving sexualised imagery represent a core risk of generative AI and that accountability must extend to both users and platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU crypto tax reporting rules take effect in January

The European Union’s new tax-reporting directive for crypto assets, known as DAC8, takes effect on 1 January. The rules require crypto-asset service providers, including exchanges and brokers, to report detailed user and transaction data to national tax authorities.

DAC8 aims to close gaps in crypto tax reporting, giving authorities visibility over holdings and transfers similar to what they already have for bank accounts and securities. Data collected under the directive will be shared across EU member states, enabling a more coordinated approach to enforcement.

Crypto firms have until 1 July to ensure full compliance, including implementing reporting systems, customer due diligence procedures, and internal controls. After that deadline, non-compliance may result in penalties under national law.

For users, the practical effect is stronger enforcement. Authorities can act on tax avoidance or evasion with support from counterparts in other EU countries, including by seizing or embargoing crypto assets held abroad.

The directive operates alongside the EU’s Markets in Crypto-Assets (MiCA) regulation, which focuses on licensing, customer protection, and market conduct, while DAC8 ensures the tax trail is monitored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated Jesuses spark concern over faith and bias

AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.

Several platforms, including Character.AI, Talkie.AI and Text With Jesus, now host simulations claiming to answer questions in the voice of Jesus Christ.

Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.

Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.

Researchers say AI chatbots are currently used as a supplement rather than a replacement for religious teaching.

However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.

Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT becomes more customisable for tone and style

OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.

ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.

Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.

Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.

The new tone controls continue broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT may move beyond GPTs as OpenAI develops new Skills feature

OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.

Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.

The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.
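
Since the feature is unreleased and no interface has been published, the short sketch below is purely hypothetical: the Skill and SkillRegistry names, their fields and the keyword-matching heuristic are invented to illustrate how loading full instructions only for the skills a task needs keeps the context window small, and none of it reflects OpenAI’s actual design.

```python
# Purely illustrative sketch of the on-demand loading idea described above.
# Every name here (Skill, SkillRegistry, the matching heuristic) is a
# hypothetical stand-in, not a real OpenAI API.

from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    summary: str          # short description always visible to the model
    instructions: str     # full instructions, loaded only when the skill is used

class SkillRegistry:
    def __init__(self):
        self._skills: dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def index(self) -> str:
        """Compact listing that fits in the prompt without the full instructions."""
        return "\n".join(f"- {s.name}: {s.summary}" for s in self._skills.values())

    def load_for(self, task: str) -> list[Skill]:
        """Naive keyword match standing in for whatever routing the real system uses."""
        words = set(task.lower().split())
        return [s for s in self._skills.values()
                if words & set(s.summary.lower().split())]

registry = SkillRegistry()
registry.register(Skill("invoice-parser", "extract totals from invoice text",
                        "Step-by-step extraction rules..."))
registry.register(Skill("sql-helper", "draft and explain SQL queries",
                        "Dialect notes, safety checks..."))

task = "extract the totals from this invoice"
active = registry.load_for(task)
prompt = registry.index() + "\n\n" + "\n\n".join(s.instructions for s in active)
print(prompt)   # only the matched skill's full instructions enter the context
```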

Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.

Industry observers note similarities to Anthropic’s Claude, which already benefits from a skill-like structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.

Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!