The European Union’s new tax-reporting directive for crypto assets, known as DAC8, takes effect on 1 January. The rules require crypto-asset service providers, including exchanges and brokers, to report detailed user and transaction data to national tax authorities.
DAC8 aims to close gaps in crypto tax reporting, giving authorities visibility over holdings and transfers similar to what they already have for bank accounts and securities. Data collected under the directive will be shared across EU member states, enabling a more coordinated approach to enforcement.
Crypto firms have until 1 July to ensure full compliance, including implementing reporting systems, customer due diligence procedures, and internal controls. After that deadline, non-compliance may result in penalties under national law.
For users, DAC8 strengthens enforcement powers. Authorities can act on tax avoidance or evasion with support from counterparts in other EU countries, including seizing or embargoing crypto assets held abroad.
The directive operates alongside the EU’s Markets in Crypto-Assets (MiCA) regulation, which focuses on licensing, consumer protection, and market conduct, while DAC8 ensures the tax trail is monitored.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.
Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.
Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.
Researchers say AI chatbots are currently used as a supplement to, rather than a replacement for, religious teaching.
However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.
Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.
OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.
ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.
Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.
Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.
The new tone controls continue broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.
OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.
Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.
The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.
Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.
Industry observers note similarities to Anthropic’s Claude, which already benefits from a skill-like structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.
Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.
A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.
The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.
Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.
Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.
Experts also highlight the lack of strong safeguards in South Korea against malicious litigation compared with the US, where plaintiffs must prove fault by journalists.
The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes rely on relaying statements without sufficient verification, suggesting that structural reform may be needed instead of rapid, punitive legislation.
US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information; however, the company initially did not specify the number of individuals affected.
In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.
A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.
Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.
The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.
Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.
Officials said criminals have been using stolen personal details to open phone accounts that are later used for scams such as voice phishing.
Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.
Such a requirement expands the country’s identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.
The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.
SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.
Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.
Amid growing attention on AI, Google DeepMind chief Demis Hassabis has argued that future systems could learn anything humans can.
He suggested that as technology advances, AI may no longer remain confined to single tasks. Instead of specialising narrowly, it could solve different kinds of problems and continue improving over time.
Supporters say rapid progress already shows how powerful the technology has become.
Other experts disagree and warn that human intelligence remains deeply complex. People rely on emotions, personal experience and social understanding when they think, while machines depend on data and rules.
Critics argue that comparing AI with the human mind oversimplifies how intelligence really works, and that even people vary widely in ability.
Elon Musk has supported the idea that AI could eventually learn as much as humans, while repeating his long-standing view that powerful systems must be handled carefully. His backing has intensified the debate, given his influence in the technology world.
The discussion matters because highly capable AI could reshape work, education and creativity, raising questions over safety and control.
For now, AI performs specific tasks extremely well yet cannot think or feel like humans, and no one can say for certain whether true human-level intelligence will ever emerge.
Researchers at Lomonosov Moscow State University have developed a 72-qubit quantum computer prototype based on single neutral rubidium atoms. It marks the third Russian quantum computer to surpass the 70-qubit milestone.
The achievement was announced by Rosatom Quantum Technologies and highlights progress in reliable quantum operations.
The atom-based prototype features three zones: one for computing and two for storage and readout. Experiments have demonstrated two-qubit logical operations with 94% accuracy, enabling practical testing and development of quantum algorithms.
Scientists stress that lower error rates are vital for scaling quantum computers to solve complex industrial and financial problems. The work also supports Russia’s technological sovereignty and strengthens the competitiveness of domestic enterprises.
The project actively involves young researchers, graduate students, and undergraduates alongside leading specialists, ensuring the next generation gains hands-on experience in one of Russia’s most significant scientific initiatives.
Prime Minister Kim Min-seok has called for punitive fines of up to 10 percent of company sales for repeated and serious data breaches, as public anger grows over large-scale leaks.
The government is seeking swift legislation to impose stronger sanctions on firms that fail to safeguard personal data, reflecting President Lee Jae Myung’s stance that violations require firm penalties instead of lenient warnings.
Kim said corporate responses to recent breaches had fallen far short of public expectations and stressed that companies must take full responsibility for protecting customer information.
Under the proposed framework, affected individuals would receive clearer notifications that include guidance on their rights to seek damages.
The government of South Korea also plans to strengthen investigative powers through coercive fines for noncompliance, while pursuing rapid reforms aimed at preventing further harm.
The tougher line follows a series of major incidents, including a leak at Shinhan Card involving around 190,000 merchant records and a large-scale breach at Coupang that exposed the data of 33.7 million users.
Officials have described the Coupang breach as a serious social crisis that has eroded public trust.
Authorities have launched an interagency task force to identify responsibility and ensure tighter data protection across South Korea’s digital economy instead of relying on voluntary company action.