Atlassian bets on AI browsers with $610m deal

The enterprise software company Atlassian is entering the AI browser market with a $610 million deal to acquire The Browser Company of New York, creator of Arc and Dia. The move signals an attempt to turn browsers into intelligent assistants rather than passive tools.

Traditional browsers are blank slates, forcing users to juggle tabs and applications without context. Arc and Dia promise a different approach by connecting tasks, offering in-line AI support, and adapting to user behaviour. Atlassian believes these features could transform productivity for knowledge workers.

Analysts note, however, that AI browsers are still experimental. While they offer potential to integrate workflows and reduce distractions, rivals like Chrome, Edge and Safari already dominate with established ecosystems and security features. Convincing users to change habits may prove difficult.

Industry observers suggest Atlassian’s move is more a long-term bet on natural language and agentic browsing than an immediate market shift. For now, AI browsers remain promising but unproven alternatives to conventional tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tourism boards across Europe embrace AI but face gaps in strategy and skills

A new study by the European Travel Commission shows that national tourism organisations (NTOs) are experimenting with AI but are facing gaps in strategy and skills.

Marketing teams are leading the way, applying AI in content generation and workflow streamlining, whereas research departments primarily view the tools as exploratory. Despite uneven readiness, most staff show enthusiasm, with little resistance reported.

The survey highlights challenges, including limited budgets, sparse training, and the absence of a clear roadmap. Early adopters report tangible productivity gains, but most NTOs are still running small pilots rather than embedding AI across operations.

Recommendations include ring-fencing time for structured experiments, offering role-specific upskilling, and scaling budgets in line with results. The report also urges the creation of shared learning spaces and practical support to help organisations move from testing to sustained adoption.

ETC President Miguel Sanz said AI offers clear opportunities for tourism boards, but uneven capacity means shared tools and targeted investment will be essential to ensure innovation benefits all members.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM Cloud replaces free support with AI tools

IBM Cloud will end free human support under its Basic Support tier in January 2026, opting for an AI-driven self-service model instead.

Users will lose the option to open or escalate technical cases through the portal or APIs. However, they can still report service issues via the Cloud Console and raise billing or account cases through the Support Portal.

IBM will direct customers to its Watsonx-powered AI Assistant, upgraded earlier in the year, while introducing a ‘Report an Issue’ tool to improve routing. The company plans to expand its support library to provide more detailed self-help resources.

Starting at $200 per month, paid support will remain available for organisations needing faster response times and direct technical assistance.

The company describes the change as an alignment with industry norms. AWS, Google Cloud and Microsoft Azure already provide free tiers that rely on community forums, online resources and billing support.

However, IBM Cloud holds only 2–4 percent of the market, according to Synergy Research Group, which some analysts suggest makes cost reductions in support more likely. Tencent, another provider, previously withdrew support for basic users because serving them was not profitable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CNIL fines Google and SHEIN in ongoing cookie compliance crackdown

France’s data protection authority, CNIL, has fined Google 350 million euros and SHEIN 150 million euros as part of a broader enforcement effort targeting non-compliant use of advertising cookies under Article 82 of the French Data Protection Act.

The action stems from CNIL’s 2019 guidelines, aimed at ensuring that internet users are adequately informed and give valid consent for the placement of cookies.

The CNIL’s restricted committee, which is responsible for imposing penalties, cited ongoing concerns such as unauthorised cookie placement and the growing use of ‘cookie walls’, where users must accept cookies to access services.

Although such practices are not illegal by default, they require valid consent, with all choices presented clearly and without bias.

In Google’s case, CNIL also cited a breach of Article L.34-5 of the French Postal and Electronic Communications Code for displaying promotional emails in Gmail’s ‘Promotions’ and ‘Social’ tabs without prior user consent. High-traffic platforms remain a key focus of the authority’s compliance strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latvia launches open AI framework for Europe

Language technology company Tilde has released an open AI framework designed for all European languages.

The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.

According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.

Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.
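
For readers who want to experiment, the sketch below shows how an openly released, Hugging Face-compatible language model is typically loaded and prompted in Python. The repository name used here is a placeholder assumption rather than a confirmed identifier; Tilde's freely accessible user guidelines are the authoritative source for the actual model name, hardware requirements and licence terms.

    # Minimal sketch, assuming TildeOpen is published as a Hugging Face-compatible
    # causal language model. "TildeAI/TildeOpen" is a hypothetical repository name
    # used only for illustration; consult Tilde's user guidelines for the real one.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "TildeAI/TildeOpen"  # hypothetical identifier

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = "Latvija ir"  # Latvian for "Latvia is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))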

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore mandates Meta to tackle scams or risk $1 million penalty

In a landmark move, Singapore police have issued their first implementation directive under the Online Criminal Harms Act (OCHA) to tech giant Meta, requiring the company to tackle scam activity on Facebook or face fines of up to $1 million.

Announced on 3 September by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025, the directive targets scam advertisements, fake profiles, and impersonation of government officials, particularly Prime Minister Lawrence Wong and former Defence Minister Ng Eng Hen. The measure is part of Singapore’s intensified crackdown on government official impersonation scams (GOIS), which have surged in 2025.

According to mid-year police data, GOIS cases nearly tripled to 1,762 in the first half of 2025, up from 589 in the same period last year. Financial losses reached $126.5 million, a 90% increase from 2024.

PM Wong previously warned the public about deepfake ads using his image to promote fraudulent cryptocurrency schemes and immigration services.

Meta responded that impersonation and deceptive ads violate its policies and are removed when detected. The company said it uses facial recognition to protect public figures and continues to invest in detection systems, trained reviewers, and user reporting tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SCO Tianjin Summit underscores economic cooperation and security dialogue

The Shanghai Cooperation Organisation (SCO) summit in Tianjin closed with leaders adopting the Tianjin Declaration, highlighting member states’ commitment to multilateralism, sovereignty, and shared security.

The discussions emphasised economic resilience, financial cooperation, and collective responses to security challenges.

Proposals included exploring joint financial mechanisms, such as common bonds and payment systems, to shield member economies from external disruptions.

Leaders also underlined the importance of strengthening cooperation in trade and investment, with China pledging additional funding and infrastructure support across the bloc. Observers noted that these measures reflect growing interest in alternative approaches to global finance and economic governance.

Security issues featured prominently, with agreements to enhance counter-terrorism initiatives and expand existing bodies such as the Regional Anti-Terrorist Structure. Delegates also called for greater collaboration against cross-border crime, drug trafficking, and emerging security risks.

At the same time, they stressed the need for political solutions to ongoing regional conflicts, including those in Ukraine, Gaza, and Afghanistan.

With its expanding membership and combined economic weight, the SCO continues to position itself as a platform for cooperation beyond traditional regional security concerns.

While challenges remain, including diverging interests among key members, the Tianjin summit indicated the bloc’s growing role in discussions on multipolar governance and collective stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn of sexual and drug risks to kids from AI chatbots

A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers posing as 12–15-year-olds logged 669 harmful interactions, ranging from sexual grooming to drug offers and instructions to keep the exchanges secret.

Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.

Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.

Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.

Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.

OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.

Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers develop an AI system to modify the brain’s mental imagery with words

A new AI system named DreamConnect can now translate a person’s brain activity into images and then edit those mental pictures using natural language commands.

Instead of merely reconstructing thoughts from fMRI scans, the breakthrough technology allows users to actively reshape their imagined scenes. For instance, an individual visualising a horse can instruct the system to transform it into a unicorn, with the AI accurately modifying the relevant features.

The system employs a dual-stream framework that interprets brain signals into rough visuals and then refines them based on text instructions.
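
To make the dual-stream idea concrete, the toy sketch below separates the two stages: a decoder that maps preprocessed fMRI features into a coarse image latent, and a refinement step that nudges that latent toward a text instruction. Every component here is a random placeholder chosen for illustration only; the article does not describe DreamConnect's actual models, so this is a conceptual outline rather than its implementation.

    import numpy as np

    # Toy illustration of a two-stage (dual-stream) pipeline. A real system would use
    # trained neural decoders and a text-conditioned generative model; the linear
    # placeholders below only show how the stages hand data to each other.
    rng = np.random.default_rng(0)

    def decode_brain_signal(fmri_features, decoder_weights):
        """Stage 1: project fMRI features into a coarse image-latent vector."""
        return decoder_weights @ fmri_features

    def refine_with_instruction(coarse_latent, instruction_embedding, strength=0.5):
        """Stage 2: blend the coarse latent with a text-derived edit direction."""
        return (1 - strength) * coarse_latent + strength * instruction_embedding

    fmri_features = rng.normal(size=512)          # stand-in for preprocessed scan data
    decoder_weights = rng.normal(size=(64, 512))  # stand-in for a trained decoder
    edit_embedding = rng.normal(size=64)          # stand-in for an embedded prompt, e.g. 'make it a unicorn'

    coarse = decode_brain_signal(fmri_features, decoder_weights)
    edited = refine_with_instruction(coarse, edit_embedding)
    print(edited.shape)  # (64,) latent that a generative model would render as an image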

Developed by an international team of researchers, DreamConnect represents a fundamental shift from passive brain decoding to interactive visual brainstorming.

It marks a significant advance at the frontier of human-AI interaction, moving beyond simple reconstruction to active collaboration.

Potential applications are wide-ranging, from accelerating creative design to offering new tools for therapeutic communication.

However, the researchers caution that such powerful technology necessitates robust ethical safeguards to prevent misuse and protect the privacy of an individual’s most personal data: their thoughts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns arise as Google reportedly expands gaming data sharing

Google may roll out a Play Games update on 23 September, adding public profiles, stat tracking, and community features. Reports suggest users may customise profiles, follow others, and import gaming history, while Google could collect gameplay and developer data.

The update is said to track installed games, session lengths, and in-game achievements, with some participating developers potentially accessing additional data. Players can reportedly manage visibility settings, delete profiles, or keep accounts private, with default settings applied unless changed.

The EU and UK are expected to receive the update on 1 October.

Privacy concerns have been highlighted in Europe. Austrian group NOYB filed a complaint against Ubisoft over alleged excessive data collection in games like Far Cry Primal, suggesting that session tracking and frequent online connections may conflict with GDPR.

Ubisoft could face fines of up to four percent of global turnover, based on last year’s revenues.

Observers suggest the update reflects a social and data-driven gaming trend, though European players may seek more explicit consent and transparency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!