Is AI eroding human intelligence?

The article reflects on the growing integration of AI into daily life, from classrooms to work, and asks whether this shift is making people intellectually sharper or more dependent on machines.

Tools such as ChatGPT, Grok and Perplexity have moved from optional assistants to everyday aids that generate instant answers, summaries and explanations, reducing the time and effort traditionally required for research and deep thinking.

While quantifiable productivity gains are clear, the piece highlights trade-offs: readily available answers can diminish the cognitive struggle that builds critical thinking, problem-solving and independent reasoning.

In education, ready-made AI responses may weaken students’ engagement with learning unless teachers guide their use responsibly. Some respondents argue that creativity and conceptual understanding erode when AI is used as a shortcut, while others see it as a democratising tutor that supports learners who otherwise lack resources.

The article also incorporates perspectives from AI systems themselves, which generally frame AI as neither inherently making people smarter nor dumber, but dependent on how it’s used.

It concludes that the impact of AI on human cognition is not predetermined by the technology, but shaped by user choice: whether AI is a partner that augments thinking or a crutch that replaces it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising takes the stage as ChatGPT tests in-chat promotions

Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.

Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.

In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.

Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.

The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI drives robots from labs into industry

The International Federation of Robotics says AI is accelerating the move of robots from research labs into real-world use. A new position paper highlights rapid adoption across multiple industries as AI becomes a core enabler.

Logistics, manufacturing and services are leading AI-driven robotics deployment. Warehousing and supply chains benefit from controlled environments, while factories use AI to improve efficiency, quality and precision in sectors including automotive and electronics.

The IFR said service robots are expanding as labour shortages persist, with restaurants and hospitality testing AI-enabled machines. Hybrid models are emerging where robots handle repetitive work while humans focus on customer interaction.

Investment is rising globally, with major commitments in the US, Europe and China. The IFR expects AI to improve returns on robotics investment over the next decade through lower costs and higher productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe’s 2025 app market shows a downloads-revenue gap

Europe’s mobile app market in 2025 revealed a distinct divergence between popularity and revenue. AI-driven productivity apps, such as ChatGPT and Google Gemini, dominated downloads, alongside shopping platforms including Temu, SHEIN, and Vinted.

While installs highlight user preferences, active use and monetisation patterns tell a very different story.

Downloads for the top apps show ChatGPT leading with over 64 million, followed by Temu with nearly 44 million. Other widely downloaded apps included Threads, TikTok, CapCut, WhatsApp, Revolut and Lidl Plus.

The prevalence of AI and shopping apps underscores the shift of tools from professional use to everyday tasks, as Europeans increasingly rely on digital services for work, study and leisure.

Revenue patterns diverge sharply from download rankings. TikTok generated €740 million, followed by ChatGPT at €448 million and Tinder at €429 million. Subscription-based and premium-feature apps, including Disney+, Amazon Prime, Google One and YouTube, also rank highly.

In-app spending, rather than download numbers, drives earnings, revealing the importance of monetisation strategies beyond pure popularity.

Regional trends emphasise local priorities. The UK favours domestic finance and public service apps such as Monzo, Tesco, GOV.UK ID Check and HMRC, while Turkey shows strong use of national government, telecom and e-commerce apps, including e-Devlet Kapısı, Turkcell and Trendyol.

These variations highlight how app consumption reflects cultural preferences and the role of domestic services in digital life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model promises faster monoclonal antibody production

Researchers at the University of Oklahoma have developed a machine-learning model that could significantly speed up the manufacturing of monoclonal antibodies, a fast-growing class of therapies used to treat cancer, autoimmune disorders, and other diseases.

The study, published in Communications Engineering, targets delays in selecting high-performing cell lines during antibody production. Output varies widely between Chinese hamster ovary cell clones, forcing manufacturers to spend weeks screening for high yields.

By analysing early growth data, the researchers trained a model to predict antibody productivity far earlier in the process. Using only the first 9 days of data, it forecast production trends through day 16 and identified higher-performing clones in more than 76% of tests.
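The study’s exact model and data are not reproduced in this summary, so purely as an illustration of the general approach, the minimal Python sketch below trains a regressor on synthetic day 1–9 growth readings to predict a day-16 titre and then ranks clones by the prediction. The synthetic data, the feature layout and the choice of GradientBoostingRegressor are assumptions made for illustration, not the researchers’ actual method.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: daily growth readings for 200 hypothetical clones
# over the first 9 days of culture (the real study used proprietary
# production data from Wheeler Bio).
rng = np.random.default_rng(0)
growth_rates = rng.uniform(0.2, 0.6, size=200)
days = np.arange(1, 10)
early_growth = np.exp(np.outer(growth_rates, days)) + rng.normal(0, 1, (200, 9))

# Hypothetical target: final antibody titre at day 16, loosely tied to growth.
day16_titre = 5.0 * growth_rates + rng.normal(0, 0.2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    early_growth, day16_titre, test_size=0.25, random_state=0
)

# Train a regressor on day 1-9 features to predict the day-16 outcome.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Rank held-out clones by predicted titre and keep the top performers,
# mimicking early clone selection.
predicted = model.predict(X_test)
top_clones = np.argsort(predicted)[::-1][:5]
print("Predicted best clones (test-set indices):", top_clones)
print("Their true day-16 titres:", y_test[top_clones].round(2))

In practice, the value of such a model lies in how often the early-day ranking agrees with the final day-16 ranking, which is what the reported 76% figure speaks to.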

The model was developed with Oklahoma-based contract manufacturer Wheeler Bio, combining production data with established growth equations. Although further validation is needed, early results suggest shorter timelines and lower manufacturing costs.

The work forms part of a wider US-funded programme to strengthen biotechnology manufacturing capacity, highlighting how AI is being applied to practical industrial bottlenecks rather than solely to laboratory experimentation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Research warns of AI-driven burnout risks

Generative AI is not reducing workloads as widely expected but intensifying them, according to new workplace research. Findings suggest productivity gains are being offset by expanding responsibilities and longer working hours.

An eight-month study at a US tech firm found employees worked faster, took on broader tasks, and extended working hours. AI tools enabled staff to take on duties beyond their roles, including coding, research, and technical problem-solving.

Researchers identified three pressure points driving intensification: task expansion, blurred work-life boundaries, and increased multitasking. Workers used AI during breaks and off-hours while juggling parallel tasks, increasing cognitive load.

Experts warn that the early productivity surge may mask burnout, fatigue, and declining work quality. Organisations are now being urged to establish structured ‘AI practices’ to regulate usage, protect focus, and maintain sustainable productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EMFA guidance sets expectations for Big Tech media protections

The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.

Article 18 has been in effect for 6 months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.

The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.

The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.

The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dutch MPs renew push to move data off US clouds

Dutch MPs have renewed calls for companies and public services in the Netherlands to reduce reliance on US-based cloud servers. The move reflects growing concern over data security and foreign access to Dutch data.

Research by NOS found that two-thirds of essential service providers in the Netherlands rely on at least one US cloud server, leaving local councils, health insurers and hospitals heavily exposed.

Concerns intensified following a proposed sale of Solvinity, which manages the DigiD system used across the Netherlands. A sale to a US firm could place Dutch data under the US Cloud Act.

Parties including D66, VVD and CDA say critical infrastructure data should be prioritised for protection. Dutch cloud providers say Europe could handle most services if procurement rules changed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT begins limited ads test in the US

OpenAI has begun testing advertisements inside ChatGPT for some adult users in the US, marking a major shift for the widely used AI service.

The ads appear only on Free and Go tiers in the US, while paid plans remain ad-free. OpenAI says responses are unaffected, though critics warn commercial messaging could blur boundaries over time.

Ads are selected based on conversation topics and prior interactions, prompting concern among privacy advocates in the US. OpenAI says advertisers receive only aggregated data and cannot view conversations.

Industry analysts say the move reflects growing pressure to monetise costly AI infrastructure. Regulators and researchers continue to debate whether advertising can coexist with trust in AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!