Cyber Force proposal gains momentum in Washington

A new commission will begin work next month to explore creating a standalone Cyber Force as a military service. The Center for Strategic and International Studies leads the effort in collaboration with the Cyberspace Solarium Commission 2.0.

The study responds to ongoing weaknesses in how the US military organises, trains and equips personnel for cyber operations. These shortcomings have prompted calls for a dedicated force with a focused mission.

The Cyber Force would aim to improve readiness and capability in the digital domain, mirroring the structure of other service branches. Cyber operations are seen as increasingly central to national security.

Details of the commission’s work will emerge in the coming months as discussions shape what such a force might look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

These apps are often not designed for young users and lack sufficient age verification and moderation features, leaving children exposed. Australia’s eSafety Commissioner noted that many children are spending hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by “forcing” users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed that it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. This would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing the risks of integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms may release AI products within widely used communication tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guess AI model sparks fashion world debate

A striking new ‘supermodel’ has appeared in the August print edition of Vogue, featuring in a Guess advert for their summer collection. Uniquely, the flawless blonde model is not real, as a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez explained to the BBC that attempts to post AI images of women with different skin tones did not gain traction, stating, ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The limitation echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess having a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of the Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and need to cut costs,’ advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Despite their website claiming cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ they involve real models and photographers in their AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting the magazine’s influential position, which means they are ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules for AI to prevent inequality

According to Doreen Bogdan-Martin, head of the UN’s International Telecommunication Union (ITU), the world must urgently adopt a unified approach to AI regulation.

She warned that fragmented national strategies could deepen global inequalities and risk leaving billions excluded from the AI revolution.

Bogdan-Martin stressed that only a global framework can ensure AI benefits all of humanity instead of worsening digital divides.

With 85% of countries lacking national AI strategies and 2.6 billion people still offline, she argued that a coordinated effort is essential to bridge access gaps and prevent AI from becoming a tool that advances inequality rather than opportunity.

The ITU chief highlighted the growing divide between regulatory models, from the EU’s strict governance and China’s centralised control to the US’s new deregulatory push under Donald Trump.

She avoided direct criticism of the US strategy but called for dialogue between all regions instead of fragmented policymaking.

Despite the rapid advances of AI in sectors like healthcare, agriculture and education, Bogdan-Martin warned that progress must be inclusive. She also urged more substantial efforts to bring women into AI and tech leadership, pointing to the continued gender imbalance in the sector.

As the first woman to lead ITU, she said her role was not just about achievement but setting a precedent for future generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of the AI systems it procures.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum computing faces roadblocks to real-world use

Quantum computing holds vast promise for sectors from climate modelling to drug discovery and AI, but it remains far from mainstream due to significant barriers. The fragility of qubits, the shortage of scalable quantum software, and the immense number of qubits required continue to limit progress.

Keeping qubits stable is one of the most significant technical obstacles: most hold their quantum state for only microseconds before noise disrupts it. Current solutions rely on extreme cooling and specialised equipment, which remain expensive and impractical for widespread use.
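
To make those timescales concrete, coherence loss is commonly modelled as an exponential decay governed by a dephasing time T2. The sketch below is purely illustrative, assuming a round 100-microsecond T2 rather than a figure from any specific machine.

```python
import math

# Illustrative model of qubit dephasing: coherence C(t) = exp(-t / T2).
# The T2 value is an assumed round number, not a measured figure.
T2 = 100e-6  # assumed dephasing time: 100 microseconds

for t in (1e-6, 100e-6, 1e-3):  # 1 us, 100 us, 1 ms
    coherence = math.exp(-t / T2)
    print(f"after {t * 1e6:8.1f} us: coherence ~ {coherence:.2e}")

# A computation lasting 1 ms retains ~0.005% coherence under this model,
# which is why error correction and extreme cooling dominate hardware design.
```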

Even the most advanced systems today operate with a fraction of the qubits needed for practical applications, while software options remain scarce and highly tailored. Businesses exploring quantum solutions must often build their tools from scratch, adding to the cost and complexity.

Beyond technology, the field faces social and structural challenges. A lack of skilled professionals and fears around unequal access could see quantum benefits restricted to big tech firms and governments.

Security is another looming concern, as future quantum machines may be capable of breaking current encryption standards. Policymakers and businesses must develop defences before such systems become widely available.
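
The threat to encryption comes from Shor’s algorithm, which factors the large numbers behind RSA by finding the period of modular exponentiation. A minimal sketch follows, with the caveat that a brute-force loop stands in for the quantum period search (the only step a quantum computer accelerates), so it only works on toy numbers; the function name and base are illustrative.

```python
from math import gcd

# Toy classical stand-in for Shor's algorithm: factor N by finding the
# period r of f(x) = a^x mod N. Only the period search would run on a
# quantum computer; here it is a brute-force loop, so keep N tiny.
def factor_via_period(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a already shares a factor with N
    r = 1
    while pow(a, r, N) != 1:      # find the period r of a^x mod N
        r += 1
    if r % 2 or pow(a, r // 2, N) == N - 1:
        return None               # unlucky base; retry with another a
    p = gcd(pow(a, r // 2) - 1, N)
    return p, N // p

print(factor_via_period(15, 7))   # -> (3, 5)
```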

AI may accelerate progress in both directions. Quantum computing can supercharge model training and simulation, while AI is already helping to improve qubit stability and propose new hardware designs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen builds Hindi AI tool to help paralysis patients speak

An Indian teenager has created a low-cost AI device that translates slurred speech into clear Hindi, helping patients with paralysis and neurological conditions communicate more easily.

Pranet Khetan’s innovation, Paraspeak, uses a custom Hindi speech recognition model to address a long-ignored area of assistive tech.

The device was inspired by Khetan’s visit to a paralysis care centre, where he saw patients struggling to express themselves. Unlike existing English models, Paraspeak is trained on the first Hindi dysarthric speech dataset in India, created by Khetan himself through recordings and data augmentation.

Built on a transformer architecture, Paraspeak converts unclear speech into understandable output through cloud processing and a compact neck-worn device. It is designed to generalise across different speakers, unlike current solutions that only work for individual patients.
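
Paraspeak’s model and dataset are not public, but the kind of transformer-based speech recognition pipeline it describes can be sketched with off-the-shelf tools. The snippet below is an assumption-laden illustration using the Hugging Face transformers library with a generic multilingual Whisper checkpoint and a placeholder audio file; a system like Paraspeak would instead fine-tune such a model on dysarthric Hindi recordings.

```python
# Hedged illustration only: not Paraspeak's actual model or API.
from transformers import pipeline

# Generic multilingual transformer ASR checkpoint; a production system
# would be fine-tuned on dysarthric Hindi speech instead.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# "clip.wav" is a placeholder path to a recorded utterance.
result = asr("clip.wav", generate_kwargs={"language": "hindi"})
print(result["text"])  # Hindi transcription of the input speech
```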

The AI device is affordable, costing around ₹2,000 to build, and is already undergoing real-world testing. With no existing market-ready alternative for Hindi speakers, Paraspeak represents a significant step forward in inclusive health technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI strategy aims to attract global capital to Indonesia

Indonesia is moving to cement its position in the global AI and semiconductor landscape by releasing its first comprehensive national AI strategy in August 2025.

Deputy Minister Nezar Patria says the roadmap aims to clarify the country’s AI market potential, particularly in sectors like health and agriculture, and provide guidance on infrastructure, regulation, and investment pathways.

Already, global tech firms are demonstrating confidence in the country’s potential. Microsoft has pledged $1.7 billion to expand cloud and AI capabilities, while Nvidia partnered on a $200 million AI centre project. These investments align with Jakarta’s efforts to build skill pipelines and computational capacity.

In parallel, Indonesia is pushing into critical minerals extraction to strengthen its semiconductor and AI hardware supply chains, and has invited foreign partners, including from the United States, to invest. These initiatives aim to align resource security with its AI ambitions.

However, analysts caution that Indonesia must still address significant gaps: limited AI-ready infrastructure, a shortfall in skilled tech talent, and governance concerns such as data privacy and IP protection.

The new AI roadmap aims to bridge these deficits and streamline regulation without stifling innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!