Indeed expands AI tools to reshape hiring

Indeed is expanding its use of AI to improve hiring efficiency, enhance candidate matching, and support recruiters, while keeping humans in control of final decisions.

The platform offers over 100 AI-powered features across job search, recruitment, and internal operations, supported by a long-term partnership with OpenAI.

Recent launches include Career Scout for job seekers and Talent Scout for employers, streamlining career guidance, sourcing, screening, and engagement.

Additional AI-powered tools introduced through Indeed Connect aim to improve candidate discovery and screening, helping companies move faster while broadening access to opportunities through skills-based matching.

AI adoption has accelerated internally, with over 80% of engineers using AI tools and two-thirds of staff saving up to 2 hours per week. Marketing, sales, and research teams are building custom AI agents to support creativity, personalised outreach, and strategic decision-making.

Responsible AI principles remain central to Indeed’s strategy, prioritising fairness, transparency, and human control in hiring. Early results show faster hiring, stronger candidate engagement, and improved outcomes in hard-to-fill roles, reinforcing confidence in AI-driven recruitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France’s National Assembly backs under-15 social media ban

France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late on Monday by 116 votes to 23; it now goes to the Senate before returning to the lower house for a final vote.

As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with EU rules. The same package also extends France’s existing school smartphone restrictions to high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems).

President Emmanuel Macron has urged lawmakers to move quickly, arguing that platforms are not neutral spaces for adolescents and linking social media to broader concerns about youth violence and well-being. Support for stricter limits is broad across parties, and polling has pointed in the same direction, but the bill still faces the practical question of how reliably platforms can keep underage users out.

Australia set the pace in December 2025, when its world-first ban on under-16s holding accounts on major platforms came into force, an approach now closely watched abroad. Early experience there has highlighted the same tension France faces between political clarity (‘no accounts under the age line’) and the messy reality of age assurance and workarounds.

France’s debate is also unfolding in a broader European push to tighten child online safety rules. The European Parliament has called for an EU-wide ‘digital minimum age’ of 16 (with parental consent options for 13–16), while the European Commission has issued guidance for platforms and developed a prototype age-verification tool designed to preserve privacy, signalling that Brussels is trying to square protection with data-minimisation.

Why does it matter?

Beyond the child-safety rationale, the move reflects a broader push to curb platform power, with youth protection framed as a test case for stronger state oversight of Big Tech. At the same time, critics warn that strict age-verification regimes can expand online identification and surveillance, raising privacy and rights concerns, and may push teens toward smaller or less regulated spaces rather than offline life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Artists and writers say no to generative AI

Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association (SFWA) now bars works created wholly or partly with large language models, after criticism of its earlier, more permissive rules.

San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.

Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.

Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.

Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett puts privacy first: there is no profiling or sale of user data, and private chats are protected rather than mined for advertising. The platform also keeps ‘AI slop’ and generative AI content out of people’s feeds, emphasising human-centred interaction.

Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pauses teen access to AI characters

Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp, saying it will use the pause to review and rebuild the feature for younger users.

The restriction will apply to users identified as minors, whether through declared ages or Meta’s internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, but interactive AI characters will be unavailable.

The move comes ahead of a major child-safety trial in Los Angeles involving Meta, TikTok and YouTube, which centres on allegations that social media platforms harm children through addictive and unsafe digital features.

Concerns about AI chatbots and minors have grown across the US, prompting similar action by other companies, as regulators and courts increasingly scrutinise how AI interactions affect young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s social media ban raises concerns among platforms

Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, major platforms resisted the policy out of fear that similar rules could spread internationally.

The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.

Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.

Australia is now working with partners such as the UK to push for tougher global standards on online child safety. Fines of up to A$49.5m may be imposed on companies that fail to enforce the rules effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns of rising AI-driven threats to child safety

UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.

A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.

Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.

During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.

Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple accuses the EU of blocking App Store compliance changes

Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.

The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.

In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.

The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.

The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.

Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.

Scrutiny has increased following Setapp Mobile’s decision to shut down its iOS app store in February, with the developer citing complex and evolving business terms.

Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT model draws scrutiny over Grokipedia citations

OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.

Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.

Launched in October 2025, the AI-generated platform positions itself as a rival to Wikipedia but relies solely on automated content, without human editing. Critics warn that this lack of human oversight raises the risk of factual errors and ideological bias, and Grokipedia has already faced criticism for promoting controversial narratives.

OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI bot swarms emerge as a new threat to democracy

Researchers and free-speech advocates are warning that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems.

According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.

Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.

By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.

Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.

Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.

The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.

They caution that, without collective action, AI-enabled manipulation risks outpacing existing regulatory and institutional defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!