Indeed expands AI tools to reshape hiring

Indeed is expanding its use of AI to improve hiring efficiency, enhance candidate matching, and support recruiters, while keeping humans in control of final decisions.

The platform offers over 100 AI-powered features across job search, recruitment, and internal operations, supported by a long-term partnership with OpenAI.

Recent launches include Career Scout for job seekers and Talent Scout for employers, streamlining career guidance, sourcing, screening, and engagement.

Additional AI-powered tools introduced through Indeed Connect aim to improve candidate discovery and screening, helping companies move faster while broadening access to opportunities through skills-based matching.

AI adoption has accelerated internally, with over 80% of engineers using AI tools and two-thirds of staff saving up to 2 hours per week. Marketing, sales, and research teams are building custom AI agents to support creativity, personalised outreach, and strategic decision-making.

Responsible AI principles remain central to Indeed’s strategy, prioritising fairness, transparency, and human control in hiring. Early results show faster hiring, stronger candidate engagement, and improved outcomes in hard-to-fill roles, reinforcing confidence in AI-driven recruitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK banks block large share of crypto transfers, report finds

UK banks are blocking or delaying close to 40% of payments to cryptocurrency exchanges, sharply increasing customer friction and slowing market growth, according to a new industry report.

Around 80% of surveyed exchanges reported rising payment disruptions, while 70% described the banking environment as increasingly hostile, discouraging investment, hiring, and product launches in the UK.

The survey of major platforms, including Coinbase, Kraken, and Gemini, reveals widespread and opaque restrictions across bank transfers and card payments. One exchange reported nearly £1 billion in declined transactions last year, citing unclear rejection reasons despite FCA registration.

Several high-street and digital banks maintain outright blocks, while others impose strict transaction caps. The UK Cryptoasset Business Council warned that blanket debanking practices could breach existing regulations, including those on payment services, consumer protection, and competition.

The council urged the FCA and government to enforce a risk-based approach, expand data sharing, and remove unnecessary barriers as the UK finalises its long-term crypto framework.

Georgia moves to curb AI data centre expansion amid energy concerns

The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of data centres powering the US’ AI boom.

Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.

A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish clearer regulatory rules.

The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.

Local resistance has intensified as the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.

Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.

The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.

Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett places privacy first, with no profiling or sale of user data, and private chats protected without being mined for advertising. The platform also avoids “AI slop” or generative AI content shaping people’s feeds, emphasising human-centred interaction.

Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.

Australia’s under-16 social media ban raises concern among platforms

Australia’s social media ban for under-16s is worrying social media companies, which, according to the country’s eSafety Commissioner, fear it could set a global precedent for banning such apps. Regulators say major platforms complied with the policy only reluctantly, worried that similar rules could spread internationally.

The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.

Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.

Australia is now working with partners such as the UK to push for tougher global standards on online child safety. Companies that fail to enforce the rules effectively face fines of up to A$49.5 million.

China gains ground in global AI race

US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest, for instance, has confirmed it is using Chinese-developed models to improve recommendations and shopping features.

Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise, saying they can outperform proprietary alternatives at a fraction of the cost.

Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.

Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.

IMF chief sounds alarm at Davos 2026 over AI and disruption to entry-level labour

AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.

According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.

Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.

Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.

IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.

For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.

At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.

Without clear guardrails and inclusive labour strategies, she argued, technological acceleration could deepen inequality rather than support broad-based economic resilience.

Apple accuses the EU of blocking App Store compliance changes

Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.

The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.

In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.

The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.

The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.

Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.

Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.

Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.

ChatGPT model draws scrutiny over Grokipedia citations

OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.

Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.

Launched in October 2025, the platform positions itself as a rival to Wikipedia but relies solely on automated content without human editing. Critics warn that the lack of human oversight raises the risk of factual errors and ideological bias, and Grokipedia has already been accused of promoting controversial narratives.

OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.

Writers challenge troubling AI assumptions about language and style

A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.

The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.

Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.

At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.

As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.
