Rethinking AI in journalism with global cooperation

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.

Central to the discussion was the unveiling of an ‘ethical AI checklist’, designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.

Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting Europe’s Digital Services Act and AI Act as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusions, and environmental impacts.

Taysir Mathlouthi of Hamleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.

Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.

In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to rewrite historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

LinkedIn has seen explosive growth in AI-related job demand and skills despite the hesitation around AI-assisted writing. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternatives, including developing its own AI tools and working with rivals like Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon CEO warns staff to embrace AI or face job losses

Amazon CEO Andy Jassy has warned staff that they must embrace AI or risk losing their jobs.

In a memo shared publicly, Jassy said generative AI and intelligent agents are already transforming workflows at Amazon, and this shift will inevitably reduce the number of corporate roles in the coming years.

According to Jassy, AI will allow Amazon to operate more efficiently by automating specific roles and reallocating talent to new areas. He acknowledged that it’s difficult to predict the exact outcome but clarified that the corporate workforce will shrink as AI adoption expands across the company.

Those hoping to remain at Amazon will need to upskill quickly. Jassy stressed the need for employees to stay curious and proficient with AI tools to boost their productivity and remain valuable in an increasingly automated environment.

Amazon is not alone in the trend.

BT Group is restructuring to eliminate tens of thousands of roles, while other corporate leaders, including those at LVMH and Manpower, have echoed concerns that AI’s most significant disruption may be within human resources.

Executives now see AI not merely as a tech shift but as a workforce transformation demanding retraining and a redefinition of roles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s Gen Z founders go viral with AI and robotics ‘Hacker House’ in Bengaluru

A viral video has captured the imagination of tech enthusiasts by offering a rare look inside a ‘Hacker House’ in Bengaluru’s HSR Layout, where a group of Gen Z Indian founders are quietly shaping the future of AI and robotics.

Spearheaded by Localhost, the initiative provides young developers aged 16 to 22 with funding, workspace, and a collaborative environment to rapidly build real-world tech products — no media hype, just raw innovation.

The video, shared by Canadian entrepreneur Caleb Friesen, shows teenage coders intensely focused on their projects. From AI-powered noise-cancelling systems and assistive robots to innovative real estate and podcasting tools, each room in the shared house hums with creativity.

The youngest, 16-year-old Harish, stands out for his deep focus, while Suhas Sumukh, who leads the Bengaluru chapter, acts as both a guide and mentor.

Rather than pitch decks and polished PR, what resonated online was the authenticity and dedication. Caleb’s walk-through showed residents too engrossed in their work to acknowledge his arrival.

Viewers responded with admiration, calling it a rare glimpse into ‘the real future of Indian tech’. The video has since crossed 1.4 million views, sparking global curiosity.

At the heart of the movement is Localhost, founded by Kei Hayashi, which helps young developers build fast and learn faster.

As demand grows for similar hacker houses in Mumbai, Delhi, and Hyderabad, the initiative may start a new chapter for India’s startup ecosystem — fuelled by focus, snacks, and a poster of Steve Jobs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Publishers lose traffic as readers trust AI more

Online publishers are facing an existential threat as AI increasingly becomes the primary source of information for users, warned Cloudflare CEO Matthew Prince during an Axios event in Cannes.

As AI-generated summaries come to dominate user queries, search engine referrals have plunged, forcing media outlets to urgently reconsider how they sustain revenue from their content.

Traffic patterns have dramatically shifted. A decade ago, Google sent a visitor to publishers for every two pages it crawled.

Today, that ratio has ballooned to 18:1. The picture is more extreme for AI firms: OpenAI’s ratio has jumped from 250:1 to 1,500:1 in just six months, while Anthropic’s has exploded from 6,000:1 to a staggering 60,000:1.
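These figures are simple crawl-to-referral ratios: pages a crawler fetches for every visitor it sends back to publishers. A minimal sketch of the arithmetic behind the numbers quoted above (the function name is illustrative, and the figures are hard-coded from the article, not fetched from any real dataset):

```python
def crawl_to_referral_ratio(pages_crawled: int, visitors_referred: int) -> float:
    """Pages a crawler fetches for each visitor it refers back to the source."""
    return pages_crawled / visitors_referred

# Google: from 2 pages per visitor a decade ago to 18 today.
google_then = crawl_to_referral_ratio(2, 1)
google_now = crawl_to_referral_ratio(18, 1)

# Growth multiples over six months, per the quoted figures:
openai_growth = crawl_to_referral_ratio(1_500, 1) / crawl_to_referral_ratio(250, 1)
anthropic_growth = crawl_to_referral_ratio(60_000, 1) / crawl_to_referral_ratio(6_000, 1)

print(google_then, google_now)        # 2.0 18.0
print(openai_growth, anthropic_growth)  # 6.0 10.0
```

The point of the comparison: even where a ratio looks stable in relative growth, the absolute gap between pages consumed and traffic returned is what erodes publishers’ ad revenue.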

Although AI systems typically include links to sources, Prince noted that ‘people aren’t following the footnotes,’ meaning fewer clicks and less ad revenue.

Prince argued that audiences are beginning to trust AI summaries more than the original articles, reducing publishers’ visibility and direct engagement.

As the web becomes increasingly AI-mediated, fewer people read full articles, raising urgent questions about how creators and publishers can be fairly compensated.

To tackle the issue, Cloudflare is preparing to launch a new anti-scraping tool to block unauthorised data harvesting. Prince hinted that the tool has broad industry support and will be rolled out soon.

He remains confident in Cloudflare’s capacity to fight against such threats, noting the company’s daily battles against sophisticated global cyber actors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to recreate a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-gen automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, whether the futuristic park becomes a reality or another of Son’s high-stakes dreams remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!