Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.
The foundation’s Marshall Miller explained that updates to Wikipedia’s bot-detection system showed that much of an earlier traffic surge had come from undetected bots, revealing a sharper drop in genuine visits.
Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.
Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.
The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.
Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.
Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Spotify partners with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.
The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.
Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.
Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.
Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.
OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of reforms or data-mining exemptions.
Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus a tighter, historical copyright that narrows scope, saying OpenAI will work under either regime. Asked if Australia risked losing datacentres without looser laws, he replied ‘No’.
Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.
Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.
To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.
A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.
The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.
Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.
The judge emphasised that AI-generated information must be carefully controlled by humans, and the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.
While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.
A new Oxford University Press (OUP) report has found that most teenagers are using AI for schoolwork but many cannot tell when information is false. Over 2,000 students aged 13 to 18 took part, with many finding it hard to verify AI content.
Around eight in ten pupils admitted using AI for homework or revision, often treating it as a digital tutor. However, many are simply copying material without being able to check its accuracy.
Despite concerns about misinformation, most pupils view AI positively. Nine in ten said they had benefited from using it, particularly in improving creative writing, problem-solving and critical thinking.
To support schools, OUP has launched an AI and Education Hub to help teachers develop confidence with the technology, while the Department for Education has released guidance on using AI safely in classrooms.
Jim Lee rejects generative AI for DC storytelling, pledging no AI writing, art, or audio under his leadership. He framed AI alongside other overhyped threats, arguing that predictions falter while human craft endures. DC, he said, will keep its focus on creator-led work.
Lee rooted the stance in the value of imperfection and intent. Smudges, rough lines, and hesitation signal authorship, not flaws. Fans, he argued, sense authenticity and recoil from outputs that feel synthetic or aggregated.
Concerns ranged from shrinking attention spans to characters nearing the public domain. The response, Lee said, is better storytelling and world-building. Owning a character differs from understanding one, and DC’s universe supplies the meaning that endures.
Policy meets practice in DC’s recent moves against suspected AI art. In 2024, variant covers were pulled after high-profile allegations of AI-generated content. The episode illustrated a willingness to enforce standards rather than just announce them.
Lee positioned 2035 and DC’s centenary as a waypoint, not a finish line. Creative evolution remains essential, but without yielding authorship to algorithms. The pledge: human-made stories, guided by editors and artists, for the next century of DC.
Asia’s creative future takes centre stage at Singapore’s All That Matters, a September forum for sports, tech, marketing, gaming, and music. AI dominated the music track, spanning creation, distribution, and copyright. Session notes signal rapid structural change across the industry.
The web is shifting again as AI reshapes search and discovery. AI-first browsers and assistants challenge incumbents, while Google’s Gemini and Microsoft’s Copilot race on integration. Early builds feel rough, yet momentum points to a new media discovery order.
Consumption defined the last 25 years, moving from CDs to MP3s, piracy, streaming, and even vinyl’s comeback. Creation looks set to define the next decade as generative tools become ubiquitous. Betting against that shift may be comfortable, yet market forces indicate it is inevitable.
Music generators like Suno are advancing fast amid lawsuits and talks with rights holders. Expected label licensing will widen training data and scale models. Outputs should grow more realistic and, crucially, more emotionally engaging.
Simpler interfaces will accelerate adoption. The prevailing design thesis is ‘less UI’: creators state intent and the system orchestrates cloud tools. Some services already turn a hummed idea into an arranged track, foreshadowing release-ready music from plain descriptions.
A federal jury in Marshall, Texas, USA, has ordered Samsung Electronics to pay $445.5 million to Collision Communications, a New Hampshire-based company, after finding that Samsung infringed multiple wireless communication patents.
The lawsuit, filed in 2023, alleged that Samsung’s Galaxy smartphones, laptops, and other wireless products incorporated patented technologies without authorisation. The patents cover innovations in how devices manage and transmit data over 4G, 5G, and Wi-Fi networks.
Collision Communications argued that the inventions were originally developed by defence contractor BAE Systems and later licensed to Collision for commercial use. While BAE Systems was not directly involved in the case, its research formed the basis of the patented technologies.
Samsung denied wrongdoing, asserting that the patents were either invalid or not used in the ways described. The company says it plans to appeal the decision.
Apple is facing a lawsuit from neuroscientists Susana Martinez-Conde and Stephen Macknik, who allege that Apple used pirated books from ‘shadow libraries’ to train its new AI system, Apple Intelligence.
Filed on 9 October in the US District Court for the Northern District of California, the suit claims Apple accessed thousands of copyrighted works without permission, including the plaintiffs’ own books.
The researchers argue Apple’s market value surged by over $200 billion following the AI’s launch, benefiting from the alleged copyright violations.
This case adds to a growing list of legal actions targeting tech firms accused of using unlicensed content to train AI. Apple previously faced similar lawsuits from authors in September.
While Meta and Anthropic have also faced scrutiny, courts have so far ruled in their favour under the ‘fair use’ doctrine. The case highlights ongoing tensions between copyright law and the data demands of AI development.
Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.
OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.
Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.
Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.