OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.
The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.
OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.
The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.
Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has introduced a new mid-tier subscription called AI Plus, designed to make its Gemini AI tools more accessible at a lower price. Positioned between the free and premium Pro plans, AI Plus offers broader access to Gemini 2.5 Pro, productivity features across Gmail, Docs, and Sheets, and 200GB of Google One cloud storage.
Subscribers will benefit from a larger 128,000-token context window, compared with 32,000 for free users, and tools such as Veo 3 Fast for video creation, Google Flow for video generation, and Whisk for image-to-video conversion. The plan also includes expanded use of NotebookLM, Google’s AI-powered research assistant.
The service has launched first in Indonesia at Rp. 75,000 ($4.56) per month, a fraction of the AI Pro plan’s price of Rp. 309,000 ($18.79). Google has not set a timeline for global rollout but said AI Plus will cost under $20 per month in other markets, with prices adjusted regionally.
The new tier follows months of vague descriptions and Google’s recent move to publish clearer guidelines on Gemini’s usage limits across free, Pro, and Ultra plans. The update aims to bring greater transparency as the company pushes deeper into the competitive AI subscription market.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at Harvard Medical School have unveiled an AI system designed to match genes and drugs to combat disease in cells. The system, called PDGrapher, aims to tackle conditions ranging from Parkinson’s and Alzheimer’s to rare disorders like X-linked Dystonia-Parkinsonism.
Unlike traditional tools that only detect correlations, PDGrapher forecasts which gene-drug pairings can restore healthy cellular function and explains their mechanisms. It may speed up research, lower expenses, and point to novel treatments.
Early tests suggest that PDGrapher can identify known effective combinations and propose new ones that have yet to be tested. If validated in trials, the technology could move medicine towards personalised treatments.
The debut of PDGrapher reflects a broader trend of AI transforming biotechnology, with machine learning mapping and decoding complex biological systems faster than ever before.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at NYU’s Tandon School of Engineering have demonstrated how large language models can be used to run ransomware campaigns autonomously. Their prototype, dubbed Ransomware 3.0, simulated every stage of an attack, from intrusion to the generation of a ransom note.
The prototype briefly raised alarm after cybersecurity firm ESET found its files on VirusTotal and mistakenly identified them as live malware. The proof-of-concept was designed only for controlled laboratory use and posed no risk outside testing environments.
Instead of pre-written code, the prototype embedded text instructions that triggered AI models to generate tailored attack scripts. Each execution created unique code, evading traditional detection methods and running across Windows, Linux, and Raspberry Pi systems.
The researchers found that the system identified up to 96% of sensitive files and could generate personalised extortion notes, raising psychological pressure on victims. With costs as low as $0.70 per attack using commercial AI services, such methods could lower barriers for criminals.
The team stressed that the work was conducted ethically and aims to help defenders prepare countermeasures. They recommend monitoring file access patterns, limiting outbound AI connections, and developing defences against AI-generated attack behaviours.
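The file-access monitoring the researchers recommend can be approximated with a simple heuristic: flag a sudden burst of recently modified files across a directory tree, one crude signal of mass encryption. The sketch below is illustrative only and not taken from the NYU work; the function name and time window are assumptions for demonstration.

```python
import os
import time
import tempfile
from pathlib import Path

def recent_modifications(root: str, window_seconds: float = 60.0) -> list[str]:
    """Return paths under `root` modified within the last `window_seconds`.

    A sudden spike in this count across user directories is one crude
    indicator of mass file rewriting, as seen in ransomware encryption.
    """
    cutoff = time.time() - window_seconds
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) >= cutoff:
                    hits.append(path)
            except OSError:
                continue  # file vanished mid-scan; ignore it
    return hits

# Demo: create three files and confirm they are flagged as recent.
with tempfile.TemporaryDirectory() as tmp:
    for i in range(3):
        Path(tmp, f"doc{i}.txt").write_text("data")
    flagged = recent_modifications(tmp, window_seconds=60.0)
    print(len(flagged))  # prints 3
```

A production defence would baseline normal modification rates per directory and alert on deviations, rather than use a fixed window as here.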
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Bloomberg’s Mark Gurman reports that Apple plans to introduce its AI-powered web search tool in spring 2026. The move would position it against OpenAI and Perplexity, while renewing pressure on Google.
The speculation comes after news that Google may integrate its Gemini AI into Apple devices. During an antitrust trial in April, Google CEO Sundar Pichai confirmed plans to roll out updates later this year.
According to Gurman, Apple and Google finalised an agreement for Apple to test a Google-developed AI model to boost its voice assistant. The partnership reflects Apple’s mixed strategy of dependence and rivalry with Google.
With a strong record for accurate Apple forecasts, Gurman suggests the company hopes the move will narrow its competitive gap. Whether it can outpace Google, especially given Pixel’s strong AI features, remains an open question.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The US tech giant Google will not be forced to divest Chrome or Android following the long-running US monopoly case.
US District Judge Amit Mehta ruled that while Google holds a monopoly in traditional search, the rise of AI companies is creating new competitive pressures.
The judgement prevents Google from striking exclusive distribution deals but still allows it to pay partners for preloading and placement of its products. The court also ordered Google to loosen its control over search data, a move that could enable rivals to build their own AI-driven search tools.
Yet, concerns remain for e-commerce businesses.
The so-called ‘Google Zero’ scenario, in which AI-powered search summaries keep users within Google’s results, is cutting website traffic.
Research shows sharp declines in mobile click-through rates, leaving online retailers uncertain of their future visibility.
Experts warn that zero-click searches are becoming the norm. Businesses are being urged to optimise for Google’s AI overviews, enhance the value of product and review pages, track traffic impacts, and diversify their marketing channels.
While Google has avoided structural remedies, its dominance in search and AI appears far from over.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman has sparked debate after admitting he increasingly struggles to distinguish between genuine online conversations and content generated by bots or AI models.
Altman described the ‘strangest experience’ of reading about OpenAI’s Codex model, saying the comments instinctively felt fake even though he knew the growth trend was real. He blamed social media reward mechanics, ‘LLM-speak’, and astroturfing for making communities feel less genuine.
His comments follow an earlier admission that he had never taken the so-called dead internet theory seriously until accounts seemingly run by large language models began appearing across X. The theory claims bots and artificial content dominate online activity, though evidence of coordinated control is lacking.
Reactions were divided, with some users agreeing that online communities have become increasingly bot-like. Others argued the change reflects shifting dynamics in niche groups rather than fake accounts.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.
The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI over a teenager’s suicide allegedly linked to ChatGPT.
Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.
She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.
‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.
Without human guidance, young people risk misinterpreting results and worsening their struggles.
Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.
Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy shift. The move creates new global opportunities for US businesses but also introduces stricter compliance responsibilities.
The order establishes the American AI Exports Program, overseen by the Department of Commerce, to develop and deploy ‘full-stack’ AI export packages.
These packages cover everything from chips and cloud infrastructure to AI models and cybersecurity safeguards. Industry consortia will be invited to submit proposals, outlining hardware origins, export targets, business models, and federal support requests.
A central element of the initiative is ensuring compliance with US export control regimes. Companies must align with the Export Control Reform Act and the Export Administration Regulations, with special attention to restrictions on advanced computing chips.
New guidance warns against potential violations linked to hardware and highlights red flags for illegal diversion of sensitive technology.
Commerce stresses that participation requires robust export compliance plans and rigorous end-user screening.
Legal teams are urged to review policies on AI exports, as regulators focus on preventing misuse of advanced computing systems in military or weapons programmes abroad.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US tech giant Microsoft is expanding its AI strategy by integrating Anthropic’s Claude models into Office 365, adding them to apps like Word, Excel and Outlook instead of relying solely on OpenAI.
Internal tests reportedly showed Anthropic’s systems outperforming OpenAI in specific reasoning and data-processing tasks, prompting Microsoft to adopt a hybrid approach while maintaining OpenAI as a frontier partner.
The shift reflects growing strain between Microsoft and OpenAI, with disputes over intellectual property and cloud infrastructure as well as OpenAI’s plans for greater independence.
By diversifying suppliers, Microsoft reduces risks, lowers costs and positions itself to stay competitive while OpenAI prepares for a potential public offering and develops its own data centres.
Anthropic, backed by Amazon and Google, has built its reputation on safety-focused AI, appealing to Microsoft’s enterprise customers wary of regulatory pressures.
Analysts believe the move could accelerate innovation, spark a ‘multi-model era’ of AI integration, and pressure OpenAI to enhance its technology faster.
The decision comes amid Microsoft’s push to broaden its AI ecosystem, including its in-house MAI-1 model and partnerships with firms like DeepSeek.
Regulators are closely monitoring these developments, given Microsoft’s dominant role in AI investment and the potential antitrust implications of its expanding influence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!