In a legal move that underscores growing scrutiny of digital platforms, the Australian Competition and Consumer Commission (ACCC) has filed a lawsuit in the Federal Court against Microsoft Corporation, accusing the company of misleading approximately 2.7 million Australian subscribers to its Microsoft 365 Personal and Family plans after it integrated its AI assistant, Copilot.
According to the ACCC, Microsoft raised subscription prices by 45% for the Personal plan and 29% for the Family plan after bundling Copilot from 31 October 2024.
The regulator says Microsoft told consumers their only options were to pay the higher, Copilot-inclusive price or cancel their subscription, while failing to clearly disclose that a cheaper ‘Classic’ version of the plan without Copilot remained available.
The ACCC argues Microsoft’s communications omitted the existence of that lower-priced plan unless consumers initiated the cancellation process. Chair Gina Cass-Gottlieb described this omission as ‘very serious conduct’ that deprived customers of informed choice.
The regulator is seeking penalties, consumer redress, injunctions and costs, with potential sanctions of AU$50 million (or more) per breach.
This action signals a broader regulatory push into how major technology firms bundle AI features, raise prices and present options to consumers, an issue that ties into digital economy governance, consumer trust and platform accountability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
In a joint effort, Foxconn announced it will work with NVIDIA Corporation, Stellantis N.V. and Uber Technologies, Inc. on developing and deploying Level 4 (hands-off, eyes-off) autonomous vehicles for robotaxi services. Foxconn brings its expertise in high-performance computing, sensor integration and electronic control systems to the partnership.
The collaboration assigns distinct roles: Nvidia contributes its DRIVE AV software stack and DRIVE AGX Hyperion 10 architecture; Stellantis provides vehicle platforms engineered for autonomy; Foxconn handles hardware and system integration; and Uber offers its global ride-service network to scale the deployment.
Foxconn chairman Young Liu described autonomous mobility as a strategic priority within the company’s EV programme, while Nvidia CEO Jensen Huang said the venture ‘is a leap in AI capability’.
This move underscores how hardware makers, AI firms and mobility service providers are converging around the autonomous-vehicle ecosystem.
It also highlights the expanding role of companies like Foxconn beyond traditional electronics manufacturing into mobility, AI and sensor integration, areas increasingly relevant for digital diplomacy, supply-chain resilience and global tech competition.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.
AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the stack, AI-ready datasets, safety standards, and enforcement of compute protection. Joint metrology and standards work links the US Center for AI Standards and Innovation with the AI Safety Institute of South Korea.
Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.
Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.
Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Speaking at the CNBC Technology Executive Council Summit in New York, Wikipedia founder Jimmy Wales expressed scepticism about Elon Musk’s new AI-powered Grokipedia, suggesting that large language models cannot reliably produce accurate wiki entries.
Wales highlighted the difficulties of verifying sources and warned that AI tools can produce plausible but incorrect information, citing examples where chatbots fabricated citations and personal details.
He rejected Musk’s claims of liberal bias on Wikipedia, noting that the site prioritises reputable sources over fringe opinions. Wales emphasised that focusing on mainstream publications does not constitute political bias but preserves trust and reliability for the platform’s vast global audience.
Despite his concerns, Wales acknowledged that AI could have limited utility for Wikipedia in uncovering information within existing sources.
However, he stressed that substantial costs and potential errors prevent the site from entirely relying on generative AI, preferring careful testing before integrating new technologies.
Wales concluded that while AI may mislead the public with fake or plausible content, the Wikipedia community’s decades of expertise in evaluating information help safeguard accuracy. He urged continued vigilance and careful source evaluation as misinformation risks grow alongside AI capabilities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has pledged to expand its high-tech industries over the next decade. Officials said emerging sectors such as quantum computing, hydrogen energy, nuclear fusion, and brain-computer interfaces will receive major investment and policy backing.
Development chief Zheng Shanjie told reporters that the coming decade will redefine China’s technology landscape, describing it as a ‘new scale’ of innovation. The government views breakthroughs in science and AI as key to boosting economic resilience amid a slowing property market and demographic decline.
The plan underscores Beijing’s push to rival Washington in cutting-edge technology, with billions already channelled into state-led innovation programmes. Public opinion in Beijing appears supportive, with many citizens expressing optimism that China could lead the next technological revolution.
Economists warn, however, that sustained progress will require tackling structural issues, including low domestic consumption and reduced investor confidence. Analysts said Beijing’s long-term success will depend on whether it can balance rapid growth with stable governance and transparent regulation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.
Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.
An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.
As industry pushes back on the Digital Markets Act and the Digital Services Act, and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, by contrast, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.
Users can dismiss the prompt or keep chatting, a design meant to curb excessive screen time without being restrictive. The update also changes how ChatGPT handles high-stakes personal decisions.
ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.
OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.
The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.
Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.
Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.
Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.
DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary PCI centres from January 2020 to May 2024, it points to quicker, more accurate triage, especially beyond specialist hospitals.
ST-segment elevation myocardial infarction occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with a roughly threefold increase in mortality, underscoring the need for rapid, reliable activation.
The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.
False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said that earlier, more precise identification could streamline transfers from non-PCI centres and help teams reach reperfusion targets.
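For readers who want to see how those headline figures relate to the reported counts, here is a minimal sketch in Python that recomputes the implied sensitivity of each approach. It assumes the 601 angiography- and biomarker-confirmed STEMIs as the common denominator for both methods; the false-positive rates are quoted as reported rather than recomputed, since the summary above does not give their exact denominators.

```python
# Minimal sketch (assumption: 601 confirmed STEMIs is the denominator for both methods).
confirmed_stemis = 601

detected_on_first_ecg = {
    "AI ECG model": 553,     # true STEMIs flagged by the AI model on the first ECG
    "Standard triage": 427,  # true STEMIs flagged by the standard protocol
}

for method, true_positives in detected_on_first_ecg.items():
    sensitivity = true_positives / confirmed_stemis
    missed = confirmed_stemis - true_positives
    print(f"{method}: sensitivity ≈ {sensitivity:.1%}, {missed} missed")

# Reported false-positive rates (taken from the study summary, not recomputed):
# AI ECG model: 7.9%; standard protocol: 41.8%
```

On those assumptions, the model’s sensitivity works out to roughly 92 percent, versus about 71 percent for standard triage on the first ECG.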
An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ontario’s privacy watchdog has released an expanded set of de-identification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.
The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The commissioner emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.
Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.
Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!