Avocado Consulting is urging Australian organisations to strengthen software supply chain security after a high-severity alert from the Australian Cyber Security Centre (ACSC). The alert flagged threats including social engineering, stolen tokens, and manipulated software packages.
Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.
Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged embedding security into workflows through short-lived credentials, policy-as-code, and secret detection by default, arguing this reduces incidents without slowing development.
Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
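The dependency validation described above can be sketched as a simple hash pin: an artifact is accepted only if its digest matches a pinned value, and unpinned dependencies are rejected by default. A minimal, hypothetical Python illustration (the package name and artifact bytes are invented for the example, not taken from the ACSC alert):

```python
import hashlib

# Hypothetical pinned digest; in practice this would come from a lockfile
# produced at dependency-resolution time (e.g. pip's hash-checking mode).
PINNED = {
    "example-pkg-1.0.tar.gz": hashlib.sha256(b"example artifact bytes").hexdigest(),
}


def verify_artifact(name: str, data: bytes) -> bool:
    """Accept an artifact only if its SHA-256 matches the pinned value."""
    expected = PINNED.get(name)
    if expected is None:
        # Fail closed: dependencies without a pinned hash are rejected.
        return False
    return hashlib.sha256(data).hexdigest() == expected
```

Real toolchains apply the same idea declaratively, for example pip's `--require-hashes` mode or lockfiles with integrity fields, rather than hand-rolled checks.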
Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is rapidly expanding AI Mode, its generative AI-powered search assistant. The company has announced that the feature is now rolling out globally in Spanish. Spanish speakers can now interact with AI Mode to ask complex questions that traditional Search handles poorly.
AI Mode has seen swift adoption since its launch earlier this year. First introduced in March, the feature was rolled out to users across the US in May, followed by its first language expansion earlier this month.
Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese were the first languages added, and Spanish now joins the list. Google says more languages will follow soon as part of its global AI Mode rollout.
Google says the feature is designed to work alongside Search, not replace it, offering conversational answers with links to supporting sources. The company has stressed that responses are generated with safety filters and fact-checking layers.
The rollout reflects Google’s broader strategy to integrate generative AI into its ecosystem, spanning Search, Workspace, and Android. AI Mode will evolve with multimodal support and tighter integration with other Google services.
ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.
Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with Google DeepMind’s system.
Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.
ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.
The tool is now available to users in China through the Jimeng and Doubao AI apps, and to businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.
Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.
The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.
The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.
The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.
With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.
Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.
The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.
Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.
Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.
SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.
Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.
Meta and Google will block political, election, and social issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to carry labels disclosing sponsors, payments, and targeting.
Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.
Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.
The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.
Mexico’s competition watchdog has accused Amazon and Mercado Libre of erecting barriers that limit the mobility of sellers in the country’s e-commerce market. The two platforms reportedly account for 85% of the seller market.
The Federal Economic Competition Commission (COFECE) stated that the companies provide preferential treatment to sellers who utilise their logistics services and fail to disclose how featured offers are selected, thereby restricting fair competition.
Despite finding evidence of these practices, COFECE stopped short of imposing corrective measures, citing a lack of consensus among stakeholders. Amazon welcomed the decision, saying it demonstrates the competitiveness of the retail market in Mexico.
The watchdog aims to promote a more dynamic e-commerce sector, benefiting buyers and sellers. Its February report had recommended measures to improve transparency, separate loyalty programme services, and allow fairer access to third-party delivery options.
Trade associations praised COFECE for avoiding sanctions, warning that penalties could harm consumers and shield traditional retailers. Mercado Libre has not yet commented on the findings.
Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.
Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.
The country will also establish a National AI Office to act as the central coordinator for AI Act enforcement and to liaise with EU institutions. Where multiple authorities are involved, a single point of contact must be designated to ensure clear communication.
Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.
Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, which will be implemented in December.
The World Economic Forum (WEF) has published an article on using trade policy to build a fairer digital economy. Digital services now make up over half of global exports, with AI investment projected at $252 billion in 2024. Countries from Kenya to the UAE are positioning themselves as digital hubs, but job quality still lags.
Millions of platform workers face volatile pay, lack of contracts, and no access to social protections. In Kenya alone, 1.9 million people rely on digital work yet face algorithm-driven pay systems and sudden account deactivations. India and the Philippines show similar patterns.
AI threatens to automate lower-skilled tasks such as data annotation and moderation, deepening insecurity in sectors where many developing countries have found a competitive edge. Ethical standards exist but have little impact without enforcement or supportive regulation.
Countries are experimenting with reforms: Singapore now mandates injury compensation and retirement savings for platform workers, while the Rider Law in Spain reclassifies food couriers as employees. Yet overly strict regulation risks eroding the flexibility that attracts youth and caregivers to gig work.
Trade agreements, such as the AfCFTA and the Kenya–EU pact, could embed labour protections in digital markets. Coordinated policies and tripartite dialogue are essential to ensure the digital economy delivers growth, fairness, and dignity for workers.
Jaguar Land Rover (JLR) has confirmed that its production halt will continue until at least Wednesday, 24 September, as it works to recover from a major cyberattack that disrupted its IT systems and paralysed production at the end of August.
JLR stated that the extension was necessary because forensic investigations were ongoing and the controlled restart of operations was taking longer than anticipated. The company stressed that it was prioritising a safe and stable restart and pledged to keep staff, suppliers, and partners regularly updated.
Reports suggest recovery could take weeks, impacting production and sales channels for an extended period. Approximately 33,000 employees remain at home as factory and sales processes are not fully operational, resulting in estimated losses of £1 billion in revenue and £70 million in profits.
The shutdown also poses risks to the wider UK economy, as JLR represents roughly four percent of British exports. The incident has renewed calls for the Cyber Security and Resilience Bill, which aims to strengthen defences against digital threats to critical industries.
No official attribution has been made, but a group calling itself Scattered Lapsus$ Hunters has claimed responsibility. The group claims to have deployed ransomware and published screenshots of JLR’s internal SAP system, linking itself to extortion groups, including Scattered Spider, Lapsus$, and ShinyHunters.