AI and genetics reveal how language develops in the brain

Recent research shows that language emerges from a dynamic, adaptable system in the brain rather than a single region. AI, high-field MRI, and genetic studies are helping scientists understand how humans acquire and process language.

Large language models can predict speech processing in children as young as two, while MRI shows language dominance exists on a fluid brain continuum. Genetic analyses show hundreds of genes contribute to language, with overlaps between musical rhythm and dyslexia.

High-level language skills, such as grammar, continue to mature between ages two and ten, while phonetic processing stabilises earlier. Combining AI, imaging, and genetics allows researchers to understand individual differences and neurovariability in communication.

The integrated approach could improve early diagnosis and treatment for language disorders, offering insights into how the brain learns, adapts, and uses language across the lifespan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google adds option to disable AI search in Google Photos

Users of Google Photos will now have greater control over how they search their images, after Google introduced a visible toggle that returns to the traditional search experience.

The update follows complaints about the AI-powered Ask Photos feature.

Ask Photos was designed to allow users to search for images using natural language queries rather than simple keywords. The tool aimed to make photo searches more flexible, enabling complex queries such as descriptions of people, events or locations captured in images.

However, some users reported that the AI system produced slower results and occasionally failed to locate images that the classic search had previously found more reliably.

Although an option to turn off the AI feature already existed, it was hidden within settings and often overlooked.

The new update introduces a visible switch directly on the search interface. Users can now easily alternate between the AI-powered search and the traditional search system depending on their preferences.

Google said improvements have also been made to the quality of common searches following user feedback. The company emphasised that search remains one of the most frequently used functions within Google Photos and that ongoing updates will continue to refine the experience.

Malicious npm package targets developers with Openclaw impersonation

Security researchers uncovered a malicious npm package impersonating an Openclaw AI installer, designed to infect developer machines with credential-stealing malware.

JFrog Security Research identified the attack in early March 2026 after the package appeared on the npm registry and was downloaded roughly 178 times.

The deceptive package mimics legitimate Openclaw tools and contains ordinary-looking JavaScript files and documentation. Hidden scripts run during installation, displaying a fake command-line interface and a fabricated system prompt that requests the user’s password.

Entering the password grants the malware elevated access and allows it to download an encrypted payload from a remote command server. Once installed, the payload deploys Ghostloader, a remote access trojan that persists on the system and communicates with attacker servers.

Researchers say the malware targets sensitive information, including saved passwords, browser cookies, SSH keys, and cryptocurrency wallet files. Developers are advised to remove the package immediately, rotate credentials, and install software only from verified sources.
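The attack chain described above hinges on npm lifecycle scripts, which run automatically at install time. The following is a minimal sketch of how a developer might audit a package manifest for such hooks before installing; the package name and script file here are fabricated purely for illustration:

```shell
# Write a sample manifest mimicking a package with a hidden install hook.
# "demo-pkg" and "hidden-setup.js" are fabricated names for illustration.
cat > package.json <<'EOF'
{
  "name": "demo-pkg",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node hidden-setup.js",
    "test": "echo ok"
  }
}
EOF

# Flag lifecycle hooks (preinstall/install/postinstall) that npm would
# execute automatically during `npm install`.
if grep -Eq '"(pre|post)?install"' package.json; then
  echo "WARNING: package defines install-time scripts; review before installing"
fi
```

In practice, `npm view <package> scripts` shows a published package's lifecycle hooks without downloading it, and `npm install --ignore-scripts` prevents such scripts from running automatically.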

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeals said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

New York moves to ban chatbots from giving legal and medical advice

New York lawmakers are considering legislation that would ban AI chatbots from providing legal or medical advice. The bill aims to stop automated systems from impersonating licensed professionals such as doctors and lawyers.

The proposal would also require chatbot operators to clearly inform users that they are interacting with an AI system. Notices must be prominent, written in the same language as the chatbot, and use a readable font.

A key feature of the bill is a private right of action, which would allow users to file civil lawsuits against chatbot owners who violate the law and recover damages and legal fees. Experts say this enforcement tool strengthens the rules and deters abuse.

Supporters of the legislation argue it protects New Yorkers’ safety, particularly minors. Other bills in the same package would regulate online platforms like Roblox and set standards for generative AI, synthetic content, and the handling of biometric data.

The bill’s author, state Senator Kristen Gonzalez, said AI innovation should not come at the expense of public safety. She pointed to recent cases where AI chatbots were linked to harmful outcomes for minors, highlighting the need for transparency and accountability.

If passed, the law would take effect 90 days after the governor signs it. Lawmakers hope it will balance innovation with user protection, ensuring AI tools are used responsibly and safely across the state.

US government faces lawsuits over Anthropic AI move

Anthropic has launched two lawsuits against the US Department of Defence, disputing its recent designation of the AI firm as a ‘supply chain risk.’ The company claims the move is unlawful and infringes on its First Amendment rights.

The company argues that the government is punishing it for refusing to allow the military to use its AI for domestic surveillance or for fully autonomous weapons.

The lawsuits, filed in California and Washington, DC courts, follow the Pentagon’s unprecedented use of the supply chain risk tool against a US company. The designation requires other government contractors to sever ties with Anthropic, posing a serious threat to its business operations.

The company maintains it remains committed to supporting national security applications of its AI.

The Department of Defence has used Anthropic’s AI model Claude in operations targeting Iran. The company says it has worked with the DoD on system adaptations and seeks to continue negotiations while protecting its business and partners.

The firm claims the government’s actions are causing it harm, although CEO Dario Amodei said the designation’s impact has so far been limited. Anthropic insists judicial review is a necessary step to defend its business and ensure the responsible deployment of its technology.

Canada warns about AI-generated scams targeting citizens online

Authorities in Canada have issued a warning about the growing use of AI in impersonation scams targeting citizens. Fraudsters increasingly deploy advanced tools capable of mimicking politicians, government officials and other public figures with convincing realism.

Deepfake videos, synthetic audio and AI-generated messages allow scammers to create convincing communications that appear to come from trusted authorities.

Such tactics are often used to persuade victims to send money, reveal personal information, install malicious software or engage with fraudulent investment offers.

Officials also warn about fake government websites created with AI-assisted tools that imitate official pages by copying national symbols and similar domain names. Suspicious websites often use unusual web addresses, extra characters, or unfamiliar domain endings to mislead visitors.

Authorities advise Canadians to verify unexpected messages through official channels rather than clicking links or responding immediately.

Suspected impersonation attempts should be reported to the Competition Bureau or the Canadian Anti-Fraud Centre.

Malaysia expands AI learning across universities with Google tools

AI tools from Google are now available across all public universities in Malaysia after the nationwide deployment of Gemini for Education.

The initiative integrates AI capabilities into university systems, providing digital research and learning support to nearly 600,000 students and 75,000 faculty members.

The rollout is coordinated with the Ministry of Higher Education Malaysia as part of the country’s broader strategy to become an AI-driven economy by 2030. Universities already using Google Workspace for Education can now access advanced tools, including NotebookLM and the reasoning model Gemini 3.1 Pro, which are designed to support research, writing and personalised learning.

Several universities are already experimenting with AI-assisted teaching. At Universiti Malaysia Perlis, lecturers have created customised AI assistants to guide students through specialised engineering courses.

Meanwhile, researchers and students at Universiti Putra Malaysia are using AI tools to improve literature reviews and academic research workflows.

Other institutions are focusing on digital literacy and AI skills.

At Universiti Malaysia Sarawak, hundreds of lecturers and students are receiving AI certifications, while training programmes are expanding across campuses.

Officials believe the combination of AI tools, training and research support will strengthen the education system of Malaysia and prepare graduates for an increasingly AI-driven economy.

Concerns grow over Grok AI content on X platform

Social media platform X has launched an investigation into racist and offensive posts generated by its Grok AI chatbot in the UK. The review follows a Sky News analysis that flagged troubling responses produced publicly by the system.

Analysis by the broadcaster found Grok generating highly offensive replies, including profanities targeting certain religions. Some responses also repeated false claims blaming Liverpool supporters for the 1989 Hillsborough disaster.

Sky News reporter Rob Harris said X safety teams were urgently examining the chatbot’s behaviour after the posts spread online. The company and its AI developer xAI did not immediately respond to requests for comment.

Concerns around Grok come as governments and regulators increasingly scrutinise AI-generated content on social platforms. Authorities in several countries have already raised alarms about sexually explicit or harmful material created by chatbots.

Earlier this year, xAI introduced new restrictions to limit some image editing features in Grok. Users in certain jurisdictions were also blocked from generating images of people in revealing clothing where such content is illegal.

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. It found that large language models could match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapid capabilities demand stronger safeguards to protect privacy online.
