Meta explores AI system for digital afterlife

Meta has been granted a patent describing an AI system that could simulate a person’s social media activity, even after their death. The patent, originally filed in 2023 and approved in late December, outlines how AI could replicate a user’s online presence by drawing on their past posts, messages and interactions.

According to the filing, a large language model could analyse a person’s digital history, including comments, chats, voice messages and reactions, to generate new content that mirrors their tone and behaviour. The system could respond to other users, publish updates and continue conversations in a way that resembles the original account holder.

The patent suggests the technology could be used when someone is temporarily absent from a platform, but it also explicitly addresses the possibility of continuing activity after a user’s death. It notes that such a scenario would carry more permanent implications, as the person would not be able to return and reclaim control of the account.

More advanced versions of the concept could potentially simulate voice or even video interactions, effectively creating a digital persona capable of engaging with others in real time. The idea aligns with previous comments by Meta CEO Mark Zuckerberg, who has said AI could one day help people interact with digital representations of loved ones, provided consent mechanisms are in place.

Meta has stressed that the patent does not signal an imminent product launch, describing it as a protective filing for a concept that may never be developed. Still, similar services offered by startups have already sparked ethical debate, raising questions about digital identity, consent and the emotional impact of recreating the online presence of someone who has died.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating allegation sparks discrimination lawsuit

A University of Michigan student has filed a federal lawsuit accusing the university of disability discrimination after professors allegedly claimed she used AI to write her essays. The student, identified in court documents as ‘Jane Doe,’ denies using AI and argues that symptoms linked to her medical conditions were wrongly interpreted as signs of cheating.

According to the complaint, Doe has obsessive-compulsive disorder and generalised anxiety disorder. Her lawyers argue that traits associated with those conditions, including a formal tone, structured writing, and consistent style, were cited by instructors as evidence that her work was AI-generated. They say she provided proof and medical documentation supporting her case but was still subjected to disciplinary action and prevented from graduating.

The lawsuit alleges that the university failed to provide appropriate disability-related accommodations during the academic integrity process. It also claims that the same professor who raised the concerns remained responsible for grading and overseeing remedial work, despite what the complaint describes as subjective judgments and questionable AI-detection methods.

The case highlights broader tensions on campuses as educators grapple with the rapid rise of generative AI tools. Professors across the United States report growing difficulty distinguishing between student work and machine-generated text, while students have increasingly challenged accusations they say rely on unreliable detection software.

Similar legal disputes have emerged elsewhere, with students and families filing lawsuits after being accused of submitting AI-written assignments. Research has suggested that some AI-detection systems can produce inaccurate results, raising concerns about fairness and due process in academic settings.

The University of Michigan has been asked to comment on the lawsuit, which is likely to intensify debate over how institutions balance academic integrity, disability rights, and the limits of emerging AI detection technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI innovation hub aims to position Ethiopia as regional leader

Ethiopia has launched a new Artificial Intelligence University Innovation Pod in Addis Ababa, marking a significant step in its ambition to become Africa’s leading AI hub.

The Ethiopian Artificial Intelligence Institute leads the initiative in partnership with Addis Ababa University and the UN Development Programme, under the latter’s Timbuktoo Initiative.

Officials say the centre is designed to strengthen national AI capacity, promote homegrown technological solutions and build a sustainable innovation ecosystem. The AI UniPod will support university students, researchers and start-ups working on advanced digital technologies, with a focus on transforming young people from job seekers into technology creators.

The Ethiopian Artificial Intelligence Institute highlighted recent achievements, including patented tools for breast cancer diagnosis and coffee seed identification, as evidence of the country’s growing technological capability. Leaders described the new facility as a shift from ambition to practical implementation of AI.

Data sovereignty was emphasised as a central pillar of the strategy. Authorities argued that control over digital infrastructure and data resources is essential for national sovereignty, particularly as AI becomes embedded in economic and public systems.

The government views the AI UniPod as a long-term platform for innovation, aimed not only at Ethiopia but also at the wider African continent.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland’s DPC opens data privacy probe into X’s Grok

Ireland’s Data Protection Commission (DPC) has opened a formal investigation into X, focusing on whether the platform complied with its EU privacy obligations after users reportedly generated and shared sexualised, AI-altered images using Grok, the chatbot integrated into X. The inquiry will examine how EU users’ personal data was processed in connection with this feature, under Ireland’s Data Protection Act and the GDPR framework.

The controversy centres on prompts that can ‘edit’ real people’s photos, sometimes producing non-consensual sexualised imagery, with allegations that some outputs involve children. The DPC has said it has been engaging with X since the reports first emerged and has now launched what it describes as a large-scale inquiry into the platform’s compliance with core GDPR duties.

Public and political reaction has intensified as examples circulated of users altering images posted by others without consent, including ‘undressing’ edits. Child-safety concerns have widened the issue beyond platform moderation into questions of legality, safeguards, and accountability for generative tools embedded in mass-use social networks.

X has said it has introduced restrictions and safety measures around Grok’s image features, but regulators appear unconvinced that guardrails are sufficient when tools can be repurposed for non-consensual sexual content at scale. The DPC’s inquiry will test, in practical terms, whether a platform can roll out powerful image-generation/editing functions while still meeting the EU privacy requirements for lawful processing, risk management, and protection of individuals.

Why does it matter?

The DPC is Ireland’s national data protection authority; although an Irish public regulator, it operates within the EU’s GDPR system as part of the network of EU/EEA regulators (the ‘supervisory authorities’). Its probe lands on top of a separate European Commission investigation launched in January under the EU’s Digital Services Act, after concerns that Grok-fuelled deepfakes on X included manipulated sexually explicit images that ‘may amount to child sexual abuse material’, and questions about whether X properly assessed and mitigated those risks before deployment. Together, the two tracks show how the EU is using both privacy law (the GDPR) and platform safety rules (the DSA) to pressure large platforms to prove that generative features are not being shipped faster than the safeguards needed to prevent serious harm, especially when women and children are the most likely targets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Parliament halts built-in AI tools on tablets and other devices over data risks

The European Parliament has disabled built-in AI features on tablets issued to lawmakers, citing cybersecurity and data protection risks. An internal email states that writing assistants, summarisation tools, and enhanced virtual assistants were turned off after security assessments.

Officials said some AI functions on tablets rely on cloud processing for tasks that could be handled locally, potentially transmitting data off the device. A review is underway to clarify how much information may be shared with service providers.

Only pre-installed AI tools were affected, while third-party apps remain available. Lawmakers were advised to review AI settings on personal devices, limit app permissions, and avoid exposing work emails or documents to AI systems.

The step reflects wider European concerns about digital sovereignty and reliance on overseas technology providers. US legislation such as the CLOUD Act allows American authorities to access data held by US companies, raising cross-border data protection questions.

Debate over AI security is intensifying as institutions weigh innovation against the risks of remote processing and granular data access. Parliament’s move signals growing caution around handling sensitive information in cloud-based AI environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

From Milan-Cortina to factory floors, AI powers Zhejiang manufacturing

As Chinese skater Sun Long stood on the Milan-Cortina Winter Olympics podium, the vivid red of his uniform reflected more than national pride. It also highlighted AI’s expanding role in China’s textile manufacturing.

In Shaoxing, AI-powered image systems calibrate fabric colours in real time. Factory managers say digital printing has lifted pass rates from about 50% to above 90%, easing longstanding production bottlenecks.

Tyre manufacturing firm Zhongce Rubber Group uses AI to generate multiple 3D designs in minutes. Engineers report shorter development cycles and reduced manual input across research and testing.

Electric vehicle maker Zeekr uses AI visual inspection in its 5G-enabled factory. Officials say tyre verification now takes seconds, helping eliminate assembly errors.

Zhejiang’s provincial authorities report that the province’s large industrial firms are fully digitalised. The province plans to deepen AI integration by 2027, expanding smart factories and industrial intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood groups challenge ByteDance over Seedance 2.0 copyright concerns

ByteDance is facing scrutiny from Hollywood organisations over its AI video generator Seedance 2.0. Industry groups allege the system uses actors’ likenesses and copyrighted material without permission.

The Motion Picture Association said the tool reflects large-scale unauthorised use of protected works. Chairman Charles Rivkin called on ByteDance to halt what he described as infringing activities that undermine creators’ rights and jobs.

SAG-AFTRA also criticised the platform, citing concerns over the use of members’ voices and images. Screenwriter Rhett Reese warned that rapid AI development could reshape opportunities for creative professionals.

ByteDance acknowledged the concerns and said it would strengthen safeguards to prevent misuse of intellectual property. The company reiterated its commitment to respecting copyright while addressing complaints.

The dispute underscores wider tensions between technological innovation and rights protection as generative AI tools expand. Legal experts say the outcome could influence how AI video systems operate within existing copyright frameworks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Qwen3.5 debuts with hybrid architecture and expanded multimodal capabilities

Alibaba has released Qwen3.5-397B-A17B, the first open-weight model in its Qwen3.5 series. Designed as a native vision-language system, it contains 397 billion parameters, though only 17 billion are activated per forward pass to improve efficiency.
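Alibaba has not published its routing code, but the gap between total and activated parameters is the hallmark of sparse expert routing: each token is sent through only the top-k of a large pool of expert networks. The toy numpy sketch below illustrates the idea; all names, dimensions, and the softmax gate are illustrative, not Qwen3.5’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def route_token(x, gate_w, experts, k=2):
    """Send one token through the k experts with the highest gate scores.

    Only k expert weight matrices are touched per token, so the
    'activated' parameters are a small fraction of the total pool.
    """
    scores = x @ gate_w                        # one gate score per expert
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over the chosen k
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
experts = rng.normal(size=(n_experts, d, d))
y = route_token(rng.normal(size=d), gate_w, experts)  # 2 of 16 experts run
```

With 2 of 16 experts active, only an eighth of the expert parameters participate in each forward pass, which is the same efficiency argument behind activating 17 of 397 billion parameters.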

The model uses a hybrid architecture that combines sparse mixture-of-experts with linear attention via Gated Delta Networks. According to the company, this design improves inference speed while maintaining strong results across reasoning, coding, and agent benchmarks.
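Gated Delta Networks are a gated, delta-rule form of linear attention. The single-step sketch below is a simplification (not Alibaba’s code, and the gating scheme here is illustrative): instead of materialising a quadratic attention matrix, a fixed-size state is decayed and updated once per token, so cost grows linearly with sequence length.

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One recurrent step of a simplified gated delta rule.

    S is a fixed-size (d_v, d_k) state: alpha decays old memory, and a
    delta-rule term writes value v at key k, scaled by beta, correcting
    what the state currently predicts for k. Per-token cost is
    O(d_v * d_k), independent of sequence length.
    """
    S = alpha * S - beta * np.outer(S @ k - v, k)  # decay + error-correcting write
    return S, S @ q                                # readout for this token

d_k, d_v, T = 4, 6, 32
rng = np.random.default_rng(1)
S = np.zeros((d_v, d_k))
for t in range(T):                                 # scan over a toy sequence
    q = rng.normal(size=d_k)
    k = rng.normal(size=d_k)
    v = rng.normal(size=d_v)
    S, o = gated_delta_step(S, q, k / np.linalg.norm(k), v, alpha=0.9, beta=0.5)
```

The recurrent form is what makes very long contexts tractable; a softmax-attention layer over the same sequence would need state that grows with every token seen.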

Multilingual coverage expands from 119 to 201 languages and dialects, supported by a 250k vocabulary and larger visual-text pretraining datasets. Alibaba says the model achieves performance comparable to significantly larger predecessors.

A hosted version, Qwen3.5-Plus, is available through Alibaba Cloud Model Studio, with a 1-million-token context window and built-in adaptive tool use. Reinforcement learning environments were scaled to prioritise generalisation across tasks rather than narrow optimisation.

Infrastructure upgrades include an FP8 training pipeline and an asynchronous reinforcement learning framework to improve efficiency and stability. Alibaba positions Qwen3.5 as a base for multimodal agents that support reasoning, search, and coding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Researchers teach AI to interpret complex scientific data from brain scans to alloy design

Research teams are developing artificial intelligence systems designed to assist scientists in making sense of complex, high-dimensional data across disciplines such as neuroscience and materials engineering.

Traditional analysis methods often require extensive human expertise and time; AI models trained to identify patterns, reduce noise, and suggest hypotheses could significantly accelerate research cycles.

In neuroscience, AI is being used to extract meaningful features from detailed brain imaging datasets, enabling better understanding of neural processes and potentially enhancing diagnosis and treatment development.

In materials science, generative and predictive models help identify promising alloy compositions and properties by learning from vast experimental datasets, reducing reliance on trial-and-error experimentation.
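A minimal sketch of the predictive side, with purely fictional data and property values: fit a surrogate model mapping alloy composition fractions to a measured property, then score many candidate compositions cheaply instead of testing each in the lab.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: composition fractions of a 4-element alloy and a
# fictional measured property (say, hardness) with lab noise.
n = 120
comp = rng.dirichlet(np.ones(4), size=n)       # each row sums to 1
true_w = np.array([3.0, -1.0, 2.5, 0.5])       # hidden ground truth
prop = comp @ true_w + 0.05 * rng.normal(size=n)

# Fit a linear surrogate by least squares, then rank unseen candidates
# by predicted property instead of synthesising each one.
w, *_ = np.linalg.lstsq(comp, prop, rcond=None)
candidates = rng.dirichlet(np.ones(4), size=1000)
best = candidates[np.argmax(candidates @ w)]   # most promising composition
```

Real materials models are nonlinear and trained on far richer descriptors, but the loop is the same: learn from past experiments, then let the model prioritise the next ones.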

Researchers emphasise that these AI tools don’t replace domain expertise but rather augment scientists’ abilities to navigate complex datasets, improve reproducibility and prioritise experiments with higher scientific payoff.

Ethical considerations and careful validation remain important to ensure models don’t propagate biases or misinterpret subtle signals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!