Universities in Ireland urged to rethink assessments amid AI concerns

Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.

The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.

While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.

To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.

The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data centre cluster in Tennessee strengthens xAI’s compute ambitions

xAI is expanding its AI infrastructure in the southern United States after acquiring another data centre site near Memphis. The move significantly increases planned computing capacity and supports ambitions for large-scale AI training.

The expansion centres on the purchase of a third facility near Memphis, disclosed by Elon Musk in a post on X. The acquisition brings xAI’s total planned power capacity close to 2 gigawatts, placing the project among the most energy-intensive AI data centre developments currently underway.

xAI has already completed one major US facility in the area, known as Colossus, while a second site, Colossus 2, remains under construction. The newly acquired building, called MACROHARDRR, is located in Southaven and directly adjoins the Colossus 2 site, as previously reported.

By clustering facilities across neighbouring locations, xAI is creating a contiguous computing campus. The approach enables shared power, cooling, and high-speed data infrastructure for large-scale AI workloads.

The Memphis expansion underscores the rising computational demands of frontier AI models. By owning and controlling its infrastructure, xAI aims to secure long-term access to high-end compute as competition intensifies among firms investing heavily in AI data centres.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

High-profile AI acquisition puts Manus back in focus

Manus has returned to the spotlight after agreeing to be acquired by Meta in a deal reportedly worth more than $2 billion. The transaction is one of the most high-profile acquisitions of an Asian AI startup by a US technology company and reflects Meta’s push to expand agentic AI capabilities across its platforms.

The startup drew attention in March after unveiling an autonomous AI agent designed to execute tasks such as résumé screening and stock analysis. Developed by the AI product studio Butterfly Effect, Manus was founded in China and later moved its headquarters to Singapore.

Since launch, Manus has expanded its features to include design work, slide creation, and browser-based task completion. The company reported surpassing $100 million in annual recurring revenue and raised $75 million earlier this year at a valuation of about $500 million.

Meta said the acquisition would allow it to integrate the Singapore-based company’s technology into its wider AI strategy while keeping the product running as a standalone service. Manus said subscriptions would continue uninterrupted and that operations would remain based in Singapore.

The deal has drawn political scrutiny in the US due to Manus’s origins and past links to China. Meta said the transaction would sever remaining ties to China, as debate intensifies over investment, data security, and competition in advanced AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI secures massive funding round led by SoftBank

SoftBank Group has completed its investment in a funding round for OpenAI worth roughly $41 billion, one of the largest private financings on record. The deal gives the Japanese conglomerate an estimated 11 percent stake in the ChatGPT developer.

The investment reflects SoftBank chief executive Masayoshi Son’s renewed focus on AI and supporting infrastructure. The company is seeking to capitalise on rising demand for the computing capacity that underpins advanced AI models.

SoftBank said its latest tranche comprises an additional $22.5 billion investment, following an earlier $7.5 billion injection in April. OpenAI also secured a further $11 billion through an expanded syndicated co-investment from other backers, bringing the round to its roughly $41 billion total.

The funding values OpenAI at roughly $300 billion on a post-money basis, though secondary market transactions later placed the company’s valuation closer to $500 billion. The investment follows SoftBank’s recent agreement to acquire DigitalBridge Group, a digital infrastructure investor.

OpenAI remains a central beneficiary of the global surge in AI spending. The company is also involved in Stargate, a large-scale data centre project backed by SoftBank and other partners to support next-generation AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI model improves speech therapy planning for hearing-impaired children

A new international study has shown that an AI model using deep transfer learning can predict spoken language outcomes for children after cochlear implantation with 92% accuracy.

Researchers analysed pre-implantation brain MRI scans from 278 children across Hong Kong, Australia, and the US, covering English, Spanish, and Cantonese speakers.

Cochlear implants are the only effective treatment for severe hearing loss, though speech development after early implantation can vary widely. The AI model identifies children needing intensive therapy, enabling clinicians to tailor interventions before implantation.

The study demonstrated that deep learning outperformed traditional machine learning models, handling complex, heterogeneous datasets across multiple centres with different scanning protocols and outcome measures.
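
The study's exact architecture is not detailed above, but the general pattern of deep transfer learning it relies on is straightforward: reuse a network pretrained on one task and fine-tune only a small new output head on the target data. The sketch below illustrates that pattern with a pretrained ResNet and synthetic stand-in "MRI" tensors; the model choice, labels, and data are illustrative assumptions, not the study's pipeline.

```python
# Illustrative transfer-learning sketch (not the study's actual model or data).
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on natural images: the "transfer" step.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False              # freeze generic visual features

# Replace the classifier head with two outcome classes,
# e.g. on-track language development vs. needs intensive therapy.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimiser = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic placeholder batch standing in for preprocessed MRI slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

backbone.train()
for _ in range(5):                           # tiny demo loop, not a real schedule
    optimiser.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimiser.step()
print(f"demo loss: {loss.item():.3f}")
```

Because only the small head is trained, this pattern can cope with modest, heterogeneous datasets of the kind described above, which is part of why transfer learning suits multi-centre medical imaging studies.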

Researchers described the approach as a robust prognostic tool for cochlear implant programmes worldwide.

Experts highlighted that the AI-powered ‘predict-to-prescribe’ method could transform paediatric audiology by optimising therapy plans and improving spoken language development for children receiving cochlear implants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scam texts impersonating Illinois traffic authorities spread

Illinois Secretary of State Alexi Giannoulias has warned residents to stay alert for fraudulent text messages claiming unpaid traffic violations or tolls. Officials say the messages are part of a phishing campaign targeting Illinois drivers.

The scam texts typically warn recipients that their vehicle registration or driving privileges are at risk of suspension. The messages urge immediate action via links designed to steal money or personal information.

The Secretary of State’s office said it sends text messages only to remind customers about scheduled DMV appointments. It does not communicate by text about licence status, vehicle registration issues, or enforcement actions.

Officials advised residents not to click on links or provide personal details in response to such messages. The texts are intended to create fear and pressure victims into acting quickly.

Residents who receive scam messages are encouraged to report them to the Federal Trade Commission through its online fraud reporting system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Belgium’s influencers seek clarity through a new certification scheme

The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.

Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.

In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.

Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.

Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.

Not everyone is convinced.

Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in it as a structured reminder that content creators remain legally responsible for the commercial communications they share with followers.

The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including those on taxation and safeguards for child creators.

Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best AI dictation tools for faster speech-to-text in 2026

AI dictation has finally reached maturity after years of patchy performance and frustrating inaccuracies.

Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably. They also retain enough context to format sentences automatically, rather than producing raw transcripts that require heavy editing.

Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.

Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
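
The local approach Monologue and Superwhisper take resembles what open-source speech-to-text models enable more generally. As a rough illustration of on-device transcription (not the workflow of any specific app above), the open-source openai-whisper Python package downloads a model once and then transcribes local audio files without sending them to a server:

```python
# Minimal local transcription sketch using the open-source openai-whisper package
# (pip install openai-whisper; ffmpeg must be installed for audio decoding).
# Illustrative only; the apps mentioned above may use different engines and pipelines.
import whisper

model = whisper.load_model("base")       # model weights are downloaded and cached locally
result = model.transcribe("memo.m4a")    # path to any local audio file
print(result["text"])                    # plain transcript, ready for light editing
```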

Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.

Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.

Users now get cleaner, more natural-sounding transcripts than the rigid dictation tools of previous years could produce. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China proposes strict AI rules to protect children

China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.

The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.

High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.

The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC says it supports safe AI use, including tools that promote local culture and provide companionship for the elderly.

The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.

China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Visually impaired gamers call for more accessible games

Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.

Audio and text‑based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.

UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and provide feedback to developers to help make games more inclusive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!