Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
India’s government has set out plans to democratise AI infrastructure nationwide. The strategy focuses on expanding access beyond major technology hubs.
Officials aim to increase availability of computing power, datasets and AI models. Startups, researchers and public institutions are key intended beneficiaries.
New initiatives under IndiaAI and national supercomputing programmes will boost domestic capacity. Authorities say local compute access reduces reliance on foreign providers.
Digital public platforms will support data sharing and model development. The approach seeks inclusive innovation in education, healthcare and governance across India.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
xAI is expanding its AI infrastructure in the southern United States after acquiring another data centre site near Memphis. The move significantly increases planned computing capacity and supports ambitions for large-scale AI training.
The expansion centres on the purchase of a third facility near Memphis, disclosed by Elon Musk in a post on X. The acquisition brings xAI’s total planned power capacity close to 2 gigawatts, placing the project among the most energy-intensive AI data centre developments currently underway.
‘xAI has bought a third building called MACROHARDRR. Will take @xAI training compute to almost 2GW,’ Musk wrote.
xAI has already completed one major US facility in the area, known as Colossus, while a second site, Colossus 2, remains under construction. The newly acquired building, called MACROHARDRR, is located in Southaven and directly adjoins the Colossus 2 site, as previously reported.
By clustering facilities across neighbouring locations, xAI is creating a contiguous computing campus. The approach enables shared power, cooling, and high-speed data infrastructure for large-scale AI workloads.
The Memphis expansion underscores the rising computational demands of frontier AI models. By owning and controlling its infrastructure, xAI aims to secure long-term access to high-end compute as competition intensifies among firms investing heavily in AI data centres.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.
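To illustrate the general approach (not WitNote’s actual code), the minimal Python sketch below shows how a lightweight model such as Qwen2.5-0.5B can be run fully offline with the Hugging Face transformers library; the model identifier and the example note are assumptions for demonstration only.

```python
# Minimal sketch: local text rewriting with a small open model.
# This is NOT WitNote's code; it only illustrates how a lightweight
# model such as Qwen2.5-0.5B can run offline via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # small model, runs on CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

note = "meeting w/ sarah tmrw 10am re: budget, bring q3 numbers"  # example note
messages = [
    {"role": "system", "content": "Rewrite the user's note as a clear, well-formed sentence."},
    {"role": "user", "content": note},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=80)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same script could be pointed at a hosted model via an API key instead, which is the trade-off WitNote leaves to the user: local inference keeps data on the machine, while remote models offer more capability.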
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without the network delays of web-based interfaces.
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
The developer says the project remains a work in progress but is committed to ongoing updates and user-driven improvements, and has even joined Apple’s developer programme personally to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new international study has shown that an AI model using deep transfer learning can predict spoken language outcomes for children after cochlear implantation with 92% accuracy.
Researchers analysed pre-implantation brain MRI scans from 278 children across Hong Kong, Australia, and the US, covering English, Spanish, and Cantonese speakers.
Cochlear implants are the only effective treatment for severe hearing loss, though speech development after early implantation can vary widely. The AI model identifies children needing intensive therapy, enabling clinicians to tailor interventions before implantation.
The study demonstrated that deep learning outperformed traditional machine learning models, handling complex, heterogeneous datasets across multiple centres with different scanning protocols and outcome measures.
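The paper’s exact architecture is not described here, but the sketch below illustrates what deep transfer learning typically means in this setting: a network pretrained on a large generic image dataset is reused, and only a small new output head is trained on the task of interest. The backbone, input shapes and labels are illustrative assumptions, not the study’s pipeline.

```python
# Illustrative sketch of deep transfer learning (not the study's actual model):
# reuse an ImageNet-pretrained backbone and train only a new head to predict
# a binary outcome from image-like inputs (e.g. 2D MRI slices).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():          # freeze the pretrained weights
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for preprocessed MRI slices and outcome labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss on dummy batch: {loss.item():.3f}")
```

Because only the small head is trained, this kind of setup can cope with modest, heterogeneous datasets, which is the advantage the researchers highlight over training a model from scratch.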
Researchers described the approach as a robust prognostic tool for cochlear implant programmes worldwide.
Experts highlighted that the AI-powered ‘predict-to-prescribe’ method could transform paediatric audiology by optimising therapy plans and improving spoken language development for children receiving cochlear implants.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Belgium’s booming influencer economy is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in it as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI dictation has finally reached maturity after years of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
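As a rough illustration of how such a pipeline fits together, the sketch below uses the open-source Whisper package for the speech-to-text step and leaves the LLM clean-up pass as a stub; the file name and function names are placeholders, and none of the apps mentioned below necessarily work this way.

```python
# Minimal sketch of an AI dictation pipeline using the open-source
# Whisper speech-to-text model (pip install openai-whisper).
# Commercial tools layer an LLM on top of the raw transcript to fix
# punctuation, casing and formatting; that step is only stubbed here.
import whisper

def transcribe(audio_path: str) -> str:
    model = whisper.load_model("base")        # small multilingual model
    result = model.transcribe(audio_path)     # returns a dict with "text"
    return result["text"].strip()

def tidy(raw_transcript: str) -> str:
    # Placeholder for the LLM clean-up pass (sentence breaks, lists, etc.).
    return raw_transcript

if __name__ == "__main__":
    print(tidy(transcribe("memo.m4a")))  # "memo.m4a" is a hypothetical recording
```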
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI tools are increasingly used for simple everyday calculations, yet a new benchmark suggests accuracy remains unreliable.
The ORCA study tested five major chatbots across 500 real-world maths prompts and found that users still face roughly a 40 percent chance of receiving the wrong answer.
Gemini from Google recorded the highest score at 63 percent, with xAI’s Grok almost level at 62.8 percent. DeepSeek followed with 52 percent, while ChatGPT scored 49.4 percent, and Claude placed last at 45.2 percent.
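Taking those five headline scores at face value, the quick back-of-the-envelope check below shows the implied per-model error rates; the unweighted average lands in the mid-40s, broadly consistent with the roughly 40 percent failure rate cited above, though ORCA’s own per-prompt aggregation may weight things differently.

```python
# Back-of-the-envelope check using the accuracy figures quoted above.
# This simply averages the five headline scores without any weighting.
scores = {
    "Gemini": 63.0,
    "Grok": 62.8,
    "DeepSeek": 52.0,
    "ChatGPT": 49.4,
    "Claude": 45.2,
}

for model, accuracy in scores.items():
    print(f"{model:>8}: {100 - accuracy:.1f}% of answers wrong")

mean_error = 100 - sum(scores.values()) / len(scores)
print(f"Unweighted average error rate: {mean_error:.1f}%")
```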
Performance varied sharply across subjects: maths and conversion tasks produced the best results, while physics questions dragged scores down to an average accuracy below 40 percent.
Researchers identified most errors as sloppy calculations or rounding mistakes, rather than deeper failures to understand the problem. Finance and economics questions highlighted the widest gaps between the models, while DeepSeek struggled most in biology and chemistry, with barely one correct answer in ten.
Users are advised to double-check results whenever accuracy is crucial; a calculator or a verified source remains a safer option than relying entirely on an AI chatbot for numerical certainty.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Korean Air has disclosed a data breach affecting about 30,000 employees. The records were stolen from systems operated by a former subsidiary.
The breach occurred at catering supplier KC&D, which was sold off in 2020. The hackers, who had previously attacked the Washington Post, accessed employee names and bank account details, while customer data remained unaffected.
Investigators linked the incident to exploits in Oracle E-Business Suite. Cybercriminals abused zero-day flaws during a wider global hacking campaign.
The attack against Korean Air has been claimed by the Cl0p ransomware group. Aviation firms worldwide have reported similar breaches connected to the same campaign.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Security researchers warn hackers are exploiting a new feature in Microsoft Copilot Studio. The issue affects recently launched Connected Agents functionality.
Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.
Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.
Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!