England will introduce AI-generated attendance targets for every school, with tailored improvement baselines reflecting each school's context and needs. Schools with higher absence rates will be paired with strong performers for support. Thirty-six new Attendance and Behaviour Hubs will help drive the rollout.
Education Secretary Bridget Phillipson said raising attendance is essential for opportunity. She highlighted the progress made since the pandemic, but noted that variation remains too high. The AI targets aim to disseminate effective practices across all schools.
A new toolkit will guide schools through key transition points, such as the move from Year 7 to Year 8. CHS South in Manchester is highlighted for using summer family activities to ease anxiety. Officials say early engagement can stabilise attendance.
CHS South Deputy Head Sue Burke said the goal is to ensure no pupil feels left out. She credited the attendance team for combining support with firm expectations. The model is presented as a template for broader adoption.
The policy blends AI analysis with pastoral strategies to address entrenched absence. Ministers argue that consistent attendance drives long-term outcomes. The UK government expects personalised targets and shared practice to embed lasting improvement.
TikTok has responded to the Science, Innovation and Technology Committee regarding proposed cuts to its UK Trust and Safety teams. The company claimed that reducing staff while expanding its use of AI, third-party specialists, and more localised teams would improve moderation effectiveness.
The social media platform, however, did not provide any supporting data or risk assessment to justify these changes. MPs previously called for more transparency on content moderation data during an inquiry into social media, misinformation, and harmful algorithms.
TikTok’s increasing reliance on AI comes amid broader concerns over AI safety, following reports of chatbots encouraging harmful behaviours.
Committee Chair Dame Chi Onwurah expressed concern that AI cannot reliably replace human moderators. She warned AI could cause harm and criticised TikTok for not providing evidence that staff cuts would protect users.
The Committee urges the Government and Ofcom to act to ensure user safety before the staffing reductions are implemented. Dame Chi emphasised that without credible data, it is impossible to determine whether the changes will effectively protect users.
Early detection of Alzheimer’s is often limited in primary care due to short consultations, focus on other health issues, and stigma. Researchers have now demonstrated that a fully digital, zero-cost approach can overcome these barriers without requiring additional clinician time.
A pragmatic clinical trial involving over 5,000 patients tested a dual method combining the Quick Dementia Rating System (QDRS), a ten-question patient-reported survey, with an AI-powered passive digital marker.
The approach, embedded in electronic health records, increased new dementia diagnoses by 31 percent compared with usual care and prompted 41 percent more follow-up assessments, such as cognitive tests and neuroimaging.
The passive digital marker from Regenstrief uses machine learning to analyse health records for memory issues and vascular concerns. Open-source and free, it flags at-risk patients and sends results to clinicians’ EHRs with no extra time or staff needed.
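To make the workflow concrete, here is a minimal, hypothetical sketch of how a patient-reported QDRS score and a passive ML-derived risk score might be combined to flag patients for follow-up in a clinician's worklist; the thresholds, field names, and flagging rule are illustrative assumptions, not the actual Regenstrief implementation.

```python
# Hypothetical sketch: combine a patient-reported QDRS score with a passive
# ML risk score to flag patients for follow-up assessment. Thresholds, field
# names, and the flagging rule are illustrative only.

from dataclasses import dataclass


@dataclass
class PatientRecord:
    patient_id: str
    qdrs_score: float          # patient-reported survey score (assumed scale)
    passive_risk_score: float  # 0-1 probability from the ML marker (assumed)


QDRS_THRESHOLD = 1.5           # illustrative cut-off, not a clinical standard
PASSIVE_RISK_THRESHOLD = 0.7   # illustrative cut-off


def needs_follow_up(record: PatientRecord) -> bool:
    """Flag the patient if either signal crosses its (assumed) threshold."""
    return (record.qdrs_score >= QDRS_THRESHOLD
            or record.passive_risk_score >= PASSIVE_RISK_THRESHOLD)


def flag_for_clinician(records: list[PatientRecord]) -> list[str]:
    """Return patient IDs to surface in the clinician's worklist."""
    return [r.patient_id for r in records if needs_follow_up(r)]


if __name__ == "__main__":
    cohort = [
        PatientRecord("A001", qdrs_score=0.5, passive_risk_score=0.2),
        PatientRecord("A002", qdrs_score=2.0, passive_risk_score=0.4),
        PatientRecord("A003", qdrs_score=1.0, passive_risk_score=0.85),
    ]
    print(flag_for_clinician(cohort))  # ['A002', 'A003']
```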
Researchers highlight that embedding these tools directly into routine care can improve equity by reaching populations that the healthcare system has traditionally underserved.
Experts say that using patient-reported outcomes with AI is a scalable and efficient way to detect dementia early, without adding burden to primary care teams.
The US AI safety and research company Anthropic has announced a $50 billion investment to expand AI computing infrastructure inside the country, partnering with Fluidstack to build data centres in Texas and New York, with additional sites planned.
These facilities are designed to optimise efficiency for Anthropic’s workloads, supporting frontier research and development in AI.
The project is expected to generate approximately 800 permanent jobs and 2,400 construction positions as sites come online throughout 2026.
The investment aligns with the Trump administration's AI Action Plan, which aims to maintain US leadership in AI while strengthening domestic technology infrastructure and competitiveness.
Dario Amodei, CEO and co-founder of Anthropic, highlighted the importance of such infrastructure for developing AI systems capable of accelerating scientific discovery and solving complex problems.
The company serves over 300,000 business customers, with a sevenfold growth in large accounts over the past year, demonstrating strong market demand for its Claude AI platform.
Fluidstack was selected as Anthropic's partner for its agility in rapidly deploying high-capacity infrastructure. The collaboration aims to provide cost-effective and capital-efficient solutions that meet growing demand and keep research and development at the forefront of AI innovation.
US tech giant Meta has announced the construction of its 30th data centre in Beaver Dam, Wisconsin, a $1 billion investment that will power the company's growing AI infrastructure while benefiting the local community and environment.
The facility, designed to support Meta's most demanding AI workloads, will run entirely on clean energy and create more than 100 permanent jobs alongside 1,000 construction roles.
The company will invest nearly $200 million in energy infrastructure and donate $15 million to Alliant Energy’s Hometown Care Energy Fund to assist families with home energy costs.
Meta will also launch community grants to fund schools and local organisations, strengthening technology education and digital skills while helping small businesses use AI tools more effectively.
Environmental responsibility remains central to the project. The data centre will use dry cooling, eliminating water demand during operation, and will restore 100% of the water it consumes to local watersheds.
In partnership with Ducks Unlimited, Meta will revitalise 570 acres of wetlands and prairie, transforming degraded habitats into thriving ecosystems. The facility is expected to achieve LEED Gold Certification, reflecting Meta’s ongoing commitment to sustainability and community-focused innovation.
Companies are transforming routine customer interactions into effortless experiences using AI-powered agents. Instead of endless phone transfers, users now get instant answers or bookings through Agentforce-powered systems.
The focus is not on selling more products, but on improving satisfaction with existing services.
Travel platform Engine is already seeing results. Its Agentforce assistant, Eva, can process partial booking cancellations in seconds by combining customer data with internal booking tools.
By narrowing Eva's focus to a handful of topics, Engine improved response speed and lifted customer satisfaction by six points. The result is less frustration, reduced hold times, and smoother travel management.
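As an illustration of that narrow-focus design, the sketch below shows an assistant restricted to a small allowlist of intents, escalating everything else to a human; the intent names and handlers are hypothetical and do not reflect Salesforce's Agentforce API.

```python
# Hypothetical sketch of scoping an assistant to a handful of intents.
# Intent names and handlers are invented for illustration.

from typing import Callable


def cancel_partial_booking(request: dict) -> str:
    # Placeholder for a call into internal booking tools.
    return f"Cancelled rooms {request['room_ids']} on booking {request['booking_id']}."


def check_booking_status(request: dict) -> str:
    return f"Booking {request['booking_id']} is confirmed."


# Only a narrow allowlist of topics is handled automatically.
SUPPORTED_INTENTS: dict[str, Callable[[dict], str]] = {
    "partial_cancellation": cancel_partial_booking,
    "booking_status": check_booking_status,
}


def handle(intent: str, request: dict) -> str:
    """Resolve supported intents instantly; escalate everything else."""
    handler = SUPPORTED_INTENTS.get(intent)
    if handler is None:
        return "Routing you to a human agent."
    return handler(request)


print(handle("partial_cancellation", {"booking_id": "B-1024", "room_ids": [2, 3]}))
print(handle("loyalty_points", {}))  # out of scope, falls back to a human
```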
Retailer Williams-Sonoma, Inc. is also personalising customer interactions through its virtual assistant, Olive. Beyond processing returns, Olive provides menu suggestions, wine pairings, and meal preparation schedules to help customers host effortlessly.
The aim, according to Chief Technology and Digital Officer Sameer Hassan, is to deliver experiences that teach and inspire rather than promote sales.
Luxury fitness brand Equinox follows a similar path. Its AI assistant now helps members find and book classes directly, reducing clicks and improving usability. EVP and CTO Eswar Veluri said simplifying patterns is key to enhancing the member experience through innovative tools.
The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.
The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.
The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.
Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.
In a recent analysis, Goldman Sachs warned that while AI is rapidly permeating the consumer market, enterprise integration lags well behind.
The report highlights consumer-facing tools, such as chatbots and generative creative applications, as driving the usage surge, but finds that business uptake remains ‘well below where we expected’ a year or two ago.
Goldman’s analysts point to a striking disconnect: consumer adoption is high, yet corporations are slower to embed AI deeply into workflows. One analyst remarked that although nearly 88% of companies report using AI in some capacity, only about a third have scaled it enterprise-wide and just 39% see measurable financial impact.
Meanwhile, infrastructure spending on AI is exploding, with projections of $3–4 trillion by the end of the decade, raising concerns among investors about return on investment and whether the current frenzy resembles past tech bubbles.
For policy-makers, digital-economy strategists and technology governance watchers, this gap has important implications. Hype and hardware build-out may be outpacing deliverables in enterprise contexts.
The divide also underlines the need for more precise metrics around productivity, workforce adaptation and organisational readiness in our discussions around AI policy and digital diplomacy.
Global insurance leader Chubb has launched a new AI-driven embedded insurance optimisation engine within its Chubb Studio platform during the Singapore FinTech Festival. The announcement marks a significant step in enabling digital distribution partners to offer personalised insurance products more effectively.
The engine uses proprietary AI to analyse customer data, identify personas, recommend relevant insurance products (such as phone damage, travel insurance, hospital cash or life cover) at the point of sale, and deliver click-to-engage options for higher-value products.
Integration models range from Chubb-managed to partner-managed or hybrid, giving flexibility in how partners embed the solution.
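The sketch below is a toy illustration of persona-based product matching at the point of sale; the personas, products, and matching rules are invented for illustration and are not Chubb's proprietary engine.

```python
# Toy illustration of persona-based product matching at the point of sale.
# Personas, products, and rules are invented and purely illustrative.

CATALOGUE = {
    "frequent_traveller": ["travel insurance", "phone damage cover"],
    "young_family": ["life cover", "hospital cash"],
    "gadget_buyer": ["phone damage cover"],
}


def infer_persona(customer: dict) -> str:
    """Very rough persona inference from purchase context (illustrative)."""
    if customer.get("trips_last_year", 0) >= 4:
        return "frequent_traveller"
    if customer.get("dependants", 0) > 0:
        return "young_family"
    return "gadget_buyer"


def recommend(customer: dict) -> list[str]:
    """Return products to surface at checkout for the inferred persona."""
    return CATALOGUE[infer_persona(customer)]


print(recommend({"trips_last_year": 6}))  # travel-centric offers
print(recommend({"dependants": 2}))       # family-oriented offers
```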
From a digital-economy and policy perspective, this development highlights how insurance firms are leveraging AI to personalise customer journeys and integrate insurance seamlessly into consumer platforms and apps.
The shift raises essential questions about data utilisation, transparency of recommendation engines and how insurers strike the balance between innovation and consumer protection.