AI is increasingly used for emotional support and companionship, raising questions about the values embedded in its responses, particularly for Christians seeking guidance. Research cited by Harvard Business Review indicates that therapy and companionship have become the leading use of generative AI.
As Christians turn to AI for advice on anxiety, relationships, and personal crises, concerns are growing about the quality and clarity of its responses. Critics warn that AI systems often rely on vague generalities and may lack the moral grounding expected by faith-based users.
A new benchmark released by technology firm Gloo assessed how leading AI models support human flourishing from a Christian perspective. The evaluation examined seven areas, including relationships, meaning, health, and faith, and found consistent weaknesses in how models addressed Christian belief.
The findings show many AI systems struggle with core Christian concepts such as forgiveness and grace. Responses often default to vague spirituality rather than engaging directly with Christian values.
The authors argue that as AI increasingly shapes worldviews, greater attention is needed to how systems serve Christians and other faith communities. They call for clearer benchmarks and training approaches that allow AI to engage respectfully with religious values without promoting any single belief system.
Samsung will open its CES 2026 presence with a Sunday evening press conference focused on integrating AI across its product portfolio. The event will take place on 4 January at the Wynn in Las Vegas and will be livestreamed online.
Senior executives, including TM Roh, head of the Device eXperience division, and leaders from Samsung’s visual display and digital appliance businesses, are expected to outline the company’s AI strategy. Samsung says the presentation will emphasise AI as a core layer across products and services.
The company has already previewed several AI-enabled products and features ahead of CES, including a portable projector that adapts to its surroundings, expanded Google Photos integration on Samsung TVs, and new Micro RGB television displays.
The company is also highlighting AI-powered home appliances designed to anticipate user needs. Examples include refrigerators that track food supplies, generate shopping lists, and detect early signs of device malfunction.
New smartphones are not expected at the event, with the next Galaxy Unpacked launch reportedly scheduled for later in January or early February.
A large energy and AI campus is taking shape outside Amarillo, Texas, as startup Fermi America plans to build what it says would be the world’s largest private power grid. The project aims to support large-scale AI training using nuclear, gas, and solar power.
Known as Project Matador, the development would host millions of square metres of data centres and generate more electricity than many US states consume at peak demand. The site is near the Pantex nuclear weapons facility and is part of a broader push for US energy and AI dominance.
Fermi is led by former Texas governor and energy secretary Rick Perry alongside investor Toby Neugebauer. The company plans to deploy next-generation nuclear reactors and offer off-grid computing infrastructure, though it has yet to secure a confirmed anchor tenant.
The scale and cost of the project have raised questions among analysts and local residents. Critics point to financing risks, water use, and the challenge of delivering nuclear reactors on time and within budget, while supporters argue the campus could drive economic growth and national security benefits.
Backed by political momentum and rising demand for AI infrastructure, Fermi is pressing ahead with construction and partnerships. Whether Project Matador can translate ambition into delivery remains a key test as competition intensifies in the global race to power next-generation AI systems.
AI is reshaping Australia’s labour market at a pace that has reignited anxiety about job security and skills. Experts say the speed and visibility of AI adoption have made its impact feel more immediate than previous technological shifts.
Since the public release of ChatGPT in late 2022, AI tools have rapidly moved from novelty to everyday workplace technology. Businesses are increasingly automating routine tasks, including through agentic AI systems that can execute workflows with limited human input.
Research from the HR Institute of Australia suggests the effects are mixed. While some entry-level roles have grown in the short term, analysts warn that clerical and administrative jobs remain highly exposed as automation expands across organisations.
Economic modelling indicates that AI could boost productivity and incomes if adoption is carefully managed, but may also cause short-term job displacement. Sectors with lower automation potential, including construction, care work, and hands-on services, are expected to absorb displaced workers.
Experts and unions say outcomes will depend on skills, policy choices, and governance. Australia’s National AI Plan aims to guide the transition, while researchers urge workers to upskill and use AI as a productivity tool rather than avoiding it.
A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm. The summaries, which appear at the top of search results, are generated using AI and are presented as reliable snapshots of key information.
The investigation identified multiple cases where Google’s AI summaries provided inaccurate medical advice. Examples included incorrect guidance for pancreatic cancer patients, misleading explanations of liver blood test results, and false information about women’s cancer screening.
Health experts warned that such errors could lead people to dismiss symptoms, delay treatment, or follow harmful advice. Some charities said the summaries lacked essential context and could mislead users during moments of anxiety or crisis.
Concerns were also raised about inconsistencies, with the same health queries producing different AI-generated answers at different times. Experts said this variability undermines trust and increases the risk that misinformation will influence health decisions.
Google said most AI Overviews are accurate and helpful, and that the company continually improves quality, particularly for health-related topics. It said action is taken when summaries misinterpret content or lack appropriate context.
Chinese President Xi Jinping said 2025 marked a year of major breakthroughs for the country’s AI and semiconductor industries. In his New Year’s address, he said that Chinese technology firms had made significant progress in AI models and domestic chip development.
China’s AI sector gained global attention with the rise of DeepSeek. The company launched advanced models focused on reasoning and efficiency, drawing comparisons with leading US systems and triggering volatility in global technology markets.
Other Chinese firms also expanded their AI capabilities. Alibaba released new frontier models and pledged large-scale investment in cloud and AI infrastructure, while Huawei announced new computing technologies and AI chips to challenge dominant suppliers.
China’s progress prompted mixed international responses. Some European governments restricted the use of Chinese AI models over data security concerns, while US companies continued engaging with Chinese-linked AI firms through acquisitions and partnerships.
Looking ahead to 2026, China is expected to prioritise AI and semiconductors in its next five-year development plan. Analysts anticipate increased research funding, expanded infrastructure, and stronger support for emerging technology industries.
Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.
India’s government has set out plans to democratise AI infrastructure nationwide. The strategy focuses on expanding access beyond major technology hubs.
Officials aim to increase the availability of computing power, datasets, and AI models, with startups, researchers, and public institutions as the key intended beneficiaries.
New initiatives under IndiaAI and national supercomputing programmes will boost domestic capacity. Authorities say local compute access reduces reliance on foreign providers.
Digital public platforms will support data sharing and model development. The approach seeks inclusive innovation in education, healthcare, and governance across India.
xAI is expanding its AI infrastructure in the southern United States after acquiring another data centre site near Memphis. The move significantly increases planned computing capacity and supports ambitions for large-scale AI training.
The expansion centres on the purchase of a third facility near Memphis, disclosed by Elon Musk in a post on X. The acquisition brings xAI’s total planned power capacity close to 2 gigawatts, placing the project among the most energy-intensive AI data centre developments currently underway.
‘xAI has bought a third building called MACROHARDRR. Will take @xAI training compute to almost 2GW,’ Musk wrote.
xAI has already completed one major US facility in the area, known as Colossus, while a second site, Colossus 2, remains under construction. The newly acquired building, called MACROHARDRR, is located in Southaven and directly adjoins the Colossus 2 site, as previously reported.
By clustering facilities across neighbouring locations, xAI is creating a contiguous computing campus. The approach enables shared power, cooling, and high-speed data infrastructure for large-scale AI workloads.
The Memphis expansion underscores the rising computational demands of frontier AI models. By owning and controlling its infrastructure, xAI aims to secure long-term access to high-end compute as competition intensifies among firms investing heavily in AI data centres.
Manus has returned to the spotlight after agreeing to be acquired by Meta in a deal reportedly worth more than $2 billion. The transaction is one of the most high-profile acquisitions of an Asian AI startup by a US technology company and reflects Meta’s push to expand agentic AI capabilities across its platforms.
The startup drew attention in March after unveiling an autonomous AI agent designed to execute tasks such as résumé screening and stock analysis. Developed by the AI product studio Butterfly Effect, Manus was founded in China and later moved its headquarters to Singapore.
Since launch, Manus has expanded its features to include design work, slide creation, and browser-based task completion. The company reported surpassing $100 million in annual recurring revenue and raised $75 million earlier this year at a valuation of about $500 million.
Meta said the acquisition would allow it to integrate the Singapore-based company’s technology into its wider AI strategy while keeping the product running as a standalone service. Manus said subscriptions would continue uninterrupted and that operations would remain based in Singapore.
The deal has drawn political scrutiny in the US due to Manus’s origins and past links to China. Meta said the transaction would sever remaining ties to China, as debate intensifies over investment, data security, and competition in advanced AI systems.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!