CelcomDigi and AmBank partner to revolutionise digital healthcare in Malaysia

CelcomDigi and AmBank have formed a strategic partnership to revolutionise digital healthcare in Malaysia through a newly signed Memorandum of Understanding (MoU). The collaboration will deliver affordable digital healthcare solutions over the next three years, empowering healthcare providers with advanced tools and services that leverage AI to enhance patient care and healthcare delivery.

Under this partnership, CelcomDigi will provide essential connectivity, while AmBank will offer financial services such as specialised medical financing, loans, insurance, and payment solutions, making these innovations more accessible to healthcare institutions. The initiative will introduce various solutions, including Smart Health Kiosks for monitoring vital health metrics and Medi-Scan technology, which uses AI for biometric assessments. The focus is particularly on improving healthcare access in underserved areas, where quality care has historically been limited.

The commitment to enhancing healthcare accessibility for all Malaysians aligns with the initiatives of the Malaysian Communications and Multimedia Commission to elevate the country’s healthcare system to a global standard. Integrating telecommunications and digital infrastructure is deemed essential to achieve this goal. Together, the organisations aim to create a more connected and inclusive healthcare ecosystem that supports predictive, preventive, and precision treatments, ultimately improving clinical outcomes for patients.

The United States, Japan, and South Korea collaborate to strengthen India’s digital infrastructure

The United States, Japan, and South Korea are collaborating to strengthen digital infrastructure development in India through the recently announced Digital Infrastructure Growth Initiative for India Framework, known as the DiGi Framework. The partnership seeks to leverage the strengths of the three nations, with key financial support from the US International Development Finance Corporation (DFC), the Japan Bank for International Cooperation (JBIC), and the Export-Import Bank of Korea (Korea Eximbank).

The primary objective of the DiGi Framework is to promote private sector investments in India’s digital infrastructure by addressing the strategic needs of various projects. Targeted sectors include multiple technologies and services, such as information and communications technologies (ICT), Open RAN, 5G telecommunications, submarine cables, optical fibre networks, telecom towers, data centres, smart cities, e-commerce, AI, and quantum technology.

Additionally, the initiative aims to foster meaningful dialogues between the Indian government and the private sector to promote funding for digital infrastructure projects. The collaborative effort builds upon an earlier agreement signed in August 2023, emphasising the importance of coordination and cooperation among like-minded countries to support private sector investment in infrastructure.

By enhancing collaboration and communication, the DiGi Framework aims to create an environment conducive to investment and innovation within India’s digital landscape. The initiative signals a strong commitment to strengthening India’s digital infrastructure, positioning the country for sustainable growth and technological advancement in an increasingly digital world.

Why does it matter?

With the support of these three nations, the framework represents a strategic move to strengthen India’s technological capabilities and improve connectivity, ultimately benefiting its economic development and resilience in the face of future challenges.

US finalising rules to curb investment in China’s AI and defence tech

The Biden administration announced on Monday new rules restricting US investments in specific technology sectors in China, including AI, semiconductors, and quantum computing, citing national security concerns. These rules, effective from 2 January, aim to prevent US capital and expertise from aiding China’s development of military and intelligence capabilities. Issued under an executive order from August 2023, the regulations will be managed by the Treasury’s new Office of Global Transactions.

The targeted technologies are considered crucial to future military and cyber defence. Treasury officials note that US investments often bring more than money, including managerial support, network access, and intellectual expertise that could benefit Chinese advancements in sensitive sectors. A senior Treasury official, Paul Rosen, emphasised that these restrictions curb potential US involvement in developing cutting-edge technologies for adversarial nations.

US Commerce Secretary Gina Raimondo has previously highlighted the importance of these measures, viewing them as essential to slowing China’s progress in military technologies. The new regulations allow for investments in publicly traded Chinese securities; however, existing rules still restrict transactions involving certain Chinese firms deemed to support military development.

Additionally, the rules respond to recent criticism from the House Select Committee on China, which has scrutinised American index providers for funnelling US investments into Chinese companies linked to military advancements. With these regulations, the administration underscores its intent to protect US interests by limiting China’s access to critical technology expertise and capital.

Biden’s national security memorandum prioritises AI regulation and international collaboration

President Biden signed a landmark national security memorandum to strengthen how AI is employed across defence and intelligence operations. The directive outlines strict protections on AI use, preventing autonomous systems from making high-stakes decisions like nuclear launches and immigration rulings. Jake Sullivan, the national security adviser, highlighted the need for the US to maintain its competitive edge in AI to safeguard national security.

‘Few technologies will be as critical to our future security as AI,’ Sullivan said at the National Defense University in Washington. He underscored the administration’s aim to roll out AI protections faster than other global powers and stressed the need to balance open market competition with secure innovation.

The memorandum also directs federal agencies to bolster the security and diversity of chip supply chains and prioritise gathering intelligence on foreign AI operations targeting the US sector. These insights will support AI developers in protecting their products from adversarial threats.

However, with many recommendations set to take effect post-2025, it’s uncertain if the next administration will uphold these regulations. Experts emphasise that while AI is kept out of nuclear launch decisions, it still influences the data presidents receive, raising questions about reliance on AI for critical decision-making.

In the meantime, the administration will convene a global safety summit in San Francisco next month to address AI risks and foster international cooperation. This move adds to Biden’s executive order from last year, which aimed to limit AI’s risks to consumers, workers, and minority groups.

Global standards for AI, DPI move forward after India proposal

The International Telecommunication Union (ITU) will prioritise new global standards for AI and digital public infrastructure (DPI), with the aim of fostering interoperability, trust, and inclusivity. The resolution, adopted at the World Telecommunication Standardisation Assembly (WTSA) held in Delhi, was led by India, which has promoted DPI platforms such as Aadhaar and UPI. This adoption underscores DPI’s importance as a technology that can bridge access to essential services across both public and private sectors, sparking particular interest from developing economies.

This year’s WTSA, attended by a record-breaking 3,700 delegates, also introduced standardisation frameworks for sustainable digital transformation, AI, and the metaverse, as well as enhancements to communications in vehicular technology and emergency services. These efforts aim to facilitate safer, more reliable AI innovations, particularly for nations lacking frameworks for emerging technologies. ITU Secretary General Doreen Bogdan-Martin emphasised that strong AI standards are essential for building global trust and enabling responsible tech growth.

India’s influence at WTSA highlights its commitment to shaping the global tech landscape, including standards for next-generation technologies like 6G, IoT, and satellite communications. To that end, the assembly also established a new study group, ITU-T Study Group 21, which will focus on multimedia and content delivery standards.

Meta partners with Reuters for AI news content

Meta Platforms announced a new partnership with Reuters on Friday, allowing its AI chatbot to give users real-time answers about news and current events using Reuters content. The agreement marks Meta’s return to licensed news distribution after scaling back on news content amid ongoing disputes with regulators and publishers over misinformation and revenue sharing. The financial specifics of the deal remain undisclosed, as Meta and Reuters-parent Thomson Reuters have chosen to keep the terms confidential.

Meta’s AI chatbot, available on platforms like Facebook, WhatsApp, and Instagram, will now offer users summaries and links to Reuters articles when they ask news-related questions. Although Meta hasn’t clarified if Reuters content will be used to train its language models further, the company assures that Reuters will be compensated under a multi-year agreement, as reported by Axios.

Reuters, known for its fact-based journalism, has confirmed that it licenses its content to multiple tech providers for AI use, without detailing specific deals.

Why does it matter?

The partnership reflects a growing trend in tech, with companies like OpenAI and Perplexity also forming agreements with media outlets to enhance their AI responses with verified information from trusted news sources. Reuters has already collaborated with Meta on fact-checking initiatives, a partnership that began in 2020. This latest agreement aims to improve the reliability of Meta AI’s responses to real-time questions, potentially addressing ongoing concerns around misinformation and helping to balance the distribution of accurate, trustworthy news on social media platforms.

Krakow radio station replaces journalists with AI presenters

A radio station in Krakow, Poland, has ignited controversy by replacing its human journalists with AI-generated presenters, marking what it claims to be ‘the first experiment in Poland.’ OFF Radio Krakow relaunched this week after laying off its staff, introducing virtual avatars aimed at engaging younger audiences on cultural, social, and LGBTQ+ topics.

The move has faced significant backlash, particularly from former journalist Mateusz Demski, who penned an open letter warning that this shift could set a dangerous precedent for job losses in the media and creative sectors. His petition against the change quickly gathered over 15,000 signatures, highlighting widespread public concern about the implications of using AI in broadcasting.

Station head Marcin Pulit defended the layoffs, stating that they were due to the station’s low listenership rather than the introduction of AI. However, Deputy Prime Minister Krzysztof Gawkowski called for regulations on AI usage, emphasising the need to establish boundaries for its application in media.

On its first day back on air, the station featured an AI-generated interview with the late Polish poet Wisława Szymborska. Michał Rusinek, president of the Wisława Szymborska Foundation, expressed support for the project, suggesting that the poet would have found the use of her name in this context humorous. As OFF Radio Krakow ventures into this new territory, discussions around the role of AI in journalism and its effects on employment are intensifying.

Nvidia expands AI push in India

Nvidia has deepened its ties with major Indian firms, including Reliance Industries, as it seeks to capitalise on the country’s growing AI market. At an AI summit in Mumbai, CEO Jensen Huang announced the launch of a new Hindi-focused AI model, Nemotron-4-Mini-Hindi-4B, designed to help businesses develop language-specific AI tools. The launch is part of Nvidia’s broader strategy to boost computing infrastructure in India, which Huang said will grow nearly twentyfold by the end of the year.

The new model is tailored for Hindi, one of India’s 22 official languages, and aims to support companies in creating AI-driven solutions for customer service and content translation. Tech Mahindra is the first to adopt Nvidia’s offering, using it to develop a custom AI model, Indus 2.0, which also focuses on Hindi and its various dialects. Nvidia is also working with major IT players like Infosys, TCS, and Wipro to train half a million developers in AI.
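
For developers wondering what adopting such a model might look like in practice, the snippet below is a minimal sketch (not an official Nvidia or Tech Mahindra example) of prompting a Hindi instruction-tuned model through Hugging Face Transformers. The repository id, prompt, and generation settings are assumptions made for illustration; verify the exact model name in Nvidia’s catalogue before use.

```python
# Minimal sketch: prompt a Hindi language model via Hugging Face Transformers.
# The repository id below is an assumption based on the reported model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Nemotron-4-Mini-Hindi-4B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# A customer-service style prompt in Hindi
# ("Write a polite welcome message for customer support.").
prompt = "ग्राहक सहायता के लिए एक विनम्र स्वागत संदेश लिखिए।"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any real customer-service or translation deployment would fine-tune and evaluate such a model on domain-specific Hindi data rather than rely on off-the-shelf generation.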

In addition, companies such as Reliance and Ola Electric will use Nvidia’s ‘Omniverse’ technology for virtual factory simulations, enhancing their industrial planning capabilities. The summit highlighted India’s growing significance in the global AI landscape as the country accelerates efforts to develop its semiconductor industry and AI infrastructure.

AI cheating scandal at university sparks concern

Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. Struggling with COVID and intense academic pressure, she turned to AI for help but later faced an academic misconduct hearing. Though cleared due to insufficient evidence, Hannah warns others about the risks of relying on AI tools for dishonest purposes.

Universities now grapple with teaching students to use AI responsibly while preventing misuse. Using detection software, a lecturer discovered that Hannah’s essay had been generated by AI, reflecting the complexities of monitoring academic integrity. Some institutions prohibit AI unless explicitly approved, while others allow limited use for grammar checks or structural guidance if properly cited.

Lecturers note that AI-generated content often lacks coherence and critical thinking. Dr Sarah Lieberman from Canterbury Christ Church University explains how AI-produced essays can be spotted easily, describing them as lacking the human touch. Nonetheless, she acknowledges AI’s potential benefits, such as generating ideas or guiding students in their research, if used appropriately.

Students hold mixed views on AI in education. Some embrace it as a helpful tool for structuring work or exam preparation, while others resist it, preferring to rely on their own efforts. A Department for Education spokesperson emphasises the need for universities to find a balance between maintaining academic integrity and preparing students for the workplace by equipping them with essential AI skills.

AI tool decodes pig emotions for farmers

European scientists have developed an AI algorithm that can interpret pig sounds to help farmers monitor their animals’ emotions, potentially improving pig welfare. The tool, created by researchers from universities across several European countries, analyses grunts, oinks, and squeals to identify whether pigs are experiencing positive or negative emotions. This could give farmers new insights beyond just monitoring physical health, as emotions are key to animal welfare but are often overlooked on farms.

The study found that pigs on free-range or organic farms produce fewer stress-related calls compared to conventionally raised pigs, suggesting a link between environment and emotional well-being. The AI algorithm could eventually be used in an app to alert farmers when pigs are stressed or uncomfortable, allowing for better management. Short grunts are associated with positive feelings, while longer grunts and high-pitched squeals often indicate stress or discomfort.
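
The reporting describes only a coarse acoustic rule of thumb (short grunts tend to be positive, long or high-pitched calls negative), not the researchers’ actual model. Purely as an illustration of that mapping, a toy classifier over call duration and median pitch might look like the sketch below; the librosa-based feature extraction and the thresholds are assumptions, not values from the study.

```python
# Toy sketch: label a pig call 'positive' or 'negative' from its duration and
# pitch, loosely following the reported pattern that short, low-pitched grunts
# tend to be positive while long or high-pitched calls signal stress.
# Thresholds are illustrative assumptions, not study values.
import librosa
import numpy as np

def classify_call(path: str,
                  max_positive_duration: float = 0.4,      # seconds (assumed)
                  max_positive_pitch: float = 600.0) -> str:  # Hz (assumed)
    y, sr = librosa.load(path, sr=None)
    duration = librosa.get_duration(y=y, sr=sr)

    # Estimate the fundamental frequency with the YIN pitch tracker and take
    # the median as a single pitch summary for the call.
    f0 = librosa.yin(y, fmin=60, fmax=2000, sr=sr)
    pitch = float(np.nanmedian(f0))

    if duration <= max_positive_duration and pitch <= max_positive_pitch:
        return "positive"
    return "negative"

if __name__ == "__main__":
    print(classify_call("grunt.wav"))  # hypothetical recording
```

A production system like the one described would learn these boundaries from labelled recordings rather than hard-coding them.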

Researchers believe that once fully developed, this technology could not only benefit animal welfare but also help consumers make more informed choices about the farms they support.