Huawei to boost Malawi’s digital transformation

Huawei is significantly contributing to Malawi’s digital transformation through its comprehensive Smart Village Program, which aims to bridge the digital divide in rural areas. This program integrates smart agriculture technologies, expands access to financial services, and enhances education and healthcare through digital solutions.

As part of this initiative, Huawei will establish technical training centres in rural regions to equip young people with crucial digital skills in AI, cybersecurity, and smart agriculture. That effort is a key component of Huawei’s larger $430 million investment plan for Africa, which includes funding for cloud development, talent development, and long-term technological progress.

The initiative supports Malawi’s MW2063 agenda, which envisions transforming the country into an industrialised upper-middle-income nation by 2063. It also builds on previous collaborations, such as the launch of Malawi’s National Data Centre in 2022, marking a significant advancement in the nation’s digital infrastructure.

In addition to Malawi, Huawei’s regional impact extends to other African countries, including Zambia and Uganda, where it is involved in smart village projects, and Kenya, where it contributes to smart city initiatives. These efforts aim to enhance connectivity and drive technological innovation across the continent.

NEXTDC to raise A$750 million for Asian expansion

Australian data centre operator NEXTDC plans to raise A$750 million ($500.33 million) to expand its data centre projects across Asia, driven by growing demand for cloud and AI-based digital infrastructure. The company announced it would raise A$550 million through a placement priced at A$17.15 per share, alongside a share purchase plan capped at A$200 million.

NEXTDC said the increased demand for data centres across its core Asian markets was creating strong growth opportunities. The company is positioning itself to benefit from the global AI boom, which is boosting the need for digital infrastructure throughout the Asia-Pacific region.

In a related trend, United States investment firm Blackstone recently acquired the Australian data centre company AirTrunk in a deal worth A$24 billion, reflecting ongoing investor interest in the sector. NEXTDC also raised its capital expenditure forecast for fiscal 2025 to between A$1.3 billion and A$1.5 billion, up from the earlier range of A$900 million to A$1.1 billion.

The capital injection is set to help NEXTDC further expand its footprint across Asia, allowing it to keep pace with the rising demand for digital services and AI-powered applications.

Mobily transforms telecommunications with AI, supporting Saudi Arabia’s Vision 2030

Mobily is leveraging AI to revolutionise the telecommunications industry, particularly in the Middle East. By aligning with Saudi Arabia’s Vision 2030, Mobily is using AI to drive growth and innovation. The company’s AI-driven solutions improve network efficiency, enhance customer experience, and boost business agility, positioning Mobily as a leader in the region’s telecom sector.

Through predictive maintenance, Mobily ensures network reliability, while AI-powered customer service chatbots and analytics platforms optimise performance and provide personalised services to meet the growing demands of digital consumers. Mobily also places a strong emphasis on enhancing the customer experience through AI. The company uses AI to offer personalised support, analyse customer data to deliver tailored recommendations, anticipate needs, and provide proactive service. AI-powered tools like chatbots and virtual assistants streamline customer service, resulting in faster response times and improved satisfaction.

Additionally, Mobily ensures its use of AI adheres to strict ethical standards, prioritising data privacy, transparency, and fairness. With robust encryption, user consent practices, and bias mitigation strategies, Mobily safeguards customer information while building trust through ethical AI use.

Mobily also focuses on building and developing AI talent. The company collaborates with universities to create internship programs and invests in continuous learning initiatives for its employees, fostering a culture of innovation and ensuring that the organisation stays ahead in AI advancements. Furthermore, Mobily emphasises cross-departmental collaboration to integrate AI effectively across marketing, operations, and other business units.

iPhone 16 criticised in China for lack of AI

Apple’s new iPhone 16, launched on Monday, faced criticism in China for its lack of AI features, as the company contends with increasing competition from domestic tech giant Huawei. While Apple highlighted AI-enhanced capabilities in its global announcement, the iPhone 16’s Chinese version will not have AI functionality until next year, which sparked significant debate on Chinese social media platforms.

On Weibo, discussions centred on the absence of AI, with users questioning the value of the new model compared to Huawei’s imminent launch of a three-way foldable smartphone. Some users expressed disappointment that Apple hadn’t yet partnered with a local AI provider to enhance the iPhone’s functionality in China.

Despite the AI criticism, analysts believe that the lack of immediate AI integration is unlikely to impact short-term sales. Experts pointed to Apple’s strong customer loyalty and predicted that users of older iPhone models will still drive demand for upgrades. However, they warned that the company must develop a robust AI ecosystem in China to stay competitive in the long run.

Pre-orders for the iPhone 16 will begin on Friday through platforms such as JD.com, with deliveries expected from 20 September. Meanwhile, Huawei’s latest models continue to gain popularity in China, posing a growing challenge to Apple’s market share.

Responsible AI in the Military Domain: REAIM Blueprint for Action
Artificial Intelligence (AI), as an enabling technology, holds extraordinary potential to transform every aspect of military affairs, including military operations, command and control, intelligence, surveillance and reconnaissance (ISR) activities, training, information management and logistical support.

With the rapid advancement of AI, there is growing interest among states in leveraging AI technology in the military domain. At the same time, AI applications in the military domain could be linked to a range of challenges and risks from humanitarian, legal, security, technological, societal or ethical perspectives that need to be identified, assessed and addressed.

To harness the benefits and opportunities of AI while adequately addressing the risks and challenges involved, AI capabilities in the military domain, including systems enabled by AI, should be applied in a responsible manner throughout their entire life cycle and in compliance with applicable international law, in particular, international humanitarian law.

Building on the Call to Action laid out at the REAIM Summit 2023, we invite all stakeholders, including states, industry, academia, civil society, and regional and international organizations, to:

The impact of AI on international peace and security

1. Affirm that AI applications in the military domain should be developed, deployed and used in a way that maintains and does not undermine international peace, security and stability;

2. Recognize that AI applications in the military domain may bring benefits such as increased situational awareness and understanding, precision, accuracy and efficiency, which can enhance the implementation of international humanitarian law and assist in efforts to protect civilians as well as civilian objects in armed conflicts; and AI applications in the military domain may increase the effectiveness of and support for peacebuilding and peacekeeping activities, and enhance verification and monitoring capabilities for arms control and other compliance regimes;

3. Recognize also that AI applications can present both foreseeable and unforeseeable risks across various facets of the military domain, which may, inter alia, originate from design flaws, unintended consequences, including from data, algorithmic and other biases, potential misuse or malicious use of the technology, and the interaction of AI applications with the complex dynamics of global and regional conflicts and stability, including risks of an arms race, miscalculation, escalation and a lowering of the threshold for conflict;

4. Further recognize that possible high-impact applications in the military domain that deserve particular policy attention could include AI-enabled weapons, AI-enabled decision-support systems for combat operations, AI in cyber operations, AI in electronic warfare and AI in information operations;

5. Stress the need to prevent AI technologies from being used to contribute to the proliferation of weapons of mass destruction (WMDs) by state and non-state actors including terrorist groups, and emphasize that AI technologies should support, and not hinder, disarmament, arms control and non-proliferation efforts; and it is especially crucial to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment, without prejudice to the ultimate goal of a world free of nuclear weapons;

6. Underscore the importance of establishing robust control and security measures to prevent irresponsible actors from acquiring and misusing potentially harmful AI capabilities in the military domain, including systems enabled by AI, while bearing in mind that these measures should not undermine equitable access to the benefits of AI capabilities in other non-military domains;

Implementing responsible AI in the military domain

7. Affirm that AI applications must be developed, deployed and used in accordance with international law, including, as applicable, the UN Charter, international humanitarian law, international human rights law; and, as appropriate, other relevant legal frameworks, including regional instruments;

8. Stress the importance of establishing national strategies, principles, standards and norms, policies and frameworks and legislation as appropriate to ensure responsible AI applications in the military domain;

9. Acknowledge the following non-exhaustive principles for ensuring responsible AI in the military domain:

(a) AI applications should be ethical and human-centric.

(b) AI capabilities in the military domain must be applied in accordance with applicable national and international law.

(c) Humans remain responsible and accountable for the use and effects of AI applications in the military domain, and responsibility and accountability can never be transferred to machines.

(d) The reliability and trustworthiness of AI applications need to be ensured by establishing appropriate safeguards to reduce the risks of malfunctions or unintended consequences, including from data, algorithmic and other biases.

(e) Appropriate human involvement needs to be maintained in the development, deployment and use of AI in the military domain, including appropriate measures that relate to human judgement and control over the use of force.

(f) Relevant personnel should be able to adequately understand, explain, trace and trust the outputs produced by AI capabilities in the military domain, including systems enabled by AI. Efforts to improve the explainability and traceability of AI in the military domain need to continue.

10. Commit to engaging in further discussions and to promoting dialogue on developing measures to ensure responsible AI in the military domain at the national, regional and international level, including through international normative frameworks, rigorous testing and evaluation (T&E) protocols, comprehensive verification, validation and accreditation (VV&A) processes, robust national oversight mechanisms, continuous monitoring processes, comprehensive training programs, exercises, enhanced cyber security and clear accountability frameworks;

11. Encourage the development of effective legal review procedures, trust and confidence building measures and appropriate risk reduction measures, as well as the exchange of information and consultations on good practices and lessons learned among states; and invite other stakeholders, including industry, academia, civil society and regional and international organizations to actively engage in these efforts, as appropriate, including through regular multi-stakeholder exchanges, dissemination of case studies and other relevant documentation and active participation in collaborative initiatives;

12. Stress that efforts on responsible AI in the military domain can be taken in parallel and do not hamper the efforts on research, development, experimentation and innovation with AI technology;

Envisaging future governance of AI in the military domain

13. Recognize that the discussion on the governance of AI in the military domain should include fostering a common understanding of AI technology and its capabilities and limitations, and a shared understanding of the possible applications of AI in the military domain and their potential impacts, including both benefits and risks;

14. Emphasize that such a discussion should take place in an open and inclusive manner to fully reflect wide-ranging views, bearing in mind that different states and regions are at varying stages of integrating AI capabilities in the military domain, come from different security environments and have varying security concerns;

15. Stress the importance of capacity-building, especially in developing countries, to promote the full participation of those countries in the discussions on the governance of AI in the military domain, and to facilitate a responsible approach to the development, deployment and use of military AI capabilities;

16. Commit to strengthening international cooperation on capacity-building aimed at reducing the knowledge gap on responsible development, deployment and use of AI in the military domain;

17. Note that data plays a crucial part in AI applications in the military domain, and acknowledge that states and other relevant stakeholders need to engage in further discussions on adequate data governance mechanisms, including clear policies and procedures for data collection, storage, processing, exchange and deletion as well as data protection;

18. Recognize the need for a flexible, balanced, and realistic approach to the governance of AI in the military domain to keep pace with the rapid development and advancement of technologies;

19. Acknowledge developments across multiple initiatives related to AI applications in the military domain, including the REAIM Summit with its relevant regional events and the establishment of the REAIM Global Commission, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, as well as the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS GGE) established under the Convention on Certain Conventional Weapons (CCW), and the discussions in the UN Disarmament Commission and the Conference on Disarmament; take note also of the UN General Assembly Resolution 78/241 on lethal autonomous weapons systems and relevant regional and international conferences; and stress that these initiatives should be synergistic and complementary, without prejudice to ongoing discussions on related subjects in other fora;

20. Commit to continuing global and regional dialogue on responsible AI in the military domain in an open and inclusive manner with active involvement from and exchange among stakeholders, as appropriate, acknowledging that ensuring responsible AI in the military domain is a task of generations requiring meaningful engagement with the youth.

We invite states to join this Blueprint for Action and also welcome other relevant stakeholders, including industry, academia, civil society, and regional and international organizations, to support and associate with the Blueprint for Action as we continue our efforts to establish responsible AI for the future of humanity.

California’s AI bill gains industry support

Around 120 current and former employees from AI giants like OpenAI, Anthropic, DeepMind, and Meta have publicly voiced their support for California’s new AI regulation bill, SB 1047. The bill, which includes whistle-blower protections for employees revealing the risks in AI models, aims to impose stronger regulations on developing powerful AI technologies. Supporters argue that these measures are crucial to prevent potential threats such as cyberattacks and the misuse of biological weapons.

California’s SB 1047 has already passed the State Assembly and Senate and is awaiting Governor Gavin Newsom’s decision, with a deadline set for 30 September. Notably, high-profile signatories of the letter backing the bill include Geoffrey Hinton, a Turing Award winner, and Jan Leike, a former OpenAI alignment lead, signalling wide support from influential figures in the tech world.

Proponents of the bill believe AI companies should be responsible for testing and ensuring their models don’t pose significant harm. They argue that regulations are essential to safeguard critical infrastructure and prevent AI misuse. Despite its limitations, experts like Harvard’s Lawrence Lessig have called the bill a ‘solid step forward’ in managing AI risks.

However, not everyone agrees. OpenAI and other major tech organisations, including the US Chamber of Commerce and the Software and Information Industry Association, oppose the bill, claiming it would stifle innovation in the fast-moving AI sector. Tech industry advocates argue that over-regulation may hinder the development of cutting-edge technologies.

Musk denies xAI-Tesla collaboration claims on AI technology

Tesla CEO Elon Musk has denied claims that his AI startup xAI had entered discussions to share future Tesla revenue in exchange for giving the automaker access to its technology. The Wall Street Journal reported that Tesla was considering licensing xAI’s artificial intelligence models to enhance its full self-driving software and splitting revenue with the startup.

Musk refuted the report, stating that although Tesla had benefited from conversations with xAI engineers, there was no need to license any technology. He called the article ‘not accurate’ in a post on social media platform X.

The Journal’s report suggested that xAI could also help Tesla develop other features, such as a voice assistant for its electric vehicles and software for its humanoid robot, Optimus. Musk has previously mentioned that xAI could play a role in advancing Tesla’s self-driving capabilities and building a new data centre.

The billionaire launched xAI last year to rival OpenAI, with plans to integrate its AI chatbot, Grok, into Tesla’s systems. Discussions have reportedly taken place regarding a potential $5 billion investment in xAI by Tesla.

Egypt Prime Minister secures key tech and telecom MoUs with China

Egypt Prime Minister Mostafa Madbouly signed five key Memoranda of Understanding (MoUs) with Chinese firms and institutions to enhance Egypt-China telecommunications and information technology cooperation. These agreements, made during the Forum on China-Africa Cooperation (FOCAC) in Beijing, mark a significant development in Egypt’s tech and infrastructure sectors.

The first MoU with FiberHome Telecommunication Technologies involves setting up a fibre optic cable factory in Egypt, producing one million fibre kilometres annually and creating 200 jobs. It will also include a research and development centre and a training facility for network engineers.

The second MoU, with ITIDA, Tsinghua Unigroup, and Telecom Egypt, focuses on building a data centre and cloud services operation supported by a $300 million investment fund. This partnership will also establish a research centre for semiconductor design and develop AI applications, including an Arabic language model.

Huawei Egypt’s MoU will establish a development centre for local industry solutions, software, and cloud computing, aiming to train 1,500 developers by 2025 and support startups with cloud resources. The fourth MoU with ZTE will localise network equipment production and establish training labs for 5G and GPON technologies, providing training for 1,200 participants.

The final MoU with Hengtong Group will create a second fibre optic cable factory in the Suez Canal Economic Zone with a $15 million investment, producing 3 million kilometres of cables annually and including a training academy in collaboration with the National Telecommunications Institute. These agreements highlight Egypt’s commitment to advancing its technological infrastructure and deepening its partnership with China.

US proposes mandatory reporting for advanced AI and cloud providers

The US Commerce Department has proposed new rules that would require developers of advanced AI and cloud computing providers to report their activities to the government. The proposal aims to ensure that cutting-edge AI technologies are safe and secure, particularly against cyberattacks.

It also mandates detailed reporting on cybersecurity measures and the results of ‘red-teaming’ efforts, where systems are tested for vulnerabilities, including potential misuse for cyberattacks or the development of dangerous weapons.

The move comes as AI, especially generative models, has sparked excitement and concern, with fears over job displacement, election interference, and catastrophic risks. Under the proposal, the collected data would help the US government enforce safety standards and protect against threats from foreign adversaries.

Why does this matter?

The regulatory push follows President Biden’s 2023 executive order requiring AI developers to share safety test results with the government before releasing certain systems to the public. The new rules come amid stalled legislative action on AI and are part of broader efforts to limit the use of US technology by foreign powers, particularly China.

South Korea hosts global summit on AI in warfare

South Korea hosted a pivotal international summit on Monday to craft guidelines for the responsible use of AI in the military. Representatives from over 90 countries, including the US and China, attended the two-day event in Seoul. The summit aimed to produce a blueprint for AI use in warfare, though any agreement is expected to lack binding legal power. The initiative marked the second such gathering, following a similar summit in Amsterdam last year, where nations endorsed a call to action without legal obligations.

South Korean Defense Minister Kim Yong-hyun highlighted AI’s growing role in modern warfare, referencing Ukraine’s use of AI-powered drones in its ongoing conflict with Russia. He likened AI’s potential in the military to a ‘double-edged sword,’ emphasising its ability to enhance operational capabilities and its risks if misused. South Korea’s foreign minister, Cho Tae-yul, further underscored the need for international safeguards, suggesting that mechanisms be put in place to prevent autonomous weapons from making lethal decisions without human oversight.

The summit aims to outline principles for the responsible use of AI in the military, drawing from guidelines established by NATO and various national governments. However, whether many of the attending nations will endorse the proposed framework remains to be seen. While the document seeks to establish minimum guardrails for AI, it is not expected to impose legally binding commitments.

Beyond this summit, international discussions on AI’s role in warfare are ongoing. The UN is also exploring potential restrictions on lethal autonomous weapons under the 1983 Convention on Certain Conventional Weapons (CCW). Additionally, the US government has been leading efforts to promote responsible AI use in the military, with 55 countries already endorsing its declaration.

Co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, the Seoul summit brings together around 2,000 participants, including representatives from international organisations, academia, and the private sector, to discuss topics ranging from civilian protection to AI’s potential role in nuclear weapon control. The summit seeks to ensure ongoing collaboration on the rapidly evolving technology, especially as governments remain the key decision-makers in this crucial area.