US proposes mandatory reporting for advanced AI and cloud providers

The US Commerce Department has proposed new rules that would require developers of advanced AI and cloud computing providers to report their activities to the government. The proposal aims to ensure that cutting-edge AI technologies are safe and secure, particularly against cyberattacks.

It also mandates detailed reporting on cybersecurity measures and the results of ‘red-teaming’ efforts, where systems are tested for vulnerabilities, including potential misuse for cyberattacks or the development of dangerous weapons.

The move comes as AI, especially generative models, has sparked excitement and concern, with fears over job displacement, election interference, and catastrophic risks. Under the proposal, the collected data would help the US government enforce safety standards and protect against threats from foreign adversaries.

Why does this matter?

The following regulatory push follows President Bidenโ€™s 2023 executive order requiring AI developers to share safety test results with the government before releasing certain systems to the public. The new rules come amid stalled legislative action on AI and are part of broader efforts to limit the use of US technology by foreign powers, particularly China.

South Korea hosts global summit on AI in warfare

South Korea hosted a pivotal international summit on Monday to craft guidelines for the responsible use of AI in the military. Representatives from over 90 countries, including the US and China, attended the two-day event in Seoul. The summit aimed to produce a blueprint for AI use in warfare, though any agreement is expected to lack binding legal power. The initiative marked the second such gathering, following a similar summit in Amsterdam last year, where nations endorsed a call to action without legal obligations.

South Korean Defense Minister Kim Yong-hyun highlighted AIโ€™s growing role in modern warfare, referencing Ukraineโ€™s use of AI-powered drones in its ongoing conflict with Russia. He likened AI’s potential in the military to a ‘double-edged sword,’ emphasising its ability to enhance operational capabilities and its risks if misused. South Korea‘s foreign minister, Cho Tae-yul, further underscored the need for international safeguards, suggesting that mechanisms be put in place to prevent autonomous weapons from making lethal decisions without human oversight.

The summit aims to outline principles for the responsible use of AI in the military, drawing from guidelines established by NATO and various national governments. However, many attending nations will endorse the proposed frame, which remains to be seen. While the document seeks to establish minimum guardrails for AI, it is not expected to impose legally binding commitments.

Beyond this summit, international discussions on AI’s role in warfare are ongoing. ThUN also explores potential restrictions on lethal autonomous weapons under the 1983 Convention on Certain Conventional Weapons (CCW). Additionally, the US government has been leading efforts to promote responsible AI use in the military, with 55 countries already endorsing its declaration.

Co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, Seoul brings together around 2,000 participants, including representatives from international organisations, academia, and the private sector, discussing various topics, from civilian protection to AI’s potential role in nuclear weapon control. The summit seeks to ensure ongoing collaboration on the rapidly evolving technology, especially as governments remain the key decision-makers in this crucial area.

FedEx expands fulfilment with investment in AI robotics firm Nimble

FedEx has made a strategic investment in AI robotics and automation company Nimble to enhance its fulfilment services for small and medium-sized businesses. The investment aims to support FedExโ€™s Fulfilment unit, which assists businesses with order fulfilment and inventory management.

The investment comes as parcel delivery companies increasingly turn to automation to reduce costs and improve efficiency, particularly during periods of lower freight demand. FedEx believes Nimble’s automated third-party logistics solutions will help optimise supply chain operations across North America.

Scott Temple, president of FedEx Supply Chain, stated that the alliance with Nimble will expand the company’s presence in e-commerce, allowing FedEx to scale its fulfilment offerings throughout North America. The exact size of the investment has not been disclosed.

Nimble’s AI robotics technology is expected to help FedEx improve the efficiency of its fulfilment operations and further strengthen its position in the e-commerce sector.

ChatGPT gains over million subscribers, new pricing plans discussed

OpenAI announced on Thursday that it now has over 1 million paying users across its ChatGPT business products, including Enterprise, Team, and Edu. The increase from 600,000 users in April highlights CEO Sam Altman’s success in driving enterprise adoption of the AI tool.

Recent reports suggest OpenAI executives are discussing premium subscriptions for upcoming large-language models, such as the reasoning-focused Strawberry and a new flagship model called Orion. Subscription prices could reach as high as $2,000 per month for these advanced AI tools.

ChatGPT Plus currently costs $20 per month, while the free tier continues to be used by hundreds of millions every month. OpenAI is also working on Strawberry to enable its AI models to perform deep research, refining them after their initial training.

The discussion around premium pricing follows news that Apple and Nvidia are in talks to invest in OpenAI, with the AI company expected to be valued at over $100 billion. ChatGPT currently has more than 200 million weekly active users, doubling its user base since last autumn.

NIST releases new digital identity and AI guidelines for contractors

US National Institute of Standards and Technology (NIST) has released a new draft of its Digital Identity Guidelines, introducing updates for government contractors in cybersecurity, identity verification, and AI use. The guidelines propose expanded identity proofing methods, including remote and onsite verification options. These enhancements aim to improve the reliability of identity systems used by government contractors to access federally controlled facilities and information. By providing different assurance levels for identity verification, NIST ensures that contractors can implement secure and appropriate measures based on the context and location of the verification process.

A significant focus of the guidelines is on continuous evaluation and monitoring. Organisations are now required to implement ongoing programs that track the performance of identity management systems and evaluate their effectiveness against emerging threats. The guidelines also emphasise the importance of proactive fraud detection. Contractors and credential service providers (CSPs) must continuously assess and update their fraud detection methods to align with the evolving threat landscape.

One of the notable updates in the guidelines is the introduction of syncable authenticators and digital wallets. This allows contractors to manage their digital credentials more efficiently by storing them securely in digital wallets. These wallets provide flexibility in how contractors present their identity attributes when accessing different federal systems.

The guidelines also introduce a risk-based approach to authentication, where authentication levels are tailored to the sensitivity of the system or information being accessed. That gives government agencies the flexibility to assign different authentication methods depending on the security needs of the transaction. For example, accessing highly sensitive systems would require stronger multi-factor authentication (MFA) measures, including biometrics, while less critical systems may have less stringent requirements.

Why does this matter?

The use of AI and ML in identity systems is another key aspect of the Draft Guidelines. NIST emphasises transparency and accountability in integrating AI and ML into these systems. Organisations must document how AI is used, disclose the datasets for training models, and ensure that AI systems are evaluated for risks like bias and inequitable outcomes. The guidelines address the concern that AI technologies could exacerbate existing inequities or produce biassed results in identity verification processes. Organisations are encouraged to adopt NISTโ€™s AI Risk Management Framework to mitigate these risks and consult its guidance on managing bias in AI.

Lastly, the guidelines highlight the importance of privacy, equity, and usability in digital identity systems. Ensuring broad participation and access to digital services, especially for individuals with disabilities, is a core requirement. NIST stresses that digital identity systems must be designed to be inclusive and accessible to all contractors, addressing any potential usability challenges while maintaining security.

Former OpenAI scientist aims to develop superintelligent AI safely

Ilya Sutskever, OpenAIโ€™s former chief scientist, has launched a new company called Safe Superintelligence (SSI) to develop safe AI systems that significantly surpass human intelligence. In an interview, Sutskever explained that SSI aims to take a different approach to AI scaling compared to OpenAI, emphasising the need for safety in superintelligent systems. He believes that once superintelligence is achieved, it will transform our understanding of AI and introduce new challenges for ensuring its safe use.

Sutskever acknowledged that defining what constitutes ‘safe’ AI is still a work in progress, requiring significant research to address the complexities involved. He also highlighted that as AI becomes more powerful, safety concerns will intensify, making it essential to test and evaluate AI systems rigorously. While the company does not plan to open-source all of its work, there may be opportunities to share parts of its research related to superintelligence safety.

SSI aims to contribute to the broader AI community’s safety efforts, which Sutskever views positively. He believes that as AI companies progress, they will realise the gravity of the safety challenges they face and that SSI can make a valuable contribution to this ongoing conversation.

Global AI framework signed to safeguard human rights

The UK has become one of the first signatories of an international treaty designed to regulate AI and prevent its misuse. This legally binding agreement, drafted by the Council of Europe and signed by countries including the EU, US, and Israel, mandates safeguards to protect human rights, democracy, and the rule of law from potential AI threats. Governments are expected to tackle risks such as AI-generated misinformation and the use of biassed data in decision-making processes.

The treaty outlines several key principles, including ensuring data protection, non-discrimination, and the responsible development of AI. Both public and private sector AI users will be required to assess the impact of AI systems on human rights and provide transparency to the public. Individuals will also have the right to challenge AI-made decisions and file complaints with relevant authorities, ensuring accountability and fairness in AI applications.

In the UK, the government is reviewing how to implement the treaty’s provisions within existing legal frameworks, such as human rights laws. A consultation on a new AI bill is underway, which could further strengthen these safeguards. Once ratified, the treaty will allow authorities to impose sanctions, including bans on certain AI uses, like systems utilising facial recognition from unauthorised data sources.

Australia introduces new AI regulations

Australia’s government is advancing its AI regulation framework with new rules focusing on human oversight and transparency. Industry and Science Minister Ed Husic announced that the guidelines aim to ensure that AI systems have human intervention capabilities throughout their lifecycle to prevent unintended consequences or harm. These guidelines, though currently voluntary, are part of a broader consultation to determine if they should become mandatory in high-risk settings.

The following initiative follows rising global concerns about the role of AI in spreading misinformation and fake news, fueled by the growing use of generative AI models like OpenAI’s ChatGPT and Google’s Gemini. In response, other regions, such as the European Union, have already enacted more comprehensive AI laws to address these challenges.

Australia’s existing AI regulations, first introduced in 2019, were criticised for being insufficient for high-risk scenarios. Ed Husic emphasised that only about one-third of businesses use AI responsibly, underscoring the need for stronger measures to ensure safety, fairness, accountability, and transparency.

NTIA launches inquiry to support US data centres’ growth

The US National Telecommunications and Information Administration (NTIA) has launched an inquiry to address the challenges surrounding US data centres’ growth, resilience, and security. This initiative is crucial in light of the increasing demand for computing power driven by advancements in AI and other emerging technologies. Currently, the US has over 5,000 data centres, with demand projected to grow by approximately 9% annually through 2030, highlighting their role as foundational elements of a secure technology ecosystem.

To effectively tackle these challenges, the NTIA has issued a Request for Comment (RFC) to solicit stakeholders’ input on various data centre growth issues. Key focus areas include supply chain resilience, access to trusted equipment, energy demands, and the need for a specialised workforce. The RFC also explores the implications of data centre modernisation on society and the necessary data security practices for facilities hosting AI models. Insights from this inquiry will help develop comprehensive policy recommendations supporting sustainable and resilient data centre growth.

The inquiry is being conducted in coordination with the Department of Energy (DOE), highlighting the importance of addressing energy challenges associated with data centres. The collaboration aims to ensure the US can meet the energy demands of expanding data centre infrastructure while promoting clean energy solutions. The feedback received from the RFC will inform a report that outlines actionable recommendations for the US government, ultimately fostering a robust data centre ecosystem capable of supporting future technological advancements.

Telecom giants urge European policymakers to enhance digital competitiveness through improved connectivity

Ericsson, Nokia, and Vodafone have united in a call to action for European policymakers to enhance digital competitiveness through advanced connectivity and digitalisation. They argue that achieving a true Digital Single Market is essential for fostering innovation and ensuring Europe can compete globally. The following initiative emphasises the need for coherent implementation of existing regulations and the avoidance of unnecessary regulatory burdens that could hinder the rapid deployment of digital infrastructure.

Ericsson, Nokia, and Vodafone highlight the importance of incentivising investment in advanced connectivity solutions, such as 5G and future 6G technologies. They stress that a modernised regulatory framework is crucial for maintaining healthy telecom operators capable of making substantial investments in infrastructure. This includes advocating for longer spectrum licenses and harmonised rules across the EU member states, facilitating a more robust telecommunications landscape.

Ericsson, Nokia, and Vodafone also propose that policymakers differentiate between business-to-business (B2B) and consumer-facing technologies when crafting regulations. Tailoring regulations to these sectors’ specific needs and operational structures will help create a more level playing field and address market failures effectively. This distinction is vital for fostering an environment where trusted companies can thrive and innovate.

Ericsson, Nokia, and Vodafone highlight the need for Europe to prepare for emerging technologies like quantum computing and AI. They advocate for policies encouraging experimentation and attracting private investment, ensuring Europe can leverage these advancements while addressing security challenges.