The US National Institute of Standards and Technology (NIST) has released a new draft of its Digital Identity Guidelines, introducing updates for government contractors in cybersecurity, identity verification, and AI use. The guidelines propose expanded identity proofing methods, including remote and onsite verification options. These enhancements aim to improve the reliability of the identity systems government contractors use to access federally controlled facilities and information. By defining different assurance levels for identity verification, NIST lets contractors implement secure, appropriate measures suited to the context and location of the verification process.
A significant focus of the guidelines is continuous evaluation and monitoring. Under the draft, organisations would be required to implement ongoing programs that track the performance of identity management systems and evaluate their effectiveness against emerging threats. The guidelines also emphasise proactive fraud detection: contractors and credential service providers (CSPs) must continuously assess and update their fraud detection methods to keep pace with the evolving threat landscape.
One of the notable updates in the guidelines is the introduction of syncable authenticators and digital wallets, which let contractors store their credentials securely and manage them more efficiently. These wallets also give contractors flexibility in how they present their identity attributes when accessing different federal systems.
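As a rough illustration of how selective attribute presentation from a wallet might look in practice, the Python sketch below stores a credential’s attributes and returns only the ones a relying party requests. The class names, attribute fields, and issuer label are hypothetical assumptions; the draft guidelines define the concept, not this implementation.

```python
# Illustrative sketch of selective attribute presentation from a
# digital wallet. Names and structure are hypothetical, not taken
# from the NIST draft.
from dataclasses import dataclass, field


@dataclass
class Credential:
    issuer: str
    attributes: dict[str, str]


@dataclass
class Wallet:
    credentials: list[Credential] = field(default_factory=list)

    def present(self, issuer: str, requested: set[str]) -> dict[str, str]:
        """Return only the attributes the relying party asked for,
        so unrelated personal data stays in the wallet."""
        for cred in self.credentials:
            if cred.issuer == issuer:
                return {k: v for k, v in cred.attributes.items() if k in requested}
        raise LookupError(f"no credential from {issuer}")


wallet = Wallet([Credential("example-csp", {"name": "A. Contractor",
                                            "clearance": "public-trust",
                                            "dob": "1980-01-01"})])
# A facility check might need name and clearance, but not date of birth.
print(wallet.present("example-csp", {"name", "clearance"}))
```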
The guidelines also introduce a risk-based approach to authentication, where authentication requirements are tailored to the sensitivity of the system or information being accessed. This gives government agencies the flexibility to assign different authentication methods depending on the security needs of the transaction. For example, accessing highly sensitive systems would require stronger multi-factor authentication (MFA) measures, such as biometrics, while less critical systems may have less stringent requirements.
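The sketch below illustrates the general idea of tiering authentication requirements by sensitivity. The tier names, factor labels, and subset check are illustrative assumptions, not the assurance levels or mechanisms specified in the NIST draft.

```python
# Hypothetical sketch of risk-based authentication policy selection.
# Tiers and factor names are illustrative, not NIST-defined.
REQUIRED_FACTORS = {
    "low": {"password"},
    "moderate": {"password", "otp"},
    "high": {"password", "hardware_key", "biometric"},
}


def is_authenticated(sensitivity: str, presented_factors: set[str]) -> bool:
    """Grant access only if every factor the tier requires was presented."""
    return REQUIRED_FACTORS[sensitivity] <= presented_factors


# A highly sensitive system rejects a password-plus-OTP login...
print(is_authenticated("high", {"password", "otp"}))        # False
# ...while a moderately sensitive one accepts it.
print(is_authenticated("moderate", {"password", "otp"}))    # True
```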
Why does this matter?
The use of AI and ML in identity systems is another key aspect of the draft guidelines. NIST emphasises transparency and accountability in integrating AI and ML into these systems. Organisations must document how AI is used, disclose the datasets used to train models, and ensure that AI systems are evaluated for risks like bias and inequitable outcomes. The guidelines address the concern that AI technologies could exacerbate existing inequities or produce biased results in identity verification processes. Organisations are encouraged to adopt NIST’s AI Risk Management Framework to mitigate these risks and consult its guidance on managing bias in AI.
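One concrete example of the kind of evaluation this implies is checking whether verification pass rates differ across demographic groups. The minimal Python sketch below computes a simple demographic-parity gap on made-up outcome data; NIST’s AI Risk Management Framework describes a far broader evaluation process, and the 5% tolerance used here is purely illustrative.

```python
# A minimal sketch of one bias check an organisation might run on an
# identity-verification model: comparing pass rates across demographic
# groups (a demographic-parity gap). Data and threshold are made up.
from collections import defaultdict


def parity_gap(results: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest per-group pass rate."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in results:
        total[group] += 1
        passed[group] += ok
    rates = [passed[g] / total[g] for g in total]
    return max(rates) - min(rates)


outcomes = [("group_a", True)] * 90 + [("group_a", False)] * 10 \
         + [("group_b", True)] * 75 + [("group_b", False)] * 25
gap = parity_gap(outcomes)
print(f"pass-rate gap: {gap:.2f}")   # 0.15
if gap > 0.05:                       # illustrative tolerance, not a NIST figure
    print("flag for review: possible inequitable outcomes")
```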
Lastly, the guidelines highlight the importance of privacy, equity, and usability in digital identity systems. Ensuring broad participation and access to digital services, especially for individuals with disabilities, is a core requirement. NIST stresses that digital identity systems must be designed to be inclusive and accessible to all contractors, addressing any potential usability challenges while maintaining security.
Ilya Sutskever, OpenAI’s former chief scientist, has launched a new company called Safe Superintelligence (SSI) to develop safe AI systems that significantly surpass human intelligence. In an interview, Sutskever explained that SSI aims to take a different approach to AI scaling compared to OpenAI, emphasising the need for safety in superintelligent systems. He believes that once superintelligence is achieved, it will transform our understanding of AI and introduce new challenges for ensuring its safe use.
Sutskever acknowledged that defining what constitutes ‘safe’ AI is still a work in progress, requiring significant research to address the complexities involved. He also highlighted that as AI becomes more powerful, safety concerns will intensify, making it essential to test and evaluate AI systems rigorously. While the company does not plan to open-source all of its work, there may be opportunities to share parts of its research related to superintelligence safety.
SSI aims to contribute to the broader AI community’s safety efforts, which Sutskever views positively. He believes that as AI companies progress, they will realise the gravity of the safety challenges they face and that SSI can make a valuable contribution to this ongoing conversation.
The UK has become one of the first signatories of an international treaty designed to regulate AI and prevent its misuse. This legally binding agreement, drafted by the Council of Europe and signed by the EU and countries including the US and Israel, mandates safeguards to protect human rights, democracy, and the rule of law from potential AI threats. Governments are expected to tackle risks such as AI-generated misinformation and the use of biased data in decision-making processes.
The treaty outlines several key principles, including ensuring data protection, non-discrimination, and the responsible development of AI. Both public and private sector AI users will be required to assess the impact of AI systems on human rights and provide transparency to the public. Individuals will also have the right to challenge AI-made decisions and file complaints with relevant authorities, ensuring accountability and fairness in AI applications.
In the UK, the government is reviewing how to implement the treaty’s provisions within existing legal frameworks, such as human rights laws. A consultation on a new AI bill is underway, which could further strengthen these safeguards. Once ratified, the treaty will allow authorities to impose sanctions, including bans on certain AI uses, like systems utilising facial recognition from unauthorised data sources.
Australia’s government is advancing its AI regulation framework with new rules focusing on human oversight and transparency. Industry and Science Minister Ed Husic announced that the guidelines aim to ensure that AI systems have human intervention capabilities throughout their lifecycle to prevent unintended consequences or harm. These guidelines, though currently voluntary, are part of a broader consultation to determine if they should become mandatory in high-risk settings.
The initiative follows rising global concerns about the role of AI in spreading misinformation and fake news, fuelled by the growing use of generative AI models like OpenAI’s ChatGPT and Google’s Gemini. In response to similar challenges, other jurisdictions, such as the European Union, have already enacted more comprehensive AI laws.
Australia’s existing AI regulations, first introduced in 2019, were criticised as insufficient for high-risk scenarios. Husic noted that only about one-third of businesses use AI responsibly, underscoring the need for stronger measures to ensure safety, fairness, accountability, and transparency.
The US National Telecommunications and Information Administration (NTIA) has launched an inquiry to address the challenges surrounding the growth, resilience, and security of US data centres. This initiative is crucial in light of the increasing demand for computing power driven by advancements in AI and other emerging technologies. The US currently has over 5,000 data centres, with demand projected to grow by approximately 9% annually through 2030, underscoring their role as foundational elements of a secure technology ecosystem.
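For a sense of scale, compounding roughly 9% annual growth over the remaining years to 2030 implies demand multiplying by about 1.7x, as the back-of-envelope sketch below shows (taking 2024 as the baseline year is our assumption, not a figure from the NTIA).

```python
# Back-of-envelope: what ~9% annual growth in data-centre demand implies
# cumulatively by 2030. The 2024 baseline year is an assumption.
rate, years = 0.09, 2030 - 2024
multiplier = (1 + rate) ** years
print(f"demand multiplier by 2030: {multiplier:.2f}x")  # ~1.68x
```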
To effectively tackle these challenges, the NTIA has issued a Request for Comment (RFC) to solicit stakeholders’ input on various data centre growth issues. Key focus areas include supply chain resilience, access to trusted equipment, energy demands, and the need for a specialised workforce. The RFC also explores the implications of data centre modernisation on society and the necessary data security practices for facilities hosting AI models. Insights from this inquiry will help develop comprehensive policy recommendations supporting sustainable and resilient data centre growth.
The inquiry is being conducted in coordination with the Department of Energy (DOE), highlighting the importance of addressing energy challenges associated with data centres. The collaboration aims to ensure the US can meet the energy demands of expanding data centre infrastructure while promoting clean energy solutions. The feedback received from the RFC will inform a report that outlines actionable recommendations for the US government, ultimately fostering a robust data centre ecosystem capable of supporting future technological advancements.
Ericsson, Nokia, and Vodafone have united in a call to action for European policymakers to enhance digital competitiveness through advanced connectivity and digitalisation. They argue that achieving a true Digital Single Market is essential for fostering innovation and ensuring Europe can compete globally. The initiative emphasises the need for coherent implementation of existing regulations and the avoidance of unnecessary regulatory burdens that could hinder the rapid deployment of digital infrastructure.
The three companies highlight the importance of incentivising investment in advanced connectivity solutions, such as 5G and future 6G technologies. They stress that a modernised regulatory framework is crucial for maintaining healthy telecom operators capable of making substantial investments in infrastructure, advocating for longer spectrum licences and harmonised rules across EU member states to foster a more robust telecommunications landscape.
They also propose that policymakers differentiate between business-to-business (B2B) and consumer-facing technologies when crafting regulations. Tailoring rules to each sector’s specific needs and operational structures would help create a more level playing field and address market failures effectively, a distinction they see as vital for fostering an environment where trusted companies can thrive and innovate.
Finally, the companies highlight the need for Europe to prepare for emerging technologies like quantum computing and AI, advocating for policies that encourage experimentation and attract private investment so that Europe can leverage these advancements while addressing security challenges.
Huawei Cloud introduced advanced AI technologies at the Saudi Arabia 2024 Summit, aiming to accelerate the country’s digital transformation and support Vision 2030. This new infrastructure promises ultra-low latency and robust AI model training and inference capabilities, enhancing various sectors nationwide. The company is also the first cloud provider in Saudi Arabia to fully comply with local data security policies, ensuring high levels of data protection and aligning with the country’s digital sovereignty strategy.
The impact of Huawei Cloud is significant, with a tenfold increase in public cloud revenue over the past year. It serves a diverse client base, including government bodies, telecom carriers, FinTech firms, and media organisations, highlighting its role in the digital economy. Sector-specific solutions include supporting smart city projects for the government, market expansion for local e-commerce businesses like Zode, and advanced digital banking services.
Technological innovations, such as the Pangu model and CodeArts, drive industry advancements and accelerate software development. Additionally, Huawei Cloud invests in the local ecosystem by training over 3,000 university students and partnering with over 100 local businesses, fostering a thriving digital landscape in Saudi Arabia.
Taiwan Semiconductor Manufacturing Company (TSMC), in collaboration with leading global chip designers and suppliers such as Broadcom and Nvidia, is focusing on developing advanced silicon photonics technology. This initiative has gained momentum due to the increasing demand for faster data transmission speeds driven by the rise of AI applications. TSMC has established a dedicated R&D team of over 200 employees to explore high-speed computing chips based on silicon photonics, with production expected to commence in the second half of next year.
TSMC’s efforts aim to solve critical challenges in energy efficiency and AI computing power, positioning silicon photonics as a transformative force in the semiconductor industry. The company is also targeting a range of chip processes, from 45 nm down to 7 nm, with mass production anticipated by 2025.
The silicon photonics market is projected to grow substantially, with significant developments expected as early as 2024. TSMC’s partnerships with major customers are crucial for advancing this technology, and it is poised to revolutionise applications across CPUs, GPUs, and other computing processes. As the semiconductor industry continues to evolve, TSMC’s commitment to silicon photonics underscores its role as a leader in shaping the future of high-speed data communication and AI innovations.
The US Department of Justice has intensified its antitrust investigation into Nvidia by issuing a subpoena, according to reports from Bloomberg. The subpoena comes after previous questionnaires were sent, signalling the authorities’ increased scrutiny of the AI chipmaker’s business practices. Nvidia is accused of making it difficult for buyers to switch suppliers and potentially penalising customers who don’t exclusively use its AI chips.
The investigation reportedly stems from complaints by competitors who claim Nvidia may be abusing its market dominance. Several other companies have also received subpoenas as part of the broader probe. The escalating crackdown coincides with increased caution among investors regarding AI companies, as concerns about overspending and high expectations loom.
Despite worldwide growth in demand for AI chips, Nvidia’s recent quarterly forecast disappointed investors, leading to a sharp drop in its share value. The company’s stock fell 2.5% in extended trading on Tuesday, following a 9.5% decline during regular market hours. This scrutiny adds further pressure to the AI giant during a sensitive period for the industry.
Nvidia declined to comment on the ongoing investigation, while the Department of Justice has yet to respond to requests for further information. The outcome of this probe could have significant implications for the company and the broader AI market.
AI could lower oil prices over the next decade by boosting supply and cutting costs, according to a report by Goldman Sachs. AI is expected to improve logistics and increase the amount of profitably recoverable resources, potentially reducing the marginal price incentive for oil by around $5 per barrel. This could have a negative impact on the incomes of oil-producing nations, including OPEC+ members.
While AI is expected to modestly increase oil demand, particularly in power and natural gas sectors, Goldman Sachs predicts that the cost savings enabled by AI will have a more significant effect on lowering oil prices. An estimated 25% productivity gain from AI could push prices down, outweighing the demand boost and resulting in a net negative impact on oil prices.
Goldman Sachs also forecasts that AI could reduce the cost of new shale wells by up to 30%. Furthermore, AI could increase the recovery factors of the United States’ shale resources by 10% to 20%, potentially boosting oil reserves by 8% to 20%, or 10 to 30 billion barrels. This enhanced productivity could further contribute to downward pressure on oil prices.
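As a quick consistency check on those figures (our arithmetic, not a number from the report), an 8% to 20% boost equating to 10 to 30 billion barrels implies a base of roughly 125 to 150 billion barrels of recoverable US shale reserves:

```python
# Inferred from the reported ranges: if an 8%-20% boost equals
# 10-30 billion barrels, the implied base of recoverable US shale
# reserves is ~125-150 billion barrels (our inference, not Goldman's).
low_boost, high_boost = 0.08, 0.20
low_bbl, high_bbl = 10, 30  # billion barrels
print(f"implied base: {low_bbl / low_boost:.0f}-{high_bbl / high_boost:.0f} bn bbl")
```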
Oil futures have already experienced declines, with Brent crude futures dropping by 4.5% to $74.02 per barrel, marking their lowest level since December. Meanwhile, West Texas Intermediate crude futures fell by 4.1% to $70.58, their lowest since January. As AI advances, US technology companies are also pursuing energy assets from bitcoin miners to secure power for their expanding data centres.