Snapshot: The developments that made waves
AI governance
The US Department of Energy (DOE) and the US Department of Commerce (DOC) have joined forces to promote the safe, secure, and trustworthy development of AI through a newly established Memorandum of Understanding (MOU).
A recent assessment of some of the top AI models has revealed significant gaps in compliance with EU regulations, particularly in cybersecurity resilience and the prevention of discriminatory outputs. The study, conducted by Swiss startup LatticeFlow in collaboration with EU officials, tested generative AI models from major tech companies such as Meta, OpenAI, and Alibaba.
Technologies
Three scientists, David Baker, John Jumper, and Demis Hassabis, have been awarded the 2024 Nobel Prize in Chemistry for their pioneering work in protein science. David Baker, of the University of Washington, was acknowledged for his innovations in computational protein design, while John Jumper and Demis Hassabis of Google DeepMind were recognised for using AI to predict protein structures.
American scientist John Hopfield and British-Canadian Geoffrey Hinton have been awarded the 2024 Nobel Prize in Physics for their groundbreaking work in machine learning, which has significantly contributed to the rise of AI.
Companies in Japan are increasingly turning to AI to manage customer service roles, addressing the country’s ongoing labour shortage. These AI systems are now being used for more complex tasks, assisting workers across various industries.
Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and have proved successful in combat situations.
Chinese researchers from Shanghai University claim to have made a significant breakthrough in quantum computing, asserting they have breached encryption algorithms commonly used in banking and cryptocurrency.
Infrastructure
The competition between Elon Musk and Mukesh Ambani is intensifying as they vie for dominance in India’s emerging satellite broadband market.
A group of major tech companies, including Microsoft, Alphabet, Meta, and Amazon, has proposed new terms for how data centres in Ohio should pay for their energy needs.
Siemens relies on its digital platform, Xcelerator, to drive future growth, especially in its factory automation business, which has faced slowing demand in China and Europe.
A US federal appeals court has expressed doubts over the Federal Communications Commission’s authority to reinstate net neutrality rules.
Cybersecurity
Six Democratic senators have urged the Biden administration to address critical concerns about human rights and cybersecurity in the upcoming United Nations Cybercrime Convention, which is set for a vote at the UN General Assembly.
According to a new threat assessment, Canada’s signals intelligence agency has identified China’s hacking activities as the most significant state-sponsored cyber threat facing the country.
Russia is using generative AI to ramp up disinformation campaigns against Ukraine, warned Ukraine’s Deputy Foreign Minister, Anton Demokhin, during a cyber conference in Singapore.
Forrester’s 2025 Predictions report outlines critical cybersecurity, risk, and privacy challenges on the horizon. Cybercrime is expected to cost $12 trillion by 2025, with regulators stepping up efforts to protect consumer data.
Digital rights
The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA).
The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations.
Legal
Chinese online retailer Temu is exploring joining a European Union-led initiative to combat counterfeit goods, which includes major retailers such as Amazon, Alibaba, and brands like Adidas and Hermes.
South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, KRW 21.62 billion ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers.
Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds.
The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage.
Internet economy
World Liberty Financial, a decentralised finance (DeFi) crypto project associated with former President Donald Trump and his sons, plans to cap its token sales in the USA at $30 million.
Italy’s Economy Minister Giancarlo Giorgetti has defended plans to raise taxes on cryptocurrency capital gains as part of the country’s 2025 budget, despite facing opposition from members of his own League party.
The State Bank of Pakistan (SBP) has proposed a significant framework to recognise digital assets, including cryptocurrency, as legal currency in Pakistan.
Thailand’s Board of Investment (BOI) announced it has approved $2 billion in new investments aimed at bolstering the nation’s data centre and electronics manufacturing sectors.
Development
Morocco’s Panafsat and Thales Alenia Space have signed a memorandum of understanding (MoU) to build a high-capacity satellite telecommunications system to advance digital connectivity across 26 African countries, including 23 French-speaking nations.
Kenya has partnered with Google to enhance its digital infrastructure and empower its citizens in the evolving digital economy.
Sociocultural
OpenAI has introduced new search functions to its popular ChatGPT, making it a direct competitor to Google, Microsoft’s Bing, and other emerging AI-driven search tools.
Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period.
Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.
Trump vs Harris: The tech industry’s role in 2024
As the 5 November US presidential election nears, the race between former President Donald Trump and Vice President Kamala Harris is extremely close, making voter mobilisation critical. The support of influential business figures, particularly from Big Tech, could prove pivotal. Elon Musk, the owner of X, has voiced strong support for Trump, spotlighting the role that tech giants, especially the ‘Magnificent Seven’ (Apple, Microsoft, Amazon, Nvidia, Meta, Tesla, and Alphabet), could play in the election outcome. Both Trump and Harris are courting corporate America, reflecting Big Tech’s growing influence over public policy and voter sentiment.
Tech leaders have increasingly reached out to Trump. Figures like Apple’s Tim Cook and Amazon’s Andy Jassy have engaged with him, and even Mark Zuckerberg has shown respect toward Trump despite previous tensions, such as Facebook’s ban on Trump after the Capitol riot. Zuckerberg has stated he will remain neutral in the 2024 election, though Trump has hinted at a newfound mutual understanding. Musk’s relationship with Trump has also evolved; despite past criticism, Musk now aligns more closely with Trump, particularly since taking over Twitter, where he promotes issues resonant with Trump’s base, such as scepticism of the media and government censorship.
Musk’s financial contributions are significant, with his America PAC offering $1 million a day to a randomly selected registered voter who signs its petition supporting the First and Second Amendments. However, this initiative has raised legal concerns, with experts questioning the legality of tying financial rewards to voter registration and political participation.
On the other hand, Kamala Harris enjoys substantial support from Silicon Valley’s elite. Her connections to tech stem from her time as California’s attorney general and later as a US senator. Figures like former Facebook COO Sheryl Sandberg and philanthropist Melinda French Gates are backing her, along with over 800 venture capitalists and thousands of tech employees. Harris’s appeal to Silicon Valley aligns with her stance on AI regulation and data privacy, which is seen as more favourable than Trump’s deregulation approach. While most of Silicon Valley leans Democratic, there are exceptions, such as David Marcus, a former PayPal president who has shifted allegiance to the Republican Party.
Big Tech is under regulatory scrutiny, especially from the Biden administration’s antitrust actions against companies like Apple and Google. The Department of Justice has accused these companies of anti-competitive practices. Trump, however, has suggested he would lessen regulatory pressure on tech firms if elected, contrasting sharply with the Biden administration’s regulatory approach.
Trump’s tech policy emphasises deregulation, which he believes will stimulate growth. He opposes what he calls ‘illegal censorship’ by tech companies and advocates for a hands-off approach to AI and cryptocurrencies, favouring minimal government oversight to drive US competitiveness. He also supports corporate tax cuts and reduced regulatory burdens, aligning with a market-driven vision for tech growth.
Conversely, Harris, as Biden’s appointed AI czar, supports stronger regulations on AI and tech to ensure public safety. She has pushed for data privacy and bias protection laws, aligning her campaign with Biden’s regulatory framework on technology. Harris’s support for initiatives like the CHIPS Act highlights her focus on US tech independence and national security, prioritising consumer protection and a controlled tech landscape.
This election presents voters with a choice between contrasting tech policies: Harris’s vision of an equitable, regulated tech environment and Trump’s preference for minimal government intervention.
AI and ethics in modern society
Humanity’s rapid advancements in AI and robotics have brought ethical and philosophical issues into urgent focus, especially as AI technologies now shape areas like medicine, governance, and the economy. Governments, corporations, international organisations, and individuals are responsible for navigating these advancements ethically, ensuring that AI use respects human rights and fosters societal good.
Ethics in AI refers to principles guiding right and wrong actions, requiring AI technologies to respect societal values and protect human dignity. AI, defined as systems that autonomously analyse and make decisions, spans various forms, from voice assistants to autonomous vehicles. Without an ethical framework, AI risks worsening inequality, eroding accountability, and infringing on privacy and autonomy, highlighting the necessity of embedding fairness and responsibility into AI’s design and regulation.
AI ethics aims to minimise risks from misuse, poor design, or harmful applications, addressing issues like unauthorised surveillance and AI weaponisation. Global initiatives like UNESCO’s 2021 Recommendation on the Ethics of AI and the EU’s AI Act seek to ensure responsible AI development, balancing the challenge of early regulation against the entrenchment of unregulated technologies. These frameworks respond to real-world impacts like algorithmic bias, emphasising the need for timely, well-constructed oversight.
AI ethics draws inspiration from Asimov’s fictional Three Laws of Robotics, although real-world AI complexities extend far beyond this basic framework. Current AI applications, such as autonomous vehicles and facial recognition, introduce accountability, privacy, and other issues, demanding nuanced strategies beyond foundational ethical rules. Real-world AI systems require complex governance, focusing on areas such as legal, social, and environmental impacts.
Legal accountability, particularly in scenarios involving autonomous systems, raises questions about responsibility for accidents, stressing the need for legal reforms. Financially, AI risks worsening inequality due to algorithmic biases in areas like lending. Environmentally, the large energy requirements of training AI models affect sustainability, making it crucial to develop energy-efficient systems. Socially, automation disrupts traditional jobs, and biased algorithms could deepen social inequality, especially in employment and criminal justice. The use of AI in surveillance also raises serious privacy concerns.
The psychological effects of AI, such as how AI-driven customer service may lack empathy or how manipulative marketing tactics may impact well-being, require careful attention. Public mistrust in AI, stemming from the opacity of AI systems and the potential for algorithmic bias, is a significant barrier to widespread AI adoption. Transparent, explainable AI that allows users to understand decision-making processes, along with strong accountability frameworks, is essential for fostering public trust and establishing a fair AI landscape.
Addressing these ethical challenges demands global coordination and adaptable regulation to ensure AI supports humanity’s best interests, respects human dignity, and promotes fairness across all sectors. The ethical challenges surrounding AI impact fundamental human rights, economic equality, environmental sustainability, and social trust. A collaborative approach, with contributions from governments, corporations, and individuals, is essential to build robust, transparent AI systems that advance societal welfare. Through a commitment to research, interdisciplinary collaboration, and prioritising human well-being, AI can fulfil its transformative potential for good, guiding technological advancement while safeguarding societal values. At this critical juncture, it is essential to foster more refined, coordinated, and scaled-up global efforts, or, more precisely, effective global digital cooperation.
El Salvador: Blueprint for the bitcoin economy
El Salvador’s adoption of bitcoin as legal tender on 7 September 2021 marked a pioneering step in integrating cryptocurrency into national economic policy. Initially viewed as a bold experiment, the move has since evolved into a strategic approach with significant implications both domestically and internationally, despite concerns raised by the IMF and other institutions about potential risks. The policy aimed to address economic challenges such as financial inclusion for a largely unbanked population, making El Salvador a global beacon for cryptocurrency. With 5,748.8 bitcoins in national reserves, the country has continued to invest in bitcoin, showcasing confidence in its long-term potential.
El Salvador’s bitcoin adoption has had mixed economic impacts. The cryptocurrency has streamlined remittances for Salvadorans abroad, reducing fees and making transactions more accessible. The policy has also attracted foreign investment and a surge in crypto tourism. However, bitcoin’s volatility remains a concern, with critics warning that reliance on such a fluctuating asset could threaten financial stability. President Nayib Bukele’s ambitious plan to establish ‘Bitcoin City’, a tax-free, crypto-friendly zone with a projected investment of $1.6 billion, aims to make El Salvador a global hub for digital finance.
Education has been a key focus, demonstrated through the government’s bitcoin certification programme spearheaded by the National Bitcoin Office (ONBTC). The initiative seeks to educate 80,000 government employees on bitcoin and blockchain, embedding cryptocurrency knowledge across state institutions. This approach ensures that bitcoin adoption is more than a policy directive and becomes ingrained in the country’s governance and administration, facilitating a foundational understanding of cryptocurrency among civil servants and extending into other sectors.
El Salvador’s pro-crypto stance has influenced other nations. Argentina, led by pro-crypto president Javier Milei, has shown interest in adopting cryptocurrencies to stabilise its economy and is closely studying El Salvador’s approach. As more countries consider cryptocurrency integration, El Salvador’s policy offers a practical example, illustrating both the opportunities and challenges of digital currency in a national economy.
However, regulatory challenges persist, with organisations like the IMF voicing concerns about financial stability and consumer protection risks. Despite this, El Salvador has continued to strengthen its regulatory frameworks and increase transparency around bitcoin activities, emphasising its commitment to maintaining its crypto leadership.
The government-backed Chivo wallet has played a crucial role in driving financial inclusion, giving citizens who previously had no access to banking a way to transact digitally. Through the Chivo platform, which offered $30 in bitcoin to each user, El Salvador has made significant strides toward an inclusive financial ecosystem, setting an example for other nations looking to reduce banking barriers for the unbanked.
El Salvador’s experiment has inspired other nations, such as the Central African Republic, to adopt bitcoin. For countries grappling with inflation or financial exclusion, bitcoin represents a potential alternative. El Salvador’s pioneering approach illustrates how digital currencies can offer a pathway to economic development and innovation, positioning the country as a leader in the emerging digital financial order.
Revolutionising medicine with AI
The integration of AI into medicine has marked a revolutionary shift, especially in diagnostics and early disease detection. Since AI was first applied to human clinical trials over four years ago, its potential to enhance healthcare has become increasingly evident. AI now aids in detecting complex diseases, often at early stages, improving diagnosis accuracy and patient outcomes. This technological advancement promises to transform individual health and broader societal well-being despite ethical concerns and questions about AI accuracy that persist in public debate.
In diagnostics, AI has shown remarkable success. A Japanese study revealed that AI-assisted tools, such as ChatGPT, outperformed human experts, achieving an 80% accuracy rate in medical assessments across 150 diagnostic cases. These results encourage further integration of AI into medical devices and underscore the need for AI-focused training in medical education.
AI is making substantial strides in cancer detection, with companies like Imidex, whose AI algorithm has received FDA approval, working on improving early lung cancer screening. Similarly, French startup Bioptimus is targeting the European market with an AI model that can identify cancerous cells and genetic anomalies in tumours. Such developments highlight the growing competition and innovation in AI-driven healthcare, making these advancements more accessible globally.
Despite these promising advances, public scepticism remains a significant challenge. A 2023 Pew Research study found that 60% of Americans are uncomfortable with AI-assisted diagnostics, fearing it might harm the doctor-patient relationship. While 38% of respondents anticipate better outcomes with AI, 33% worry about negative impacts, reflecting mixed feelings on AI’s role in healthcare.
AI is also contributing to dementia research. By analysing large datasets and brain scans, AI systems can detect structural brain changes and early signs of dementia. The SCAN-DAN tool, developed by researchers in Edinburgh and Dundee, aims to revolutionise early dementia detection through the NEURii global collaboration, which seeks digital solutions to dementia’s challenges. Early interventions enabled by AI hold the potential to improve the quality of life of dementia patients.
AI’s utility extends to breast cancer detection, where it enhances the effectiveness of mammograms, ultrasounds, and MRIs. An AI system developed in the USA refines disease staging, distinguishing between benign and malignant tumours with reduced false positives and negatives. Accurate staging aids in effective treatment, particularly for early-detected breast cancer.
The financial backing for AI in healthcare is substantial, with projections suggesting that AI could contribute nearly $20 trillion to the global economy by 2030, with healthcare potentially accounting for over 10% of this value. Major global corporations are keen to invest in AI-driven medical equipment, underlining the field’s growth potential.
The future of AI in healthcare is promising, with AI systems poised to surpass human cognitive limits in analysing vast information. As regulatory frameworks adapt, AI tools in diagnostics could lead to faster and more precise disease detection, potentially marking a significant turning point in medical science. This transformative potential aligns AI with a revolutionary trajectory in healthcare, capable of reshaping medical practice and patient outcomes.
Just-in-time reporting from the UN Security Council: Leveraging AI for diplomatic insight
On 21 and 24 October, DiploFoundation provided real-time reporting from the UN Security Council sessions on scientific development and women, peace, and security. Supported by Switzerland, this initiative aims to improve the work of the UN Security Council and the broader UN system by making session insights more accessible.
At the heart of this effort is DiploAI, a sophisticated AI platform trained on UN materials. DiploAI unlocks the knowledge embedded in the Council’s video recordings and transcripts, making valuable diplomatic insights easier to access. This AI-driven reporting combines advanced technology with expertise in peace and security, providing in-depth analysis of UN Security Council sessions in 2023 and 2024 and building on eight years of coverage of the UN General Assembly (UNGA).
A key feature of DiploAI’s success is the seamless collaboration between AI and human experts. Experts tailored the AI system to the Security Council’s needs by providing essential documents and materials, enhancing the AI’s contextual understanding. Through iterative feedback on topics and keywords, DiploAI produces accurate and diplomatically relevant outputs. A significant milestone in this partnership was DiploAI’s analysis of ‘A New Agenda for Peace,’ where experts identified over 400 key topics, forming a comprehensive taxonomy for UN peace and security issues. Additionally, a Knowledge Graph was developed to visually represent sentiment and relational analysis, adding depth to Council session insights.
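As an illustration of what such a knowledge graph can look like in code, the short Python sketch below links a document to topics and attaches relation labels and sentiment scores using networkx. The topics, relations, and scores are invented for the example and do not reflect DiploAI’s actual taxonomy or data model.

```python
# Illustrative sketch: representing session topics and their relations as a
# small knowledge graph with networkx. All node names, relation labels, and
# sentiment scores below are invented examples, not DiploAI data.
import networkx as nx

graph = nx.DiGraph()

# Nodes are documents or topics; attributes can carry analysis results such
# as an aggregate sentiment score for statements on that topic.
graph.add_node("A New Agenda for Peace", kind="document")
graph.add_node("peacekeeping reform", kind="topic", sentiment=0.4)
graph.add_node("women, peace and security", kind="topic", sentiment=0.6)

# Edges capture relations surfaced by the analysis, e.g. which document
# discusses which topic, or which topics co-occur in statements.
graph.add_edge("A New Agenda for Peace", "peacekeeping reform", relation="discusses")
graph.add_edge("A New Agenda for Peace", "women, peace and security", relation="discusses")
graph.add_edge("peacekeeping reform", "women, peace and security", relation="co-occurs_with")

# A simple query: list every topic a document touches, with its sentiment.
for _, topic, data in graph.out_edges("A New Agenda for Peace", data=True):
    print(f"{data['relation']}: {topic} (sentiment={graph.nodes[topic].get('sentiment')})")
```

Queries over a graph of this kind can surface which topics a document touches and how those topics relate to one another, which is the relational view of Council sessions described above.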
Building on these advancements, DiploAI introduced a custom chatbot that goes beyond basic Q&A. By incorporating data from all 2024 sessions, the chatbot enables interactive exploration of diplomatic content, offering detailed, real-time answers.
This shift from static reports to dynamic, conversational access represents a major leap in understanding and engaging with UN Security Council materials.
DiploAI’s development process underscores the importance of human-AI collaboration. The Q&A module underwent approximately ten iterations, refined with feedback from UNSC experts, ensuring accuracy and sensitivity in diplomatic responses. This process has led to an AI system capable of addressing critical questions while adhering to diplomatic standards.
DiploAI’s suite of tools, including real-time transcription and analysis, enhances the transparency of UN reporting. By integrating advanced AI methods such as retrieval-augmented generation (RAG) and knowledge graphs, DiploAI contextualises and enriches the extracted information. Trained on a vast corpus of diplomatic knowledge, the AI generates responses tailored to UNSC topics, making complex session details accessible through transcripts, reports, and an AI-powered chatbot.
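To make the retrieval-augmented generation pattern concrete, the minimal Python sketch below embeds a few transcript snippets, retrieves the most relevant ones for a question, and assembles a grounded prompt for a language model. The embedding model, the toy in-memory index, the sample passages, and the prompt wording are all assumptions made for illustration; they do not describe DiploAI’s actual pipeline.

```python
# Minimal retrieval-augmented generation (RAG) sketch over transcript snippets.
# This is a generic illustration of the pattern, not DiploAI's implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical snippets standing in for UNSC session transcripts.
passages = [
    "Delegations called for strengthened early-warning mechanisms.",
    "Several members stressed the role of women in peace negotiations.",
    "The briefing highlighted risks from autonomous weapons systems.",
]
passage_vectors = embedder.encode(passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = passage_vectors @ query_vector
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the LLM call itself is left abstract."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the excerpts below.\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

print(build_prompt("What did members say about women in peace processes?"))
```

In a real deployment, the retrieved excerpts would be drawn from full session transcripts and passed to the generating model, which is what keeps the chatbot’s answers anchored in the source material rather than in the model’s general training data.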
DiploAI’s work with the Security Council, supported by Switzerland, demonstrates the potential of AI in enhancing diplomacy. By blending technical prowess with human expertise, DiploAI promotes more inclusive, informed, and impactful diplomatic practices.