Digital Watch newsletter – Issue 95 – December 2024

Snapshot: The developments that made waves

AI governance

California Governor Gavin Newsom has signed Assembly Bill 3030 (AB 3030) into law, which will regulate the use of generative AI (GenAI) in healthcare.

The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR).

The EU Council and its member states have, for the first time, adopted a declaration establishing a unified understanding of how international law applies to cyberspace.

Michael O’Flaherty, the Council of Europe’s new Commissioner for Human Rights, warned that failing to support Ukraine would be an ‘existential loss’ for Europe, while also highlighting the need for stronger AI regulations to protect human rights in the face of emerging technologies.

Technologies

Samsung has teamed up with Google and Qualcomm to develop advanced AI-powered smart glasses, set for release in Q3 2025. Initial production will total 500,000 units, targeting a competitive edge over existing options such as Meta’s Ray-Ban smart glasses.

A new studio, Promise, has been launched to revolutionise filmmaking with the use of generative AI. Backed by venture capital firm Andreessen Horowitz and former News Corp President Peter Chernin, the startup is setting its sights on blending AI with Hollywood storytelling. 

Meta has started rolling out AI capabilities for its Ray-Ban Meta AR glasses in France, Italy, and Spain. Users in these countries can now access Meta AI, the company’s voice-activated assistant, which supports French, Italian, and Spanish alongside English.

OpenAI is shifting away from the ‘bigger is better’ philosophy for training models. Instead, it is developing techniques that allow algorithms to ‘think’ in more human-like ways. Its new model, o1, uses a technique called ‘test-time compute’, allowing it to consider multiple answers and choose the best option during use.
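
In rough terms, ‘test-time compute’ means spending extra computation at inference: the model samples several candidate answers and keeps the one a scoring step rates highest, rather than committing to a single pass. The sketch below is only a minimal illustration of that best-of-N idea; generate_answer and score_answer are hypothetical placeholders, not OpenAI’s API or o1’s actual mechanism.

```python
import random

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for one sampled model completion.
    return f"candidate answer to '{prompt}' (sample {random.randint(0, 999)})"

def score_answer(prompt: str, answer: str) -> float:
    # Hypothetical verifier/reward model; random here purely for illustration.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Spend extra compute at inference: sample n candidates, keep the best-scoring one.
    candidates = [generate_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score_answer(prompt, answer))

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?"))
```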

California’s sole remaining nuclear power plant, Diablo Canyon, is adopting AI to navigate the complex challenges of staying operational. Pacific Gas & Electric (PG&E) has partnered with Atomic Canyon, a local startup, to deploy an AI system called Neutron Enterprise.

President Joe Biden and China’s President Xi Jinping held a two-hour meeting on the sidelines of the APEC summit on Saturday. Both leaders reached a significant agreement to prevent AI from controlling nuclear weapons systems and made progress on securing the release of two US citizens wrongfully detained in China.

Infrastructure

President-elect Donald Trump has nominated Brendan Carr to lead the US Federal Communications Commission (FCC). Carr, an FCC commissioner since 2017, is a familiar figure within the administration and has aligned his policy views with Trump’s conservative agenda, particularly concerning free speech and deregulation.

Cybersecurity

Hackers with alleged links to China have stolen sensitive data from US telecommunications firms, targeting information intended for law enforcement agencies.

British businesses have lost an estimated £44 billion ($55 billion) in revenue over the past five years due to cyberattacks, with more than half of private sector companies experiencing at least one incident, according to a report by insurance broker Howden.

According to Morgan Adamski, executive director of US Cyber Command, Chinese hackers are embedding themselves in US critical infrastructure IT networks to prepare for a potential conflict with the United States.

The UN Office for Disarmament Affairs (ODA) will conduct a simulation exercise in early 2025 to help Member States engage with the Global Points of Contact (POC) Directory. The directory ensures quick and effective responses to cybersecurity incidents by providing a reliable channel for diplomatic and technical contacts across countries.

The UN Cybercrime Convention is moving closer to a full vote in the General Assembly following its approval at a recent meeting. Despite significant opposition from the private sector, civil society, and US congressional members, the United States and the United Kingdom defended their support of the treaty.

Digital rights

A United States federal appeals court is set to rule by 6 December on whether ByteDance, TikTok’s Chinese parent company, must divest its US operations or face a ban.

The US Department of Justice (DOJ) alleges that Alphabet’s Google unfairly monopolised key markets, including ad servers and advertiser networks, as well as attempting to dominate ad exchanges.

Legal

Japan’s Fair Trade Commission has raided Amazon Japan over allegations of anti-monopoly violations. A government source revealed that the company is suspected of pressuring sellers to reduce prices in exchange for favourable product placement on its e-commerce platform.

Google has announced further changes to its search results in Europe in response to complaints from smaller competitors and looming EU antitrust charges under the Digital Markets Act (DMA).

The United States Department of Justice (DOJ) is pushing for Alphabet’s Google to divest its Chrome browser, escalating efforts to curb the company’s alleged monopolistic practices in digital markets.

Internet economy

Jay Clayton, former Securities and Exchange Commission (SEC) chair, predicts that cryptocurrency legislation could be on the horizon during Donald Trump’s upcoming administration. 

The Blockchain Association has sent a letter to president-elect Donald Trump and Congress, outlining key reforms for the crypto industry during the first 100 days of Trump’s administration.

Australia is seeking advice from the Organisation for Economic Co-operation and Development (OECD) to shape its approach to taxing digital assets.

Development

The UN Development Programme (UNDP) has partnered with cBrain, a Danish digital solutions provider, to accelerate Africa’s digital transformation. The collaboration focuses on bridging the digital divide, fostering inclusive growth, and strengthening community resilience across the continent.

Sociocultural

OpenAI, in partnership with Common Sense Media, has introduced a free training course aimed at helping teachers understand AI and prompt engineering.

In Poznan, Poland, a new chapel is combining tradition with cutting-edge technology. Created by priest Radek Rakowski, the modern chapel features an AI-powered system that answers visitors’ questions about Catholicism.

Australia has approved a law banning children under 16 from accessing social media, following a contentious debate. The new regulation targets major tech companies like Meta, TikTok, and Snapchat, which will face fines of up to A$49.5 million if they allow minors to log in.



Trump’s victory in US elections and the US tech future

Donald Trump’s return to the White House is likely to signal a significant shift in tech policy, given his strategic alignment with influential figures in Silicon Valley, most notably Elon Musk. Musk, a vocal supporter and one of the wealthiest individuals on the planet, invested approximately $120 million in Trump’s campaign, clearly showing his commitment to Trump’s vision for a tech-forward, market-driven America. Trump has vowed to appoint Musk to head a government efficiency commission, suggesting an unprecedented partnership between the government and private tech giants.

Trump’s ambitions in the tech arena are sweeping. He has promised a regulatory environment to ‘set free’ companies burdened by government intervention. By rolling back regulations on AI, social media, and cryptocurrency sectors, Trump aims to foster innovation by reducing oversight and promoting a more liberal market. This policy stance starkly contrasts the Biden administration’s regulatory approach, particularly in Big Tech antitrust and AI oversight, which Trump’s team views as stifling growth and innovation.

A key part of Trump’s tech agenda is his stance on digital freedom. He has consistently criticised social media platforms for what he claims is censorship of conservative voices, a sentiment echoed by Musk, especially since his acquisition of Twitter (now X). Under Trump’s leadership, there are likely to be pushes to reform Section 230, the law that protects platforms from liability for user-generated content, aiming to curb what Trump views as ‘biased censorship’ against his supporters. This approach aligns with Trump’s free-market ethos and reflects his desire to reshape the digital public square to favour unrestricted speech.

Moreover, the Government Efficiency Commission would conduct a complete financial and performance audit of the federal government. Trump has also pledged to cut corporate tax rates for companies that manufacture domestically, establish ‘low-tax’ zones on federal lands, encourage construction companies to build new homes, and start a sovereign wealth fund. The proposal drew criticism from Everett Kelley, president of the American Federation of Government Employees, who accused Trump and Musk of wanting to weaken the nonpartisan civil service.

As Trump reclaims his influence over tech policy, his administration is expected to reassess past conflicts with Silicon Valley. Despite his previous clashes with leaders like Mark Zuckerberg, Trump’s recent statements have indicated a willingness to mend fences, especially with executives prioritising business over political engagement. For instance, Zuckerberg’s current stance of neutrality has met with Trump’s approval, signifying a potential thaw in relations that could lead to an era of cooperation rather than confrontation.

In this new chapter, Trump’s alliance with Musk and other tech elites underscores his ambition to create a tech policy that minimises governmental control while encouraging private innovation. Together, Trump and Musk represent a fusion of populism and technology, a partnership that could reshape America’s role in the global tech landscape, steering it towards a future where corporate influence on policy is stronger than ever.



The growing influence of Chinese tech firms

Chinese tech companies have emerged as critical players in the global technology landscape, with companies like Alibaba, Tencent, Baidu, ByteDance, and Huawei shaping industries across e-commerce, AI, telecommunications, and more. These firms have become central to China’s digital economy and are increasingly competing with US tech giants like Apple and Google on the global stage. Their rise has been bolstered by China’s strategic support and policies designed to foster technological innovation, often with a focus on state-led initiatives and protectionism.

The growing competition between China and the US in the tech sector is one of the defining geopolitical struggles of the 21st century. This rivalry encompasses cooperation and confrontation, with regulatory policies, national security concerns, and shifting political priorities influencing the dynamics of the tech war. While market forces drive the competition between the two tech powerhouses, it is also deeply entwined with broader geopolitical tensions.

A critical factor in the rise of Chinese tech companies has been the Chinese government’s regulatory strategies. In the early 2000s, China introduced the Golden Shield Project, designed to control media and information flow within the country while blocking foreign tech firms that did not comply with its data regulations. This led to a unique digital ecosystem where domestic companies thrived without significant competition from Western players, allowing the so-called BATX companies (Baidu, Alibaba, Tencent, Xiaomi) to dominate the market.

The major Chinese tech companies—Alibaba, Tencent, Baidu, ByteDance, Huawei, Xiaomi, JD.com, Meituan, Pinduoduo, and Didi Chuxing—have each carved out significant roles in domestic and global markets. For instance, Alibaba leads the e-commerce space with platforms like Taobao, Tmall, and AliExpress, while Tencent dominates social media and gaming with WeChat and major stakes in companies like Riot Games and Activision Blizzard. Baidu, often called China’s Google, has expanded into AI and autonomous vehicles, and ByteDance, the parent company of TikTok, has revolutionised short-form video content. Meanwhile, Huawei remains a telecommunications and 5G infrastructure leader despite geopolitical challenges.

China’s strategy for fostering tech growth involves broad investments in state-owned enterprises and private startups. Government-led initiatives such as ‘Made in China 2025’ and the ‘Thousand Talents Plan’ have provided financial backing and attracted top global talent to drive innovation in AI, robotics, and semiconductors. While this strategy has yielded impressive results, critics argue that it creates an uneven playing field by providing domestic companies with unfair advantages, including subsidies and protectionist measures that foreign competitors cannot access.

China’s regulatory model for tech is marked by a top-down approach, with central leadership exerting control over the actions of tech companies. Angela Zhang’s ‘dynamic pyramid model’ describes this system as hierarchical, volatile, and fragile. While regulators have allowed tech firms to flourish during periods of lenient oversight, rapid interventions and crackdowns—such as those seen in 2020—often result in market instability and significant financial losses for companies. These fluctuations highlight the unpredictability of China’s regulatory environment and have led to concerns about the long-term viability of businesses operating under such a system.

The shifting regulatory environment, exemplified by the Chinese government’s actions against firms like Alibaba and Tencent, underscores the challenges tech companies face in China. While the government seeks to address issues like antitrust violations and data security, its heavy-handed approach can stifle innovation and create uncertainty in the market. These regulatory cycles, where intense crackdowns follow periods of lax oversight, often undermine investor confidence and can dampen the growth of the very industries the government seeks to strengthen.

In response to the rapid rise of Chinese tech firms, the US has taken a more aggressive stance toward China, particularly under the Trump administration. The US has expanded export controls, blocklisting Chinese firms like Huawei and restricting critical technology exports. Additionally, tariffs have been imposed on Chinese imports, further intensifying the trade and tech conflict. Experts predict that under Trump’s leadership, the US will continue to pressure China by adding more companies to the US Entity List, which restricts American firms from selling to blocked entities. This strategy aims to limit China’s access to advanced technologies and slow its progress in AI and semiconductors.

China has retaliated against US actions by targeting American companies like Micron Technology and restricting the export of essential materials for chipmaking and electric vehicle production. These retaliatory measures underscore the interconnectedness of the two economies, with the US still relying on China for critical resources such as rare earth elements. This dependency, combined with the ongoing tech conflict, has heightened tensions as both countries seek to protect their national interests in emerging technologies.

The growing tensions between China and the US are not just about trade—they reflect deeper concerns about data security, military dominance, and leadership in AI and semiconductors. Both countries strive for dominance in these critical technologies, as they hold the potential to shape the future of global power. The outcome of this tech conflict will have far-reaching implications for global supply chains, innovation, and the geopolitical balance of power.

In the face of these challenges, Chinese tech companies are increasingly looking to expand overseas, navigating complex regulatory environments while continuing to grow their influence in global markets. Despite resistance in the US and other Western countries, these firms are capitalising on emerging markets and leveraging their competitive advantages, particularly in AI and telecommunications. While the US has sought to limit China’s technological ascent, Chinese companies continue to gain ground in key sectors, making the tech rivalry between the two nations one of the most significant global issues of the 21st century.

The rise of Chinese tech companies has reshaped the global tech landscape, driving innovation and competition in critical industries. The rivalry between China and the US has become a defining feature of international geopolitics, with both nations vying for technological supremacy. As Chinese tech firms expand globally and navigate complex regulatory environments, the outcome of this tech conflict will have profound implications for the future of global technology and innovation. The increasing interdependence of the two economies and the ongoing tensions will continue to shape the dynamics of the worldwide tech industry.



UN Cybercrime Convention: What does it mean, and how will it impact all of us?

The UN adopted the draft of the first globally binding cybercrime convention, following years of negotiations led by Russia since 2017. This treaty, expected to be formally adopted by the UN General Assembly later this year, aims to establish international legal frameworks to combat cybercrime. While the convention promises cross-border cooperation and mutual legal assistance in the fight against cybercrime, it has faced significant opposition from human rights groups, civil society, and tech companies, who have expressed concerns about the potential for increased surveillance and the erosion of personal freedoms.

One of the primary goals of the UN Cybercrime Convention is to facilitate cooperation between member states by offering a legal framework for mutual legal assistance requests in cybercrime cases. The treaty also seeks to harmonise criminal provisions related to cybercrime across nations, creating a more unified approach to the global threat of cybercrime. However, while the Convention promises significant steps toward international cooperation, it does not introduce new data protection standards or change the existing human rights safeguards for law enforcement and cooperation measures.

The UN Convention was particularly inspired by the Budapest Convention and, therefore, will not exclude the application of other existing international or regional instruments, nor will it take precedence over them. Countries that are parties to both the UN Cybercrime Convention and regional conventions, such as the Malabo Convention in Africa, can choose whichever instrument offers a more specific basis for cooperation. The introduction of new provisions, such as the criminalisation of the non-consensual dissemination of intimate images, marks one of the UN Convention’s novel contributions. However, many experts agree that existing regional agreements remain crucial due to their detailed cybersecurity and national policy provisions.

A vital element of the Convention is Article 27, which addresses cross-border cooperation, particularly around access to electronic evidence. This provision allows states to compel individuals within their borders to provide data stored domestically or abroad if they have access to it. However, concerns have arisen regarding the potential for states to access data across borders without the host country’s consent, a contentious issue in cybercrime law. The Convention emphasises state sovereignty and encourages cooperation through mutual legal assistance mechanisms rather than unilateral actions. While some states worry that this might bypass formal procedures, the Convention stresses respecting sovereignty while enabling international cooperation.

The Convention also addresses the issue of how individuals and entities can challenge law enforcement data requests. The treaty includes provisions for judicial review of data requests, ensuring that law enforcement must justify their actions, including the scope and duration of data access. These safeguards are designed to prevent abuses while providing law enforcement access to data crucial for investigating cybercrime. However, some experts caution that while the Convention sets a high bar for human rights protections, its effectiveness will depend on how countries implement these standards at the domestic level.

Defining and protecting ‘electronic data’ has been one of the most debated aspects of the treaty. The Convention defines electronic data broadly, covering all types of stored digital information, including personal documents, photos, and notes. This broad definition allows states to request access to electronic data even if it contains private information. The Convention emphasises that while such data can be accessed for law enforcement purposes, domestic legal frameworks must provide safeguards to protect individual privacy and uphold human rights. The inclusion of protections for personal data during international transfers adds a layer of security for individuals.

Technical assistance and capacity development are fundamental aspects of the UN Cybercrime Convention, which lays the groundwork for strengthening countries’ capabilities to fight cybercrime. The Convention provides mechanisms such as Memoranda of Understanding (MOUs) and personnel exchanges to support the development of law enforcement and judicial capacities in tackling cybercrime. It also encourages multilateral and bilateral agreements to implement technical assistance and capacity development provisions.

Looking forward, the Convention’s text uses technology-neutral language to ensure it remains relevant as technology evolves. Unlike specific treaties focusing on particular technologies, the UN Convention prioritises behaviours and actions, allowing it to stay adaptable over time. The Convention includes provisions for amendments five years after its implementation, ensuring that it can respond to emerging cyber threats and technological advancements.

Despite initial scepticism regarding its feasibility, the Convention’s current momentum demonstrates the potential for international cooperation to address cybercrime. Experts agree that multistakeholder participation, including civil society, NGOs, and the private sector, is essential for ensuring a comprehensive and effective implementation process. Public-private partnerships will be crucial in building trust and collaboration in cybercrime prevention, fostering a more secure cyberspace for all.

Ultimately, the UN Cybercrime Convention marks a significant step toward addressing the global challenge of cybercrime. While introducing critical new measures, particularly in cross-border cooperation and the protection of human rights, its success will ultimately depend on how effectively countries implement its provisions and safeguard individual rights. The treaty’s adoption will likely spur further discussions and refinements, particularly in addressing the evolving nature of cybercrime and balancing the need for security with protecting civil liberties.


DW Weekly #188 – 29 November 2024

Dear readers,

The Australian government has approved a law that sets the world’s highest minimum age for social media use, with no exceptions for parental consent or pre-existing accounts. The new law is part of the government’s push to protect young users online, citing how excessive social media use threatens children’s physical and mental health through harmful body-image portrayals, misogynistic content, and the devastating effects of cyberbullying, including testimony from parents of children who self-harmed. The regulation imposes hefty fines of up to AUD 49.5 million (USD 32 million) on platforms that fail to enforce the new age restrictions. Despite tech companies’ objections, the law gained substantial political support in the current parliamentary year and was ultimately approved on Friday, 29 November.

The law, which marks a significant political win for Prime Minister Anthony Albanese, has received widespread public support, with 77% of Australians backing the ban. However, it has faced opposition from privacy advocates, child rights groups, and social media companies, which argue the law was rushed through without adequate consultation. Critics also warn that it could inadvertently harm vulnerable groups, such as LGBTQIA or migrant teens, by cutting them off from supportive online communities.

Critics, including Google, Meta, and TikTok, argue that the new regulation lacks sufficient detail and consultation. Meta pointed out that results from an ongoing age-verification trial are necessary to fully understand the impact of the new measures on Australian users and the wider industry. TikTok, owned by ByteDance, expressed concerns over the law’s lack of clarity and criticised the limited timeframe for public feedback, warning that the legislation had not been thoroughly discussed with experts or mental health organisations. Elon Musk’s X also raised concerns over potential human rights violations, arguing that the law infringes on children’s freedom of expression and access to information. These platforms fear the new regulation’s vague wording could have unintended consequences for users and the tech industry, particularly regarding privacy and data security.

Despite this opposition, the law has been approved: a trial period begins in January, and the law is set to take full effect in 2025. The Australian Minister for Communications stated that the legislation will feature strong privacy safeguards, making it the platforms’ responsibility to delete any collected data in order to protect users’ personal information.

As countries worldwide grapple with the issue of children’s access to social media, various nations are taking steps to introduce and refine regulations to protect young users. In the UK, while no immediate restrictions are planned, the Online Safety Act will enforce stricter age requirements starting in 2025. Norway has proposed raising the consent age for social media from 13 to 15, with parents still able to approve younger users, while the EU mandates parental consent for children under 16, with flexibility for member states to set lower limits. France has also pushed forward with a law requiring parental consent for children under 15, although enforcement is delayed due to technical issues, and has suggested further regulations, including banning phones for children under 11. Germany and Belgium enforce parental consent for minors under 16, but both face calls for stronger implementation. Italy has set a minimum age of 14 for parental consent, while the Netherlands focuses more on reducing distractions by banning mobile devices in classrooms. These regulatory efforts highlight a worldwide push for stricter controls, reflecting growing concerns about the safety and privacy of children online.

In other news…

Meta faces multibillion-dollar lawsuit over data scandal

The US Supreme Court has cleared the way for a multibillion-dollar class-action lawsuit against Meta, the parent company of Facebook, over its role in the Cambridge Analytica privacy scandal.

Social media fine plan dropped in Australia

Australia’s government has abandoned a proposal to fine social media platforms up to 5% of their global revenue for failing to curb online misinformation. The decision follows resistance from various political parties, making the legislation unlikely to pass the Senate.

Follow other ‘Highlights from the week’ in its section below…

Visit dig.watch now for more important updates and other topics!

Marko and the Digital Watch team


Highlights from the week of 22-29 November 2024

Teen concerns shape the UK’s approach to social media safety.

A legal battle over Play Store changes.

The proposed legislation sparks a worldwide debate.

A strategic foresight amidst global technology tensions.

The court dismissed claims of unclear AI usage rules, stating the student knowingly violated academic integrity standards.

UNESCO launches a training program on disinformation.

In a decisive move to counter rising cyber threats, Italy has unveiled plans for strict new measures targeting hackers and unauthorised database breaches. The proposed legislation signals Rome’s commitment to…

An ongoing EU attempt to regulate tech giants.

The initiative involves collaboration with the UN Institute for Disarmament Research (UNIDIR) and the International Telecommunications Union (ITU) and will be held in a hybrid format to familiarise nominated POCs…

The association proposes establishing a crypto advisory council to collaborate with Congress and regulators, emphasising a balanced regulatory framework that protects consumers while promoting growth.


Reading corner

www.diplomacy.edu

Should we allow AI to develop its own language – one that humans can’t understand? While it may enhance efficiency, it raises serious concerns. Dr Anita Lamprecht explores.

dig.watch

Chinese tech giants reshape the global market amid US rivalry and domestic regulation challenges.

www.diplomacy.edu

Embracing AI in diplomacy: How can Europe prepare for pivotal transformation in global affairs? On 21-22 November, we addressed the 25th European Diplomatic

Upcoming

dig.watch

The 9th substantive session of the UN OEWG 2021-2025 will focus on threats to information security and developing responsible state behaviour.

www.diplomacy.edu

Tech attaché briefing: The road to WSIS+20 high-level review. The event is part of a series of regular briefings the Geneva Internet Platform (GIP) is

DW Weekly #187 – 22 November 2024

Dear readers,

The US Department of Justice (DoJ) has launched an aggressive case against Google, proposing that the tech giant end exclusive agreements in which it pays billions of dollars annually to Apple and other device vendors to make its search engine the default on their tablets and smartphones. The proposals are extensive, suggesting measures such as prohibiting Google from re-entering the browser market for five years and requiring the company to sell its Android mobile operating system if other remedies do not restore competition. Additionally, the DoJ has proposed banning Google from acquiring or investing in competitors in search, query-based AI technologies, or advertising tools. Publishers and websites would also be offered the option to exclude their content from being used to train Google’s AI systems. This unprecedented legal dispute results from a ruling earlier this year, which found that Google had illegally monopolised the online search market. The government also seeks measures to regulate how Google handles AI and the Android operating system to foster a more competitive digital marketplace.

The proposed divestiture of Chrome would significantly impact Google’s business model, as the browser plays a crucial role in channelling users to its search engine and ads platform. Chrome commands a 61% share of the US browser market and processes 90% of online searches, which are central to Google’s strategy for collecting user data to target ads. If the court agrees to the sale, it could alter how users interact with Google’s ecosystem, potentially diminishing its control over the search and advertising sectors.

The central claim of this federal case is that Google’s monopoly on web browsing and search stifles competition, limits consumer choice, and harms innovation. Google could choose to sell the software as an alternative to complying with the requirements; any prospective buyers would need approval from the DoJ and state antitrust authorities. In addition, the DoJ wants Google to license its search results to competitors at minimal cost and provide the user data it collects to competitors free of charge. The company would also be prohibited from gathering user data that cannot be shared due to privacy restrictions. The suggested measures would allow websites more freedom to control their content and create greater ad-market transparency, levelling the playing field for emerging AI companies and search engines that rely on Google’s data to improve their services.

One of the most notable aspects of this case is its potential to reshape the AI landscape. Google has incorporated AI into its search function, offering AI-driven ‘overviews’ at the top of its search results. This innovation, however, has drawn criticism from website publishers who argue that it deprives them of web traffic and ad revenue. The government’s recommendation would force Google to license its search data and make it more accessible to competitors, potentially allowing AI startups to create rival search engines and AI applications that could challenge Google’s dominance.

The proposal to uncouple Android from Google’s other services, like search and the Google Play Store, could still have far-reaching consequences. By separating these products, Google would no longer be able to use its mobile operating system to push its search engine and other services onto users.

A win for the DoJ in this legal dispute could mark one of the most significant antitrust actions against a major tech company since the US government failed to break up Microsoft two decades ago. The potential sale of Chrome is just one part of a broader effort to curb Google’s market power and ensure its competitors have a fair chance to succeed in the digital space. The case will not only have implications for Google but could also set the stage for future regulation of the tech industry, especially as AI and data-driven services evolve.

Finally, it’s worth noting that the trial addressing these measures is scheduled for April 2025, with a final decision expected by August. This timeline provides both President-elect Trump and the DoJ with an opportunity to adjust their approach if desired.

In other news…

Australia introduces groundbreaking bill to ban social media for children under 16

Australia’s government introduced a bill to parliament aiming to ban social media use for children under 16, with potential fines of up to A$49.5 million ($32 million) for platforms that fail to comply.

Brendan Carr to lead FCC in Trump’s push for deregulation

President-elect Donald Trump has nominated Brendan Carr to lead the US Federal Communications Commission (FCC). Carr, an FCC commissioner since 2017, is a familiar figure within the administration and has aligned his policy views with Trump’s conservative agenda, particularly concerning free speech and deregulation.

Follow other ‘Highlights from the week’ in its section below…

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 15-22 November 2024

Challenges include securing orbital slots, addressing technical skill gaps, and finalising funding sources.

Ireland steps up efforts on AI privacy regulation.

The proposal coincides with Bitcoin reaching an all-time high price of $93,477 and a market cap of over $1.7 trillion.

A new AI course helps teachers use ChatGPT.

ByteDance is challenging a US law requiring TikTok divestment, with court decisions expected soon.

Promise joins the generative AI boom, aiming to transform content creation with Hollywood stakeholders.

AB 3030 sets guidelines for AI in patient communication.

Europe’s new rules could reshape trade relations with China, focusing on batteries and green technology.

Ray-Ban Meta glasses now include voice-activated AI in Europe, though some features remain unavailable.

The UNDP and Danish digital provider cBrain are partnering to accelerate Africa’s digital transformation by improving e-governance, financial inclusion, and climate resilience.


Reading corner

www.diplomacy.edu

What do string theory and AI chat models have in common? Both navigate complex, multidimensional webs – but of what? Dr Anita Lamprecht explains.

www.diplomacy.edu

During a recent visit to Abu Dhabi, I explored the UAE’s innovative AI strategies emphasizing societal benefit and human-centric development. My engagements included a leadership course at the Anwar Gargash Diplomatic Academy and discussions with diplomats and tech experts. Key insights highlighted the UAE’s clarity in AI policy, proactive governmental engagement, and a user-centered governance approach. I was particularly impressed by their careful timing in AI regulation and commitment to open-source initiatives. Overall, the UAE’s comprehensive and bottom-up approach to AI left me inspired and eager to learn more about regional advancements in Riyadh.

www.diplomacy.edu

If morality were destroyed, could we rebuild it from its fragments? Aldo Matteucci examines Alasdair MacIntyre’s theory.

Digital Watch newsletter – Issue 94 – November 2024

Snapshot: The developments that made waves

AI governance

The US Department of Energy (DOE) and the US Department of Commerce (DOC) have joined forces to promote the safe, secure, and trustworthy development of AI through a recently established memorandum of understanding.

A recent evaluation of some of the leading AI models revealed significant shortcomings in compliance with EU regulations, particularly regarding cybersecurity resilience and the prevention of discriminatory outputs. The study, conducted by Swiss startup LatticeFlow in collaboration with EU officials, tested generative AI models from major tech companies such as Meta, OpenAI, and Alibaba.

Technologies

Three scientists, David Baker, John Jumper, and Demis Hassabis, have been awarded the 2024 Nobel Prize in Chemistry for their pioneering work in protein science. David Baker of the University of Washington was recognised for his innovations in computational protein design, while John Jumper and Demis Hassabis of Google DeepMind were recognised for using AI to predict protein structures.

American scientist John Hopfield and British-Canadian Geoffrey Hinton have received the 2024 Nobel Prize in Physics for their groundbreaking work in machine learning, which has contributed greatly to the rise of AI.

In Japan, companies are increasingly turning to AI to handle customer service in response to the country’s labour shortage. These AI systems are now being used for more complex tasks, supporting workers across various sectors.

Russia has announced a marked increase in the use of AI-enabled drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov underlined the importance of these autonomous drones in battlefield tactics, stating that they are already deployed in key regions and have proven themselves in combat.

Chinese researchers at Shanghai University claim to have made a significant breakthrough in quantum computing, asserting that they have cracked encryption algorithms commonly used in the banking and cryptocurrency sectors.

Infrastructure

A group of major tech companies, including Microsoft, Alphabet, Meta, and Amazon, has proposed new terms for paying for the energy needs of data centres in Ohio.

Siemens is counting on its digital platform, Xcelerator, to drive future growth, particularly in factory automation, which is facing slowing demand in China and Europe.

Cybersecurity

Six Democratic senators have urged the Biden administration to address key human rights and cybersecurity issues ahead of the upcoming UN cybercrime convention, which is due to be voted on in the UN General Assembly.

According to a recent threat assessment, Canada’s signals intelligence agency has identified China’s hacking activities as the most significant state-sponsored cyber threat facing the country.

Russia is using generative AI to step up its disinformation campaigns against Ukraine, Ukraine’s Deputy Foreign Minister Anton Demokhin warned at a cyber conference in Singapore.

Forrester’s 2025 Predictions report outlines the main cybersecurity, risk, and privacy challenges ahead. The cost of cybercrime is expected to reach $12 trillion by 2025, while regulators step up their efforts to protect consumer data.

Digital rights

The Consumer Financial Protection Bureau (CFPB) has notified Meta that it is considering ‘legal action’ over allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations.

Legal

Chinese online retailer Temu is considering joining an EU-led initiative against counterfeit goods, which already involves major retailers such as Amazon and Alibaba, as well as brands like Adidas and Hermès.

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, KRW 21.62 billion ($15.67 million) for improperly collecting sensitive user data and sharing it with advertisers.

Seven French families are suing TikTok, claiming that the platform’s algorithm exposed their teenage children to harmful content, with tragic consequences including the suicide of two 15-year-olds.

The Kremlin has asked Google to lift the restrictions imposed on Russian broadcasters on YouTube, pointing to the growing number of legal actions against the tech giant as potential leverage.

Internet economy

World Liberty Financial, a decentralised finance (DeFi) crypto project associated with former President Donald Trump and his sons, plans to limit its token sales to $30 million within the United States.

Italy’s Economy Minister, Giancarlo Giorgetti, has defended his plan to raise taxes on cryptocurrency capital gains in the country’s 2025 budget, despite opposition from members of his own party, the League.

The State Bank of Pakistan (SBP) has proposed a major framework to recognise digital assets, including cryptocurrencies, as legal tender in Pakistan.

The Thailand Board of Investment (BOI) announced on Friday that it had approved $2 billion in new investments to support the country’s data centre and electronics manufacturing sectors.

Development

Morocco’s Panafsat and Thales Alenia Space have signed a memorandum of understanding to build a large-scale satellite telecommunications system to improve digital connectivity across 26 African countries, including 23 French-speaking ones.

Kenya is partnering with Google to improve its digital infrastructure and enable its citizens to take part in the evolving digital economy.

Sociocultural

OpenAI has introduced new search features in its popular ChatGPT, positioning it as a direct competitor to Google, Microsoft’s Bing, and other AI-powered search tools.

Meta has announced an extended ban on new political ads following the US elections, in an effort to combat misinformation during the tense post-election period.

Mozambique and Mauritius have come under criticism for recent social media shutdowns during political crises, which some see as an attack on digital rights. In Mozambique, platforms such as Facebook and WhatsApp were blocked after protests over contested election results.

In brief

Trump vs Harris: the tech industry’s role in 2024

As the 5 November US presidential election approaches, the race between former President Donald Trump and Vice President Kamala Harris is extremely tight, making voter mobilisation crucial. Support from influential business figures, particularly from big tech, could prove decisive. Elon Musk, the owner of X, has voiced his support for Trump, underlining the role that tech giants, particularly the ‘Magnificent Seven’ (Apple, Microsoft, Amazon, Nvidia, Meta, Tesla, and Alphabet), could play in the outcome of the election. Both Trump and Harris are courting the corporate world, a sign of the growing influence of big tech on public policy and voter opinion.

Tech industry leaders have increasingly reached out to Trump. Figures such as Apple’s Tim Cook and Amazon’s Andy Jassy have engaged with him, and even Mark Zuckerberg has shown respect towards Trump despite earlier tensions, such as Facebook’s ban on Trump after the Capitol riot. Zuckerberg has said he will remain neutral in the 2024 election, although Trump has hinted at a renewed mutual understanding. The relationship between Musk and Trump has also evolved; despite past criticism, Musk now aligns more closely with Trump, particularly since taking control of Twitter, where he promotes issues that resonate with Trump’s base, such as scepticism towards the media and government censorship.

Musk’s financial contributions are significant, with his America PAC offering $1 million a day to registered voters who support First and Second Amendment causes. The initiative has, however, raised legal questions about incentivising voter registration, with some experts questioning the legality of tying financial rewards to political participation.

Kamala Harris, for her part, enjoys substantial support from the Silicon Valley elite. Her ties to the tech sector date back to her time as California’s Attorney General and later as a Senator. Figures such as former Facebook COO Sheryl Sandberg and philanthropist Melinda French Gates back her, along with more than 800 venture capital investors and thousands of tech workers. Harris’s appeal in Silicon Valley stems from her stance on AI regulation and data privacy, which is seen as more favourable than Trump’s deregulatory approach. While most of Silicon Valley leans Democratic, there are exceptions, such as David Marcus, a former PayPal president who has switched allegiance to the Republican Party.

Big tech companies are under increasing scrutiny, notably due to the Biden administration’s antitrust actions against firms such as Apple and Google, which the Department of Justice has accused of anticompetitive practices. Trump, however, has suggested he would ease regulatory pressure on tech companies if elected, in sharp contrast with the Biden administration’s regulatory approach.

Trump’s tech policy emphasises deregulation, which he argues will spur growth. He opposes what he calls ‘illegal censorship’ by tech companies and advocates a hands-off approach to AI and cryptocurrencies, favouring minimal government oversight to boost US competitiveness. He also supports corporate tax cuts and lighter regulatory burdens, in line with a market-driven vision of tech growth.

Harris, by contrast, who was appointed ‘AI tsar’ by Biden, favours stricter regulation of AI and technology to ensure public safety. She has pushed for data privacy laws and protections against bias, aligning her campaign with Biden’s regulatory framework for tech. Her support for initiatives such as the CHIPS Act underscores the importance she places on US technological independence and national security, prioritising consumer protection and a regulated tech environment.

The election thus offers voters a choice between two opposing tech policies: Harris’s vision of a fair, regulated tech environment and Trump’s preference for minimal government intervention.

Analysis

AI and ethics in modern society

Humanity’s rapid advances in AI and robotics have brought ethical and philosophical questions to the fore, especially as AI technologies now influence sectors such as medicine, governance, and the economy. It falls to governments, companies, international organisations, and individuals to manage these advances ethically, ensuring that the use of AI respects human rights and promotes the good of society.

AI ethics refers to the principles that guide right and wrong action, requiring AI technologies to respect societal values and protect human dignity. AI, defined as systems capable of analysing and making decisions autonomously, spans a wide range of applications, from voice assistants to autonomous vehicles. Without an ethical framework, AI risks deepening inequalities, eroding accountability, and undermining privacy and autonomy, underscoring the need to build fairness and responsibility into AI design and regulation.

AI ethics aims to minimise the risks arising from misuse, poor design, or harmful applications, addressing issues such as unauthorised surveillance and the weaponisation of AI. Global initiatives such as UNESCO’s 2021 Recommendation on the Ethics of AI and the EU’s AI Act seek to ensure responsible AI development, balancing the case for early regulation against the need to keep emerging technologies in check. These frameworks respond to concrete impacts such as algorithmic bias and underline the importance of timely, well-constructed oversight.

AI ethics draws inspiration from Asimov’s three fictional laws of robotics, although the complexities of real-world AI go far beyond this basic framework. Current AI applications, such as autonomous vehicles and facial recognition, raise issues of liability, privacy, and more, requiring nuanced strategies beyond foundational ethical rules. Real-world AI systems demand complex governance focused on areas such as legal, social, and environmental implications.

Legal liability, particularly in scenarios involving autonomous systems, raises questions about responsibility in the event of an accident, highlighting the need for legal reform. Financially, AI risks widening inequalities through algorithmic bias in areas such as lending. Environmentally, the significant energy demands of model training affect sustainability, making the development of energy-efficient systems essential. Socially, automation disrupts traditional jobs, and biased algorithms could exacerbate social inequalities, particularly in employment and criminal justice. The use of AI for surveillance also raises serious privacy concerns.

The psychological effects of AI, such as the lack of empathy in automated customer service or the impact of manipulative marketing tactics on well-being, require particular attention. Public distrust of AI, driven by the opacity of AI systems and the risk of algorithmic bias, is a major obstacle to widespread adoption. Transparent, explainable AI that lets users understand decision-making processes, together with robust accountability frameworks, is essential for building public trust and establishing a fair AI landscape.

Meeting these ethical challenges requires global coordination and adaptable regulation so that AI serves humanity’s fundamental interests, respects human dignity, and promotes fairness across all sectors. The ethical stakes of AI touch on crucial areas such as human rights, economic equality, environmental sustainability, and social trust. A collaborative approach, drawing on contributions from governments, companies, and citizens, is essential to build robust, transparent AI systems that serve society’s well-being. By investing in research and interdisciplinary collaboration, and by placing human well-being at the centre, AI can realise its transformative potential in beneficial ways, guiding technological progress while safeguarding societal values.

Salvador : Plan d’action pour l’économie du bitcoin

L’adoption du bitcoin comme monnaie légale par le Salvador le 7 septembre 2021 a marqué une étape pionnière dans l’intégration des crypto-monnaies dans la politique économique nationale.  Initialement perçue comme une expérience audacieuse, cette décision est devenue une stratégie ayant des implications majeures tant au niveau national qu’international, malgré les préoccupations soulevées par le FMI et d’autres institutions quant aux risques potentiels. Cette politique visait à relever des défis économiques tels que l’inclusion financière d’une population non bancarisée, positionnant le Salvador comme un phare mondial pour les crypto-monnaies. Avec 5 748,8 bitcoins dans ses réserves nationales, le pays a continué d’investir dans cette monnaie numérique, illustrant sa confiance dans son potentiel à long terme.


El Salvador’s bitcoin adoption has had mixed economic effects. The cryptocurrency has made remittances easier for Salvadorians living abroad, reducing fees and making transactions more accessible. The policy has also attracted foreign investment and boosted crypto tourism. However, bitcoin’s volatility remains a concern, with critics warning that reliance on such a fluctuating asset could threaten financial stability. President Nayib Bukele’s ambitious plan to create a ‘Bitcoin City’, a tax-free, crypto-friendly zone intended to attract foreign investment with a projected budget of $1.6 billion, aims to make El Salvador a global hub for digital finance.

Education is at the heart of the initiative, as shown by the bitcoin certification programme run by the National Bitcoin Office (ONBTC). The programme aims to train 80,000 civil servants in bitcoin and blockchain, embedding cryptocurrency knowledge across state institutions. This ensures that bitcoin adoption goes beyond a mere policy directive and becomes rooted in the country’s governance and administration, fostering a solid understanding of cryptocurrency among civil servants and extending into other sectors.

El Salvador’s pro-crypto stance has influenced other countries. Argentina, under pro-crypto President Javier Milei, is interested in adopting cryptocurrencies to stabilise its economy and is following El Salvador’s approach closely. As more countries consider cryptocurrency integration, El Salvador’s policy offers a practical example, illustrating both the opportunities and the challenges of digital currencies in a national economy.

However, regulatory challenges persist. Organisations such as the IMF voice concerns about financial stability and consumer protection risks. Despite this, El Salvador has continued to strengthen its regulatory frameworks and increase transparency around bitcoin-related activities, affirming its commitment to maintaining its leadership in cryptocurrency.

The government-backed Chivo wallet has played a crucial role in promoting financial inclusion, giving unbanked citizens a way to transact digitally. Through the Chivo platform, which offered $30 in bitcoin to each user, El Salvador has made significant strides towards an inclusive financial ecosystem, serving as an example for other countries seeking to lower banking barriers for the unbanked.


El Salvador’s experiment has prompted other countries, such as the Central African Republic, to adopt bitcoin. For countries facing inflation or financial exclusion, bitcoin represents a potential alternative. El Salvador’s pioneering approach illustrates how digital currencies can offer a path to economic development and innovation, positioning the country as a leader in the emerging digital finance sector.

AI is revolutionising medicine

The integration of AI into medicine has marked a revolutionary shift, particularly in diagnostics and the early detection of disease. Since AI was first applied to human clinical trials more than four years ago, its potential to improve healthcare has become increasingly evident. AI now helps detect complex diseases, often at early stages, improving diagnostic accuracy and patient outcomes. This technological advance promises to transform individual health and the well-being of society as a whole, although ethical concerns and questions about AI’s reliability remain part of the public debate.

In diagnostics, AI has shown remarkable success. A Japanese study found that AI-assisted tools such as ChatGPT outperformed experts, achieving an 80% accuracy rate in medical assessments covering 150 diagnoses. These results encourage the integration of AI into medical devices and underline the need for AI-focused training in medical education.

AI is also making considerable progress in cancer detection, with companies such as Imidex, whose AI algorithm has received FDA approval, working to improve early lung cancer screening. Similarly, the French startup Bioptimus is targeting the European market with an AI model capable of identifying cancerous cells and genetic anomalies in tumours. These developments highlight the growing competition and innovation in AI-driven healthcare, making such advances more accessible worldwide.


Despite these promising advances, public scepticism remains a major challenge. According to a 2023 Pew Research study, 60% of Americans are uncomfortable with AI-assisted diagnostics, fearing they could harm the doctor-patient relationship. While 38% of respondents expect better outcomes with AI, 33% fear negative effects, reflecting mixed feelings about AI’s role in healthcare.

AI is also contributing to dementia research. By analysing large datasets and brain scans, AI systems can detect structural brain changes and early signs of dementia. The SCAN-DAN tool, developed by researchers in Edinburgh and Dundee, aims to revolutionise early dementia detection as part of the global NEURii collaboration, which explores digital solutions to the challenges posed by dementia. Early interventions made possible by AI could improve the quality of life of dementia patients.


AI’s usefulness extends to breast cancer detection, where it improves the effectiveness of mammograms, ultrasounds, and MRIs. An AI system developed in the United States refines disease staging, distinguishing benign from malignant tumours and reducing false positives and false negatives. Accurate staging enables more effective treatment, particularly for breast cancers detected at an early stage.

Financial backing for AI in healthcare is substantial, with projections suggesting that AI could contribute nearly $20 trillion to the global economy by 2030, healthcare potentially accounting for more than 10% of that value. Major global companies are keen to invest in AI-driven medical equipment, underscoring the field’s growth potential.

The future of AI in healthcare is promising, with AI systems poised to surpass human cognitive capacity in analysing vast amounts of information. As regulatory frameworks evolve, diagnostic AI tools could enable faster and more precise disease detection, marking a decisive turning point in medical science. This transformative potential puts AI on a revolutionary trajectory in healthcare, capable of reshaping medical practice and improving patient outcomes.

UNSC: AI in the service of diplomacy

On 21 and 24 October, DiploFoundation provided real-time reporting from the UN Security Council sessions on scientific development and on women, peace, and security. Supported by Switzerland, the initiative aims to improve the work of the UN Security Council and the wider UN system by making session insights more accessible.

At the heart of this effort is DiploAI, a sophisticated AI platform trained on UN materials. DiploAI draws on the knowledge contained in the Council’s video recordings and transcripts, making valuable diplomatic insights easier to access. This AI-based reporting combines advanced technology with expertise in peace and security, providing in-depth analysis of UN Security Council sessions in 2023-2024 and covering the UN General Assembly (UNGA) over an eight-year period.

One of DiploAI’s main achievements is the seamless collaboration between AI and human experts. The experts tailored the AI system to the Security Council’s needs by providing essential documents and materials, improving the AI’s contextual understanding. Through iterative feedback on topics and keywords, DiploAI produces accurate and diplomatically relevant outputs. A key milestone in this partnership was DiploAI’s analysis of ‘A New Agenda for Peace’, in which experts identified more than 400 key topics, creating a comprehensive taxonomy for UN peace and security issues. In addition, a knowledge graph was developed to visually represent sentiment and relational analysis, adding depth to the insights from Council sessions.

Building on these advances, DiploAI introduced a custom chatbot that goes beyond basic question-and-answer functionality. By incorporating data from all 2024 sessions, the chatbot enables interactive exploration of diplomatic content, offering detailed answers in real time.

This shift from static reports to dynamic, conversational access marks a major step forward in understanding and engaging with UN Security Council materials.

DiploAI’s development process underlines the importance of human-AI collaboration. The question-and-answer module went through around ten iterations, refined with feedback from UN Security Council experts, to ensure accuracy and sensitivity in diplomatic responses. This process produced an AI system capable of addressing critical questions while respecting diplomatic standards.


DiploAI’s suite of tools, including real-time transcription and analysis, improves the transparency of UN reporting. By integrating advanced AI methods such as retrieval-augmented generation (RAG) and knowledge graphs, DiploAI contextualises and enriches the extracted information. Trained on a vast corpus of diplomatic knowledge, the AI generates responses tailored to UN Security Council topics, making complex session details accessible through transcripts, reports, and an AI-powered chatbot.

DiploAI’s work with the Security Council, supported by Switzerland, demonstrates the potential of AI to enhance diplomacy. By combining technical prowess with human expertise, DiploAI exemplifies more inclusive, better-informed, and more impactful diplomatic practice.



DW Weekly #186 – 15 November 2024


Dear readers,

AI companies are grappling with challenges such as energy consumption, data availability, and hardware limitations in developing advanced language models. OpenAI, one of the leaders in the field, is exploring new approaches to AI model training that could revolutionise the industry. Experts suggest that the key to this future may lie in techniques that mimic more human-like thinking patterns, offering an alternative to the current focus on scaling up data and computational power.

The recent release of OpenAI’s o1 model represents a breakthrough in AI development, leveraging human-like reasoning and multi-step problem-solving to improve performance. The new model marks a significant shift away from traditional AI approaches focused on feeding massive amounts of data into ever-larger models. With gains from pre-training reaching a plateau, AI pioneers like Ilya Sutskever have acknowledged that the era of simply scaling models may be over. Now, AI researchers are exploring techniques that allow algorithms to ‘think’ more like humans, enabling faster and more efficient problem-solving.


The growing realisation that scaling models may not always lead to better performance has sparked a re-evaluation of the AI development process. To address these problems, AI researchers are looking at ‘test-time compute’, a method that enhances AI models during the inference phase when they are actively used. This technique enables models to process multiple possibilities in real time, providing more accurate results without scaling up the model.
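To make the idea concrete, below is a minimal sketch of one common form of test-time compute, best-of-N sampling: the model generates several candidate answers and a separate scorer picks the strongest one. The generate and score callables are illustrative placeholders, not OpenAI’s actual interfaces.

```python
# Hypothetical sketch of test-time compute via best-of-N sampling.
# 'generate' and 'score' stand in for a model call and a verifier.
import random
from typing import Callable, List

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str, str], float],
              prompt: str,
              n: int = 8) -> str:
    """Spend extra compute at inference time: sample n candidate answers
    and return the one the scorer ranks highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy usage with dummy components.
if __name__ == "__main__":
    dummy_generate = lambda p: f"answer-{random.randint(0, 100)}"
    dummy_score = lambda p, a: float(len(a))  # pretend longer answers are better
    print(best_of_n(dummy_generate, dummy_score, "What is 17 * 24?"))
```

The point is that accuracy improves by spending more computation per query rather than by training a larger model.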

OpenAI’s new model, o1, builds on these techniques, improving its performance by allowing the AI to think through problems in stages similar to human reasoning. The method has proven highly effective in testing: the new model outperforms older ones, scoring 83% on a qualifying exam for the International Mathematics Olympiad, where GPT-4o solved only 13% of the problems. The model is unique in its ability to provide step-by-step reasoning and show human-like patterns of hesitation during the process. It also reduces the occurrence of hallucinations. However, the model still has limitations: it cannot browse the internet, lacks broader world knowledge, and cannot process files or images.

The next step is for the models to perform similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. The company aims to develop a product that can make decisions and take action on behalf of humans, which is estimated to cost USD 150 billion. Removing current kinks in the system will enable the models to work on complex global problems in areas like engineering and medicine. The o1 model, which builds on existing models like GPT-4, is a critical step in AI’s evolution, focusing on multi-step reasoning and incorporating expert feedback to guide AI through complex tasks.

Venture capital investors are taking notice of these changes, as the shift towards inference clouds could drastically alter the dynamics of AI research and development.

Follow the ‘Highlights from the week’ in its section below…

EU and UK universities begin metaverse classes

Universities across the EU and UK are set to introduce metaverse-based courses, where students can attend classes in digital replicas of their campuses. Meta, the company behind Facebook and Instagram, announced the launch of Europe’s first ‘metaversities,’ immersive digital twins of real university campuses.

Polish priest brings AI to faith discussions

In Poznan, Poland, a new chapel is combining tradition with cutting-edge technology. Created by priest Radek Rakowski, the modern chapel features an AI-powered system that answers visitors’ questions about Catholicism.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 08-15 November 2024

Bitcoin price: The rise is attributed to a 30% increase in Bitcoin’s value over the past week amidst declining silver prices, which fell by over 6%.

South Korea supports chipmaking industry: President Yoon Suk Yeol has voiced concerns over Trump’s tariff threats, which could affect South Korea’s chip industry by undercutting export prices.

AI is playing a key role in the future of California’s last nuclear power plant, enabling it to overcome ageing infrastructure and compliance hurdles with innovative technology.

The Guardian’s decision follows growing criticism of X’s moderation policies under Elon Musk.

Bitcoin recently gained 11% in 24 hours, underscoring its current dominance in the crypto sphere.

X faces a May 2025 hearing in France over content payments to publishers.

Europe’s initiative to boost technological sovereignty.

Interior minister warns of cyber threats as Germany prepares for snap elections.

Explore St. Peter’s Basilica like never before.

In a significant step for digital inclusion, Chad has approved Starlink’s satellite internet services to enhance connectivity across underserved areas, marking a leap forward for the central African nation’s technological…


Reading corner

www.diplomacy.edu

Language is shifting – words like ‘dialogue’ and ‘conversation’ are being replaced by ‘debate’ and ‘discussion’. Is this hardening of tone a sign of the times? Aldo Matteucci analyses.

dig.watch

DiploFoundation invited experts from participating delegations in the UN cybercrime treaty negotiations to break down the agreed draft convention and discuss its potential impact on users.

www.diplomacy.edu

Trump’s appointment of Elon Musk as ‘efficiency tzar’ aims to modernize federal administration, amidst critiques suggesting it targets the ‘deep state.’ However, the focus should shift to the broader AI transformation poised to reshape bureaucracies worldwide. As AI begins to automate core functions, particularly text-based tasks, public institutions must adapt to these changes. This includes proactive discussions, investing in training, and identifying irreplaceable roles. Ultimately, while Trump’s motivations may spark debate, the real challenge lies in preparing governments for a future where AI fundamentally alters their operations.

Upcoming

www.diplomacy.edu

United Arab Emirates Diplomatic Academy: Training on Artificial Intelligence From November 17-20, 2024, Diplo will partner with the Anwar Gargash Diplomatic

dig.watch

The G20 Leaders’ Summit 2024 will take place in Rio de Janeiro, Brazil on November 18 and 19. Brazil holds the Presidency from December 2023 to November 2024. The Summit…

www.diplomacy.edu

25th European Diplomatic Programme: The use of AI in foreign affairs Diplo will participate in the 25th European Diplomatic Programme in Budapest, Hungary, on

DW Weekly #185 – 8 November 2024


Dear readers,

Donald Trump’s return to the White House is likely to signal a significant shift in tech policy, given his strategic alignment with influential figures in Silicon Valley, most notably Elon Musk. Musk, a vocal supporter and one of the wealthiest individuals on the planet, invested approximately $120 million into Trump’s campaign, clearly showing his commitment to Trump’s vision for a tech-forward, market-driven America. Trump has vowed to appoint Musk to head a government efficiency commission, suggesting an unprecedented partnership between the government and private tech giants.

Trump’s ambitions in the tech arena are sweeping. He has promised a regulatory environment that would ‘set free’ companies burdened by government intervention. By rolling back regulations on AI, social media, and cryptocurrency sectors, Trump aims to foster innovation by reducing oversight and promoting a more liberal market. This policy stance starkly contrasts the Biden administration’s regulatory approach, particularly in Big Tech antitrust and AI oversight, which Trump’s team views as stifling growth and innovation.


A key part of Trump’s tech agenda is his stance on digital freedom. He has consistently criticised social media platforms for what he claims is censorship of conservative voices, a sentiment echoed by Musk, especially since his acquisition of Twitter (now X). Under Trump’s leadership, there are likely to be pushes to reform Section 230, the law that protects platforms from liability for user-generated content, aiming to curb what Trump views as ‘biased censorship’ against his supporters. This approach aligns with Trump’s free-market ethos and reflects his desire to reshape the digital public square to favour unrestricted speech.

Moreover, the Government Efficiency Commission would conduct a complete financial and performance audit of the federal government. Trump also pledged to cut corporate tax rates for companies that manufacture domestically, establish ‘low-tax’ zones on federal lands, encourage construction companies to build new homes, and start a sovereign wealth fund. Trump’s proposal drew criticism from Everett Kelley, president of the American Federation of Government Employees, who accused Trump and Musk of wanting to weaken the nonpartisan civil service.

As Trump reclaims his influence over tech policy, his administration is expected to reassess past conflicts with Silicon Valley. Despite his previous clashes with leaders like Mark Zuckerberg, Trump’s recent statements have indicated a willingness to mend fences, especially with executives prioritising business over political engagement. For instance, Zuckerberg’s current stance of neutrality has met with Trump’s approval, signifying a potential thaw in relations that could lead to an era of cooperation rather than confrontation.

In this new chapter, Trump’s alliance with Musk and other tech elites underscores his ambition to create a tech policy that minimises governmental control while encouraging private innovation. Together, Trump and Musk represent a fusion of populism and technology, a partnership that could reshape America’s role in the global tech landscape, steering it towards a future where corporate influence on policy is stronger than ever.

Follow the ‘Highlights from the week’ in its section below…

EU unveils new transparency rules under DSA for intermediary service providers

The European Commission has introduced an Implementing Regulation that standardises transparency reporting for providers of intermediary services under the Digital Services Act (DSA). That regulation aims to ensure consistency and comparability of data shared with the public by requiring providers to disclose specific information about their content moderation practices.

Apple faces first EU fine under Digital Markets Act

Apple is set to face its first fine under the European Union’s Digital Markets Act (DMA) for breaching the bloc’s antitrust regulations. The case comes after EU regulators charged Apple in June for violating the new tech rules designed to curb the dominance of big tech companies.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 01-08 November 2024

Trump and Google: The new president favouring moderate reforms over drastic measures.

Australia: A landmark initiative in regulating children’s access to social media.

The 34th International Conference of the Red Cross and Red Crescent has adopted a resolution to protect civilians and essential infrastructure from the risks of cyber activities in armed conflicts,…

TikTok: An alleged harmful impact on teenagers’ mental health.

SpaceX: A manufacturing shift to Southeast Asia.

A decentralised finance (DeFi) crypto project linked to former President Donald Trump and his sons plans to restrict its token sales to $30 million within the United States.

GlobalFoundries, a major US chipmaker, faces a hefty fine for shipping chips to a sanctioned Chinese affiliate.

Italy: Economy Minister Giancarlo Giorgetti argues higher cryptocurrency taxes are needed, citing their risk and disconnection from tangible assets.

UNDP and Bahrain: The NHDR will benchmark Bahrain’s digital landscape against regional and international standards, offering insights and recommendations to enhance digital inclusion and infrastructure.

Critics question Meta’s choice to use Llama AI for military applications.





Reading corner

dig.watch

The major role that technology industry leaders might play in influencing the election outcome.

www.diplomacy.edu

How do AI’s cognitive mechanisms actually work? Just like human cognition, AI relies on schemas to process and interpret data – yet it lacks the depth and context that human understanding brings. Dr Anita Lamprecht explores.

www.diplomacy.edu

Valencia, recognised as an advanced smart city, failed to effectively warn residents of imminent floods, resulting in devastating consequences. Despite advanced technology, the local authorities sent emergency alerts eight hours late, after severe rainfall caused substantial destruction and over 200 fatalities in the region.

www.diplomacy.edu

On November 8, Sorina Teleanu will launch her book, “Unpacking Global Digital Compact,” a crucial resource for understanding the newly adopted Global Digital Compact (GDC). Published shortly after the GDC’s approval at the UN Summit, it offers in-depth analysis of its negotiations, clarity on complex language, and insights into broader digital governance. The book emphasizes public interest and aims to bridge gaps in digital policy discussions, particularly for underrepresented nations.

Upcoming

www.diplomacy.edu

#NEW Combating Cybercrime 2024 online course | Diplo Academy Diplo Academy is excited to announce the start of the Combating Cybercrime online course, aimed

www.diplomacy.edu

Digital Trade for Africa’s Prosperity Digital trade is becoming a key driver of economic growth across the globe, reshaping how goods and services are

www.diplomacy.edu

Visit from participants of the Digital Policy Leadership Program, University of St. Gallen Under the framework of the Digital Policy Leadership Program

Digital Watch newsletter – Issue 94 – November 2024


Snapshot: The developments that made waves

AI governance

The US Department of Energy (DOE) and the US Department of Commerce (DOC) have joined forces to promote the safe, secure, and trustworthy development of AI through a newly established Memorandum of Understanding (MOU).

A recent assessment of some of the top AI models has revealed significant gaps in compliance with the EU regulations, particularly in cybersecurity resilience and preventing discriminatory outputs. The study by Swiss startup LatticeFlow, in collaboration with EU officials, tested generative AI models from major tech companies like Meta, OpenAI, and Alibaba.

Technologies

Three scientists, David Baker, John Jumper, and Demis Hassabis, have been awarded the 2024 Nobel Prize in Chemistry for their pioneering work in protein science. David Baker, of the University of Washington, was acknowledged for his innovations in computational protein design, while John Jumper and Demis Hassabis of Google DeepMind were recognised for using AI to predict protein structures. 

American scientist John Hopfield and British-Canadian Geoffrey Hinton have been awarded the 2024 Nobel Prize in Physics for their groundbreaking work in machine learning, which has significantly contributed to the rise of AI.

Companies in Japan are increasingly turning to AI to manage customer service roles, addressing the country’s ongoing labour shortage. These AI systems are now being used for more complex tasks, assisting workers across various industries.

Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and proved successful in combat situations.

Chinese researchers from Shanghai University claim to have made a significant breakthrough in quantum computing, asserting they have breached encryption algorithms commonly used in banking and cryptocurrency.

Infrastructure

The competition between Elon Musk and Mukesh Ambani is intensifying as they vie for dominance in India’s emerging satellite broadband market.

A group of major tech companies, including Microsoft, Alphabet, Meta, and Amazon, has proposed new terms for how data centres in Ohio should pay for their energy needs.

Siemens relies on its digital platform, Xcelerator, to drive future growth, especially in its factory automation business, which has faced slowing demand in China and Europe.

Cybersecurity

Six Democratic senators have urged the Biden administration to address critical concerns about human rights and cybersecurity in the upcoming United Nations Cybercrime Convention, which is set for a vote at the UN General Assembly.

According to a new threat assessment, Canada’s signals intelligence agency has identified China’s hacking activities as the most significant state-sponsored cyber threat facing the country.

Russia is using generative AI to ramp up disinformation campaigns against Ukraine, warned Ukraine’s Deputy Foreign Minister, Anton Demokhin, during a cyber conference in Singapore.

Forrester’s 2025 Predictions report outlines critical cybersecurity, risk, and privacy challenges on the horizon. Cybercrime is expected to cost $12 trillion by 2025, with regulators stepping up efforts to protect consumer data.

Digital rights

The EU’s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA).

The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations.

Legal

Chinese online retailer Temu is exploring joining a European Union-led initiative to combat counterfeit goods, which includes major retailers such as Amazon, Alibaba, and brands like Adidas and Hermes. 

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, KRW 21.62 billion ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers.

Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds.

The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage.

Internet economy

World Liberty Financial, a decentralised finance (DeFi) crypto project associated with former President Donald Trump and his sons, plans to limit its token sales to $30 million within the USA.

Italy‘s Economy Minister Giancarlo Giorgetti has defended plans to raise taxes on cryptocurrency capital gains as part of the country’s 2025 budget, despite facing opposition from members of his own League party.

The State Bank of Pakistan (SBP) has proposed a significant framework to recognise digital assets, including cryptocurrency, as legal currency in Pakistan.

Thailand Board of Investment (BOI) announced on Friday it has approved $2 billion in new investments aimed at bolstering the nation’s data centre and electronics manufacturing sectors.

Development

Morocco’s Panafsat and Thales Alenia Space have signed a memorandum of understanding (MoU) to build a high-capacity satellite telecommunications system to advance digital connectivity across 26 African countries, including 23 French-speaking nations.

Kenya partners with Google to enhance its digital infrastructure and empower its citizens in the evolving digital economy.

Sociocultural


OpenAI has introduced new search functions to its popular ChatGPT, making it a direct competitor to Google, Microsoft’s Bing, and other emerging AI-driven search tools.

Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period.

Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.


Trump vs Harris: The tech industry’s role in 2024

As the 5 November US presidential election nears, the race between former President Donald Trump and Vice President Kamala Harris is extremely close, making voter mobilisation critical. The support of influential business figures, particularly from Big Tech, could prove pivotal. Elon Musk, the founder of X, has voiced strong support for Trump, spotlighting the role that tech giants, especially the ‘Magnificent Seven’ (Apple, Microsoft, Amazon, Nvidia, Meta, Tesla, and Alphabet), could play in the election outcome. Both Trump and Harris are courting corporate America, reflecting Big Tech’s growing influence over public policy and voter sentiment.


Tech leaders have increasingly reached out to Trump. Figures like Apple’s Tim Cook and Amazon’s Andy Jassy have engaged with him, and even Mark Zuckerberg has shown respect toward Trump despite previous tensions, such as Facebook’s ban on Trump after the Capitol riot. Zuckerberg has stated he will remain neutral in the 2024 election, though Trump has hinted at a newfound mutual understanding. Musk’s relationship with Trump has also evolved; despite past criticism, Musk now aligns more closely with Trump, particularly since taking over Twitter, where he promotes issues resonant with Trump’s base, such as scepticism of the media and government censorship.

Musk’s financial contributions are significant, with his America PAC offering $1 million daily to registered voters who support First and Second Amendment causes. However, this initiative has raised legal concerns over incentivising voter registration, with experts questioning the legality of tying financial rewards to political participation.

On the other hand, Kamala Harris enjoys substantial support from Silicon Valley’s elite. Her connections to tech stem from her time as California’s attorney general and later as a US senator. Figures like former Facebook CEO Sheryl Sandberg and philanthropist Melinda French Gates are backing her, along with over 800 venture capitalists and thousands of tech employees. Harris’s appeal to Silicon Valley aligns with her stance on AI regulation and data privacy, which is seen as more favourable than Trump’s deregulation approach. While most of Silicon Valley leans Democratic, there are exceptions, such as David Marcus, a former PayPal president who has shifted allegiance to the Republican Party.

Big Tech is under regulatory scrutiny, especially from the Biden administration’s antitrust actions against companies like Apple and Google. The Department of Justice has accused these companies of anti-competitive practices. Trump, however, has suggested he would lessen regulatory pressure on tech firms if elected, contrasting sharply with the Biden administration’s regulatory approach.

Trump’s tech policy emphasises deregulation, which he believes will stimulate growth. He opposes what he calls ‘illegal censorship’ by tech companies and advocates for a hands-off approach to AI and cryptocurrencies, favouring minimal government oversight to drive US competitiveness. He also supports corporate tax cuts and reduced regulatory burdens, aligning with a market-driven vision for tech growth.

Conversely, Harris, as Biden’s appointed AI czar, supports stronger regulations on AI and tech to ensure public safety. She has pushed for data privacy and bias protection laws, aligning her campaign with Biden’s regulatory framework on technology. Harris’s support for initiatives like the CHIPS Act highlights her focus on US tech independence and national security, prioritising consumer protection and a controlled tech landscape.


AI and ethics in modern society

Humanity’s rapid advancements in AI and robotics have brought ethical and philosophical issues into urgent focus, especially as AI technologies now shape areas like medicine, governance, and the economy. Governments, corporations, international organisations, and individuals are responsible for navigating these advancements ethically, ensuring that AI use respects human rights and fosters societal good.

Ethics in AI refers to principles guiding right and wrong actions, requiring AI technologies to respect societal values and protect human dignity. AI, defined as systems that autonomously analyse and make decisions, spans various forms, from voice assistants to autonomous vehicles. Without an ethical framework, AI risks worsening inequality, eroding accountability, and infringing on privacy and autonomy, highlighting the necessity of embedding fairness and responsibility into AI’s design and regulation.

AI ethics aims to minimise risks from misuse, poor design, or harmful applications, addressing issues like unauthorised surveillance and AI weaponisation. Global initiatives like UNESCO’s 2021 Recommendation on the Ethics of AI and the EU’s AI Act seek to ensure responsible AI development, balancing the challenge of early regulation against the entrenchment of unregulated technologies. These frameworks respond to real-world impacts like algorithmic bias, emphasising the need for timely, well-constructed oversight.

AI ethics draws inspiration from Asimov’s fictional Three Laws of Robotics, although real-world AI complexities extend far beyond this basic framework. Current AI applications, such as autonomous vehicles and facial recognition, introduce accountability, privacy, and other issues, demanding nuanced strategies beyond foundational ethical rules. Real-world AI systems require complex governance, focusing on areas such as legal, social, and environmental impacts.

Legal accountability, particularly in autonomous systems scenarios, raises questions about responsibility in accidents, stressing the need for legal reforms. Financially, AI risks worsening inequality due to algorithmic biases in areas like lending. Environmentally, AI’s large energy requirements for training models impact sustainability, and it is crucial to develop energy-efficient systems to address this issue. Socially, automation disrupts traditional jobs, and biased algorithms could deepen social inequality, especially in employment and criminal justice. The use of AI in surveillance also raises serious privacy concerns.

The psychological effects of AI, such as how AI-driven customer service may lack empathy or how manipulative marketing tactics may impact well-being, require careful attention. Public mistrust in AI, stemming from the opacity of AI systems and the potential for algorithmic bias, is a significant barrier to widespread AI adoption. Transparent, explainable AI that allows users to understand decision-making processes, along with strong accountability frameworks, is essential for fostering public trust and establishing a fair AI landscape.


Addressing these ethical challenges demands global coordination and adaptable regulation to ensure AI supports humanity’s best interests, respects human dignity, and promotes fairness across all sectors. The ethical challenges surrounding AI impact fundamental human rights, economic equality, environmental sustainability, and social trust. A collaborative approach, with contributions from governments, corporations, and individuals, is essential to build robust, transparent AI systems that advance societal welfare. Through a commitment to research, interdisciplinary collaboration, and prioritising human well-being, AI can fulfil its transformative potential for good, guiding technological advancement while safeguarding societal values. Given the stakes attached to emerging technologies like AI, it is essential at this critical juncture to foster more refined, coordinated, and scaled-up global efforts, or, more precisely, effective global digital cooperation.



El Salvador: Blueprint for the bitcoin economy

El Salvador’s adoption of bitcoin as legal tender on 7 September 2021 marked a pioneering step in integrating cryptocurrency into national economic policy. Initially viewed as a bold experiment, this move transformed into a strategic approach with significant implications both domestically and internationally, despite concerns raised by the IMF and other institutions about potential risks. The policy aimed to address economic challenges like financial inclusion in an unbanked population, making El Salvador a global beacon for cryptocurrency. With 5,748.8 bitcoins in national reserves, the country has continued to invest in bitcoin, showcasing confidence in its long-term potential.


El Salvador’s bitcoin adoption has had mixed economic impacts. The cryptocurrency has streamlined remittances for Salvadorians abroad, reducing fees and making transactions more accessible. This policy has also attracted foreign investments and a surge in crypto tourism. However, bitcoin’s volatility remains a concern, with critics warning that reliance on such a fluctuating asset could threaten financial stability. President Nayib Bukele’s ambitious plan to establish ‘Bitcoin City’, a tax-free, crypto-friendly zone intended to attract foreign investment with a projected budget of $1.6 billion, aims to make El Salvador a global hub for digital finance.

Education has been a key focus, demonstrated through the government’s bitcoin certification programme spearheaded by the National Bitcoin Office (ONBTC). The initiative seeks to educate 80,000 government employees on bitcoin and blockchain, embedding cryptocurrency knowledge across state institutions. This approach ensures that bitcoin adoption is more than a policy directive and becomes ingrained in the country’s governance and administration, facilitating a foundational understanding of cryptocurrency among civil servants and extending into other sectors.

El Salvador’s pro-crypto stance has influenced other nations. Argentina, led by pro-crypto president Javier Milei, has shown interest in adopting cryptocurrencies to stabilise its economy and is closely studying El Salvador’s approach. As more countries consider cryptocurrency integration, El Salvador’s policy offers a practical example, illustrating both the opportunities and challenges of digital currency in a national economy.

However, regulatory challenges persist, with organisations like the IMF voicing concerns about financial stability and consumer protection risks. Despite this, El Salvador has continued to strengthen its regulatory frameworks and increase transparency around bitcoin activities, emphasising its commitment to maintaining its crypto leadership.

The government-backed Chivo wallet has played a crucial role in driving financial inclusion, giving citizens who previously had no access to banking a way to transact digitally. Through the Chivo platform, which offered $30 in bitcoin to each user, El Salvador has made significant strides toward an inclusive financial ecosystem, setting an example for other nations looking to reduce banking barriers for the unbanked.


El Salvador’s experiment has inspired other nations, such as the Central African Republic, to adopt bitcoin. For countries grappling with inflation or financial exclusion, bitcoin represents a potential alternative. El Salvador’s pioneering approach illustrates how digital currencies can offer a pathway to economic development and innovation, positioning the country as a leader in the emerging digital financial order.



Revolutionising medicine with AI

The integration of AI into medicine has marked a revolutionary shift, especially in diagnostics and early disease detection. Since AI was first applied to human clinical trials over four years ago, its potential to enhance healthcare has become increasingly evident. AI now aids in detecting complex diseases, often at early stages, improving diagnosis accuracy and patient outcomes. This technological advancement promises to transform individual health and broader societal well-being despite ethical concerns and questions about AI accuracy that persist in public debate.

In diagnostics, AI has shown remarkable success. A Japanese study revealed that AI-assisted tools, such as ChatGPT, outperformed experts, achieving an 80% accuracy rate in medical assessments across 150 diagnostics. These results encourage further integration of AI into medical devices and underscore the need for AI-focused training in medical education.

AI is making substantial strides in cancer detection, with companies like Imidex, whose AI algorithm has received FDA approval, working on improving early lung cancer screening. Similarly, French startup Bioptimus is targeting the European market with an AI model that can identify cancerous cells and genetic anomalies in tumours. Such developments highlight the growing competition and innovation in AI-driven healthcare, making these advancements more accessible globally.


Despite these promising advances, public scepticism remains a significant challenge. A 2023 Pew Research study found that 60% of Americans are uncomfortable with AI-assisted diagnostics, fearing it might harm the doctor-patient relationship. While 38% of respondents anticipate better outcomes with AI, 33% worry about negative impacts, reflecting mixed feelings on AI’s role in healthcare.

AI is also contributing to dementia research. By analysing large datasets and brain scans, AI systems can detect structural brain changes and early signs of dementia. The SCAN-DAN tool, developed by researchers in Edinburgh and Dundee, aims to revolutionise early dementia detection through the NEURii global collaboration, which seeks digital solutions to dementia’s challenges. Early interventions enabled by AI hold the potential to improve the quality of life of dementia patients.


AI’s utility extends to breast cancer detection, where it enhances the effectiveness of mammograms, ultrasounds, and MRIs. An AI system developed in the USA refines disease staging, distinguishing between benign and malignant tumours with reduced false positives and negatives. Accurate staging aids in effective treatment, particularly for early-detected breast cancer.

The financial backing for AI in healthcare is substantial, with projections suggesting that AI could contribute nearly $20 trillion to the global economy by 2030, with healthcare potentially accounting for over 10% of this value. Major global corporations are keen to invest in AI-driven medical equipment, underlining the field’s growth potential.

The future of AI in healthcare is promising, with AI systems poised to surpass human cognitive limits in analysing vast information. As regulatory frameworks adapt, AI tools in diagnostics could lead to faster and more precise disease detection, potentially marking a significant turning point in medical science. This transformative potential aligns AI with a revolutionary trajectory in healthcare, capable of reshaping medical practice and patient outcomes.



Just-in-time reporting from the UN Security Council: Leveraging AI for diplomatic insight

On 21 and 24 October, DiploFoundation provided real-time reporting from the UN Security Council sessions on scientific development and women, peace, and security. Supported by Switzerland, this initiative aims to improve the work of the UN Security Council and the broader UN system by making session insights more accessible.

At the heart of this effort is DiploAI, a sophisticated AI platform trained on UN materials. DiploAI unlocks the knowledge embedded in the Council’s video recordings and transcripts, making it easier to access valuable diplomatic insights. This AI-driven reporting combines advanced technology with expertise in peace and security, providing in-depth analysis of UN Security Council sessions in 2023-2024 and covering the UN General Assembly (UNGA) for eight years.

A key feature of DiploAI’s success is the seamless collaboration between AI and human experts. Experts tailored the AI system to the Security Council’s needs by providing essential documents and materials, enhancing the AI’s contextual understanding. Through iterative feedback on topics and keywords, DiploAI produces accurate and diplomatically relevant outputs. A significant milestone in this partnership was DiploAI’s analysis of ‘A New Agenda for Peace,’ where experts identified over 400 key topics, forming a comprehensive taxonomy for UN peace and security issues. Additionally, a Knowledge Graph was developed to visually represent sentiment and relational analysis, adding depth to Council session insights.
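As a rough illustration of the knowledge-graph idea, the sketch below builds a tiny graph in which nodes are entities mentioned in a session and edges carry a relation and a sentiment score. The entities, relations, and scores are invented for the example and do not reflect DiploAI’s actual data model; the sketch assumes the networkx library is available.

```python
# Illustrative knowledge graph for relational and sentiment analysis.
import networkx as nx

graph = nx.DiGraph()

# Edges link a speaker to a topic, annotated with the relation expressed
# and a sentiment score attached to that statement (values are made up).
graph.add_edge("Delegation A", "A New Agenda for Peace",
               relation="endorses", sentiment=0.8)
graph.add_edge("Delegation B", "A New Agenda for Peace",
               relation="raises concerns about", sentiment=-0.3)

# Query the graph: list every statement recorded about a given topic.
topic = "A New Agenda for Peace"
for speaker, _, data in graph.in_edges(topic, data=True):
    print(f"{speaker} {data['relation']} '{topic}' (sentiment {data['sentiment']:+.1f})")
```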

Building on these advancements, DiploAI introduced a custom chatbot that goes beyond basic Q&A. By incorporating data from all 2024 sessions, the chatbot enables interactive exploration of diplomatic content, offering detailed, real-time answers. 

This shift from static reports to dynamic, conversational access represents a major leap in understanding and engaging with UN Security Council materials.

DiploAI’s development process underscores the importance of human-AI collaboration. The Q&A module underwent approximately ten iterations, refined with feedback from UNSC experts, ensuring accuracy and sensitivity in diplomatic responses. This process has led to an AI system capable of addressing critical questions while adhering to diplomatic standards.


DiploAI’s suite of tools, including real-time transcription and analysis, enhances the transparency of UN reporting. By integrating advanced AI methods such as retrieval-augmented generation (RAG) and knowledge graphs, DiploAI contextualises and enriches the extracted information. Trained on a vast corpus of diplomatic knowledge, the AI generates responses tailored to UNSC topics, making complex session details accessible through transcripts, reports, and an AI-powered chatbot.
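For readers unfamiliar with the pattern, the sketch below shows the core of retrieval-augmented generation: rank stored passages by similarity to a question, then pass the top matches to a language model as context. The embed and llm callables are hypothetical placeholders rather than DiploAI’s actual components.

```python
# Minimal RAG sketch: retrieve relevant passages, then answer from them.
from typing import Callable, List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def rag_answer(question: str,
               passages: List[str],
               embed: Callable[[str], Sequence[float]],
               llm: Callable[[str], str],
               top_k: int = 3) -> str:
    """Rank stored passages by similarity to the question, then ask the
    model to answer using only the retrieved context."""
    q_vec = embed(question)
    ranked = sorted(passages, key=lambda p: cosine(embed(p), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using the context above."
    return llm(prompt)
```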

DiploAI’s work with the Security Council, supported by Switzerland, demonstrates the potential of AI in enhancing diplomacy. By blending technical prowess with human expertise, DiploAI promotes more inclusive, informed, and impactful diplomatic practices.


DW Weekly #184 – 1 November 2024


Dear readers,

In the past week, Meta Platforms unveiled its partnership with Reuters to integrate Reuters’ news content into its AI chatbot. The collaboration across Meta’s platforms, including Facebook, WhatsApp, and Instagram, allows Meta’s chatbot to respond to real-time news inquiries using Reuters’ trusted reporting. Following Meta’s scaled-back news operations amid content disputes with regulators, this deal marks a notable return to licensed news distribution. It reflects the company’s aim to balance AI-driven content with verified information, compensating Reuters through a multi-year agreement and establishing a promising model for AI and media partnerships.

Yet, the path to collaboration has not been smooth for all. Earlier in 2024, News Corp sued Perplexity AI for alleged copyright violations, arguing that the AI company used News Corp’s content without authorisation. The lawsuit was soon echoed by Dow Jones and the New York Post, both accusing Perplexity of bypassing sources. Perplexity defended itself by citing fair use, stressing that its summaries only replicated small portions of articles.

Meanwhile, in August 2024, the French news agency AFP filed a lawsuit against X (formerly Twitter), demanding compensation for using AFP’s content to train AI models. The legal action stresses the global demand for fairer treatment of newsrooms by tech companies and reflects growing concerns that the intellectual property rights of news providers are often sidelined in favour of AI innovation.


However, over the past year, other AI giants like OpenAI have chosen to formalise relationships with media, establishing partnerships with publishers such as Hearst, Conde Nast, and Axel Springer. OpenAI’s ChatGPT now features licensed news content, a strategic move to avoid copyright disputes while providing high-quality, fact-based summaries to users. These partnerships also provide publishers with new avenues for traffic and revenue, showcasing a balanced approach where AI enhances access to reliable news and publishers are compensated. 

Other companies like Microsoft and Apple have entered the AI news space, each establishing robust collaborations with news organisations. Microsoft’s approach centres on supporting AI-driven innovation within newsrooms, while Apple plans to utilise publisher archives to improve its AI training data. These initiatives signal a trend toward structured partnerships and the emergence of Big Tech’s role in reshaping news consumption. However, as these tech giants build AI models on news content, pressure grows to respect news publishers’ copyrights, reflecting a delicate balance between AI advancement and content ownership.

As AI becomes increasingly central to media, industry leaders and advocates call for equitable policies to protect newsrooms’ intellectual property and revenue. With studies estimating that Big Tech may owe news publishers billions annually, the push for fair compensation intensifies. Given the legal disputes described above on one side and successful licensing partnerships on the other, the evolution of AI-news partnerships will likely hinge on transparent standards that ensure newsrooms receive due credit and financial benefit, creating a sustainable, equitable future for AI-driven media. However, these arrangements also raise questions about AI’s long-term impact on traditional newsrooms and revenue structures.

In other news…

UK man sentenced to 18 years for using AI to create child sexual abuse material

In a case spotlighting the misuse of AI in criminal activity, Hugh Nelson, a 27-year-old from Bolton, UK, was sentenced to 18 years in prison for creating child sexual abuse material (CSAM) using AI. Nelson utilised the app Daz 3D to turn ordinary photos of children into exploitative 3D images, some based on photos provided by acquaintances of the victims.

Chinese military adapts Meta’s Llama for AI tool

China’s People’s Liberation Army (PLA) has utilised Meta’s open-source AI model, Llama, to develop a military-adapted AI tool, ChatBIT, focusing on military decision-making and intelligence tasks.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 25 October – 1 November 2024

Six Democratic senators are urging the Biden administration to address human rights and cybersecurity concerns in the upcoming UN Cybercrime Convention, warning it could enable authoritarian surveillance and weaken privacy…

Scrutiny intensifies over X’s handling of misinformation.

PLA researchers use Meta’s Llama AI for military innovations.

The lawsuits highlight a growing debate over social media regulation in Brazil, especially after a high-profile legal dispute between Elon Musk’s X platform and a Brazilian Supreme Court justice led…

In response to rising concerns over illegal product sales, the European Commission is preparing to investigate Chinese e-commerce platform Temu for potential regulatory breaches under the DSA.

Masayoshi Son predicts that artificial super intelligence could surpass human brainpower 10,000-fold by 2035.

By 2040, a world with 10 billion humanoid robots could become reality, with prices set to make them accessible for both personal and business use globally.

A new AI model from biotech firm Iambic Therapeutics could revolutionise drug development, potentially cutting costs in half by identifying effective drugs early in the testing process.

LinkedIn’s latest AI tool, ‘Hiring Assistant’, seeks to ease recruiters’ workloads by automating job listings and candidate searches, marking a new milestone in the platform’s AI…


Reading corner

dig.watch

By partnering with the UN Security Council, DiploAI is transforming session reporting with AI-driven insights that go beyond traditional methods.

www.diplomacy.edu

Cognitive proximity is key to human-centred AI. Discover how AI can be aligned with human intuition and values, allowing for more harmonious human–AI collaboration. Dr Anita Lamprecht explains.

www.diplomacy.edu

In the age of AI, understanding its workings is essential for us to shift from being passive passengers to active copilots. While many view AI as a complex tool shrouded in mystery, basic knowledge of its foundational concepts—patterns, probability, hardware, data, and algorithms—can empower us. Recognizing the influence of biases in AI and advocating for ethical practices and diversity in its development are crucial steps. By engaging in discussions around AI’s governance, we can navigate our AI-driven reality, ensuring that technology serves the common good rather than merely accepting its outcomes.

Upcoming

www.diplomacy.edu

Unpacking Global Digital Compact | Book launch: Join us online on 8 November for the launch of Unpacking Global Digital Compact, a new publication written by…

DW Weekly #183 – 25 October 2024

Dear readers,

Over the past week, the Internet Archive has been caught in a series of cyberattacks that have disrupted its operations and raised alarming questions about the cybersecurity of its systems. What began two weeks ago as a temporary outage due to distributed denial-of-service (DDoS) attacks has evolved into a deeper breach, revealing the fragility of even the most widely respected online resources.

The first wave of attacks started with DDoS assaults, a tactic often used to flood a website with traffic, rendering it temporarily inaccessible. The pro-Palestinian hacktivist group BlackMeta claimed responsibility for these attacks, indicating a political motivation behind the disruptions. However, it quickly became evident that this was only the beginning of the Archive’s troubles. Soon after, the organisation suffered a JavaScript-based website defacement, followed by a more insidious data breach. The hackers’ persistence and varied attack methods suggest a sophisticated operation designed to probe multiple vulnerabilities within the Archive’s system.

As if these attacks were not damaging enough, 20 October brought another crisis. Internet Archive users and media outlets began receiving unauthorised emails, seemingly from the organisation. The emails included a stolen access token for the Archive’s Zendesk account, a platform for managing customer service requests. More concerningly, the message claimed that over 800,000 support tickets dating back to 2018 had been compromised. The hackers alleged that the Internet Archive had failed to rotate API keys exposed in its GitLab secrets, leaving sensitive data vulnerable. Although the email itself was unauthorised, it had passed security checks, indicating it may have come from an authorised Zendesk server and adding a layer of complexity to the incident.

The source of the data breach appears to have been an exposed GitLab configuration file, which the hacker reportedly obtained from one of the Archive’s development servers. This file likely contained authentication tokens, granting access to the Archive’s source code and the Zendesk API. The theft of such information could allow bad actors to manipulate support tickets, create false narratives, or even gain unauthorised access to personal information. 
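
To make the risk concrete, here is a minimal, hypothetical Python sketch of the kind of quick secrets scan a security team might run over a repository after such an incident, looking for configuration files that contain hard-coded tokens or API keys. The file extensions, token patterns, and function names are illustrative assumptions and do not describe the Internet Archive’s or GitLab’s actual setup.

```python
# Minimal, hypothetical sketch: scan a checked-out repository for configuration
# files that may contain hard-coded credentials. The patterns and file types
# below are illustrative assumptions, not the Internet Archive's actual setup.
import re
from pathlib import Path

# Heuristic patterns for common token formats (illustrative only).
SECRET_PATTERNS = {
    "generic_token": re.compile(
        r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "gitlab_pat": re.compile(r"glpat-[A-Za-z0-9_\-]{20,}"),  # GitLab personal access token prefix
}

# File types where credentials are most often left behind.
CONFIG_SUFFIXES = {".yml", ".yaml", ".json", ".env", ".cfg", ".ini"}


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (file path, pattern name) pairs for files that appear to expose secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() not in CONFIG_SUFFIXES and path.name != ".env":
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings


if __name__ == "__main__":
    # Any hit should be rotated immediately and moved into a secrets manager.
    for file_path, kind in scan_repo("."):
        print(f"Possible exposed secret ({kind}) in {file_path}")
```

In practice, teams would more likely rely on the secret-scanning features built into their code-hosting platform, combined with routine key rotation, rather than a hand-rolled script; the point is simply that a single exposed configuration file can hand over tokens that unlock much larger systems.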

In the wake of these attacks, security experts like Jake Moore of ESET have emphasised the importance of swift action. Moore advised that in the aftermath of such incidents, organisations must conduct thorough audits to identify and address vulnerabilities, as malicious actors often return to test newly implemented defences. The need for proactive security measures was further underlined by Ev Kontsevoy, CEO of Teleport, who pointed out the challenge of securing access relationships after an attack. Without immediate, comprehensive action, breaches like these can lead to further exploitation.

The silence from the Internet Archive and its founder, Brewster Kahle, has only fuelled speculation about the extent of the breach and the organisation’s next steps. Neither the Archive nor GitLab has publicly commented on the stolen access tokens or the implications of the compromised Zendesk account, leaving users and stakeholders in the dark about the potential risks. What is clear, however, is that the Internet Archive must bolster its defences and reconsider its approach to API key rotation and data protection.

In other news…

News Corp sues AI firm Perplexity over copyright violations

News Corp has filed a lawsuit against the AI search engine Perplexity, accusing it of copying and summarising its copyrighted content without permission. The lawsuit claims that Perplexity’s practices divert revenue from original publishers by discouraging users from visiting full articles, harming the financial interests of news outlets like The Wall Street Journal and the New York Post.

Musk discusses XRP and crypto’s potential at Pittsburgh event

Speaking at a town hall in Pittsburgh, Elon Musk discussed the potential of cryptocurrency in protecting individual freedom, although he did not explicitly endorse XRP. He emphasised the importance of cryptocurrencies in resisting centralised control, a statement welcomed by XRP supporters amid Ripple’s ongoing legal issues with the SEC.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 18-25 October 2024

Experts predict this growing institutional demand could push Bitcoin’s price beyond $100,000 by early 2025, despite anticipated short-term volatility.

The US Justice Department’s new rules could affect companies like TikTok, which may face penalties if they transfer sensitive data to foreign parent companies.

The tech war with China will intensify no matter the US election outcome.

Google argues that allowing greater competition on its Play Store could harm the company and introduce security risks, and it is appealing the 9th US Circuit Court of Appeals decision.

Perplexity AI faces legal action over claims it bypasses traditional search engines, using copyrighted material to generate summaries and answers without permission from publishers.

While not directly endorsing XRP, Elon Musk underscored the importance of digital currencies in resisting centralised control.

Microsoft’s new AI agents, distinct from chatbots, can handle tasks such as client inquiries and sales lead identification with little human intervention.

The exact version of xAI’s model is still being determined, but it is part of the company’s strategy to rival major AI players like OpenAI and Anthropic.

Other media entities, including Wired and Forbes, have similarly accused Perplexity of content scraping and plagiarism.

A judge has paused Google’s Play Store overhaul to allow more time for an appeal.



Reading corner

dig.watch

This summer, the UN finalised a draft of its first international convention against cybercrime, raising questions about how it will coexist with the long-standing Budapest Convention, and in this analysis,…

www.diplomacy.edu

The book “231 Shades of Diplomacy” catalogues an extensive array of diplomatic types, revealing a significant expansion in terminology, particularly in the digital age. While phrases like “cyber diplomacy” and “Facebook diplomacy” illustrate this evolution, respect for diplomacy itself appears to be diminishing. Despite its growing prevalence in discourse, the concept of diplomacy often fails to receive the acknowledgement it deserves, overshadowed by military power and simplistic national narratives. The author advocates a reevaluation of diplomacy’s role and of the courage inherent in its practice, essential for fostering societal solutions and recognising the importance of compromise.

www.diplomacy.edu

How can the UN ensure the impartiality of its AI platform? As the UN celebrates its 79th birthday on October 24, it faces many familiar and new challenges.

www.diplomacy.edu

What are the key steps in building chatbots for diplomacy and governance? Dr Anita Lamprecht writes about the essential tools to craft effective AI solutions tailored for diplomatic contexts.

DW Weekly #182 – 18 October 2024

Dear readers, 

In recent years, as technological advances have placed growing demands on energy supply, sustainable development has become a mainstream topic for governments and industries seeking to balance growth with environmental responsibility. At the centre of this discussion are AI and the energy sector, where innovative solutions are emerging to meet the ever-growing demand for power driven by the rapid evolution of AI. Tech giants, which rely heavily on a continuous energy supply to fuel data centres and AI-driven technologies, are now at the forefront of the push toward cleaner, more sustainable energy sources.

This Big Tech race for energy sources powerful enough to meet the growing demands of AI-powered data centres has prompted Google to sign the world’s first corporate agreement to purchase nuclear energy. Under the deal with Kairos Power, Google will source energy from small modular reactors (SMRs), which Kairos is to deploy once the project is approved by the US Nuclear Regulatory Commission (NRC) and local agencies. Kairos already reached a key milestone last year by obtaining a construction permit to build a demonstration reactor in Tennessee, signalling progress toward deploying SMRs.

Smaller and potentially safer than traditional nuclear reactors, SMRs offer a new frontier in clean energy, particularly for industries like tech that require a constant, reliable energy supply. The agreement is poised to bring 500 MW of carbon-free power to US grids by 2030, a substantial contribution to decarbonising electricity systems while directly supporting the growing power needs of AI technologies.

However, Google is not alone in pursuing cleaner, more sustainable energy sources. In September, Microsoft signed a similar agreement with the Three Mile Island energy plant to secure energy for its data centres. The plant, infamous for the worst nuclear accident in US history, is preparing to reopen under a 20-year deal through which Microsoft will purchase power from the facility. It is scheduled to restart in 2028 following upgrades and will supply clean energy to support Microsoft’s growing data centres, especially those focused on AI.

Another tech giant, Amazon, is also moving towards nuclear power, having signed three agreements to develop SMRs to address the growing demand for electricity from its data centres. In collaboration with X-Energy, Amazon will fund a feasibility study for an SMR project near an Energy Northwest site in Washington state, positioning itself at the forefront of the shift toward cleaner energy sources. The deal allows Amazon to purchase power from four SMR modules, with the potential for up to eight additional modules capable of producing enough energy to power more than 770,000 homes.

Furthermore, beyond ensuring a reliable power supply for tech companies, these initiatives reshape the energy landscape by fostering innovation and economic growth. The US Department of Energy has highlighted the financial benefits of nuclear power, citing its potential to generate high-paying, long-term jobs and stimulate local economies. With advanced nuclear reactors estimated to create hundreds of thousands of jobs by 2050, the tech sector’s investments in nuclear energy could also contribute to a broader economic transformation.

By backing cutting-edge nuclear technologies and other clean energy solutions, companies like Google, Microsoft, and Amazon have set a precedent for how industries can align economic growth with environmental responsibility.

In other news…

Big Tech’s AI models fall short of new EU AI Act’s standards

A recent evaluation of top AI models by Swiss startup LatticeFlow has uncovered significant gaps in compliance with the upcoming EU AI Act, particularly in cybersecurity and bias prevention. While some models, like Anthropic’s Claude 3 Opus, scored highly across tests, others, such as OpenAI’s GPT-3.5 Turbo and Alibaba’s Qwen1.5 72B Chat, struggled, revealing weaknesses in preventing discriminatory outputs.

Australia and the social media ban for younger users

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about the potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 11-18 October 2024

Prominent AI models fail to meet EU regulations, particularly on cybersecurity resilience and non-discriminatory output.

Russia is deploying AI-powered drones in its ongoing conflict with Ukraine. Defence Minister Andrei Belousov confirmed the deployment of advanced drone units and highlighted plans for further expansion.

The US remains China’s third-largest trading partner, emphasising the importance of ongoing collaboration amid global competition.

Google argues that allowing greater competition on its Play Store could harm the company and introduce security risks, and it is appealing the 9th US Circuit Court of Appeals decision.

Businesses are anxious over delayed EU cybersecurity regulations.

These algorithms are crucial to the security of advanced encryption standards, including AES-256, which is widely used in banking and cryptocurrency.

Tesla reveals the Cybercab, aiming for production by 2026, as it moves towards autonomous vehicles.

Hundreds of TikTok employees are facing layoffs as the company moves towards automated moderation.

The job cuts at Meta are part of an effort to reallocate resources and align with the company’s long-term strategic goals.

MiCA is expected to become a global benchmark, encouraging other jurisdictions to align their regulatory frameworks for cross-border compatibility.



Reading corner

Revolutionising medicine with AI: from early detection to precision care
dig.watch

AI is transforming medicine by enabling early disease detection, improving diagnostics, and personalising care.

www.diplomacy.edu

Diplomacy is undergoing a significant transformation in the age of artificial intelligence. Rather than becoming obsolete, it is poised to thrive—and here’s why.

www.diplomacy.edu

Week 2 of the AI Apprenticeship course: While it processes data and evolves with us, AI still lacks the human ability to grasp context and meaning. Will AI always be an apprentice, or can it truly master understanding?

Upcoming

www.diplomacy.edu

UN Cybercrime Convention: What does it mean and how will it impact all of us? Once formally adopted, how will the UN cybercrime convention impact the security