In his op-ed, ‘From Hammurabi to ChatGPT’, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle of legal accountability in modern AI regulation and governance. Dating back 4,000 years, Hammurabi’s Code established that builders were responsible for damages caused by their work—a principle Kurbalija believes should apply to AI developers, deployers, and beneficiaries today.
While this may seem like common sense, current legal frameworks, particularly Section 230 of the 1996 US Communications Decency Act, have created a loophole. The provision, designed to protect early internet platforms, grants them immunity for user-generated content, allowing AI companies today to evade responsibility for harms such as deepfakes, fraud, and cybercrime. This legal anomaly complicates global AI governance and digital diplomacy efforts, as inconsistent accountability standards hinder international cooperation.
Kurbalija emphasises that existing legal rules—applied by courts, as seen in internet regulation—should suffice for AI governance. New AI-specific rules should only be introduced in exceptional cases, such as when addressing apparent legal gaps, similar to how cybercrime and data protection laws emerged in the internet era.
He concludes that AI, like a hammer, is ultimately a tool—albeit a powerful one. Legal responsibility must lie with humans, not machines. By discarding the immunity shield of Section 230 and reaffirming principles of accountability, transparency, and justice, policymakers can draw on 4,000 years of legal wisdom to govern AI effectively. That approach strengthens AI governance and advances digital diplomacy by creating a foundation for global norms and cooperation in the digital age.
For more information on these topics, visit diplomacy.edu.
The Italian government is under increasing pressure to explain its links to Israeli spyware firm Paragon, following reports that the company severed ties with Rome over allegations of misuse. The controversy erupted after WhatsApp revealed that Paragon spyware had been used to target multiple users, including a journalist and a human rights activist critical of Prime Minister Giorgia Meloni.
While the government has confirmed that seven people in Italy were affected, it denies any involvement in the hacking and has called for an investigation. However, reports from The Guardian and Haaretz claim Paragon cut ties with Italy due to doubts over the government’s denial. Opposition politicians have demanded clarity, with former Prime Minister Matteo Renzi insisting that those responsible be held accountable.
Deputy Prime Minister Matteo Salvini initially suggested that internal disputes within the intelligence services might be behind the scandal, though he later retracted his comment, claiming he was referring to unrelated cases. Meanwhile, critics argue that the government cannot ignore the growing concerns over the potential misuse of surveillance tools against political opponents.
With mounting calls for transparency, the affair has intensified debate over government accountability and digital surveillance, raising broader questions about the ethical use of spyware within democratic nations.
French prosecutors have launched an investigation into X, formerly known as Twitter, over alleged algorithmic bias. The probe was initiated after a lawmaker raised concerns that biased algorithms on the platform may have distorted automated data processing. The Paris prosecutor’s office confirmed that cybercrime specialists are analysing the issue and conducting technical checks.
The investigation comes just days before a major AI summit in Paris, where global leaders and tech executives from companies like Microsoft and Alphabet will gather. X has not responded to requests for comment. The case highlights growing scrutiny of the platform, which has been criticised for its role in shaping political discourse. Elon Musk’s vocal support for right-wing parties in Europe has raised fears of foreign interference.
France’s J3 cybercrime unit, which is leading the investigation, has previously targeted major tech platforms, including Telegram. Last year, it played a key role in the arrest of Telegram’s founder and pressured the platform to remove illegal content. X has also faced legal challenges in other countries, including Brazil, where it was temporarily blocked for failing to curb misinformation.
Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.
Testing of 150 gambling websites revealed that 52 automatically transmitted user data to Meta, including large brands like Hollywoodbets, Sporting Index, and Bet442. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites shortly after visiting these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.
The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.
The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.
PlayStation Plus subscribers will receive an automatic five-day extension after a global outage disrupted the PlayStation Network for around 18 hours on Friday and Saturday. Sony confirmed on Sunday that network services had been fully restored and apologised for the inconvenience but did not specify the cause of the disruption.
The outage, which started late on Friday, left users unable to sign in, play online games or access the PlayStation Store. By Saturday evening, Sony announced that services were back online. At its peak, Downdetector.com recorded nearly 8,000 affected users in the US and over 7,300 in the UK.
PlayStation Network plays a vital role in Sony’s gaming division, supporting millions of users worldwide. Previous disruptions have been more severe, including a cyberattack in 2014 that shut down services for several days and a major 2011 data breach affecting 77 million users, leading to a month-long shutdown and regulatory scrutiny.
South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of excessively collecting personal data and using it for training purposes. The agency warned government bodies last week to take security measures, highlighting that unlike other AI services, DeepSeek collects sensitive data such as keyboard input patterns and transfers it to Chinese servers. Some South Korean government ministries have already blocked access to the app due to these security concerns.
The NIS also pointed out that DeepSeek grants advertisers unrestricted access to user data and stores South Korean users’ data in China, where it could be accessed by the Chinese government under local laws. The agency further noted discrepancies in the app’s responses to sensitive questions, such as the origin of kimchi, which DeepSeek claimed was Chinese when asked in Chinese, but Korean when asked in Korean.
DeepSeek has also been accused of censoring political topics, such as the 1989 Tiananmen Square crackdown, prompting the app to suggest changing the subject. In response to these concerns, China’s foreign ministry stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. DeepSeek has not yet commented on the allegations.
The Central African Republic made waves on 10 February by announcing the launch of its meme coin, CAR. The news came directly from President Faustin-Archange Touadéra’s official X account, presenting the token as an experiment to unite people and boost national development. The meme coin, launched on the Solana-based Pump.fun platform, saw its value surge rapidly as traders rushed to invest in what was described as the first-ever national meme coin.
However, excitement soon turned to scepticism. AI detection tools flagged the president’s announcement video as potentially AI-generated, raising concerns about its authenticity. The project’s official X account was swiftly suspended, and further scrutiny revealed that its domain had been registered just days before the announcement using Namecheap, a budget-friendly provider. Shortly after, Namecheap took the website offline, citing it as an ‘abusive service.’
Despite these red flags, the CAR token initially reached a peak valuation of $527 million before dropping to $460 million. The controversy comes amid a rise in fraudulent meme coin launches, with recent cases involving hacked X accounts of high-profile figures. While there is still no clear confirmation on whether CAR is an official government-backed initiative or an elaborate scam, the crypto community remains on high alert.
South Korea has temporarily blocked employee access to Chinese AI startup DeepSeek over security concerns. A government notice urged ministries and agencies to exercise caution when using AI services, including DeepSeek and ChatGPT. Korea Hydro & Nuclear Power, the defence ministry, and the foreign ministry have all imposed restrictions on DeepSeek access.
Australia and Taiwan have already banned DeepSeek from government devices, citing security risks. Italy previously ordered the company to block its chatbot over privacy concerns. Authorities in the US, India, and parts of Europe are also reviewing the implications of using the AI service. South Korea’s privacy watchdog plans to question DeepSeek on its handling of user data.
Korean businesses are also tightening restrictions on generative AI. Kakao Corp advised employees to avoid using DeepSeek, despite its recent partnership with OpenAI. SK Hynix has limited access to generative AI services, and Naver has asked employees not to use AI tools that store data externally.
DeepSeek has not yet responded to requests for comment. The company’s latest AI models, released last month, have drawn attention for their capabilities and cost efficiency. However, growing security concerns are leading governments and corporations to impose stricter controls on their use.
OpenAI is set to air its first-ever television advert during the upcoming Super Bowl, marking its entry into commercial advertising. The Wall Street Journal reported that the AI company will join other major tech firms in leveraging the massive Super Bowl audience to promote its brand. Google previously used the event to highlight its AI capabilities.
The Super Bowl is one of the most sought-after advertising platforms, with high costs reflecting its enormous reach. A 30-second slot for the 2025 game has sold for up to $8 million, an increase from $7 million last year.
The 2024 Super Bowl attracted an estimated 210 million viewers, and this year’s event will take place in New Orleans on 9 February at the Caesars Superdome.
OpenAI has seen rapid growth since launching ChatGPT in 2022, reaching over 300 million weekly active users. The company is in talks to raise up to $40 billion at a $300 billion valuation and recently appointed Kate Rouch as its first chief marketing officer. Microsoft holds a significant stake in the AI firm.
Luca Casarini, a prominent Italian migrant rescue activist, was warned by Meta that his phone had been targeted with spyware. The alert was received through WhatsApp on the same day Meta accused surveillance firm Paragon Solutions of using advanced hacking methods to steal user data. Paragon, reportedly American-owned, has not responded to the allegations.
Casarini, who co-founded the Mediterranea Saving Humans charity, has faced legal action in Italy over his rescue work. He has also been a target of anti-migrant media and previously had his communications intercepted in a case related to alleged illegal immigration. He remains unaware of who attempted to hack his device or whether the attack had judicial approval.
The revelation follows a similar warning issued to Italian journalist Francesco Cancellato, whose investigative news outlet, Fanpage, recently exposed far-right sympathies within Prime Minister Giorgia Meloni’s political youth wing. Italy’s interior ministry has yet to comment on the situation.