UAE Ministry of Interior uses AI and modern laws to fight crime

The UAE Ministry of Interior states that AI, surveillance, and modern laws are key to fighting crime. Offences are economic, traditional, or cyber, with data tools and legal updates improving investigations. Cybercrime is on the rise as digital technology expands.

Current measures include AI monitoring, intelligent surveillance, and new laws. Economic crimes like fraud and tax evasion are addressed through analytics and banking cooperation. Cross-border cases and digital evidence tampering continue to be significant challenges.

Traditional crimes, such as theft and assault, are addressed through cameras, patrols, and awareness drives. Some offences persist in remote or crowded areas. Technology and global cooperation have improved results in several categories.

UAE officials warn that AI and the Internet of Things will lead to more sophisticated cyberattacks. Future risks include evolving criminal tactics, privacy threats, skills shortages, and balancing security with individual rights.

Opportunities include AI-powered security, stronger global ties, and better cybersecurity. Dubai Police have launched a bilingual platform to educate the public, viewing awareness as the first defence against online threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado’s AI law under review amid budget crisis

Colorado lawmakers face a dual challenge as they return to the State Capitol on 21 August for a special session: closing a $1.2 billion budget shortfall and revisiting a pioneering yet controversial law regulating AI.

Senate Bill 24-205, signed into law in May 2024, aims to reduce bias in AI decision-making affecting areas such as lending, insurance, education, and healthcare. While not due for implementation until February 2026, critics and supporters now expect that deadline to be extended.

Representative Brianna Titone, one of the bill’s sponsors, emphasised the importance of transparency and consumer safeguards, warning of the risks associated with unregulated AI. However, unexpected costs have emerged. State agencies estimate implementation could cost up to $5 million, a far cry from the bill’s original fiscal note.

Governor Jared Polis has called for amendments to prevent excessive financial and administrative burdens on state agencies and businesses. The Judicial Department now expects costs to double from initial projections, requiring supplementary budget requests.

Industry concerns centre on data-sharing requirements and vague regulatory definitions. Critics argue the law could erode competitive advantage and stall innovation in the United States. Developers are urging clarity and more time before compliance is enforced.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple pledges $100 billion more to boost US chip production

Apple is increasing its domestic investment by an additional $100 billion, bringing its total commitment to US manufacturing to $600 billion over the next four years.

The announcement was made by CEO Tim Cook during a joint appearance with President Donald Trump at the White House, as the administration signals plans to impose steep tariffs on foreign-made semiconductors.

The investment includes a new American Manufacturing Program aimed at expanding US production of key Apple components, such as AI servers and rare earth magnets. Facilities are already under development in states including Texas, Kentucky, and Arizona.

Apple says the initiative will support 450,000 jobs across all 50 states and reduce reliance on overseas supply chains.

Apple’s expanded spending arrives amid criticism of its slow progress in AI. With its ‘Apple Intelligence’ software struggling for traction, and the recent departure of foundation model head Rouming Pang to Meta, the company is now shifting focus.

Cook confirmed that investment in AI infrastructure is accelerating, with data centres expanding in five states.

While Apple’s move has drawn praise for supporting American jobs, it has also stirred controversy. Some users expressed discontent with Cook’s public alignment with Trump, despite the strategic importance of avoiding tariffs.

Trump stated that companies investing in the US would not face the proposed import charges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China warns over biometric data risks linked to crypto schemes

China’s Ministry of State Security has warned of foreign attempts to collect sensitive biometric data via crypto schemes, saying foreign agents are illegally harvesting iris scans and facial data in ways that endanger personal privacy and national security.

The advisory noted recent cases in which foreign intelligence services exploited biometric technologies to spy on individuals within China. In some schemes, cryptocurrency rewards incentivised people worldwide to submit iris scans, which were then transferred overseas.

Although no specific companies were named, the description resembled the approach of the crypto firm World, formerly known as Worldcoin.

Biometric identification methods have proliferated across many sectors due to their accuracy and convenience. However, the ministry stressed the vulnerability of such systems to data breaches and misuse.

Iris patterns, unique and challenging to replicate, are prized by malicious actors.

Citizens are urged to remain cautious, carefully review privacy policies, and question how their biometric information is handled.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan all private messages on encrypted apps. If adopted, apps like WhatsApp, Signal, and Telegram would be required from October to scan all messages, photos, and videos on users’ devices before encryption.

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned of societal collapse in France due to censorship and regulatory pressure. He disclosed attempts by French officials to censor political content on his platform, which he refused to comply with.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp shuts down 6.8 million scam accounts

As part of its anti-scam efforts, WhatsApp has removed 6.8 million accounts linked to fraudulent activity, according to its parent company, Meta.

The crackdown follows the discovery that organised criminal groups are operating scam centres across Southeast Asia, hacking WhatsApp accounts or adding users to group chats to lure victims into fake investment schemes and other types of fraud.

In one case, WhatsApp, Meta, and OpenAI collaborated to disrupt a Cambodian cybercrime group that used ChatGPT to generate fake instructions for a rent-a-scooter pyramid scheme.

Victims were enticed with offers of cash for social media engagement before being moved to private chats and pressured to make upfront payments via cryptocurrency platforms.

Meta warned that these scams often stem from well-organised networks in Southeast Asia, some exploiting forced labour. Authorities continue to urge the public to remain vigilant, enable features such as WhatsApp’s two-step verification, and be wary of suspicious or unsolicited messages.

These scams have also drawn political attention in the USA: US Senator Maggie Hassan has urged SpaceX CEO Elon Musk to act against transnational criminal groups in Southeast Asia that use Starlink satellite internet to run massive online fraud schemes targeting Americans.

Despite SpaceX’s policies allowing service termination for fraud, Starlink remains active in regions where these scams, often linked to forced labour and human trafficking, operate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese nationals accused of bypassing US export controls on AI chips

Two Chinese nationals have been charged in the US with illegally exporting millions of dollars’ worth of advanced Nvidia AI chips to China, in violation of US export controls.

The Department of Justice (DOJ) said Chuan Geng and Shiwei Yang operated California-based ALX Solutions, which allegedly shipped restricted hardware without the required licences over the past three years.

The DOJ claims that the company exported Nvidia’s H100 and GeForce RTX 4090 graphics processing units to China via transit hubs in Singapore and Malaysia, concealing their ultimate destination.

Payments for the shipments allegedly came from firms in Hong Kong and mainland China, including a $1 million transfer in January 2024.

Court documents state that ALX falsely declared shipments to Singapore-based customers, but US export control officers could not confirm the deliveries.

One 2023 invoice for over $28 million reportedly misrepresented the buyer’s identity. Neither Geng nor Yang had sought export licences from the US Commerce Department.

Yang was arrested on Saturday, and Geng surrendered soon after. Both appeared in a Los Angeles federal court on Monday and could face up to 20 years in prison if convicted.

Nvidia and Super Micro, a supplier, said they comply with all export regulations and will cooperate with authorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

One infamous case saw an AI-powered chatbot suggest drug use to a fictional recovering addict, a warning signal, experts say, of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X challenges India’s expanded social media censorship in court

Tensions have escalated between Elon Musk’s social media platform, X, and the Indian government over extensive online content censorship measures.

Triggered by a seemingly harmless post describing a senior politician as ‘useless,’ the incident quickly spiralled into a significant legal confrontation.

X has accused Prime Minister Narendra Modi’s administration of overstepping constitutional bounds by empowering numerous government bodies to issue content-removal orders, significantly expanding the scope of India’s digital censorship.

At the heart of the dispute lies India’s increased social media content regulation since 2023, including launching the Sahyog platform, a centralised portal facilitating direct content-removal orders from officials to tech firms.

X refused to participate in Sahyog, labelling it a ‘censorship portal’, and subsequently filed a lawsuit in the Karnataka High Court earlier this year, contesting the legality of India’s directives and of the Sahyog portal itself, which it claims undermine free speech.

Indian authorities justify their intensified oversight by pointing to the need to control misinformation, safeguard national security, and prevent societal discord. They argue that the measures have broad support within the tech community. Indeed, major players like Google and Meta have reportedly complied without public protest, though both companies have declined to comment on their stance.

However, the court documents reveal that the scope of India’s censorship requests extends far beyond misinformation.

Authorities have reportedly targeted satirical cartoons depicting politicians unfavourably, criticism regarding government preparedness for natural disasters, and even media coverage of serious public incidents like a deadly stampede at a railway station.

While Musk and Prime Minister Modi maintain an outwardly amicable relationship, the conflict presents significant implications for X’s operations in India, one of its largest user bases.

Musk, a self-proclaimed free speech advocate, finds himself at a critical juncture, navigating between principles and the imperative to expand his business ventures within India’s substantial market.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
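The robots.txt mechanism is purely advisory: it only works when a crawler voluntarily checks the rules before fetching a page. A minimal sketch using Python’s standard `urllib.robotparser` shows how a well-behaved crawler would honour such rules (the rules below are illustrative, not Cloudflare’s or Perplexity’s actual configuration):

```python
from urllib import robotparser

# Hypothetical robots.txt: block one named AI crawler, allow everyone else.
rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The named crawler is refused; a generic browser-style agent is permitted.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))    # True
```

Because compliance is voluntary, a crawler that changes its user agent string, as Cloudflare alleges, simply matches the permissive `*` group instead of the block aimed at it, which is why site owners increasingly pair robots.txt with server-side bot detection.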

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!