EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI, aligned with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.


AI-generated photo falsely claims to show a downed Israeli jet

Following Iranian state media claims that its forces shot down two Israeli fighter jets, an image circulated online falsely purporting to show the wreckage of an F-35.

The photo, which shows a large jet crash-landing in a desert, quickly spread across platforms like Threads and South Korean forums, including Aagag and Ruliweb. An Israeli official dismissed the shootdown claim as ‘fake news’.

The image’s caption in Korean read: ‘The F-35 shot down by Iran. Much bigger than I thought.’ However, a detailed AFP analysis found the photo contained several hallmarks of AI generation.

People near the aircraft appear the same size as buses, and one vehicle appears to merge with the road — visual anomalies common in synthetic images.

In addition to size distortions, the aircraft’s markings did not match those used on actual Israeli F-35s. Lockheed Martin specifications confirm the F-35 is just under 16 metres long, unlike the oversized version shown in the image.

Furthermore, the wing insignia in the image differed from the Israeli Air Force’s authentic emblem.

Amid escalating tensions between Iran and Israel, such misinformation continues to spread rapidly. Although AI-generated content is becoming more sophisticated, inconsistencies in scale, symbols, and composition remain key indicators of digital fabrication.


France 24 partners with Mediagenix to streamline on-demand programming

Mediagenix has entered a collaboration with French international broadcaster France 24, operated by France Médias Monde, to support its content scheduling modernisation programme.

As part of the upgrade, France 24 will adopt Mediagenix’s AI-powered, cloud-based scheduling solution to manage content across its on-demand platforms. The system promises improved operational flexibility, enabling rapid adjustments to programming in response to major events and shifting editorial priorities.

Pamela David, Engineering Manager for TV and Systems Integration at France Médias Monde, said: ‘This partnership with Mediagenix is a critical part of equipping our France 24 channels with the best scheduling and content management solutions.’

‘The system gives our staff the ultimate flexibility to adjust schedules as major events happen and react to changing news priorities.’

Françoise Semin, Chief Commercial Officer at Mediagenix, added: ‘France Médias Monde is a truly global broadcaster. We are delighted to support France 24’s evolving scheduling needs with our award-winning solution.’

Training for France 24 staff will be provided by Lapins Bleus Formation, based in Paris, ahead of the system’s planned rollout next year.


Viper Technology sponsors rising AI talent for IOAI 2025 in China

Pakistani student Muhammad Ayan Abdullah has been selected to represent the country at the prestigious International Olympiad in Artificial Intelligence (IOAI), set to take place in Beijing, China, from 2–9 August 2025.

To support his journey, Viper Technology—a leading Pakistani IT hardware manufacturer—has partnered with the Punjab Information Technology Board (PITB) to provide Ayan with its flagship ‘PLUTO AI PC’.

Built locally for advanced AI and machine learning workloads, the high-performance computer reflects Viper’s mission to promote homegrown innovation and empower young tech talent on global platforms.

‘This is part of our commitment to backing the next generation of technology leaders,’ said Faisal Sheikh, Co-Founder and COO of Viper Technology. ‘We are honoured to support Muhammad Ayan Abdullah and showcase the strength of Pakistani talent and hardware.’

The PLUTO AI PC, developed and assembled in Pakistan, is a key part of Viper’s latest AI-focused product line—marking the country’s growing presence in competitive, global technology arenas.


Sam Altman claims OpenAI team rejecting Meta’s mega offers

Meta is intensifying efforts to recruit AI talent from OpenAI by offering signing bonuses worth up to $100 million and multi-million-dollar annual salaries. However, OpenAI CEO Sam Altman claims none of the company’s top researchers have accepted the offers.

Speaking on the Uncapped podcast, Altman said Meta had approached his team with ‘giant offers’, but OpenAI’s researchers stayed loyal, believing the company has a better chance of achieving superintelligence—AI that surpasses human capabilities.

OpenAI, where the average employee reportedly earns around $1.13 million a year, fosters a mission-driven culture focused on building AI for the benefit of humanity, Altman said.

Meta, meanwhile, is assembling a 50-person Superintelligence Lab, with CEO Mark Zuckerberg personally overseeing recruitment. Bloomberg reported that offers from Meta have reached seven to nine figures in total compensation.

Despite the aggressive approach, Meta appears to be losing some of its own researchers to rivals. VC principal Deedy Das recently said Meta lost three AI researchers to OpenAI and Anthropic, even after offering over $2 million annually.

In a bid to acquire more talent, Meta has also invested $14.3 billion in Scale AI, securing a 49% stake and bringing CEO Alexandr Wang into its Superintelligence Lab leadership.

Meta says its AI assistant now reaches one billion monthly users, while OpenAI reports 500 million weekly active users globally.


IBM combines watsonx and Guardium to tackle AI compliance

IBM has unveiled new software capabilities that integrate AI security and governance, claiming the industry’s first unified solution to manage the risks of agentic AI.

The enhancements merge IBM’s watsonx.governance platform—which supports oversight, transparency, and lifecycle management of AI systems—with Guardium AI Security, a tool built to protect AI models, data, and operational usage.

By unifying these tools, IBM’s solution offers enterprises the ability to oversee both governance and security across AI deployments from a single interface. It also supports compliance with 12 major frameworks, including the EU AI Act and ISO 42001.

The launch aims to address growing concerns around AI safety, regulation, and accountability as businesses scale AI-driven operations.


MIT study links AI chatbot use to reduced brain activity and learning

A new preprint study from MIT has revealed that using AI chatbots for writing tasks significantly reduces brain activity and impairs memory retention.

The research, led by Dr Nataliya Kosmyna at the MIT Media Lab, involved Boston-area students writing essays under three conditions: unaided, using a search engine, or assisted by OpenAI’s GPT-4o. Participants wore EEG headsets to monitor brain activity throughout.

Results indicated that those relying on AI exhibited the weakest neural connectivity, with up to 55% lower cognitive engagement than the unaided group. Those using search engines showed a moderate drop of up to 48%.

The researchers used Dynamic Directed Transfer Function (dDTF) to assess cognitive load and information flow across brain regions. They found that while the unaided group activated broad neural networks, AI users primarily engaged in procedural tasks with shallow encoding of information.

Participants using GPT-4o also performed worst in recall and perceived ownership of their written work. In follow-up sessions, students previously reliant on AI struggled more when the tool was removed, suggesting diminished internal processing skills.

Meanwhile, those who used their own cognitive skills earlier showed improved performance when later given AI support.

The findings suggest that early AI use in education may hinder deeper learning and critical thinking. Researchers recommend that students first engage in self-driven learning before incorporating AI tools to enhance understanding.

Dr Kosmyna emphasised that while the results are preliminary and not yet peer-reviewed, the study highlights the need for careful consideration of AI’s cognitive impact.

MIT’s team now plans to explore similar effects in coding tasks, studying how AI tools like code generators influence brain function and learning outcomes.


AI diplomacy enters the spotlight with Gulf region partnerships

In a groundbreaking shift in global diplomacy, recent US-brokered AI partnerships in the Gulf region have propelled AI to the centre of international strategy. As highlighted by Slobodan Kovrlija, this development transforms the Gulf into a key AI hub, alongside the US and China.

Countries like Saudi Arabia, the UAE, and Qatar are investing heavily in AI infrastructure—from quantum computing to sprawling data centres—as part of a calculated effort to integrate more deeply into a US-led technological sphere and counter China’s Digital Silk Road ambitions. That movement is already reshaping global dynamics.

China is racing to deepen its AI alliances with developing nations, while Russia is leveraging the expanded BRICS bloc to build alternative AI systems and promote its AI Code of Ethics. On the other hand, Europe is stepping up efforts to internationalise its ‘human-centric AI’ regulatory approach under the EU AI Act.

These divergent paths underscore how AI capabilities are now as essential to diplomacy as traditional military or economic tools, forming emerging ‘AI blocs’ that may redefine geopolitics for decades. Kovrlija emphasises that AI diplomacy is no longer a theoretical concept but a practical necessity.

Being a technological front-runner now means possessing enhanced diplomatic influence, with partnerships based on AI potentially replacing older alliance models. However, this new terrain also presents serious challenges, such as ensuring ethical standards, data privacy, and equitable access. The Gulf deals, while strategic, also open a space for joint efforts in responsible AI governance.

Why does it matter?

As the era of AI diplomacy dawns, institutions like Diplo are stepping in to prepare diplomats for this rapidly evolving landscape. Kovrlija concludes that understanding and engaging with AI diplomacy is now essential for any nation wishing to maintain its relevance and influence in global affairs.


MSU launches first robotics and AI degree programs in Minnesota

Minnesota State University is set to break new ground this fall by launching two pioneering academic programs in robotics and AI. The university will introduce the state’s only undergraduate robotics engineering degree and the first graduate-level AI program within the Minnesota State system.

With these offerings, MSU aims to meet the fast-growing industry demand for skilled professionals in these cutting-edge fields. The programs have already drawn significant interest, with 13 students applying for the AI master’s and more expected in both tracks.

MSU officials say the curriculum combines strong theoretical foundations with hands-on learning to prepare students for careers in sectors like agriculture, healthcare, finance, construction, and manufacturing. Students will engage in real-world projects, building and deploying AI and robotics solutions while exploring ethical and societal implications.

University leaders emphasise that these programs are tailored to the needs of Minnesota’s economy, including its high concentration of Fortune 500 companies and a growing base of smaller firms eager to adopt AI technologies. Robotics also enjoys strong interest at the high school level, and MSU hopes to offer local students an in-state option for further study, competing with institutions in neighbouring states.

Why does it matter?

According to faculty, graduates of these programs will be well-positioned in the job market. The university sees the initiative as part of its broader mission to deliver education aligned with emerging technological trends and societal needs, ensuring Minnesota’s workforce remains competitive in an increasingly automated and AI-driven world.
