The European Union has finalised its AI Act, a significant regulatory framework aimed at governing the use of AI within its member states. Published in the EU’s Official Journal, the law will officially come into effect on 1 August, with a phased implementation set to unfold over the next several years to accommodate various deadlines and compliance requirements. By mid-2026, most provisions are expected to be fully applicable.
Under the AI Act, different obligations are imposed on AI developers based on the perceived risk of their applications. Low-risk uses of AI will generally remain unregulated, while high-risk applications—such as biometric uses in law enforcement and critical infrastructure—will face stringent requirements around data quality and anti-bias measures. The law also introduces transparency requirements for developers of general-purpose AI models, like OpenAI’s GPT, ensuring that the most powerful AI systems undergo systemic risk assessments.
The phased approach begins with a list of prohibited AI uses becoming effective six months after the law’s enactment in early 2025. That includes bans on practices such as social credit scoring and unrestricted compilation of facial recognition databases. Subsequently, codes of practice for AI developers will be implemented nine months after the law takes effect to guide compliance with the new regulations. Concerns have been raised about the influence of AI industry players in shaping these guidelines, prompting efforts to ensure an inclusive drafting process overseen by the newly established EU AI Office.
By August 2025, transparency requirements will apply to general-purpose AI models, while some high-risk AI systems will have until 2027 to comply. These measures reflect the EU’s proactive stance in balancing innovation with robust regulation to foster a competitive AI landscape while safeguarding societal values and interests.
Representative Cathy McMorris Rodgers stated that intelligence officials at the March hearing warned of dangers from foreign-controlled apps like TikTok, which could misuse American data. Despite the law, China has shown no intention of relinquishing control over such applications, suggesting they could still be used against Americans.
TikTok criticised the legislative process, claiming it was secretive and rushed. The Justice Department is set to respond to the legal challenges by 26 July, with a court hearing scheduled for 16 September.
The courts halted a previous attempt to ban TikTok by former President Trump in 2020. The current efforts focus on national security concerns, citing the app’s extensive data collection and the risks posed by Chinese ownership.
Amazon Web Services (AWS) has announced AWS App Studio, a new generative AI service designed to enable financial institutions, fintech firms, and other organisations to create applications in minutes, a task that would typically take professional developers days.
Revealed at the AWS Summit New York, the service is intended for IT project managers, data engineers, and enterprise architects without software development skills, allowing them to quickly develop and manage internal apps using AWS.
Development resources for custom applications are often scarce, pushing users towards low-code tools, which can have a steep learning curve and may not meet security requirements. AWS App Studio addresses these issues by enabling users to describe the desired application, its functions, and the data sources it should integrate with. Users can make modifications through a point-and-click interface, guided by a generative AI-powered assistant.
AWS App Studio empowers individuals with some technical experience to build enterprise-grade applications without needing to write underlying code. The service generates an outline to verify the user’s intent, creating a multi-page UI, a data model, and business logic.
Dilip Kumar, vice president of applications at AWS, stated that AWS App Studio opens application development to a new set of builders, enhancing productivity for businesses of all sizes by allowing technical professionals to create custom applications tailored to their unique needs.
Cloudflare has revealed that the most active AI web crawler over the past year is Bytespider, operated by ByteDance, which uses it to gather training data for its AI models, including the ChatGPT rival Doubao. Amazonbot, which indexes content for Alexa, and ClaudeBot, which trains the Claude chatbot, rank second and third, respectively. OpenAI’s GPTBot comes in fourth place.
Interestingly, while Bytespider leads in requests and blocking frequency, GPTBot ranks second in both areas. Despite this, many website operators remain unaware of these popular AI crawlers visiting their sites.
Cloudflare’s analysis shows that only a small percentage of websites, around 2.98% of the top one million, take measures to block or challenge AI bot requests. This is despite the fact that more popular websites are both more frequently targeted by and more likely to block such crawlers.
The study also highlights that although many sites reference GPTBot, CCBot, and Google in their robots.txt files, they do not specifically disallow popular AI crawlers like Bytespider and ClaudeBot. The effectiveness of blocking relies on bot operators respecting these instructions.
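To illustrate the gap Cloudflare describes, the sketch below uses Python’s standard-library `urllib.robotparser` to check which AI crawlers a given robots.txt policy actually disallows. The policy text here is a hypothetical example of the pattern in question: it names GPTBot and CCBot but not Bytespider or ClaudeBot (the user-agent tokens are the ones these operators publish), and, as the article notes, even a correct policy only works if the bot operator chooses to honour it.

```python
# Sketch: check which AI crawlers a robots.txt policy disallows.
# The policy below is a hypothetical example; the user-agent tokens
# are the ones published by each crawler's operator.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

AI_CRAWLERS = ["GPTBot", "CCBot", "Bytespider", "ClaudeBot"]

def blocked_crawlers(robots_txt: str, url: str = "https://example.com/") -> list[str]:
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    # A crawler is "blocked" if the policy forbids it from fetching the URL.
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, url)]

print(blocked_crawlers(ROBOTS_TXT))
# Bytespider and ClaudeBot fall through to the catch-all rule and remain allowed.
```

Because robots.txt rules match on explicit user-agent tokens, a site that only lists GPTBot and CCBot leaves every unnamed crawler governed by the catch-all `User-agent: *` entry, which is exactly the oversight the study describes.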
Residents of Akishima city in western Tokyo are petitioning to block the construction of a large logistics and data centre by Singaporean developer GLP. Over 220 residents have expressed concerns that the centre would harm local wildlife, cause pollution, increase electricity usage, and deplete the city’s groundwater supply.
The group has filed a petition to review the urban planning process that approved GLP’s 3.63-million-megawatt data centre, which is estimated to emit around 1.8 million tons of carbon dioxide annually. They also worry that the project would require cutting down 3,000 of the 4,800 trees on the site, threatening the habitat of Eurasian goshawks and badgers.
The residents are considering arbitration to force GLP to reconsider its plans, with construction set to begin in February and completion expected by early 2029. The opposition comes amidst growing demand for data centres in Japan, where the market is projected to grow significantly over the next few years. GLP has declined to comment on the matter.
Samsung showcased its commitment to AI amidst internal challenges in South Korea, where workers are on an indefinite strike. Despite these issues, the tech giant presented AI integration across its consumer electronics at a Paris event, aiming to reinforce its global smartphone sales leadership.
At the presentation, Samsung executives focused on the deployment of AI applications, including Galaxy AI featured in their flagship S24 smartphone. They also announced plans to extend AI capabilities across all consumer products, from headphones to smartwatches and even connected rings.
TM Roh, head of Samsung’s mobile unit, highlighted their accelerated progress, aiming to bring Galaxy AI to 200 million devices by year-end, double their initial target. Samsung has invested heavily in AI, with more than a billion dollars allocated to its mobile unit alone.
Despite these advancements, analysts note AI’s current role in smartphones remains more about showcasing innovation than being a decisive factor in consumer choice. Samsung’s strategy includes focusing on premium markets with AI-driven innovations like their sixth-generation folding phones and health-related products, such as the Galaxy Ring set to launch later this month.
Amazon has announced significant updates to its AI technologies aimed at addressing hallucinations, a pervasive challenge hindering adoption across industries. Vasi Philomin, Amazon’s vice president of GenAI, highlighted enhancements including increased memory for GenAI agents. That upgrade promises more personalised and seamless user experiences, particularly for complex tasks.
The global AI market, projected to reach £909 billion by 2030, continues to attract substantial investments. GenAI revenues alone are forecasted to surge from £1.8 billion in 2022 to £33 billion by 2027, underlining its transformative impact on sectors like machine learning and computer vision.
In response to ongoing issues with misinformation and accuracy, Amazon has also refined its Bedrock service. The platform empowers businesses to integrate AI models into their applications, now bolstered with improved capabilities to detect and mitigate hallucinations effectively.
Matt Wood, vice president of AI products at Amazon Web Services, emphasised that these updates can reduce hallucinations by up to 75% in specific scenarios. The move comes amidst recent incidents, such as Google’s AI generating inaccurate responses, underscoring the critical need for robust AI technologies capable of ensuring reliability and trustworthiness.
Amazon’s commitment to advancing AI capabilities underscores its strategic efforts to address challenges in the evolving landscape of artificial intelligence, reinforcing its role as a leader in the industry.
Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, this new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, addressing the challenge of distinguishing real from fake content due to advanced generative AI tools.
Not all AI usage requires labelling; animated content, videos with obvious visual effects, or minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo’s AI tools, such as those that edit out long pauses, will also prompt labelling.
Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
Several Macau government websites were hacked, prompting a criminal investigation, Chinese state media reported on Wednesday. The hacked sites included those of the office of the secretary for security, the public security police, the fire services department, and the security forces services bureau, causing service disruptions.
Security officials in the Macau Special Administrative Region believe the cyberattack originated from overseas. However, no further details have been disclosed at this time.
In response, authorities collaborated with telecommunications operators to restore the affected services as quickly as possible. The investigation into the source of the intrusion is ongoing.
As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.
Recent data shows that deepfake technology is predominantly used for malicious purposes, with 98% of deepfake videos online being pornographic. The FBI has identified a rise in “sextortion schemes,” where altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but many victims are not celebrities and face immense challenges in seeking justice.
Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.