Amazon’s AI partnership with Anthropic cleared by UK regulator

The United Kingdom’s Competition and Markets Authority (CMA) has confirmed that Amazon’s $4 billion partnership with AI startup Anthropic will not be subject to a more in-depth investigation. The regulator determined that the deal did not raise competition concerns under Britain’s merger regulations.

Amazon expressed support for the CMA’s decision, which acknowledged that the regulator lacked jurisdiction over the collaboration. The CMA also cleared a similar partnership between Microsoft and Inflection AI, while a deal between Alphabet and Anthropic remains under review.

Anthropic, which was co-founded by siblings Dario and Daniela Amodei, former OpenAI executives, reiterated that its partnerships with major tech firms do not compromise its independence or governance. The startup has received billions in investments from several large companies.

Amid growing antitrust scrutiny of deals between startups and big tech firms, regulators are closely monitoring collaborations like those involving Anthropic and its partners.

US Commerce Department tightens AI chip exports to Middle East and Central Asia

The US Commerce Department has tightened export restrictions on advanced chip shipments to certain countries in the Middle East and Central Asia, reflecting heightened concerns over national security and potential misuse by adversarial nations. The policy requires US companies to obtain special licences to ship advanced AI chips and introduces a ‘Validated End User’ status for select data centres, allowing them to receive chips under a general authorisation.

The department also emphasises that any data centre seeking this status will undergo rigorous scrutiny, including inspections of business operations and cybersecurity measures, to ensure sensitive technology remains secure. In parallel with these export restrictions, the US Commerce Department is significantly increasing financial support for allies such as Israel, including a substantial technology funding package.

Critics contend that this dual approach raises pressing ethical concerns, particularly as this funding is perceived to enhance Israel’s military capabilities amidst ongoing conflicts in Lebanon and Gaza. The intersection of technology exports and military aid underscores a broader trend where economic advantages stemming from global conflicts align with national security interests.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.

Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
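To make the Act’s tiered logic concrete, the short Python sketch below is purely illustrative: the tier names follow the Act’s commonly cited risk categories, while the example systems and obligation summaries are simplified assumptions rather than legal text.

```python
# Illustrative sketch of the EU AI Act's risk-based tiers.
# Tier names follow the Act's widely cited categories; the example systems
# and obligation summaries are simplified for exposition, not legal text.
AI_ACT_RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligation": "prohibited outright",
    },
    "high": {
        "examples": ["medical diagnosis support", "recruitment screening"],
        "obligation": "strict requirements: risk management, data governance, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency duties, e.g. telling users they are interacting with AI",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "obligation": "no additional obligations under the Act",
    },
}

# Print each tier with its headline obligation and illustrative examples.
for tier, info in AI_ACT_RISK_TIERS.items():
    print(f"{tier}: {info['obligation']} (e.g. {', '.join(info['examples'])})")
```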

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws—designed to prevent harm to humans, ensure obedience to human commands, and allow robots to protect their own existence only where doing so does not conflict with the first two laws—provide a foundational, if simplistic, framework for responsible AI behaviour.

Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas

Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

EU GPAI Code of Practice drafting sparks disagreements

The European Commission has revealed ongoing disagreements between general-purpose AI providers and other stakeholders during the first Code of Practice plenary on 30 September. The Code will play a key role in interpreting the EU AI Act’s risk and transparency requirements until formal standards are finalised in 2026.

Nearly 1,000 participants, including industry, civil society, and academia, attended the virtual plenary. Feedback from a multi-stakeholder consultation and workshops will guide the drafting of the Code, with the first AI provider workshop scheduled for mid-October. A draft of the Code is expected by early November.

Key disagreements include how much data transparency should be required. Non-provider stakeholders support disclosing data sources such as licensed content and scraped data, while AI providers are less inclined to share information about open datasets. Differences also emerged on strict risk measures like third-party audits.

Given the large number of participants, including experts from academia, the drafting process will need careful management to ensure smooth progress. The final version of the Code of Practice is expected in April 2025.

Oracle announces $6.5 billion cloud investment in Malaysia

Oracle has committed to investing over $6.5 billion to build its first public cloud region in Malaysia. The move marks one of the largest tech investments in the country, joining a series of major digital infrastructure developments by global companies like Google, Nvidia, and Microsoft. These investments, driven by demand for AI, have been transforming Malaysia’s digital landscape.

Oracle’s planned cloud region will support various organisations in Malaysia, including government bodies, financial institutions, and airlines, by enabling them to migrate to the cloud and utilise cutting-edge AI and data analytics. This shift will modernise operations, increase efficiency, and reduce costs for businesses.

With this investment, Malaysian customers will benefit from locally based cloud services, improving service speed and security. Oracle’s Executive Vice President for Japan and Asia Pacific, Garrett Ilg, emphasised the importance of helping clients innovate and adopt standardised processes.

The company’s expansion is part of a broader strategy to extend its footprint across Asia, aiming for growth in countries from Japan to India.

ChatGPT shows superior diagnostic skills over radiologists

A study from Osaka Metropolitan University revealed that ChatGPT, based on OpenAI’s GPT-4, has surpassed radiologists in diagnosing brain tumours. Researchers compared the diagnostic abilities of ChatGPT and radiologists using 150 MRI reports. ChatGPT achieved a 73% accuracy rate, slightly ahead of neuroradiologists at 72% and general radiologists at 68%.
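To put those percentages in absolute terms, the brief sketch below is illustrative arithmetic only: it simply converts the reported accuracy rates back into approximate counts of correct diagnoses, assuming each rate applies across the full set of 150 reports.

```python
# Illustrative arithmetic: convert the reported accuracy rates into
# approximate counts of correct diagnoses over the 150 MRI reports.
TOTAL_REPORTS = 150

reported_accuracy = {
    "ChatGPT (GPT-4)": 0.73,
    "Neuroradiologists": 0.72,
    "General radiologists": 0.68,
}

for reader, rate in reported_accuracy.items():
    correct = round(rate * TOTAL_REPORTS)
    print(f"{reader}: ~{correct} of {TOTAL_REPORTS} reports diagnosed correctly")
```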

The AI’s accuracy varied depending on the report’s author. It performed best with neuroradiologist reports, reaching 80% accuracy, while general radiologist reports saw the AI’s accuracy drop to 60%.

Researchers aim to explore its use in other diagnostic fields. They hope to enhance diagnostic precision and reduce the burden on medical professionals through AI integration. The study points to a future where AI might play a crucial role in preoperative tumour diagnoses.

Lead researcher Yasuhito Mitsuyama believes that these results indicate the potential of AI in improving diagnostic processes. The team is optimistic about its future applications in medical education and imaging technologies.

Academics to shape EU GPAI Code of Practice

A diverse group of academics will lead the drafting of a Code of Practice on general-purpose AI (GPAI). The Code is crucial for AI systems like ChatGPT and will outline the AI Act’s risk management and transparency requirements. The list of leaders includes renowned AI expert Yoshua Bengio and a range of other professionals with expertise in technical, legal, and social aspects of AI.

The announcement follows concerns from three influential MEPs who questioned the timing and international expertise of the working group leaders. Despite these concerns, the group comprises academics and researchers from institutions across the globe. The Code’s first draft is expected in November, with a workshop for GPAI providers scheduled in mid-October.

Yoshua Bengio, often called a ‘godfather of AI,’ will chair the group for technical risk mitigation. Other notable figures include law professor Alexander Peukert and AI governance expert Marietje Schaake. The working groups will address various aspects of risk management and transparency in AI development.

The EU AI Act will heavily rely on the Code of Practice until official standards are finalised by 2026. Leaders in AI and related fields are expected to shape guidelines that support innovation while ensuring AI safety.

Former OpenAI leader Durk Kingma joins Anthropic

Durk Kingma, one of the lesser-known co-founders of OpenAI, has announced he is joining AI research company Anthropic. Kingma, who will work remotely from the Netherlands, expressed excitement about aligning with Anthropic’s mission to develop AI systems responsibly. However, he did not specify his exact role within the company.

Kingma, with a PhD in machine learning from the University of Amsterdam, played a crucial role in developing generative AI models like DALL-E and ChatGPT during his time at OpenAI. After leaving OpenAI in 2018, he rejoined Google Brain before its merger with DeepMind in 2023, while also working as an angel investor for AI startups.

Kingma’s hiring is part of a broader trend, with several key figures from OpenAI moving to Anthropic in recent months, including safety lead Jan Leike and co-founder John Schulman. Anthropic, led by former OpenAI VP Dario Amodei, continues to attract top talent as it positions itself as a more safety-conscious alternative in the AI industry.

New AI tools transform Microsoft Paint and Photos apps

Microsoft is introducing AI-powered updates for its Paint and Photos apps, available on Copilot Plus PCs. The new features, Generative Fill and Generative Erase, are designed to simplify image editing without requiring professional software. These tools allow users to remove or add elements to images easily, much like advanced functions in Adobe Photoshop.

Generative Fill and Erase come with adjustable brushes for precise editing. Generative Erase is ideal for removing unwanted objects, while Generative Fill enables users to add AI-created elements by typing a description. These new functions are similar to popular features like Google’s Magic Eraser.

The new tools are an expansion of Microsoft’s Cocreator feature for Paint, launched earlier this year. Cocreator generates images using text prompts and sketches. Microsoft has also upgraded the diffusion-based model behind these tools to improve speed and output quality, while adding moderation features to avoid misuse.

Microsoft’s Photos app will now include Generative Erase and a Super-Resolution feature. The latter uses AI to enhance blurry images, allowing users to boost image resolution up to eight times, with options for fine-tuning the result using a slider.

AI-powered Bing generative search rolls out to US users

Microsoft has officially launched ‘Bing Generative Search’, a new AI-powered feature that generates summaries of search results, aiming to enhance how users interact with search engines. After a pilot in July, the feature is now being rolled out to US users. To try it, users can search for ‘Bing generative search’ or trigger it through informational queries. Bing generative search uses a blend of AI models to compile information from across the web, offering an easy-to-read summary alongside traditional search links.

This feature evolves from Bing’s AI chat integration launched in February 2023, but now presents search results in a fresh, AI-generated format that aims to better fulfil user intent. For example, a search like ‘What’s a spaghetti western?’ would display a detailed overview of the genre’s history and examples, accompanied by relevant sources. However, users can opt out of the AI summaries if they prefer traditional search results.

While Microsoft promises that Bing’s AI-powered search still maintains website traffic, concerns have arisen across the industry. Competitor Google’s AI Overviews have already been criticised for diverting traffic from publishers and, at times, delivering inaccurate results. Although Bing holds a smaller portion of the global search market than Google, Microsoft is keen to monitor the impact of generative AI on web traffic.