Microsoft has announced plans to invest €4.3 billion over the next two years to expand its artificial intelligence (AI) and cloud infrastructure in northern Italy. The tech giant’s investment will establish the Italy North cloud region as one of Microsoft’s largest data hubs in Europe, serving both the Mediterranean and North Africa. The move marks Microsoft’s largest-ever investment in Italy and is expected to significantly strengthen the country’s digital presence in the region.
Microsoft’s Vice Chair and President, Brad Smith, discussed the investment with Italian Prime Minister Giorgia Meloni, who welcomed the project, seeing it as a key development for Italy’s role in the Mediterranean’s digital landscape. This initiative follows broader discussions between the Italian government and global investors, including BlackRock, which is also looking at potential investments in data and energy infrastructure.
The surge in demand for AI and cloud services across industries, from gaming to e-commerce, is driving Microsoft’s global expansion efforts. In partnership with BlackRock, Microsoft has already launched a $30 billion fund aimed at AI-focused data centres and related infrastructure, initially targeting the US and its partner countries.
Equinix has announced a joint venture with Singapore’s GIC and the Canada Pension Plan Investment Board, aiming to raise over $15 billion to expand its hyperscale data centres in the US. This initiative comes at a time when the demand for data centres is surging due to the increasing deployment of AI technologies across various industries. Hyperscale data centres are crucial for major tech companies like Amazon, Microsoft, and Google, providing the extensive computing power and storage necessary for their operations.
The newly formed joint venture will greatly expand Equinix’s hyperscale data centre program by enabling the purchase of land for new facilities and adding more than 1.5 gigawatts of capacity. GIC and the Canada Pension Plan Investment Board will each hold a 37.5% equity stake in the venture, while Equinix will retain a 25% share. Additionally, the partnership plans to leverage debt to increase the total available investment capital.
Equinix has experienced robust growth recently, prompting the company to raise its annual core earnings forecast. With a keen eye on expansion, particularly in Southeast Asia, Equinix has already acquired three data centres in the Philippines this year and continues to explore opportunities in the high-growth region. The new partnership with GIC and the Canada Pension Plan Investment Board underscores Equinix’s commitment to scaling its operations in response to the rising demand for data centre services.
Ello, an AI reading companion designed to help children struggling with reading, has introduced a new feature called ‘Storytime’. This feature enables kids to create their own stories by choosing from a range of settings, characters, and plots. Story options are tailored to the child’s reading level and current lessons, helping them practise essential reading skills.
Ello’s AI, represented by a bright blue elephant, listens to children as they read aloud and helps correct mispronunciations. The tool uses phonics-based strategies to adapt stories based on the child’s responses, ensuring personalised and engaging experiences. It also offers two reading modes: one where the child and Ello take turns reading and another, more supportive mode for younger readers.
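Ello has not published how its correction engine works, but the core mechanic, comparing what a child was expected to read against what a speech recogniser heard, can be illustrated with a short, hypothetical sketch. The alignment method and sample data below are assumptions for illustration, not Ello’s actual implementation.

```python
# Hypothetical sketch: flag misread or skipped words by aligning the expected
# text with an ASR transcript. Not Ello's implementation; illustration only.
from difflib import SequenceMatcher

def flag_misreadings(expected: str, transcript: str) -> list[tuple[str, str]]:
    """Return (expected_word, heard_word) pairs where the reading diverged."""
    exp, heard = expected.lower().split(), transcript.lower().split()
    flagged = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=exp, b=heard).get_opcodes():
        if tag == "replace":                      # word read incorrectly
            flagged.extend(zip(exp[i1:i2], heard[j1:j2]))
        elif tag == "delete":                     # word skipped entirely
            flagged.extend((w, "<skipped>") for w in exp[i1:i2])
    return flagged

print(flag_misreadings("the brave elephant sat down", "the brave elefant sat"))
# -> [('elephant', 'elefant'), ('down', '<skipped>')]
```

A production system would presumably work at the phoneme level rather than on whole words, which is closer to the phonics-based strategies described above.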
The Storytime feature distinguishes itself from other AI-assisted story creation tools by focusing on reading development. The technology has been tested with teachers and children, and includes safeguards to ensure age-appropriate content. Future versions of the product may allow even more creative input from children, while maintaining helpful structure to avoid overwhelming them.
Ello’s subscription costs $14.99 per month, with discounted pricing for low-income families. The company also partners with schools to offer its services for free, and has recently made its collection of decodable children’s books available online at no cost.
The US Commerce Department has tightened export restrictions on advanced chip shipments to parts of the Middle East and Central Asia, reflecting heightened concerns over national security and potential misuse by adversarial nations. The policy requires US companies to obtain special licences to ship advanced AI chips and introduces a ‘Validated End User’ status for select data centres, allowing them to receive chips under a general authorisation.
The department also emphasises that any data centre seeking this status will undergo rigorous scrutiny, including inspections of business operations and cybersecurity measures, to ensure sensitive technology remains secure. In parallel with these export restrictions, the US Commerce Department is significantly increasing financial support for allies such as Israel, including a substantial tech funding package.
Critics contend that this dual approach raises pressing ethical concerns, particularly as this funding is perceived to enhance Israel’s military capabilities amidst ongoing conflicts in Lebanon and Gaza. The intersection of technology exports and military aid underscores a broader trend where economic advantages stemming from global conflicts align with national security interests.
Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.
What is ethics?
Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.
What is AI?
Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.
AI ethics and its importance
AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.
Global and regional efforts to regulate AI ethics
There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level: the higher the risk, the stricter the regulatory requirements.
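As a rough sketch of that tiered logic (the tier names follow the Act, but the example systems and obligations here are simplified assumptions for illustration, not legal guidance), the mapping might look like this:

```python
# Simplified illustration of the AI Act's risk tiers. Examples and obligations
# are paraphrased assumptions for illustration, not legal guidance.
RISK_TIERS = {
    "unacceptable": ("social scoring by public authorities", "prohibited outright"),
    "high": ("CV-screening software for hiring",
             "conformity assessment, logging, human oversight"),
    "limited": ("customer-service chatbot", "transparency: disclose it is an AI"),
    "minimal": ("spam filter", "no additional obligations"),
}

for tier, (example, obligation) in RISK_TIERS.items():
    print(f"{tier:>12}: e.g. {example} -> {obligation}")
```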
The Collingridge dilemma and AI
The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.
Asimov’s Three Laws of Robotics: Ethical inspiration for AI
Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws, designed to prevent harm to humans, ensure obedience to human commands, and permit a robot’s self-preservation only where it conflicts with neither of the first two laws, provide a foundational, if simplistic, framework for responsible AI behaviour.
Modern ethical challenges in AI
However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.
As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.
Categorising AI and ethics
Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.
In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.
The variety of ethical dilemmas
Legal impacts
The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.
Financial impacts
The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.
Environmental impacts
The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.
Social impacts
AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.
Psychological impacts
The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.
Trust issues
Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.
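To make ‘explainable AI’ slightly more concrete, here is one minimal, illustrative technique: inspecting which inputs most influenced a simple model’s decisions. Real explainability tooling goes much further; the synthetic data, model choice, and feature labels below are assumptions for illustration.

```python
# Illustrative only: global feature importances from a small tree model, one
# basic way to surface which inputs drive a model's decisions.
# pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data; the feature labels are made up for readability.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

for name, importance in zip(["income", "age", "tenure", "region"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")  # higher = more influence on decisions
```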
These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.
As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.
A study from Osaka Metropolitan University revealed that ChatGPT, based on OpenAI’s GPT-4, has surpassed radiologists in diagnosing brain tumours. Researchers compared the diagnostic abilities of ChatGPT and radiologists using 150 MRI reports. ChatGPT achieved a 73% accuracy rate, slightly ahead of neuroradiologists at 72% and general radiologists at 68%.
The AI’s accuracy varied depending on the report’s author. It performed best with neuroradiologist reports, reaching 80% accuracy, while general radiologist reports saw the AI’s accuracy drop to 60%.
The researchers now aim to explore ChatGPT’s use in other diagnostic fields. They hope to enhance diagnostic precision and reduce the burden on medical professionals through AI integration. The study points to a future where AI might play a crucial role in preoperative tumour diagnoses.
Lead researcher Yasuhito Mitsuyama believes that these results indicate the potential of AI in improving diagnostic processes. The team is optimistic about its future applications in medical education and imaging technologies.
Microsoft has updated its consumer AI assistant, Copilot, giving it a friendlier voice and the ability to analyse web pages while users browse. This enhancement is part of a broader initiative led by Mustafa Suleyman, CEO of Microsoft AI, who noted that a diverse team of creative professionals, including psychologists and comedians, is refining Copilot’s tone and style to set it apart in the crowded AI market.
In a demonstration of the upgraded Copilot, the AI suggested a housewarming gift by recommending popular olive oils after engaging in a conversation about the user’s preferences. This update, which started rolling out on Tuesday, marks one of the initial efforts from Suleyman’s newly established division dedicated to consumer products and technology research.
Although Microsoft is well-known for its business software, it is encountering significant competition in the consumer market, particularly from Google’s search engine. Launched last year, Copilot seeks to attract more users with its improved voice capabilities, making interactions feel more engaging and responsive. New features for Copilot Pro subscribers, such as ‘Think Deeper’, will give the assistant more time to reason through complex choices, while the upcoming ‘Copilot Vision’ function will allow users to interact with content in their Microsoft Edge browser without retaining any data.
Suleyman envisions Copilot as a digital companion that continuously learns from users’ interactions across different Microsoft platforms, such as Word and Windows, with their consent. He noted that Bill Gates is excited about the AI’s capabilities, especially the potential for Copilot to read and parse emails, suggesting that these features are on the horizon.
A diverse group of academics will lead the drafting of a Code of Practice on general-purpose AI (GPAI). The Code is crucial for AI systems like ChatGPT and will outline the AI Act’s risk management and transparency requirements. The list of leaders includes renowned AI expert Yoshua Bengio and a range of other professionals with expertise in technical, legal, and social aspects of AI.
The announcement follows concerns from three influential MEPs who questioned the timing and international expertise of the working group leaders. Despite these concerns, the group comprises academics and researchers from institutions across the globe. The Code’s first draft is expected in November, with a workshop for GPAI providers scheduled in mid-October.
Yoshua Bengio, often called a ‘godfather of AI’, will chair the group for technical risk mitigation. Other notable figures include law professor Alexander Peukert and AI governance expert Marietje Schaake. The working groups will address various aspects of risk management and transparency in AI development.
The EU AI Act will heavily rely on the Code of Practice until official standards are finalised by 2026. Leaders in AI and related fields are expected to shape guidelines that support innovation while ensuring AI safety.
Microsoft is introducing AI-powered updates for its Paint and Photos apps, available on Copilot Plus PCs. The new features, Generative Fill and Generative Erase, are designed to simplify image editing without requiring professional software. These tools allow users to remove or add elements to images easily, much like advanced functions in Adobe Photoshop.
Generative Fill and Erase come with adjustable brushes for precise editing. Generative Erase is ideal for removing unwanted objects, while Generative Fill enables users to add AI-created elements by typing a description. These new functions are similar to popular features like Google’s Magic Eraser.
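Microsoft has not detailed the internals of these tools, but the underlying technique, diffusion-based inpainting, can be sketched with the open-source diffusers library. Everything below (model checkpoint, file names, prompts) is an assumption for illustration and an open-source analogue, not Microsoft’s implementation.

```python
# Open-source analogue of generative fill/erase via diffusion inpainting.
# Not Microsoft's model; checkpoint, files, and prompts are illustrative.
# pip install diffusers transformers torch pillow
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB")  # picture to edit
mask = Image.open("mask.png").convert("RGB")    # white pixels = region to repaint

# 'Generative fill': describe what should appear in the masked region.
filled = pipe(prompt="a potted plant on a wooden table",
              image=image, mask_image=mask).images[0]
filled.save("filled.png")

# 'Generative erase' is the same call with the unwanted object masked and a
# prompt describing plausible background, e.g. "an empty wooden table".
```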
The new tools are an expansion of Microsoft’s Cocreator feature for Paint, launched earlier this year. Cocreator generates images using text prompts and sketches. Microsoft has also upgraded the diffusion-based model behind these tools to improve speed and output quality, while adding moderation features to avoid misuse.
Microsoft’s Photos app will now include Generative Erase and a Super-Resolution feature. The latter uses AI to enhance blurry images, allowing users to boost image resolution up to eight times, with options for fine-tuning the result using a slider.
Microsoft has officially launched ‘Bing Generative Search’, a new AI-powered feature that generates summaries of search results, aiming to enhance how users interact with search engines. After a pilot in July, the feature is now being rolled out to US users. To try it, users can search ‘Bing generative search’ or trigger it through informational queries. Bing generative search uses a blend of AI models to compile information from across the web, offering an easy-to-read summary alongside traditional search links.
This feature evolves from Bing’s AI chat integration launched in February 2023, but now presents search results in a fresh, AI-generated format that aims to better fulfil user intent. For example, a search like ‘What’s a spaghetti western?’ would display a detailed overview of the genre’s history and examples, accompanied by relevant sources. However, users can opt out of the AI summaries if they prefer traditional search results.
While Microsoft maintains that Bing’s AI-powered search will continue to drive traffic to websites, concerns have risen across the industry. Competitor Google’s AI Overviews have already been criticised for diverting traffic from publishers and, at times, delivering inaccurate results. Although Bing holds a smaller share of the global search market than Google, Microsoft is keen to monitor the impact of generative AI on web traffic.
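Bing’s pipeline is proprietary, but the general retrieve-then-summarise pattern behind generative search can be sketched in a few lines. The snippets below stand in for a live web index and the model choice is arbitrary; this illustrates the pattern, not Microsoft’s system.

```python
# Toy retrieve-then-summarise sketch: hard-coded 'search results' plus an LLM
# call producing a cited summary. Illustration only, not Bing's pipeline.
# pip install openai  (expects OPENAI_API_KEY in the environment)
from openai import OpenAI

snippets = [
    ("https://example.com/spaghetti-westerns",
     "Spaghetti westerns are westerns produced in Italy, mostly in the 1960s."),
    ("https://example.com/leone",
     "Sergio Leone's 'Dollars Trilogy' defined the genre's look and sound."),
]

context = "\n".join(f"[{i}] {url}\n{text}"
                    for i, (url, text) in enumerate(snippets, start=1))
prompt = ("Answer the query using only the numbered sources below, citing "
          "them as [n].\n\nQuery: What's a spaghetti western?\n\n" + context)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)  # summary with [1]/[2] citations
```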