US to fund AI-driven semiconductor research with $100 million

The US Commerce Department announced plans to allocate $100 million to promote the use of AI in developing sustainable semiconductor materials. The funding is part of the broader $52.7 billion program designated for US chip manufacturing and research, aimed at strengthening the country’s position in the semiconductor industry.

The new funding will support universities, national laboratories, and private sector companies in creating AI-driven autonomous experimentation methods. By harnessing the capabilities of AI, the initiative seeks to streamline and expedite the development of innovative semiconductor materials that are less resource-intensive, ultimately contributing to a more sustainable manufacturing process.

With the semiconductor industry facing increasing pressure to reduce environmental impact, this investment represents a significant step towards integrating advanced technologies to foster sustainable practices. The Commerce Department’s focus on AI in this sector underscores the potential for transformative advancements that can meet both economic and environmental goals, helping to secure a more resilient supply chain for the future.

OpenAI seeks investor commitment against competitors

Alongside the $6.6 billion that global investors such as Thrive Capital and Tiger Global have committed to OpenAI, the company is seeking more than capital: it wants assurances that these investors will avoid funding five perceived competitors. The list includes rivals such as Anthropic, Elon Musk’s xAI, and Safe Superintelligence (SSI), co-founded by OpenAI’s Ilya Sutskever. These companies are racing to develop large language models, which require substantial financial backing.

The list also extends to AI applications, naming the search startup Perplexity and the enterprise search company Glean, reflecting OpenAI’s intent to broaden its own offerings for enterprises and end users. The company has ambitious revenue targets, aiming to grow revenue from $3.7 billion this year to $11.6 billion by 2025, signalling a strong push for growth in the competitive AI landscape.

While OpenAI’s request for exclusive commitments from investors is not legally binding, it underscores the company’s strategy of capitalising on its strong market position in an environment where securing funding is crucial. Venture capitalists typically steer clear of backing direct competitors on their own, but demanding explicit commitments, as OpenAI has, is atypical. The situation is further complicated by late-stage investors like SoftBank and Fidelity, which have invested in both xAI and OpenAI, blurring the lines in the competitive landscape. This dynamic highlights the challenges and complexities investors face in navigating the rapidly evolving AI sector.

OpenAI’s request does not affect its past investors or their existing investments but could influence future fundraising efforts for both OpenAI and its listed competitors. The Financial Times and Wall Street Journal were among the first to report on the names of the companies involved.

AI at Europe’s borders sparks human rights concerns

As the European Union implements the world’s first comprehensive regulations on artificial intelligence (AI), human rights groups are raising alarms over exemptions for AI use at Europe’s borders. The EU’s AI Act, which categorises AI systems by risk level and imposes stricter rules on those with higher potential for harm, begins to take effect in stages from February 2025. While it promises to regulate AI across industries, controversial technologies like facial and emotion recognition remain permitted for border and police authorities, sparking concern over surveillance and discrimination.

With Europe investing heavily in border security, deploying AI-driven watchtowers and algorithms to monitor migration flows, critics argue these technologies could criminalise migrants and violate their rights. Human rights activists warn that AI may reinforce biases and lead to unlawful pushbacks of asylum seekers. Countries like Greece serve as testing grounds for these technologies and have been accused of using AI for surveillance and discrimination, despite government denials.

Campaigners also point out that the EU’s regulations allow European companies to develop and export harmful AI systems abroad, potentially fuelling human rights abuses in other countries. While the AI Act represents a step forward in global regulation, activists believe it falls short of protecting vulnerable groups at Europe’s borders and beyond. They anticipate that legal challenges and public opposition will eventually close these regulatory gaps.

BMO names new chief AI and data officer

Bank of Montreal (BMO) has appointed Kristin Milchanowski as its chief AI and data officer, effective October 15. Formerly with EY, Milchanowski will lead the bank’s AI initiatives, focusing on data, robotics, and analytics. This new role builds on BMO’s ongoing investments in AI, aiming to enhance data management and governance while fostering a culture of innovation.

The financial sector views AI as a major opportunity, with potential uses like streamlining compliance tasks and enhancing customer service. However, integrating AI brings challenges, especially for firms managing sensitive data. Analysts suggest that AI-driven solutions could simplify processes and improve data-driven decision-making across the industry, offering significant benefits to financial services.

As AI adoption expands, US regulators seek public feedback to ensure these technologies foster fair and equitable access to financial services. Earlier this year, Morgan Stanley emphasised AI’s transformative potential, noting it could save financial advisers up to 15 hours of work per week, highlighting the significant impact AI could have on the industry.

Microsoft boosts AI, cloud investments in Italy with $4.8 billion plan

Microsoft has announced plans to invest €4.3 billion over the next two years to expand its artificial intelligence (AI) and cloud infrastructure in northern Italy. The tech giant’s investment will establish the Italy North cloud region as one of Microsoft’s largest data hubs in Europe, serving both the Mediterranean and North Africa. The move marks Microsoft’s largest-ever investment in Italy and is expected to significantly strengthen the country’s digital presence in the region.

Microsoft’s Vice Chair and President, Brad Smith, discussed the investment with Italian Prime Minister Giorgia Meloni, who welcomed the project, seeing it as a key development for Italy’s role in the Mediterranean’s digital landscape. This initiative follows broader discussions between the Italian government and global investors, including BlackRock, which is also looking at potential investments in data and energy infrastructure.

The surge in demand for AI and cloud services across industries, from gaming to e-commerce, is driving Microsoft’s global expansion efforts. In partnership with BlackRock, Microsoft has already launched a $30 billion fund aimed at AI-focused data centres and related infrastructure, initially targeting the US and its partner countries.

Equinix partners with GIC and CPP Investments for major data centre expansion

Equinix has announced a joint venture with Singapore’s GIC and the Canada Pension Plan Investment Board, aiming to raise over $15 billion to expand its hyperscale data centres in the US. This initiative comes at a time when the demand for data centres is surging due to the increasing deployment of AI technologies across various industries. Hyperscale data centres are crucial for major tech companies like Amazon, Microsoft, and Google, providing the extensive computing power and storage necessary for their operations.

The newly formed joint venture will greatly expand Equinix’s hyperscale data centre program by enabling the purchase of land for new facilities and adding more than 1.5 gigawatts of capacity. GIC and the Canada Pension Plan Investment Board will each hold a 37.5% equity stake in the venture, while Equinix will retain a 25% share. Additionally, the partnership plans to leverage debt to increase the total available investment capital.

Equinix has experienced robust growth recently, prompting the company to raise its annual core earnings forecast. With a keen eye on expansion, particularly in Southeast Asia, Equinix has already acquired three data centres in the Philippines this year and continues to explore opportunities in the high-growth region. The new partnership with GIC and CPP Investments underscores Equinix’s commitment to scaling its operations in response to the rising demand for data centre services.

Ello’s new AI tool lets kids create their own stories

Ello, an AI reading companion designed to help children struggling with reading, has introduced a new feature called ‘Storytime’. This feature enables kids to create their own stories by choosing from a range of settings, characters, and plots. Story options are tailored to the child’s reading level and current lessons, helping them practise essential reading skills.

Ello’s AI, represented by a bright blue elephant, listens to children as they read aloud and helps correct mispronunciations. The tool uses phonics-based strategies to adapt stories based on the child’s responses, ensuring personalised and engaging experiences. It also offers two reading modes: one where the child and Ello take turns reading and another, more supportive mode for younger readers.
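
To make the mechanics concrete, here is a minimal Python sketch of such a turn-taking read-aloud loop. Every name and rule in it is a hypothetical illustration of the behaviour described above, not Ello’s actual implementation.

```python
def transcribe_child_audio(sentence: str) -> str:
    """Placeholder for speech recognition; a real system would record audio."""
    return sentence.replace("mat", "mit")  # simulate one mispronunciation


def phonics_hint(word: str) -> str:
    """Placeholder phonics cue: sound the word out letter by letter."""
    return " - ".join(word.upper())


def read_story(sentences: list[str], mode: str = "turn_taking") -> None:
    """Alternate reader and child; gently correct words the child misses."""
    for i, sentence in enumerate(sentences):
        if mode == "turn_taking" and i % 2 == 0:
            print(f"Ello reads: {sentence}")  # the companion takes its turn
            continue
        print(f"Your turn: {sentence}")
        heard = transcribe_child_audio(sentence)  # the child reads aloud
        for expected, spoken in zip(sentence.split(), heard.split()):
            exp, spk = expected.strip(".,!?").lower(), spoken.strip(".,!?").lower()
            if exp != spk:
                print(f"Let's sound it out: {phonics_hint(exp)}")


read_story(["A cat sat down.", "It sat on the mat."])
```

A real companion would add the supportive mode for younger readers, where it shoulders more of the text, and would adapt upcoming story passages based on which phonics patterns the child missed.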

The Storytime feature distinguishes itself from other AI-assisted story creation tools by focusing on reading development. The technology has been tested with teachers and children, and includes safeguards to ensure age-appropriate content. Future versions of the product may allow even more creative input from children, while maintaining helpful structure to avoid overwhelming them.

Ello’s subscription costs $14.99 per month, with discounted pricing for low-income families. The company also partners with schools to offer its services for free, and has recently made its collection of decodable children’s books available online at no cost.

US Commerce Department tightens AI chip exports to Middle East and Central Asia

The US Commerce Department has tightened export restrictions on advanced chip shipments to certain regions of the Middle East and Central Asia, reflecting heightened concerns over national security and potential misuse by adversarial nations. The policy requires US companies to obtain special licenses to ship advanced AI chips and introduces a ‘Validated End User’ status for select data centres, allowing them to receive chips under a general authorisation.
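
Stripped to its essentials, the licensing scheme described above is a simple decision rule. The Python sketch below is purely illustrative: the region labels and function are invented for this example and bear no relation to the actual export regulations.

```python
RESTRICTED_REGIONS = {"middle_east", "central_asia"}  # hypothetical labels


def authorisation_path(destination: str, is_validated_end_user: bool) -> str:
    """Return how an advanced AI chip shipment would be authorised."""
    if destination not in RESTRICTED_REGIONS:
        return "standard export rules apply"
    if is_validated_end_user:
        # Vetted data centres holding 'Validated End User' status receive
        # chips under a general authorisation, not per-shipment licenses.
        return "general authorisation (VEU)"
    return "special license required"


print(authorisation_path("middle_east", is_validated_end_user=False))
```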

The department also emphasises that any data centre seeking this status will undergo rigorous scrutiny, including inspections of business operations and cybersecurity measures, to ensure sensitive technology remains secure. In parallel with these export restrictions, the US Commerce Department is significantly increasing financial support for allies such as Israel, including a substantial technology funding package.

Critics contend that this dual approach raises pressing ethical concerns, particularly as this funding is perceived to enhance Israel’s military capabilities amidst ongoing conflicts in Lebanon and Gaza. The intersection of technology exports and military aid underscores a broader trend where economic advantages stemming from global conflicts align with national security interests.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.

Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
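
The Act’s risk-based structure can be illustrated with a short sketch. The four tier names below match the Act, but the example systems and obligations are heavily simplified and are not legal guidance.

```python
from enum import Enum


class Risk(Enum):
    MINIMAL = 1        # e.g. spam filters: no extra obligations
    LIMITED = 2        # e.g. chatbots: transparency duties
    HIGH = 3           # e.g. hiring or credit scoring: strict requirements
    UNACCEPTABLE = 4   # e.g. social scoring: prohibited outright


OBLIGATIONS: dict[Risk, list[str]] = {
    Risk.MINIMAL: [],
    Risk.LIMITED: ["disclose to users that they are interacting with AI"],
    Risk.HIGH: ["risk management", "human oversight", "conformity assessment"],
    Risk.UNACCEPTABLE: ["banned from the EU market"],
}


def obligations_for(risk: Risk) -> list[str]:
    """The higher the risk tier, the stricter the regulatory requirements."""
    return OBLIGATIONS[risk]


print(obligations_for(Risk.HIGH))
```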

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws, which require robots to avoid harming humans, to obey human commands, and to protect their own existence only where doing so conflicts with neither of the first two duties, provide a foundational, if simplistic, framework for responsible AI behaviour.

Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas

Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.
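
As a toy example of what explainable AI means in practice, the sketch below shows how a simple linear scoring model can report each feature’s contribution to a decision. The features and weights are invented for illustration; real explainability tooling handles far more complex models.

```python
# Invented weights for a toy credit-scoring model.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}


def explain(applicant: dict[str, float]) -> None:
    """Print the score and each feature's contribution, largest first."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    print(f"score = {sum(contributions.values()):+.2f}")
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature:>15}: {c:+.2f}")


explain({"income": 3.0, "debt": 2.5, "years_employed": 4.0})
```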

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

ChatGPT outperforms radiologists in brain tumour diagnosis

A study from Osaka Metropolitan University revealed that ChatGPT, based on OpenAI’s GPT-4, has surpassed radiologists in diagnosing brain tumours. Researchers compared the diagnostic abilities of ChatGPT and radiologists using 150 MRI reports. ChatGPT achieved a 73% accuracy rate, slightly ahead of neuroradiologists at 72% and general radiologists at 68%.

The AI’s accuracy varied depending on the report’s author. It performed best with neuroradiologist reports, reaching 80% accuracy, while general radiologist reports saw the AI’s accuracy drop to 60%.
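
As a back-of-the-envelope check, the reported percentages correspond to simple correct-over-total ratios on the 150-report sample. The case counts below are inferred from the rounded percentages, not taken from the paper.

```python
def accuracy(correct: int, total: int) -> float:
    """Diagnostic accuracy is simply correct diagnoses over total cases."""
    return correct / total


# Counts inferred from the rounded percentages over 150 reports.
for reader, correct in [("ChatGPT (GPT-4)", 110),
                        ("neuroradiologists", 108),
                        ("general radiologists", 102)]:
    print(f"{reader}: {accuracy(correct, 150):.0%}")
```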

The researchers aim to explore ChatGPT’s use in other diagnostic fields. They hope AI integration will enhance diagnostic precision and reduce the burden on medical professionals. The study points to a future where AI might play a crucial role in preoperative tumour diagnosis.

Lead researcher Yasuhito Mitsuyama believes that these results indicate the potential of AI in improving diagnostic processes. The team is optimistic about its future applications in medical education and imaging technologies.