Infosys and Microsoft are expanding their collaboration to drive the global adoption of generative AI and Microsoft Azure. The partnership is set to enhance customer experiences and increase the value of customers' technology investments across industries such as finance, healthcare, and telecommunications.
Infosys, an early adopter of GitHub Copilot, currently has over 18,000 developers who have generated more than seven million lines of code through the tool. The company has also launched a GitHub Centre of Excellence to support AI and Cloud solutions like Infosys Topaz, Cobalt, and Aster, aimed at transforming business operations globally.
Customers will have access to a variety of solutions through Azure Marketplace, allowing them to benefit from their Microsoft Azure Consumption Commitment (MACC). Microsoft’s Chief Partner Officer, Nicole Dezen, highlighted the potential of this collaboration to drive AI innovation and improve employee and customer experiences.
The growth of AI developer productivity could add more than $1.5 trillion to global GDP by 2030, with GitHub Copilot playing a key role in boosting efficiency. More than one million developers and 20,000 organisations have adopted GitHub Copilot to date.
Amazon is introducing new technologies designed to speed up deliveries and online shopping decisions. Announced on Wednesday, the company’s Vision Assisted Package Retrieval system will be installed in 1,000 electric delivery trucks starting next year. This system uses cameras and LED projectors to guide delivery workers to the correct packages, cutting down the time needed for each delivery.
Amazon is also enhancing its shopping experience with AI software that helps customers make faster and more informed purchasing decisions. The software provides detailed information and product recommendations, from electronics to pet supplies, reducing the need for extensive research. These tools aim to improve customer satisfaction by making the buying process more efficient.
In addition, Amazon plans to open smaller warehouses attached to Whole Foods locations to offer a broader range of products not carried in-store. The first hybrid stores will open in Pennsylvania next year, allowing customers to order items like soft drinks alongside their grocery purchases for a seamless checkout experience.
The All England Club has announced that Wimbledon will replace line judges with AI technology from 2025. This decision marks the end of a 147-year tradition, as the courtside presence of immaculately dressed line judges has long been a staple of the event. AI technology, already in use at the US Open since 2020, is set to fully automate line calls, leaving the future of more than 300 line judges uncertain.
Many officials have expressed disappointment, with chair umpire Richard Ings calling it a ‘sad but inevitable day’. While the shift to AI offers undeniable precision, there are concerns about the loss of the human element in the sport. Ings highlighted that certain decisions, like not-ups or crowd disruptions, will still require human oversight, even though automated systems will handle line calls.
The move to Electronic Line Calling (ELC) has raised worries about the future of officiating, particularly for smaller tournaments. The cost of implementing AI technology, estimated at £100,000 per court, could deter officials from smaller events that lack the budget. Organisers of Wimbledon acknowledge the importance of tradition but emphasise the advantages of the change.
Despite the transition, some aspects will remain unchanged. Chair umpires will continue to lead matches, but the courts will look and feel different without the line judges who once shared the stage. Wimbledon’s decision follows a similar switch at Queen’s Club and adds to growing concerns about officiating’s future direction.
Companies in Japan are increasingly turning to AI to manage customer service roles, addressing the country’s ongoing labour shortage. These AI systems are now being used for more complex tasks, assisting workers across various industries.
Ridgelinez Ltd, a Fujitsu subsidiary, and Autobacs Seven Co have launched trials for ‘Rachel,’ an AI assistant that recommends products based on customer needs, specific car models, and available stock. The system, developed by Tokyo-based Couger Inc, is designed to ease the burden on car sales staff, allowing them to focus on more specialised tasks while the AI handles routine queries.
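Couger has not described Rachel's internals in detail, but the core idea reported here, recommending parts by car model, available stock, and a stated need, can be illustrated with a small sketch. The Python below is purely illustrative: the Product fields, catalogue entries, and ranking rule are assumptions, not Couger's implementation.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    category: str             # e.g. "wiper blades", "engine oil"
    compatible_models: set    # car models the part fits
    in_stock: int

def recommend(products, car_model, need, limit=3):
    """Suggest in-stock items that fit the customer's car and stated need."""
    matches = [
        p for p in products
        if car_model in p.compatible_models
        and p.in_stock > 0
        and need.lower() in p.category.lower()
    ]
    # Use remaining stock as a simple tie-breaker when ordering suggestions.
    matches.sort(key=lambda p: p.in_stock, reverse=True)
    return matches[:limit]

catalogue = [
    Product("All-weather wiper set", "wiper blades", {"Fit", "Corolla"}, 12),
    Product("Premium wiper set", "wiper blades", {"Corolla"}, 0),
    Product("5W-30 engine oil", "engine oil", {"Fit", "Corolla", "Civic"}, 30),
]

for item in recommend(catalogue, car_model="Corolla", need="wiper blades"):
    print(item.name)   # only in-stock wiper sets that fit a Corolla are printed
```

A production assistant would layer language understanding and dialogue on top of a filter like this, but the stock and car-model constraints described in the trial map naturally onto such a query.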
In other sectors, Oki Electric Industry and Kyushu Railway have introduced a trilingual AI assistant capable of speaking Japanese, English, and Chinese. This system provides passengers with station maps and assists with transfer information. Meanwhile, Tokyo startup Sapeet Co has developed an AI system that simulates customer interactions for training staff at jewellery stores, helping to improve customer service skills.
These AI solutions are playing a key role in addressing the labour shortage, allowing human employees to focus on more advanced tasks while AI systems manage routine customer service functions.
Vodafone has announced a significant development in its Giga TV service, as part of a renewed billion-dollar partnership with Google Cloud. Over the next ten years, Google’s artificial intelligence capabilities will be integrated into the platform to enhance personalisation and content discovery for its users.
The companies plan to leverage Google Cloud’s AI to improve Vodafone’s Android-based TV system in Germany. New features will help users find content more easily and deliver a more tailored viewing experience. Additionally, Google Ad Manager will be integrated into Giga TV, enhancing the advertising landscape within the platform.
Further collaboration will see YouTube become more deeply embedded in Vodafone’s TV devices, providing a richer video experience. These improvements are set to bring significant advancements in how viewers engage with television content, both in entertainment and beyond.
Margherita Della Valle, Vodafone Group CEO, expressed excitement about the partnership, emphasising how these AI-driven innovations will transform communication and learning. She highlighted the unprecedented scale on which the new content and services will be delivered to millions of users.
Captions, an AI-powered video editing app, has introduced a new tool that builds a content publishing schedule from a business's website and generates videos on relevant topics. The tool analyses the site to collect content, keywords, service offerings, and key selling points, creating a customised content plan. Currently, the emphasis is on producing videos for social media platforms such as Instagram Reels and TikTok, with plans to explore additional formats in the future.
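Captions has not published how the planner works. As a minimal sketch of the "analyse a site, pull out keywords, draft a schedule" idea, the Python below fetches a page, extracts frequent terms, and turns them into dated video topics; the stopword list, three-day cadence, and topic wording are illustrative assumptions, not the product's actual logic.

```python
import re
from collections import Counter
from datetime import date, timedelta
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, ignoring scripts and styles."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        self._skip = tag in ("script", "style")
    def handle_endtag(self, tag):
        self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

STOPWORDS = {"the", "and", "for", "with", "our", "your", "from", "that", "this"}

def keywords_from_site(url, top_n=5):
    """Fetch a page and return its most frequent non-trivial words."""
    parser = TextExtractor()
    parser.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
    words = re.findall(r"[a-z]{4,}", " ".join(parser.parts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def draft_content_plan(url, start=None, cadence_days=3):
    """Turn the site's keywords into a simple dated posting schedule."""
    start = start or date.today()
    return [
        {"publish_on": start + timedelta(days=i * cadence_days),
         "video_topic": f"Short-form video about '{topic}'"}
        for i, topic in enumerate(keywords_from_site(url))
    ]

# Example: plan = draft_content_plan("https://example.com")  # hypothetical cafe site
```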
The tool is designed to support small businesses like cafes and dental clinics by showcasing their offerings and seasonal trends. In June, Captions launched a feature that enables users to automatically create and edit videos using 12 AI characters. This new tool utilises a business’s existing content and relevant trends to generate video prompts, allowing sellers to create a digital twin and incorporate their brand identity, including custom colours, logos, and fonts.
Captions CEO Gaurav Misra highlighted that the tool assists businesses lacking resources to create high-quality content, enabling them to build an online presence without requiring advanced video production skills. He envisions a future where businesses can incorporate more of their web pages into the AI content planning process. Recently, Captions secured $60 million in Series C funding, which will be used to enhance its AI capabilities. The company offers paid plans, including Max at $25 per month and Scale at $70 per month.
A recent pilot program using AI software has significantly reduced the time social workers spend on administrative tasks by more than 60%, according to Swindon Borough Council. The AI tool, Magic Notes, developed by UK-based Beam, was tested by 19 social workers and received ‘overwhelmingly positive’ feedback. By automating the recording of conversations and generating assessments, the software allowed social workers to focus more on meaningful interactions with the people they support.
The trial, held from April to June, showed assessment times falling from an average of 90 minutes to just 35 minutes, while the time needed to write reports was cut from four hours to 90 minutes. Social workers facing challenges such as visual impairments or dyslexia reported that the tool fostered a more inclusive work environment, enhancing their confidence in their roles.
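Beam has not published Magic Notes' internals, so the following is only a sketch of the generic transcribe-then-summarise pipeline the article describes, written against the OpenAI Python SDK as a stand-in; the model names, assessment headings, and prompt wording are all assumptions rather than Beam's implementation.

```python
# Illustrative transcribe-then-summarise pipeline (not Beam's actual stack).
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

ASSESSMENT_HEADINGS = [  # hypothetical template headings
    "Presenting needs", "Strengths", "Risks", "Agreed actions",
]

def draft_assessment(recording_path: str) -> str:
    client = OpenAI()

    # 1. Transcribe the recorded visit.
    with open(recording_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # 2. Summarise the transcript into the assessment template for review.
    prompt = (
        "Draft a social-care assessment from this visit transcript, "
        f"using the headings {', '.join(ASSESSMENT_HEADINGS)}. "
        "Flag anything uncertain for the social worker to check.\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # a draft, not a final record
```

The design point reflected in the trial is that the output is a draft for the social worker to review and edit, not an automatically filed record.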
Councillor Ray Ballman, the cabinet member for adult social care, described Magic Notes as a ‘game changer.’ He mentioned that the council is now looking into additional ways to integrate the technology to enhance care quality and provide better staff support.
Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.
What is ethics?
Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.
What is AI?
Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.
AI ethics and its importance
AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.
Global and regional efforts to regulate AI ethics
There are international initiatives to regulate AI ethically. For example, UNESCO's 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union's AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
The Collingridge dilemma and AI
The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.
Asimov’s Three Laws of Robotics: Ethical inspiration for AI
Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws—designed to prevent harm to humans, ensure obedience to human commands, and permit a robot's self-preservation only where it conflicts with neither—provide a foundational, if simplistic, framework for responsible AI behaviour.
Modern ethical challenges in AI
However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.
As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.
Categorising AI and ethics
Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.
In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.
The variety of ethical dilemmas
Legal impacts
The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.
Financial impacts
The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.
Environmental impacts
The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.
Social impacts
AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.
Psychological impacts
The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.
Trust issues
Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.
These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.
As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.
A diverse group of academics will lead the drafting of a Code of Practice on general-purpose AI (GPAI). The Code will apply to AI systems like ChatGPT and set out how providers are expected to meet the AI Act’s risk management and transparency requirements. The list of leaders includes renowned AI expert Yoshua Bengio and a range of other professionals with expertise in technical, legal, and social aspects of AI.
The announcement follows concerns from three influential MEPs who questioned the timing and international expertise of the working group leaders. Despite these concerns, the group comprises academics and researchers from institutions across the globe. The Code’s first draft is expected in November, with a workshop for GPAI providers scheduled in mid-October.
Yoshua Bengio, often called a ‘godfather of AI,’ will chair the group for technical risk mitigation. Other notable figures include law professor Alexander Peukert and AI governance expert Marietje Schaake. The working groups will address various aspects of risk management and transparency in AI development.
The EU AI Act will heavily rely on the Code of Practice until official standards are finalised by 2026. Leaders in AI and related fields are expected to shape guidelines that support innovation while ensuring AI safety.
AI models, including ChatGPT and those built by Cohere, once depended on low-cost workers to perform basic fact-checking. Today, these models require human trainers with specialised knowledge in fields like medicine, finance, and quantum physics. Invisible Tech, one of the leading companies in this space, partners with major AI firms to reduce errors in AI-generated outputs, such as hallucinations, where a model provides inaccurate information.
Invisible Tech employs thousands of remote experts, offering significant pay for high-level expertise. Advanced knowledge in subjects like quantum physics can command rates as high as $200 per hour. Companies like Cohere and Microsoft are also leveraging these trainers to improve their AI systems.
This shift from basic fact-checking to advanced training is vital as AI models like ChatGPT continue to face challenges in distinguishing between fact and fiction. The demand for human trainers has surged, with many AI firms competing to reduce errors and improve their models.
With this growth, companies such as Scale AI and Invisible Tech have established themselves as key players in the industry. As AI continues to evolve, more businesses are emerging, catering to the increasing need for human expertise in AI training.