AI software enhances social workers’ engagement

A recent pilot program using AI software has cut the time social workers spend on administrative tasks by more than 60%, according to Swindon Borough Council. The AI tool, Magic Notes, developed by UK-based Beam, was tested by 19 social workers and received ‘overwhelmingly positive’ feedback. By automating the recording of conversations and generating assessments, the software allowed social workers to focus more on meaningful interactions with the people they support.

The trial, held from April to June, showed assessment times falling from an average of 90 minutes to just 35. The time needed to write reports was cut from four hours to 90 minutes. Social workers facing challenges such as visual impairments or dyslexia reported that the tool fostered a more inclusive work environment, enhancing their confidence in their roles.
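Quick arithmetic confirms that the trial figures are consistent with the council's headline claim: a fall from 90 to 35 minutes is roughly a 61% reduction, and from four hours (240 minutes) to 90 minutes roughly 63%. A minimal check:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage reduction when a task goes from `before` to `after` minutes."""
    return (before - after) / before * 100

# Assessments: 90 minutes down to 35 minutes
assessment = pct_reduction(90, 35)   # about 61.1%
# Report writing: four hours (240 minutes) down to 90 minutes
reports = pct_reduction(240, 90)     # 62.5%
print(f"assessments: {assessment:.1f}%, reports: {reports:.1f}%")
```

Both figures sit above the 60% reduction the council reported for administrative work overall.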

Councillor Ray Ballman, the cabinet member for adult social care, described Magic Notes as a ‘game changer.’ He mentioned that the council is now looking into additional ways to integrate the technology to enhance care quality and provide better staff support.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.


Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
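The Act's tiered approach can be sketched as a simple lookup from risk tier to obligations. The tier names below follow the Act (unacceptable, high, limited, minimal), but the example obligations are heavily simplified summaries for illustration, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act, highest risk first."""
    UNACCEPTABLE = 4   # prohibited outright, e.g. social scoring
    HIGH = 3           # strict obligations before market entry
    LIMITED = 2        # transparency duties, e.g. chatbots must disclose themselves
    MINIMAL = 1        # no specific obligations

# Simplified, illustrative summary of what each tier entails
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "risk management, data governance, human oversight, logging",
    RiskTier.LIMITED: "inform users they are interacting with an AI system",
    RiskTier.MINIMAL: "voluntary codes of conduct only",
}

def obligations_for(tier: RiskTier) -> str:
    """The higher the risk tier, the stricter the requirements."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The key design point is that obligations scale monotonically with the tier: anything in the top tier is simply prohibited, while minimal-risk systems face no binding requirements.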

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws, which require robots to avoid harming humans, to obey human commands, and to preserve themselves only where that conflicts with neither of the first two laws, provide a foundational, if simplistic, framework for responsible AI behaviour.
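The essence of the Three Laws is a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. A toy sketch makes this concrete; the predicate names here are hypothetical flags invented purely for illustration:

```python
def evaluate(action: dict) -> str:
    """Check an action against the Three Laws in strict priority order.

    `action` is a dict of hypothetical boolean flags describing the action;
    the first law it violates blocks it.
    """
    # First Law: a robot may not injure a human being.
    if action.get("harms_human"):
        return "forbidden by First Law"
    # Second Law: a robot must obey human orders,
    # except where that conflicts with the First Law.
    if action.get("disobeys_human_order"):
        return "forbidden by Second Law"
    # Third Law: a robot must protect its own existence, except where that
    # conflicts with the First or Second Law, so a self-endangering action
    # is still permitted when a human has ordered it.
    if action.get("endangers_robot") and not action.get("ordered_by_human"):
        return "forbidden by Third Law"
    return "permitted"

print(evaluate({"endangers_robot": True, "ordered_by_human": True}))  # permitted
```

Even in this tiny sketch, ambiguity creeps in: deciding whether an action "harms a human" is precisely the kind of judgement real AI systems cannot reduce to a boolean flag, which is why the laws remain an inspiration rather than a blueprint.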


Modern ethical challenges in AI

Real-world AI, however, introduces a range of complex challenges that simple rules cannot adequately manage. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas


Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

Academics to shape EU GPAI Code of Practice

A diverse group of academics will lead the drafting of a Code of Practice on general-purpose AI (GPAI). The Code is crucial for AI systems like ChatGPT and will outline the AI Act’s risk management and transparency requirements. The list of leaders includes renowned AI expert Yoshua Bengio and a range of other professionals with expertise in technical, legal, and social aspects of AI.

The announcement follows concerns from three influential MEPs who questioned the timing and international expertise of the working group leaders. Despite these concerns, the group comprises academics and researchers from institutions across the globe. The Code’s first draft is expected in November, with a workshop for GPAI providers scheduled in mid-October.

Yoshua Bengio, often called a ‘godfather of AI,’ will chair the group for technical risk mitigation. Other notable figures include law professor Alexander Peukert and AI governance expert Marietje Schaake. The working groups will address various aspects of risk management and transparency in AI development.

The EU AI Act will heavily rely on the Code of Practice until official standards are finalised by 2026. Leaders in AI and related fields are expected to shape guidelines that support innovation while ensuring AI safety.

Advanced human trainers in demand for AI

AI models, including ChatGPT and Cohere, once depended on low-cost workers to perform basic fact-checking. Today, these models require human trainers with specialised knowledge in fields like medicine, finance, and quantum physics. Invisible Tech, one of the leading companies in this space, partners with major AI firms to reduce errors in AI-generated outputs, such as hallucinations, where the model provides inaccurate information.

Invisible Tech employs thousands of remote experts, offering significant pay for high-level expertise. Advanced knowledge in subjects like quantum physics can command rates as high as $200 per hour. Companies like Cohere and Microsoft are also leveraging these trainers to improve their AI systems.

This shift from basic fact-checking to advanced training is vital as AI models like ChatGPT continue to face challenges in distinguishing between fact and fiction. The demand for human trainers has surged, with many AI firms competing to reduce errors and improve their models.

With this growth, companies such as Scale AI and Invisible Tech have established themselves as key players in the industry. As AI continues to evolve, more businesses are emerging, catering to the increasing need for human expertise in AI training.

Alphabet announces new data centres in South Carolina

Alphabet plans to invest $3.3 billion in South Carolina to establish two new data centres, according to CEO Sundar Pichai. This investment comes as the Google parent company and its competitors significantly enhance their infrastructure to support the growth of AI applications. The new data centre campuses will be located in Dorchester County, alongside an expansion of an existing facility in Berkeley County, as confirmed by the South Carolina governor’s office.

The new facilities in Dorchester County, located in the Pine Hill Business Campus in Ridgeville and Winding Woods Commerce Park in St. George, represent a $2 billion investment and are anticipated to create 200 operational jobs. Additionally, the expansion in Berkeley County will require another $1.3 billion investment. In July, Alphabet reported capital expenditures of $13 billion for the June quarter and indicated that spending would remain at or above $12 billion for the rest of 2024.

This announcement comes on the heels of Microsoft’s recent partnership with BlackRock and the Abu Dhabi-backed investment firm MGX to establish a fund exceeding $30 billion, focused on developing AI infrastructure, including the construction of data centres and energy projects.

James Cameron joins Stability AI board

James Cameron, renowned director of films like Titanic and The Terminator, has joined the board of Stability AI, an AI startup based in London. The company, known for its AI-driven image-generation tools, is aiming to transform visual media through innovative technologies.

Stability AI’s CEO, Prem Akkaraju, highlighted the importance of Cameron’s appointment in helping the firm achieve its goal of providing creators with a comprehensive portfolio of AI tools. The company has raised significant funding, including $80 million earlier this year, and is seen as a competitor to AI tools from Google and OpenAI.

Cameron expressed excitement about how generative AI and computer-generated imagery could revolutionise storytelling, offering artists unprecedented ways to bring their ideas to life. Stability AI’s tools include Stable Video Diffusion, a text-to-video generation platform.

While the relationship between AI and Hollywood has grown closer, it has also sparked controversy. In 2023, writers and actors went on strike, pushing for protections against the unregulated use of AI in film and television production. Cameron joins other notable figures on the board, such as former Facebook president Sean Parker.

Meta introduces prototype of Orion AR glasses

At its annual Connect conference, Meta Platforms unveiled its first working prototype of augmented-reality glasses called Orion. CEO Mark Zuckerberg described the chunky black glasses as a glimpse into a future where virtual and physical worlds merge seamlessly, referring to them as a “time machine” that could transform user interactions. The announcement also featured improved AI chatbot capabilities and a new Quest mixed-reality headset, contributing to a record closing high for Meta shares at $568.31.

The Orion glasses, made from magnesium alloy and powered by custom silicon designed by Meta, will include features like hand-tracking, voice controls, and a wrist-based neural interface. Meta plans to refine the glasses to make them smaller and more affordable for a projected consumer launch in 2027. However, previous attempts at AR by major tech companies have often encountered challenges. Analysts recognise Meta’s goal of making augmented reality accessible, but public scepticism about AI technology continues to be a significant barrier.

Although Zuckerberg did not demonstrate the glasses’ features live, a video showcased testers, including Nvidia CEO Jensen Huang, interacting with the device. Meta’s existing Ray-Ban smart glasses gained popularity after the introduction of an AI assistant, which will soon allow users to scan QR codes and stream music using voice commands. Future updates for these glasses are set to include real-time language translation and video generation capabilities.

Alongside its AR announcements, Meta unveiled several AI updates, including improved audio responses for its digital assistant, Meta AI, which can now mimic celebrity voices. With over 400 million monthly users, Meta is heavily investing in AI and AR technologies, anticipating record capital expenses of $37 billion to $40 billion for 2024. However, despite these investments, the Reality Labs division reported substantial losses of $8.3 billion in the first half of this year.

AI-written police reports spark efficiency debate

Several police departments in the United States have begun using AI to write incident reports, aiming to reduce time spent on paperwork. Oklahoma City’s police department was an early adopter of the AI-powered Draft One software, but paused its use to address concerns raised by the District Attorney’s office. The software analyses bodycam footage and radio transmissions to draft reports, potentially speeding up processes, although it may raise legal concerns.

Paul Mauro, a former NYPD inspector, noted that the technology could significantly reduce the burden on officers, who often spend hours writing various reports. However, he warned that officers must still review AI-generated reports carefully to avoid errors. The risk of inaccuracies or ‘AI hallucinations’ means oversight remains crucial, particularly when reports are used as evidence in court.

Mauro suggested that AI-generated reports could help standardise police documentation and assist in data analysis across multiple cases. This could improve efficiency in investigations by identifying patterns more quickly than manual methods. He also recommended using the technology for minor crimes while legal experts ensure compliance with regulations.

The potential for AI to transform police work has drawn comparisons to the initial resistance to bodycams, which are now widely accepted. While there are challenges, the introduction of AI in police reporting may offer long-term benefits for law enforcement, if implemented thoughtfully and responsibly.

Microsoft signs deal to power data centres with nuclear energy

America’s Three Mile Island energy plant, infamous for the worst nuclear accident in US history, is preparing to reopen after Microsoft signed a 20-year deal to purchase power from the facility. The plant is scheduled to restart in 2028 following upgrades and will supply clean energy to support Microsoft’s growing data centres, especially those focused on AI. The agreement is pending regulatory approval.

Constellation Energy, the plant owner, confirmed that the reactor set to restart is separate from the unit involved in the 1979 accident, which, while not fatal, created significant public fear surrounding nuclear power. This deal represents a revival of interest in atomic energy, driven by increasing concerns about climate change and rising energy needs. The CEO of Constellation described this move as a “rebirth” of nuclear power, highlighting its potential as a dependable source of carbon-free energy.

The plant’s reopening is projected to create 3,400 jobs and add over 800 megawatts of carbon-free electricity to the grid, driving significant economic activity. Although the revival has faced some protests, it underscores a growing trend among tech companies, with Amazon also exploring nuclear energy to meet its expanding energy demands.

Runway partners with Lionsgate to revolutionise film-making

Runway, a generative AI startup, has announced a significant partnership with Lionsgate, the studio responsible for popular franchises such as John Wick and Twilight. This collaboration will enable Lionsgate’s creative teams, including filmmakers and directors, to utilise Runway’s AI video-generating models. These models have been trained on the studio’s film catalogue and will be used to enhance their creative work. Michael Burns, vice chair of Lionsgate, emphasised the potential for this partnership to support creative talent.

Runway is considering new opportunities, including licensing its AI models to individual creators, allowing them to create and train custom models. This partnership represents the first public collaboration between a generative AI startup and a major Hollywood studio. Although Disney and Paramount have reportedly been discussing similar partnerships with AI providers, no official agreements have been reached yet.

This deal comes at a time of increased attention on AI in the entertainment industry, due to California’s new laws that regulate the use of AI digital replicas in film and television. Runway is also currently dealing with legal challenges regarding the alleged use of copyrighted works to train its models without permission.