The Prime Minister of Pakistan, Shehbaz Sharif, has highlighted the significant role of provinces in supporting students through the National Finance Commission (NFC), which allocates 60 percent of its shares to them. Speaking at the International Youth Day event, he pointed out that provinces now have ample resources to invest in educational initiatives, including the distribution of laptops, a practice he championed as Chief Minister of Punjab.
The Prime Minister announced that the federal government would distribute smartphones to one million high-achieving students, while provinces will continue to provide laptops. Emphasising the importance of technology in education, he underlined that equipping the new generation with modern tools is essential for the country’s future. AI, he noted, is a key area for growth.
Citing China’s success through technological advancements, the Prime Minister vowed to make all necessary resources available to students. He also reflected on the debt accumulated over the past 70 years, contrasting it with the vision of the country’s founders, and called for a long-term educational programme to be launched after 14 August.
Sharif’s remarks stressed the need to bridge the gap between different social classes in Pakistan, with a focus on ensuring that every child, regardless of background, has access to the latest technology. He expressed hope that these initiatives would lead to a brighter future for the nation.
Huawei Technologies is on the brink of releasing a new AI chip, the Ascend 910C, to challenge Nvidia’s dominance in the Chinese market. The company has made significant strides despite US sanctions, with Chinese internet firms and telecom operators recently testing the processor.
Huawei claims that the Ascend 910C rivals Nvidia’s H100, a powerful AI chip that has been unavailable in China.
Why does this matter?
The development signals Huawei’s ongoing efforts to circumvent restrictions and bolster its position in the AI sector.
The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.
The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.
Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.
The issue gained attention following a controversy in June when OpenAI was criticised for allegedly using a voice similar to actress Scarlett Johansson’s in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.
US Energy Secretary Jennifer Granholm has given assurances that the country will be able to meet the growing electricity demand driven by the rapid expansion of data centres powering AI. The Department of Energy anticipates that electricity demand will double by mid-century owing to factors such as manufacturing growth, the electrification of vehicles, and AI development. Despite concerns from local communities about the strain on resources, Granholm remains confident that clean energy sources will be sufficient to handle this increased demand, bolstered by significant investments under recent legislation.
Granholm highlighted the strong growth in renewable energy investments, predicting the deployment of over 60 gigawatts of clean energy and storage capacity this year alone. However, she acknowledged the immediate challenge of building transmission lines to connect data centres to these clean power sources. The Department of Energy is working to expedite the permitting process for such projects, with public and private investments playing a key role in expanding infrastructure.
The growth of AI has put many renewable energy goals to the test. Collaborations between tech giants such as Google and energy departments are emerging as a solution to meet the surging demand. For example, a recent partnership in Virginia between Iron Mountain and the state’s energy department will introduce large-scale batteries to store renewable energy for data centres. Granholm suggested that such initiatives could turn the demand from data centres into a catalyst for further investment in renewable energy.
The US Department of Energy is also researching ways to improve efficiency in data centres, aiming to help tech companies increase computing power while managing energy consumption. Granholm, after recent meetings with tech and utility leaders, hinted at upcoming major announcements that would reinforce America’s leadership in technology and innovation.
As the US election draws near, the proliferation of deepfake content is raising serious concerns about its impact on undecided voters. Deepfakes—AI-generated images, videos, or audio clips—pose a significant threat to the democratic process by making it increasingly difficult for the public to distinguish between reality and fiction. This issue was recently highlighted when Donald Trump falsely claimed that a large crowd welcoming Vice President Kamala Harris in Detroit was an AI fabrication, despite evidence proving the event’s authenticity.
Trump’s unfounded allegations and the spread of misleading deepfake content by his supporters are problematic not just for those firmly in his camp but also for undecided voters. These voters, who are critical to the outcome of the election, may struggle to discern the truth amidst a flood of manipulated media. This erosion of trust in what is real and what is fabricated undermines a key pillar of democracy and creates fertile ground for anti-democratic forces to gain power.
The growing prevalence of deepfakes and other digital misinformation strategies is expected to intensify in the run-up to the election. Already, Trump supporters have circulated a clearly AI-generated image, falsely claiming it was promoted by the Harris campaign. Such tactics aim to blur the lines between truth and falsehood, turning the election discourse away from verifiable facts and towards a chaotic environment where nothing can be trusted.
Experts warn that unless decisive action is taken, deepfake content will continue to compromise the integrity of the democratic process. The European Union has expressed similar concerns about the role of deepfakes in elections, highlighting the global scale of the problem. In the US, the spread of political spam and digital misinformation has surged as the 2024 election approaches, further complicating the landscape for voters.
In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.
The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.
Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.
An Austrian advocacy group, NOYB, has filed a complaint against the social media platform X, owned by Elon Musk, accusing the company of using users’ data to train its AI systems without their consent. The complaint, led by privacy activist Max Schrems, was lodged with authorities in nine European Union countries, putting pressure on Ireland’s Data Protection Commission (DPC), which acts as the lead EU regulator for major US tech firms because their EU operations are based in Ireland.
Notably, NOYB’s complaint primarily focuses on X’s lack of cooperation and the inadequacy of its mitigation measures rather than questioning the legality of the data processing itself. Schrems emphasised the need for X to fully comply with EU law by obtaining user consent before using their data. X has yet to respond to the latest complaint but intends to work with the DPC on AI-related issues.
In a related case, Meta, Facebook’s parent company, delayed the launch of its AI assistant in Europe after the Irish DPC advised against it, following similar complaints from NOYB about the use of personal data for AI training.
ASOS has deepened its collaboration with Microsoft by signing a new three-year deal to extend its use of AI technologies. The partnership, aimed at enhancing both customer experiences and internal operations, will see the introduction of AI tools designed to save time and allow employees to focus on more creative and strategic tasks. The online retailer’s director of technology operations, Victoria Arden, emphasised the importance of this move in driving operational excellence.
Since early 2023, ASOS has been utilising Microsoft’s Copilot tools, including those for Microsoft 365 and GitHub, to improve the efficiency of its engineering and HR teams. The HR team, for instance, has used Copilot to analyse employee engagement surveys, while other departments have explored AI-powered insights through tools like Power BI. The partnership highlights ASOS’s commitment to adopting cutting-edge technologies to enhance its data-driven decision-making processes.
ASOS has been actively piloting AI solutions to improve various aspects of its business. A recent example is the use of Copilot in Power BI to summarise performance data, aiding the company in making informed decisions. The retailer’s AI Stylist, powered by Microsoft’s Azure OpenAI, also represents a key innovation, helping customers discover new fashion trends through a conversational interface.
The collaboration between ASOS and Microsoft builds on a strong foundation established in 2022, when ASOS chose Microsoft Azure as its preferred cloud platform. The extended partnership reflects ASOS’s dedication to innovation through safe and responsible experimentation, aiming to continue delivering personalised, data-driven services to its global customer base.
IBM has teamed up with WWF-Germany to develop an AI-driven solution aimed at safeguarding African forest elephants, a species facing severe threats from poaching and habitat loss. This new technology will use AI to accurately identify individual elephants from camera trap photos, enhancing conservation efforts and allowing for more precise tracking of these endangered animals.
The partnership will combine IBM’s technological expertise with WWF’s conservation knowledge to create an AI-powered tool that could revolutionise how elephants are monitored. By focusing on image recognition, the technology aims to identify individual elephants by unique physical features such as their heads and tusks, which serve as identifiers much like human fingerprints.
Additionally, the collaboration will employ IBM Environmental Intelligence to monitor and analyse biomass and vegetation in elephant habitats. The data will be crucial in predicting elephant movements and assessing the ecosystem services provided by these animals, such as carbon sequestration. Such insights could also pave the way for sustainable finance investments by quantifying the carbon services offered by elephants.
IBM emphasised the broader potential of this initiative, highlighting its role in supporting nature restoration and contributing to global climate change efforts. By integrating advanced technology with conservation strategies, the partnership seeks to make a lasting positive impact on both the environment and sustainable development.
AI is rapidly transforming the landscape of scientific research, but not always for the better. A growing concern is the proliferation of AI-generated errors and misinformation within academic journals. From bizarrely inaccurate images to nonsensical text, the quality of published research is being compromised. This trend is exacerbated by the pressure on researchers to publish prolifically, which leads many to turn to AI as a shortcut.
Paper mills, which generate fraudulent academic papers for profit, are exploiting AI to produce vast quantities of low-quality content. These fabricated studies, often filled with nonsensical data and plagiarised text, are infiltrating reputable journals. The academic publishing industry is struggling to keep pace with this influx of junk science, as traditional quality control measures prove inadequate.
Beyond the issue of outright fraud, the misuse of AI by well-intentioned researchers is also a problem. While AI tools can be valuable for tasks like data analysis and language translation, their limitations are often overlooked. Overreliance on AI can lead to errors, biases, and a decline in critical thinking. As a result, the credibility of scientific research is at stake.
To address this crisis, a multifaceted approach is necessary. Increased investment in detection tools, stricter peer review standards, and greater transparency in the research process are essential steps. Additionally, academic institutions must foster a culture that prioritises quality over quantity, encouraging researchers to focus on depth rather than speed. Ultimately, safeguarding the integrity of scientific research requires a collaborative effort from researchers, publishers, and the public.