Microsoft and G42 invest $1 billion in Kenyan data center

Microsoft is partnering with UAE-based AI firm G42 to invest $1 billion in a new data centre in Kenya to expand cloud-computing services in East Africa. The data centre, built by G42 and its partners, will use geothermal energy and provide access to Microsoft’s Azure through a new cloud region specifically for East Africa.

This initiative is part of a broader effort by major tech companies like Amazon, Microsoft, and Alphabet to meet the growing demand for cloud and generative AI services. G42, which recently received a $1.5 billion investment from Microsoft, is also developing an open-source AI model in Swahili and English.

During President William Ruto’s visit to the United States, a letter of intent for the project will be signed on Friday between Microsoft, G42, and Kenya’s Ministry of Information, Communications, and the Digital Economy. The data centre is expected to be operational within two years after the final agreements are signed.

iFlytek slashes AI model prices amid tech price war in China

AI firm iFlytek has entered a price war among China’s top tech companies by significantly reducing the cost of its ‘Spark’ large-language model (LLM). iFlytek’s move follows recent price cuts by Alibaba, Baidu, and ByteDance for their own LLMs used in generative AI products. Spark Lite, launched last September, is now free for public use, while the Spark Pro and Max versions are priced at just 0.21 yuan (less than 3 cents) per 10,000 tokens, roughly one-fifth of what competitors charge.

iFlytek claims that Spark surpasses GPT-3.5 in Chinese-language tasks and performs comparably in English. The Hefei-based company, renowned for its voice recognition technology, highlighted that Spark’s pricing allows significant cost savings. For instance, Spark Max can generate the entirety of Yu Hua’s novel ‘To Live’ for just 2.1 yuan ($0.29).
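The cost figure in iFlytek’s example follows directly from the quoted per-token rate. A minimal sketch of the arithmetic (the ~100,000-token length assumed for the novel is inferred from the quoted figures, and the helper name is purely illustrative):

```python
def spark_cost_yuan(tokens: int, price_per_10k_tokens: float = 0.21) -> float:
    """Cost in yuan at the quoted Spark Pro/Max rate of 0.21 yuan per 10,000 tokens."""
    return tokens / 10_000 * price_per_10k_tokens

# ~100,000 tokens (roughly the length of 'To Live') at 0.21 yuan per 10,000
print(round(spark_cost_yuan(100_000), 2))  # 2.1 yuan, matching iFlytek's example
```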

State-owned China Mobile, holding a 10% stake in iFlytek, is its largest shareholder. The strategic pricing aims to make advanced AI technology more accessible to the public while challenging the market dominance of other tech giants.

AI drives productivity surge in certain industries, report shows

A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high exposure to AI are experiencing significant productivity gains and wage increases. The study found that productivity growth in AI-intensive industries is nearly five times faster than in sectors with less AI integration. In the UK, job postings requiring AI skills are growing 3.6 times faster than other listings, with employers offering a 14% wage premium for these roles, particularly in legal and IT sectors.

Since the launch of ChatGPT in late 2022, AI’s impact on employment has been widely debated. However, PwC’s findings indicate that AI has influenced the job market for over a decade. Job postings for AI specialists have increased sevenfold since 2012, far outpacing the growth for other roles. The report suggests that AI is being used to address labour shortages, which could benefit countries with ageing populations and high worker demand.

PwC’s 2024 global AI jobs barometer reveals that the growth in AI-related employment contradicts fears of widespread job losses due to automation. Despite predictions of significant job reductions, the continued rise in AI-exposed occupations suggests that AI is creating new industries and transforming the job market. According to PwC UK’s chief economist, Barret Kupelian, as AI technology advances and spreads across more sectors, its potential economic impact could be transformative, marking only the beginning of its influence on productivity and employment.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI for creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered these images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has charged someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of AI’s increasing potential to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining their resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

IBM releases open-source AI models and partners with Saudi Arabia

IBM announced it would release a family of AI models as open-source software and assist Saudi Arabia in training an AI system in Arabic. Unlike competitors such as Microsoft, which charge for their AI models, IBM provides open access to its ‘Granite’ AI models, allowing companies to customise them. These models aim to help software developers complete computer code more efficiently. IBM monetises this by offering a paid tool, Watsonx, to help run the customised models within data centres.

IBM’s approach focuses on profiting when customers utilise the AI models, regardless of their origin or data centre location. IBM’s CEO, Arvind Krishna, emphasised that they believe in the early potential of generative AI and the benefits of competition for consumers. He also highlighted the importance of being safe and responsible in AI development.

Additionally, IBM announced a collaboration with the Saudi Data and Artificial Intelligence Authority to train its ‘ALLaM’ Arabic language model using Watsonx. The initiative will enhance IBM’s AI capabilities by incorporating the ability to understand multiple Arabic dialects.

South Korea announces plan for AI copyright and deepfake management

The South Korean government announced comprehensive plans to restructure its copyright system for AI-generated content and address the spread of fake news created by deepfake technology. This announcement was made during a Cabinet meeting in Seoul, led by Minister of Science and ICT Lee Jong-ho, who outlined 20 policy initiatives designed to tackle the pressing issues of the digital age.

Building on the Digital Bill of Rights introduced last September, the new policies aim to establish a digital framework that ensures the protection and advancement of digital rights. The Ministry of Science and ICT emphasized that these initiatives will be developed through extensive public consultations and policy research, with outcomes to be shared with the international community, including OECD member countries and the UN.

“The plan to establish a new digital order is based on the Digital Bill of Rights, and the policies will be made through pan-government efforts so that people can actually solve the issues we face in the digital era,” Minister Lee stated.

Among the 20 policy initiatives, eight key policy tasks have been identified: 

  1. Securing AI Safety, Trust, and Ethics by establishing frameworks to ensure the ethical development and deployment of AI technologies.
  2. Addressing Deepfake-Driven Fake News by mandating watermarks on AI-generated content.
  3. Reforming the AI Copyright System by revising copyright laws for AI-generated content to support the AI creative industry.
  4. Responding to Digital Disasters and Cyber Threats by enhancing response capabilities.
  5. Improving Digital Access and Securing Alternatives by ensuring all citizens have access to digital technologies and services, including alternatives where necessary.
  6. Stable Implementation of Telemedicine through the amendment of the Medical Service Act to establish a legal basis for telemedicine, creating a framework for non-face-to-face medical treatment.
  7. Protecting the Right to Disconnect from work-related communications outside regular hours.
  8. Guaranteeing the Right to Be Forgotten by facilitating the removal of unwanted digital records.

The government has prioritized the reform of the copyright system for AI-generated content, which it aims to complete by the end of the year. The Ministry highlighted the need to revise the AI copyright system swiftly to support the development of the AI-based creative industry.

In a bid to counter the rise of deepfake technology, the government plans to mandate the use of watermarks on AI-generated content. Additionally, new laws will be enacted and existing ones amended to monitor and promptly remove deepfake content, particularly during election campaigns. The government is also promoting the development of advanced technologies to detect deepfakes and automate their deletion.

Another major focus is the stabilization and expansion of telemedicine services. Although South Korea permitted remote medical activities among professionals with the amendment of the Medical Service Act in 2002, non-face-to-face treatment between doctors and patients remains restricted. The temporary allowances for contact-free treatment during the COVID-19 pandemic, especially for vulnerable populations, highlighted the need for permanent legal frameworks. The government will amend the Medical Service Act to solidify the legal basis for telemedicine, ensuring thorough communication with medical professionals, patients, and consumers throughout the process.

Further policies aim to foster a culture that respects workers’ right to disconnect from work-related communications outside of regular hours and facilitate the removal of digital records that individuals wish to erase from their online presence.

To promote these initiatives globally, the government will host a discussion session on digital rights protection at the AI Seoul Summit, held this week. Additionally, South Korea plans to establish a cooperative framework on digital protocols with leading universities and research institutes, including the University of Oxford and the University of British Columbia.

Price war escalates in China as Alibaba and Baidu cut AI costs

On Tuesday, Chinese tech giants Alibaba and Baidu significantly reduced prices for their large-language models (LLMs), intensifying a price war in the cloud computing sector. Alibaba’s cloud unit announced cuts of up to 97% on its Tongyi Qwen models, with the Qwen-Long model now costing only 0.0005 yuan per 1,000 tokens, down from 0.02 yuan. Baidu quickly followed, making its Ernie Speed and Ernie Lite models free for all business users.
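The quoted Qwen-Long prices imply the size of the cut. A quick check of the arithmetic (the figures are those reported above; the helper name is purely illustrative):

```python
def price_cut(old_price: float, new_price: float) -> float:
    """Fractional price reduction between two per-token rates."""
    return (old_price - new_price) / old_price

# Qwen-Long: from 0.02 to 0.0005 yuan per 1,000 tokens
cut = price_cut(0.02, 0.0005)
print(f"{cut:.1%}")  # 97.5%, in line with the 'up to 97%' headline figure
```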

The price reduction comes amid an ongoing price war in China’s cloud computing industry, with Alibaba and Tencent already lowering prices for their cloud services. Cloud vendors in China have increasingly relied on AI chatbot services to boost sales, spurred by the popularity of OpenAI’s ChatGPT. The competition has now extended to the LLMs powering these chatbots, potentially impacting profit margins.

Other companies have also joined the fray. ByteDance recently slashed the prices of its Doubao LLMs for business users to 99.3% below the industry average. Chinese startup Moonshot introduced a tipping feature for prioritising chatbot use, targeting both business and individual users. Baidu was the first in China to charge consumers for its LLM products, with its Ernie 4 model costing 59 yuan per month.

Microsoft aims to transform Windows into an AI OS with Copilot+ PCs launch

Microsoft is pushing generative AI to the forefront of Windows and its PCs. At its Build developer conference, the company unveiled new Copilot+ PCs and AI-powered features like Recall, designed to help users find past apps and files. These AI-first devices, featuring dedicated chips called NPUs, will be deeply integrated into Windows 11. The first models will use Qualcomm’s Snapdragon X Elite and Plus chips, promising long battery life, with Intel and AMD also on board to create processors for these devices.

In addition to the Copilot+ PCs, Microsoft introduced new Surface devices, including the Surface Laptop and Surface Pro. The Surface Laptop now features up to 22 hours of battery life and faster performance, while the new Surface Pro boasts a 90% speed increase, an OLED display, and an upgraded front-facing camera. Both devices support Wi-Fi 7 and have haptic feedback features.

Microsoft’s upcoming Recall feature for Windows 11 will allow users to ‘remember’ apps and content accessed weeks or months ago, enabling them to find past activities easily. Recall can associate colours, images, and more, allowing natural language searches. Microsoft emphasises user privacy, ensuring that all data remains on the device and is not used for AI training.

Other AI enhancements include Super Resolution for upscaling old photos and Live Captions with translations for over 40 languages. These features are powered by the Windows Copilot Runtime, which supports generative AI-powered apps even without an internet connection. CapCut, a popular video editor, will utilise this runtime to enhance its AI capabilities.

Google introduces AI Overviews to enhance search experience

Google has announced the rollout of ‘AI Overviews’, previously known as the Search Generative Experience (SGE), marking a significant shift in how users experience search results. The feature provides AI-generated summaries at the top of many search results, initially for users in the US and soon globally. Liz Reid, Google’s head of Search, explained that the advancement simplifies the search process by handling more complex tasks, allowing users to focus on what matters most to them.

At the recent I/O developer conference, Google unveiled various AI-driven features to enhance search capabilities. These include the ability to search using video via Lens, a planning tool for generating trip itineraries or meal plans from a single query, and AI-organized results pages tailored to specific needs, like finding restaurants for different occasions. Google’s Gemini AI model powers these innovations, summarising web content and customising results based on user input.

Despite the extensive integration of AI, only some searches will involve these advanced features. Reid noted that simple searches like navigating a specific website won’t benefit from AI enhancements. However, AI can provide comprehensive and detailed responses for more complex queries.

Why does it matter?

Google aims to balance creativity with factual accuracy in its AI outputs, ensuring reliable information while maintaining a human perspective, especially valued by younger users. Google’s shift towards AI-enhanced search represents a broader evolution from traditional keyword searches to more dynamic and interactive user experiences. By enabling natural language queries and providing rich, contextual answers, Google seeks to make searching more intuitive and efficient. The approach not only aims to attract more users but also promises to transform how people interact with information online, reducing the need for extensive typing and multiple tabs.

Scarlett Johansson slams OpenAI for voice likeness

Scarlett Johansson has accused OpenAI of creating a voice for its ChatGPT system that sounds ‘eerily similar’ to hers, despite her having declined an offer to voice the chatbot herself. Johansson’s statement, released Monday, followed OpenAI’s announcement that it would withdraw the voice known as ‘Sky’.

OpenAI CEO Sam Altman clarified that a different professional actress performed Sky’s voice and that it was not meant to imitate Johansson. He expressed regret for not communicating better and paused the use of Sky’s voice out of respect for Johansson.

Johansson revealed that Altman had approached her last September with an offer to voice a ChatGPT feature, which she turned down. She stated that the resemblance of Sky’s voice to her own shocked and angered her, noting that even her friends and the public found the similarity striking. The actress suggested that Altman might have intentionally chosen a voice resembling hers, referencing his tweet about ‘Her’, a film where Johansson voices an AI assistant.

Why does it matter?

The controversy highlights a growing issue in Hollywood concerning the use of AI to replicate actors’ voices and likenesses. Johansson’s concerns reflect broader industry anxieties as AI technology advances, making computer-generated voices and images increasingly indistinguishable from human ones. She has hired legal counsel to investigate the creation process of Sky’s voice.

OpenAI recently introduced its latest AI model, GPT-4o, featuring audio capabilities that enable users to converse with the chatbot in real-time, showcasing a leap forward in creating more lifelike AI interactions. Scarlett Johansson’s accusations underline the ongoing challenges and ethical considerations of using AI in entertainment.