Senators call for inquiry into AI content summarisation

A group of Democratic senators, led by Amy Klobuchar, has called on the United States Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive. The concern is that AI-generated summaries keep users on platforms like Google and Meta, preventing traffic from reaching the original content creators, which can result in lost advertising revenue for those creators.

The senators argue that platforms profit from using third-party content to generate AI summaries, while publishers are left with fewer opportunities to monetise their work. Content creators are often forced to choose between having their work summarised by AI tools or opting out entirely from being indexed by search engines, risking significant drops in traffic.

There is also a concern that AI features can misappropriate third-party content, passing it off as new material. The senators believe that the dominance of major online platforms is creating an unfair market for advertising revenue, as these companies control how content is monetised and limit the potential for original creators to benefit.

The letter calls for regulators to examine whether these practices violate antitrust laws. The FTC and DOJ will need to determine if the behaviour constitutes exclusionary conduct or unfair competition. The push from legislators could also lead to new laws if current regulations are deemed insufficient.

Elea Data Centers drives Brazil’s digital transformation with rebrand and sustainability focus

Brazil is experiencing a transformative shift in its digital infrastructure landscape with the rebranding of Elea Digital Data Centers as Elea Data Centers. The strategic change, accompanied by the acquisition of two major data centre campuses in São Paulo, significantly bolsters Elea’s presence and capabilities in the Brazilian market.

Elea now operates nine facilities across five major metropolitan areas, making it the country’s largest decentralised data centre provider. Each facility is powered by 100% renewable energy, underscoring the company’s leadership in sustainable practices and setting a high standard for environmental responsibility within the industry.

The updated identity emphasises Elea’s mission to drive Brazil’s digital transformation by offering state-of-the-art infrastructure solutions catering to various technological needs. From edge computing to hyperscale data centres, Elea is committed to supporting the evolving demands of businesses and positioning Brazil at the forefront of technological innovation.

Why does this matter?

The rebrand reflects Elea’s dedication to preparing the nation for future advancements, particularly in emerging fields such as AI. It underscores the company’s role in shaping Brazil’s digital future, focusing on sustainability and cutting-edge technology.

Huawei to boost Malawi’s digital transformation

Huawei is significantly contributing to Malawi’s digital transformation through its comprehensive Smart Village Program, which aims to bridge the digital divide in rural areas. This program integrates smart agriculture technologies, expands access to financial services, and enhances education and healthcare through digital solutions.

As part of this initiative, Huawei will establish technical training centres in rural regions to equip young people with crucial digital skills in AI, cybersecurity, and smart agriculture. That effort is a key component of Huawei’s larger $430 million investment plan for Africa, which includes funding for cloud development, talent development, and long-term technological progress.

The initiative supports Malawi’s MW2063 agenda, which envisions transforming the country into an industrialised upper-middle-income nation by 2063. It also builds on previous collaborations, such as the launch of Malawi’s National Data Centre in 2022, marking a significant advancement in the nation’s digital infrastructure.

In addition to Malawi, Huawei’s regional impact extends to other African countries, including Zambia and Uganda, where it is involved in smart village projects, and Kenya, where it contributes to smart city initiatives. These efforts aim to enhance connectivity and drive technological innovation across the continent.

Mobily transforms telecommunications with AI, supporting Saudi Arabia’s Vision 2030

Mobily is leveraging AI to revolutionise the telecommunications industry, particularly in the Middle East. By aligning with Saudi Arabia’s Vision 2030, Mobily is using AI to drive growth and innovation. The company’s AI-driven solutions improve network efficiency, enhance customer experience, and boost business agility, positioning Mobily as a leader in the region’s telecom sector.

Through predictive maintenance, Mobily ensures network reliability, while AI-powered analytics platforms optimise performance to meet the growing demands of digital consumers. The company also places a strong emphasis on the customer experience: it uses AI to analyse customer data, deliver tailored recommendations, anticipate needs, and provide proactive service. AI-powered tools such as chatbots and virtual assistants streamline customer support, resulting in faster response times and improved satisfaction.

Additionally, Mobily ensures its use of AI adheres to strict ethical standards, prioritising data privacy, transparency, and fairness. With robust encryption, user consent practices, and bias mitigation strategies, Mobily safeguards customer information while building trust through ethical AI use.

Mobily also focuses on building and developing AI talent. The company collaborates with universities to create internship programs and invests in continuous learning initiatives for its employees, fostering a culture of innovation and ensuring that the organisation stays ahead in AI advancements. Furthermore, Mobily emphasises cross-departmental collaboration to integrate AI effectively across marketing, operations, and other business units.

iPhone 16 criticised in China for lack of AI

Apple’s new iPhone 16, launched on Monday, faced criticism in China for its lack of AI features, as the company contends with increasing competition from domestic tech giant Huawei. While Apple highlighted AI-enhanced capabilities in its global announcement, the iPhone 16’s Chinese version will not have AI functionality until next year, which sparked significant debate on Chinese social media platforms.

On Weibo, discussions centred on the absence of AI, with users questioning the value of the new model compared to Huawei’s imminent launch of a three-way foldable smartphone. Some users expressed disappointment that Apple hadn’t yet partnered with a local AI provider to enhance the iPhone’s functionality in China.

Despite the AI criticism, analysts believe that the lack of immediate AI integration is unlikely to impact short-term sales. Experts pointed to Apple’s strong customer loyalty and predicted that users of older iPhone models will still drive demand for upgrades. However, they warned that the company must develop a robust AI ecosystem in China to stay competitive in the long run.

Pre-orders for the iPhone 16 will begin on Friday through platforms such as JD.com, with deliveries expected from 20 September. Meanwhile, Huawei’s latest models continue to gain popularity in China, posing a growing challenge to Apple’s market share.

California’s AI bill gains industry support

Around 120 current and former employees from AI giants like OpenAI, Anthropic, DeepMind, and Meta have publicly voiced their support for California’s new AI regulation bill, SB 1047. The bill, which includes whistle-blower protections for employees who reveal risks in AI models, aims to impose stronger regulations on the development of powerful AI technologies. Supporters argue that these measures are crucial to prevent potential threats such as cyberattacks and the misuse of biological weapons.

California’s SB 1047 has already passed the State Assembly and Senate and is awaiting Governor Gavin Newsom’s decision, with a deadline set for 30 September. Notably, high-profile signatories of the letter backing the bill include Geoffrey Hinton, a Turing Award winner, and Jan Leike, a former OpenAI alignment lead, signalling wide support from influential figures in the tech world.

Proponents of the bill believe AI companies should be responsible for testing and ensuring their models don’t pose significant harm. They argue that regulations are essential to safeguard critical infrastructure and prevent AI misuse. Despite its limitations, experts like Harvard’s Lawrence Lessig have called the bill a ‘solid step forward’ in managing AI risks.

However, not everyone agrees. OpenAI and other major tech organisations, including the US Chamber of Commerce and the Software and Information Industry Association, oppose the bill, claiming it would stifle innovation in the fast-moving AI sector. Tech industry advocates argue that over-regulation may hinder the development of cutting-edge technologies.

US proposes mandatory reporting for advanced AI and cloud providers

The US Commerce Department has proposed new rules that would require developers of advanced AI and cloud computing providers to report their activities to the government. The proposal aims to ensure that cutting-edge AI technologies are safe and secure, particularly against cyberattacks.

It also mandates detailed reporting on cybersecurity measures and the results of ‘red-teaming’ efforts, where systems are tested for vulnerabilities, including potential misuse for cyberattacks or the development of dangerous weapons.

The move comes as AI, especially generative models, has sparked excitement and concern, with fears over job displacement, election interference, and catastrophic risks. Under the proposal, the collected data would help the US government enforce safety standards and protect against threats from foreign adversaries.

Why does this matter?

The regulatory push follows President Biden’s 2023 executive order requiring AI developers to share safety test results with the government before releasing certain systems to the public. The new rules come amid stalled legislative action on AI and are part of broader efforts to limit the use of US technology by foreign powers, particularly China.

South Korea hosts global summit on AI in warfare

South Korea hosted a pivotal international summit on Monday to craft guidelines for the responsible use of AI in the military. Representatives from over 90 countries, including the US and China, attended the two-day event in Seoul. The summit aimed to produce a blueprint for AI use in warfare, though any agreement is expected to lack binding legal power. The initiative marked the second such gathering, following a similar summit in Amsterdam last year, where nations endorsed a call to action without legal obligations.

South Korean Defense Minister Kim Yong-hyun highlighted AI’s growing role in modern warfare, referencing Ukraine’s use of AI-powered drones in its ongoing conflict with Russia. He likened AI’s potential in the military to a ‘double-edged sword,’ emphasising its ability to enhance operational capabilities and its risks if misused. South Korea’s foreign minister, Cho Tae-yul, further underscored the need for international safeguards, suggesting that mechanisms be put in place to prevent autonomous weapons from making lethal decisions without human oversight.

The summit aims to outline principles for the responsible use of AI in the military, drawing from guidelines established by NATO and various national governments. However, whether the attending nations will endorse the proposed framework remains to be seen. While the document seeks to establish minimum guardrails for AI, it is not expected to impose legally binding commitments.

Beyond this summit, international discussions on AI’s role in warfare are ongoing. The UN is also exploring potential restrictions on lethal autonomous weapons under the 1983 Convention on Certain Conventional Weapons (CCW). Additionally, the US government has been leading efforts to promote responsible AI use in the military, with 55 countries already endorsing its declaration.

Co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, the Seoul summit brings together around 2,000 participants, including representatives from international organisations, academia, and the private sector, to discuss topics ranging from civilian protection to AI’s potential role in nuclear weapon control. The summit seeks to ensure ongoing collaboration on the rapidly evolving technology, especially as governments remain the key decision-makers in this crucial area.

FedEx expands fulfilment with investment in AI robotics firm Nimble

FedEx has made a strategic investment in AI robotics and automation company Nimble to enhance its fulfilment services for small and medium-sized businesses. The investment aims to support FedEx’s Fulfilment unit, which assists businesses with order fulfilment and inventory management.

The investment comes as parcel delivery companies increasingly turn to automation to reduce costs and improve efficiency, particularly during periods of lower freight demand. FedEx believes Nimble’s automated third-party logistics solutions will help optimise supply chain operations across North America.

Scott Temple, president of FedEx Supply Chain, stated that the alliance with Nimble will expand the company’s presence in e-commerce, allowing FedEx to scale its fulfilment offerings throughout North America. The exact size of the investment has not been disclosed.

Nimble’s AI robotics technology is expected to help FedEx improve the efficiency of its fulfilment operations and further strengthen its position in the e-commerce sector.

ChatGPT gains over a million subscribers, new pricing plans discussed

OpenAI announced on Thursday that it now has over 1 million paying users across its ChatGPT business products, including Enterprise, Team, and Edu. The increase from 600,000 users in April highlights CEO Sam Altman’s success in driving enterprise adoption of the AI tool.

Recent reports suggest OpenAI executives are discussing premium subscriptions for upcoming large language models, such as the reasoning-focused Strawberry and a new flagship model called Orion. Subscription prices could reach as high as $2,000 per month for these advanced AI tools.

ChatGPT Plus currently costs $20 per month, while the free tier continues to be used by hundreds of millions every month. OpenAI is also working on Strawberry to enable its AI models to perform deep research, refining them after their initial training.

The discussion around premium pricing follows news that Apple and Nvidia are in talks to invest in OpenAI, with the AI company expected to be valued at over $100 billion. ChatGPT currently has more than 200 million weekly active users, doubling its user base since last autumn.