AI demand lifts IBM earnings

IBM has reported stronger-than-expected revenue for the second quarter and raised its annual growth forecast for its software business. This growth is driven by increased client expenditure on AI technology, particularly through the expansion of its Watsonx platform. Watsonx supports both AI deployment and open-source AI models. Shares rose by about 3% in extended trading, adding to a 12% gain this year from the AI sector rally.

Software revenue for the quarter grew by 7% to $6.74 billion. The 113-year-old company now expects this segment to achieve high-single-digit growth in 2024, exceeding its previous forecast. The AI book of business, which includes bookings and sales across various products, has grown to $2 billion, with $1 billion added in the second quarter alone.

In contrast, IBM has lowered its annual consulting revenue expectations, now forecasting low-single-digit growth instead of the previously anticipated 6%–8% range. Consulting revenue fell by 1% to $5.18 billion as clients cut back on short-term projects amid higher interest rates and inflation.

Overall revenue for the second quarter reached $15.77 billion, surpassing analysts’ estimates of $15.62 billion. Adjusted profit was $2.43 per share, beating the expected $2.20, driven by strong sales in the high-margin software business.

China cracks down on unauthorised ChatGPT access

The Cyberspace Administration of China (CAC), China’s internet regulator, has publicly named agents facilitating local ChatGPT access. The latest crackdown comes against the backdrop of OpenAI’s decision to restrict access to its API in ‘unsupported countries and territories’, including mainland China, Hong Kong, and Macau.

Alongside the CAC, other local authorities have penalised several website operators this year for providing unauthorised access to generative AI services such as ChatGPT. These measures reflect the CAC’s commitment to enforcing China’s AI regulations, which mandate rigorous screening and registration of all AI services before they can be made publicly available. Even so, some developers and businesses have sidestepped the rules by using virtual private networks.

Why does this matter?

Despite Beijing’s ambition to lead the global AI race, it strictly requires GenAI providers to uphold core socialist values and avoid generating content that threatens national security or the socialist system. As of January, about 117 GenAI products had been registered with the CAC, and 14 large language models and enterprise applications had received formal approval for commercial use.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards and prevent significant harm. He called on the House, where a companion bill awaits consideration, to pass the legislation. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

Google’s Vertex AI will use Mistral AI’s Codestral

Google Cloud announced on Wednesday that its Vertex AI platform will offer Mistral AI’s Codestral model.

“Today, we’re announcing that Google Cloud is the first hyperscaler to introduce Codestral – Mistral AI’s first open-weight generative AI model explicitly designed for code generation tasks – as a fully managed service,” the company said.

Mistral AI is a Paris-based startup founded in 2023 by former Google DeepMind and Meta AI scientists. The partnership underscores the rapid rise of Mistral AI, which many analysts consider the European alternative to Microsoft-backed OpenAI.

Microsoft previews Bing generative search in answer to Google’s AI Overviews

Microsoft has previewed Bing generative search, its answer to Google’s AI-powered search experiences. Currently available to only a small percentage of users, the feature aggregates information from around the web and generates a summary in response to search queries.

Bing generative search displays information about the query alongside links and sources showing where those details came from. As with Google’s similar AI Overviews feature, users can dismiss the AI-generated summary and return to traditional search results on the same page.

These AI-generated overviews have already raised concern, especially among publishers, as they threaten to cannibalise traffic to the sites from which they source their information. One study found that AI Overviews could negatively affect about 25% of publisher traffic because article links are de-emphasised.

Microsoft insists that it’s ‘maintaining the number of clicks to websites’ and ‘look[ing] closely at how generative search impacts traffic to publishers.’ According to Kyle Wiggers, senior reporter at TechCrunch, the company offered no statistics to back this commitment, alluding only to ‘early data’ that it is keeping private for now.

China’s new video-generating AI faces limitations due to political censorship

A new AI video-generating model, Kling, developed by Beijing-based Kuaishou, is now widely available, albeit with significant limitations. Initially offered via a waitlist to users with Chinese phone numbers, Kling can now be accessed by anyone who provides an email address. The model generates five-second, 720p videos from user prompts, simulating physics such as rustling leaves and flowing water.

However, Kling censors politically sensitive topics. Prompts such as ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ return error messages. Because the censorship operates at the prompt level, videos touching on these topics can still be generated as long as the prompts do not mention them explicitly.

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.

UK and India forge new tech security partnership

Britain has initiated a new technology security partnership with India, aiming to boost economic growth and collaboration in telecom security while fostering investment in emerging technologies. The agreement will enhance cooperation on critical technologies, including semiconductors, quantum computing, and AI.

British Foreign Secretary David Lammy emphasised that this partnership would address future AI and critical minerals challenges, promoting mutual growth, innovation, job creation, and investment. Lammy made these remarks during his visit to India, where he met with Prime Minister Narendra Modi and India’s Minister for External Affairs.

Additionally, both nations have committed to closer collaboration on tackling climate change. That includes mobilising finance and advancing partnerships in offshore wind energy and green hydrogen.

Microsoft expands AI infrastructure with Lumen Technologies

Microsoft has announced a partnership with Lumen Technologies to expand its capacity for AI workloads using Lumen’s network equipment. The tech giant, which has faced challenges due to data center infrastructure shortages, aims to meet the growing demand for AI services at its data centers.

In April, Microsoft revealed that the shortage of necessary infrastructure was limiting its ability to fully leverage the boom in AI technology. The company, which has invested heavily in OpenAI and its ChatGPT technology, continues to pour billions into cloud infrastructure to stay ahead of competitors like Google and Amazon.

As part of the deal, Lumen Technologies will switch to Microsoft’s Azure cloud services to reduce costs. The transition is expected to improve Lumen’s cash flow by over $20 million in the next year, aiding the company’s efforts to restructure its debt and achieve financial stability.

Why does this matter?

The collaboration comes as Microsoft also makes strides in AI development with projects like VALL-E 2, which achieves human-like speech, and its commitment to expanding AI in education in Hong Kong. These initiatives highlight Microsoft’s ongoing efforts to maintain its leadership in the rapidly evolving AI landscape.

EU launches RoboSAPIENS project for adaptive industrial robots

A consortium of universities, technology accelerators, and private research labs, funded by the EU’s Horizon Europe program, has launched RoboSAPIENS, a project aimed at enhancing the adaptability and trustworthiness of industrial robots. According to the International Federation of Robotics (IFR), industrial robot installations in Europe increased by 24% in 2021, reaching 84,302 units. The new project seeks to ensure these robots can efficiently adapt to changing environments while maintaining safe collaboration with humans.

The RoboSAPIENS consortium aims to advance robotic self-adaptation, empowering robots to dynamically respond to unforeseen changes in system structure or environment. The initiative focuses on developing control software for open-ended self-adaptation, improving safety engineering techniques, and utilising deep learning to reduce task uncertainty. This approach is designed to ensure that robots can reliably and reproducibly adapt to new tasks without the need for reprogramming.

Project coordinator Peter Gorm Larsen emphasised the importance of safety and trustworthiness in industrial robotics as Europe advances its capabilities. The project will build on the Monitor-Analyze-Plan-Execute-Knowledge (MAPE-K) framework to include adaptive controllers, incorporating deep learning and digital twin simulation techniques. RoboSAPIENS is currently conducting industrial use studies with manufacturing and logistics companies to test its adaptive controllers in real-world scenarios.

Why does this matter?

The launch of RoboSAPIENS comes amid broader efforts by the European Commission to promote human-centric AI, as seen with the introduction of the InTouchAI.eu initiative. At the same time, AI experts express concerns about granting legal status to robots, highlighting the need for careful consideration of AI’s role in society. RoboSAPIENS aims to strike a balance between innovation and safety, ensuring that industrial robots can adapt effectively while maintaining reliable and trustworthy operations.