Google’s AlphaProof and AlphaGeometry 2 set new benchmarks in AI math-solving

Alphabet’s Google has revealed two innovative AI systems, AlphaProof and AlphaGeometry 2, which demonstrate significant advancements in solving complex mathematical problems. These systems tackled abstract math more effectively than previous AI models, showcasing enhanced reasoning capabilities.

DeepMind, Google’s AI unit, reported that these models managed to solve four out of six questions at the 2024 International Mathematical Olympiad. AlphaProof, which integrates the Gemini language model with the AlphaZero system, solved three problems, including the most challenging one, while AlphaGeometry 2 solved another.

These achievements mark the best performance by an AI system in the competition to date, with some problems solved in minutes and others taking up to three days. Meanwhile, Microsoft-backed OpenAI is developing a similar project known as ‘Strawberry,’ raising concerns among its staff about the technology’s potential impact on humanity.

OpenAI challenges Google with SearchGPT

The introduction of SearchGPT by OpenAI, an AI-powered search engine with real-time internet access, challenges Google’s dominance in the search market. Announced on Thursday, the launch places OpenAI in competition not only with Google but also with its major backer, Microsoft, and emerging AI search tools like Perplexity. The announcement caused Alphabet’s shares to drop by 3%.

SearchGPT is currently in its prototype stage, with a limited number of users and publishers testing it. The tool aims to provide summarised search results with source links, allowing users to ask follow-up questions for more contextual responses. OpenAI plans to integrate SearchGPT’s best features into ChatGPT in the future. Publishers will have access to tools for managing their content’s appearance in search results.

Google, which holds a 91.1% market share in search engines, may feel the pressure to innovate as competitors like OpenAI and Perplexity enter the arena. Perplexity is already facing legal challenges from publishers, highlighting the difficulties newer AI-powered search providers might encounter.

SearchGPT marks a closer collaboration between OpenAI and publishers, with News Corp and The Atlantic as initial partners. This follows OpenAI’s content licensing agreements with major media organisations. Google did not comment on the potential impact of SearchGPT on its business.

AI voice clone enables lawmaker to speak in US Congress

US Democratic Rep. Jennifer Wexton of Virginia made history by becoming the first lawmaker to use an AI-generated model of her voice to speak on the House floor. Owing to progressive supranuclear palsy (PSP), Wexton has lost the full use of her voice and can no longer move around as she once did. She announced in September that she would not seek reelection, citing her deteriorating health.

On Thursday, Wexton addressed the US House of Representatives using the AI model, explaining her reliance on a walker and anticipating the need for a wheelchair before her term ends. Hearing the AI rendition of her voice for the first time, Wexton described it as ‘the most beautiful thing I had ever heard,’ bringing her to tears.

Wexton’s diagnosis has renewed her determination to use her platform to help others. Her historic use of augmentative and alternative communication devices on the House floor highlights her commitment to continue serving despite her health challenges.

OpenAI CEO emphasises democratic control in the future of AI

Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI benefits widely, and an authoritarian one, led by nations like Russia and China, aiming to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advancements in AI technology and the high stakes involved.

Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.

He calls for close collaboration between the US government and the private sector to implement these measures swiftly, believing that proactive efforts today in security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates for a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions to maximise the technology’s benefits while minimising risks.

AI demand lifts IBM earnings

IBM has reported stronger-than-expected revenue for the second quarter and raised its annual growth forecast for its software business. This growth is driven by increased client expenditure on AI technology, particularly through the expansion of its Watsonx platform. Watsonx supports both AI deployment and open-source AI models. Shares rose by about 3% in extended trading, adding to a 12% gain this year from the AI sector rally.

Software revenue for the quarter grew by 7% to $6.74 billion. The 113-year-old company now expects this segment to achieve high-single-digit growth in 2024, exceeding its previous forecast. The AI book of business, which includes bookings and sales across various products, has grown to $2 billion, with $1 billion added in the second quarter alone.

In contrast, IBM has lowered its annual consulting revenue expectations and now forecasts low-single-digit growth instead of the previously anticipated 6%-8% range. Consulting revenue fell by 1% to $5.18 billion due to reduced client expenditure on short-term projects amid higher interest rates and inflation.

Overall revenue for the second quarter reached $15.77 billion, surpassing analysts’ estimates of $15.62 billion. Adjusted profit was $2.43 per share, beating the expected $2.20, driven by strong sales in the high-margin software business.
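
As a quick arithmetic check of the size of those beats (a sketch using only the figures quoted above):

```python
# Quick check of IBM's Q2 beat against analyst estimates (figures from the text).
revenue, revenue_est = 15.77, 15.62  # billions of dollars
eps, eps_est = 2.43, 2.20            # adjusted profit per share, dollars

revenue_beat_pct = (revenue - revenue_est) / revenue_est * 100
eps_beat_pct = (eps - eps_est) / eps_est * 100

print(f"Revenue beat: {revenue_beat_pct:.1f}%")  # ≈ 1.0%
print(f"EPS beat: {eps_beat_pct:.1f}%")          # ≈ 10.5%
```

The earnings beat (about 10%) is far larger than the revenue beat (about 1%), consistent with the margin-driven story: the upside came from the high-margin software business rather than from top-line growth.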

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages and up to $250,000 if linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.

Google’s Vertex AI will use Mistral AI’s Codestral

Google Cloud announced on Wednesday that its Vertex AI platform will offer Mistral AI’s Codestral model as a managed service.

‘Today, we’re announcing that Google Cloud is the first hyperscaler to introduce Codestral – Mistral AI’s first open-weight generative AI model explicitly designed for code generation tasks – as a fully managed service,’ the company emphasised.

Mistral AI is a Paris-based startup founded in 2023 by former Google DeepMind and Meta AI scientists. The partnership underscores the rapid growth of Mistral AI, which many analysts consider the European alternative to Microsoft-backed OpenAI.

Bing previews its generative search in answer to Google’s AI Overviews

Microsoft has previewed Bing generative search, its answer to Google’s AI-powered search experiences. Currently available to only a small percentage of users, it aggregates information from around the web and generates a summary in response to search queries.

Bing generative search shows information relevant to the query alongside links and sources indicating where those details came from. As with Google’s similar AI Overviews feature, there is an option to dismiss AI-generated summaries in favour of traditional search results on the same page.

These AI-generated overview features have already generated concern, especially among publishers, as they threaten to cannibalise traffic to the sites from which they source their information. A study found that AI Overviews could negatively affect about 25% of publisher traffic due to the de-emphasis on article links.

Microsoft insists that it’s ‘maintaining the number of clicks to websites’ and ‘look[ing] closely at how generative search impacts traffic to publishers.’ According to Kyle Wiggers, senior reporter at TechCrunch, the company had no stats to back this commitment, alluding only to ‘early data’ that it’s choosing to keep private for the time being.

China’s new video-generating AI faces limitations due to political censorship

A new AI video-generating model, Kling, developed by Beijing-based Kuaishou, is now widely available but with significant limitations. Initially restricted to a waitlist for users with Chinese phone numbers, Kling can now be accessed by anyone who provides an email address. The model generates five-second videos from user prompts at 720p resolution, simulating physical effects such as rustling leaves and flowing water.

However, Kling censors politically sensitive topics. Prompts such as ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ return error messages. The filtering appears to operate only at the prompt level: videos touching on these topics can still be generated, as long as the prompts do not mention them explicitly.
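
Why implied topics slip through can be illustrated with a minimal sketch of prompt-level keyword filtering. The blocklist and function below are hypothetical, for illustration only; Kuaishou’s actual filtering mechanism is not public.

```python
# Hypothetical illustration of prompt-level keyword filtering.
# A filter that only inspects the prompt text cannot catch topics
# that are implied rather than named.
BLOCKLIST = {"tiananmen", "xi jinping", "democracy in china"}

def is_blocked(prompt: str) -> bool:
    """Reject prompts containing a blocklisted phrase (case-insensitive)."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(is_blocked("Tiananmen Square protests"))                 # True: explicit mention
print(is_blocked("a large public square in Beijing at dawn"))  # False: implied topic passes
```

Because the check never looks at the generated video itself, only at the words in the prompt, paraphrased or indirect requests pass straight through, which matches the behaviour described above.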

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.