Brave combines AI chat with search features

Brave Search has unveiled an AI-powered chat feature that lets users ask follow-up questions to refine their initial search queries. This addition builds on Brave’s earlier ‘Answer with AI’ tool, which generates quick summaries for search queries. Now, users can engage further with a chat bar that appears beneath the summary, enabling deeper exploration without starting a new search.

For instance, a search for ‘Christopher Nolan films’ will provide an AI-generated list of his notable works. Users can then ask a follow-up question, such as “Which actors appear most often in his films?” The AI will respond with relevant information while citing its sources. Powered by a mix of open and proprietary large language models, the feature integrates search and chat in a single interface.

Unlike Google, which offers AI summaries but lacks a follow-up chat option, Brave is bridging the gap between search engines and chatbots. Brave also emphasises privacy, ensuring that queries are not stored or used to profile users. Handling more than 36 million searches and generating 11 million AI responses each day, Brave is advancing its commitment to private, user-friendly innovation.

US backs GlobalFoundries’ semiconductor growth with $1.5 billion

The US Commerce Department has awarded GlobalFoundries a $1.5 billion subsidy to expand semiconductor production at its sites in Malta, New York, and in Vermont. The award follows the company’s $13 billion commitment to bolster US manufacturing over the next decade, with a focus on the automotive, AI, and aerospace sectors.

The funding will support enhanced technologies at the Malta facility and plans for a new plant aligned with market demand. New York state has pledged an additional $550 million to support the expansion. Commerce Secretary Gina Raimondo emphasised the urgency of finalising similar agreements before the current administration’s term ends.

GlobalFoundries CEO Thomas Caulfield highlighted the critical role of US-made chips in economic and national security. The subsidy is part of the $52.7 billion Chips and Science programme, which also allocated major awards to TSMC, Samsung, and Intel.

Growing energy demand risks climate goals

The rapid expansion of AI and cloud computing is increasing global electricity demand, raising concerns over the environmental impact. Data centres, primarily in the US, Europe, and Asia, are driving a surge in fossil fuel usage as renewable energy deployment struggles to keep pace. Coal and natural gas are being used to bridge the gap, undermining global decarbonisation targets.

In the US, data centre hubs like Northern Virginia have prompted utilities to extend fossil-fuel plant lifespans and construct new gas facilities. This trend mirrors developments in Poland, Germany, and Malaysia, where coal remains a significant energy source due to insufficient renewable capacity. Critics argue that current measures to offset emissions, such as sourcing clean energy, are not sufficient to counter the overall carbon footprint of the industry.

Efforts to decarbonise the sector include investments in advanced nuclear reactors and renewables. However, such solutions face delays, leaving utilities reliant on natural gas, described by analysts as cost-effective but imperfect. Projections suggest US natural gas demand could rise significantly, exacerbating emissions and hindering the clean-energy transition.

International commitments, like Azerbaijan’s Digitalisation Day initiative at COP29, highlight the urgency of balancing digital growth with sustainability. While data centre operators worldwide aim to adopt greener practices, the slow pace of renewable energy integration risks prolonging reliance on fossil fuels and delaying climate progress.

YouTube challenges TikTok with AI video feature

YouTube Shorts has rolled out a new capability in its Dream Screen feature, enabling users to create AI-generated video backgrounds. Previously limited to image generation, this update harnesses Google DeepMind’s AI video-generation model, Veo, to produce 1080p cinematic-style video clips. Creators can enter text prompts, such as ‘magical forest’ or ‘candy landscape,’ select an animation style, and receive a selection of dynamic video backdrops.

Once a background is chosen, users can film their Shorts with the AI-generated video playing behind them. This feature offers creators unique storytelling opportunities, such as setting videos in imaginative scenes or crafting engaging animated openings. In future updates, YouTube plans to let users generate stand-alone six-second video clips using Dream Screen.

The feature, available in the US, Canada, Australia, and New Zealand, distinguishes YouTube Shorts from TikTok, which currently only offers AI-generated background images. By providing tools for creating custom video backdrops, YouTube aims to cement its position as a leader in short-form video innovation.

KPMG invests $100 million in AI partnership with Google Cloud

KPMG has committed $100 million over the next four years to enhance its enterprise AI services through collaboration with Google Cloud. The investment will focus on developing AI tools, training employees, and leveraging Google’s technology to scale AI solutions for clients.

Steve Chase, KPMG’s vice chair for AI and innovation, highlighted that enterprise demand for AI has surged, with many businesses planning substantial investments in the technology. KPMG’s partnership with Google aligns with a broader strategy to expand AI services across multiple cloud platforms, including a prior $2 billion collaboration with Microsoft.

Google Cloud’s president of revenue, Matt Renner, noted the rapid growth in cloud services, emphasising the synergy between cloud providers and consulting firms as a key driver for future industry expansion.

Massachusetts court rules against student in AI cheating case

A Massachusetts judge upheld disciplinary measures against a high school senior accused of cheating with an AI tool. The Hingham High School student’s parents sought to erase his record and raise his history grade, but the court sided with the school. Officials determined the student violated academic integrity by copying AI-generated text, including fabricated citations.

The student faced penalties including detention and temporary exclusion from the National Honor Society, though he was later readmitted. His parents argued that unclear rules on AI usage led to confusion and claimed the school had violated his constitutional rights. However, the court found the school’s existing plagiarism policy sufficient.

Judge Paul Levenson acknowledged AI’s challenges in education but said the evidence showed misuse. The student and his partner had copied AI-generated content indiscriminately, bypassing proper review. The judge declined to order immediate changes to the student’s record or grade.

The case remains unresolved as the parents plan to pursue further legal action. School representatives praised the decision, describing it as accurate and lawful. The ruling highlights the growing complexities of generative AI in academic settings.

OpenAI explores browser and search market expansion

OpenAI is reportedly considering developing a web browser integrated with its chatbot and is in talks to enhance search features for platforms like Condé Nast, Redfin, and Priceline, according to The Information. These moves could position OpenAI as a competitor to Google in both the browser and search markets, further challenging the tech giant’s dominance.

OpenAI, led by Sam Altman, has already dipped into the search market with SearchGPT and has explored AI-powered collaborations with Samsung, a key Google partner, and Apple for its “Apple Intelligence” features. Meanwhile, Google faces increasing pressure, with the US Department of Justice suggesting it divest its Chrome browser to curb its search monopoly.

Although OpenAI’s browser plans remain in the early stages, the potential competition highlights a shift in the AI landscape, with Google and OpenAI vying to lead the generative AI race. Alphabet shares fell sharply following the report, reflecting market concerns about Google’s ability to maintain its stronghold.

Data deletion hampers OpenAI lawsuit progress

OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.

The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringed material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.

Plaintiffs are now forced to restart the time-intensive search, raising concerns about OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to have been intentional, the publishers’ lawyers argue that OpenAI is best placed to search its own data and verify its use of copyrighted material.

OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers such as the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.

New AI model from Gates-backed Numenta

Numenta, supported by the Gates Foundation, has introduced an open-source AI model designed to cut down on energy and data use compared to existing AI systems. This innovation reflects the company’s unique take on how the brain functions, inspired by co-founder Jeff Hawkins’ expertise in neuroscience. Hawkins, known for creating the Palm Pilot, has channelled his understanding of human cognition into this new AI approach.

Unlike conventional AI systems that require vast data and electricity for training, Numenta’s model mimics the brain’s ability to process information in real time. It can adapt dynamically, like a child learning through exploration. The technology is designed to improve robotics, writing tools, and more, emphasising flexibility and efficiency.

To encourage broader adoption, Numenta has made its technology freely available, following a similar open-source trend seen with tech giants like Meta. However, CEO Subutai Ahmad emphasised the importance of closely monitoring its use, given concerns over potential misuse as the technology evolves.

Irish data authority seeks EU guidance on AI privacy under GDPR

The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.

The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.

The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.