Google is set to transform its Search engine into a more advanced AI-driven assistant, CEO Sundar Pichai revealed during an earnings call. The company’s ongoing AI evolution began with the controversial “AI Overviews” feature and is now expanding to include new capabilities developed by its research division, DeepMind. Google’s goal is to allow Search to browse the web, analyse information, and deliver direct answers, reducing reliance on traditional search results.
Among the upcoming innovations is Project Astra, a multimodal AI system capable of interpreting live video and responding to real-time questions. Another key development is Gemini Deep Research, an AI agent designed to generate in-depth reports, effectively automating research tasks that users previously conducted themselves. Additionally, Project Mariner could enable AI to interact with websites on behalf of users, potentially reshaping how people navigate the internet.
The shift towards AI-powered Search has sparked debate, particularly among businesses that depend on Google’s traffic and advertising. Google’s first attempt at AI integration resulted in embarrassing errors, such as incorrect and bizarre search responses. Despite initial setbacks, the company is pushing ahead, believing AI-enhanced Search will redefine how people find and interact with information online.
ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly realistic deepfake videos from just a single image and an audio clip. Unlike previous deepfake technology, which often displayed telltale glitches, OmniHuman-1 produces remarkably smooth and lifelike footage. The AI can also manipulate body movements, allowing for extensive editing of existing videos.
Trained on 19,000 hours of video content from undisclosed sources, the system’s potential applications range from entertainment to more troubling uses, such as misinformation. The rise of deepfake content has already led to cases of political and financial deception worldwide, from election interference to multimillion-dollar fraud schemes. Experts warn that the technology’s increasing sophistication makes it harder to detect AI-generated fakes.
Despite calls for regulation, deepfake laws remain limited. While some governments have introduced measures to combat AI-generated disinformation, enforcement remains a challenge. With deepfake content spreading at an alarming rate, many fear that systems like OmniHuman-1 could further blur the line between reality and fabrication.
Snap has introduced an AI-powered text-to-image model designed to run efficiently on mobile devices, generating high-resolution images in just 1.4 seconds on an iPhone 16 Pro Max. Unlike cloud-based systems, this model operates entirely on the device, reducing costs while maintaining impressive visual quality. The company plans to integrate it into Snapchat’s AI Snaps and Bitmoji features in the coming months.
By developing its own AI model, Snap aims to provide users with more advanced creative tools while lowering operational expenses. The move aligns with a broader trend among tech companies investing heavily in AI to enhance their platforms. Previously, Snap relied on external providers like OpenAI and Google, but its in-house model gives it more control over future innovations.
Snapchat’s AI investment highlights the growing competition in mobile AI technology, with major players racing to deliver faster and more efficient features. As the company prepares to roll out these new capabilities, it remains to be seen how they will shape user experiences and engagement on the platform.
India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.
This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.
Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.
Next week, Paris will host the AI Action Summit, where representatives from nearly 100 nations, including the US and China, will gather to discuss the future of AI. With the backing of both France and India, the summit aims to address the safe deployment of AI, focusing on areas where France has a competitive edge, such as open-source systems and clean energy for powering data centres. The summit will also look at AI’s impact on labour markets and the promotion of national sovereignty in the increasingly global AI landscape.
Key industry figures, including top executives from Alphabet and Microsoft, are expected to attend. Discussions will involve a range of topics, including a potential non-binding communiqué that could reflect a global consensus on AI principles. However, it remains uncertain whether the US will align fully with other countries, given the Trump administration’s policies and tensions over issues like AI chip exports to China.
Unlike previous AI summits, which focused on safety regulations, the Paris event will not aim to create new rules. Instead, the emphasis will be on how to ensure the benefits of AI reach developing nations, particularly through affordable AI models. In addition, France plans to showcase its clean energy capabilities, leveraging its nuclear power sector to address the growing energy demands of AI technologies, with some commitments expected from businesses and philanthropies to support public-interest AI projects globally.
Belgium’s new government, led by Prime Minister Bart De Wever, has announced plans to utilise AI tools in law enforcement, including facial recognition technology for detecting criminals. The initiative will be overseen by Vanessa Matz, the country’s first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU’s AI Act, which prohibits real-time facial recognition in public spaces but allows narrow exceptions for law enforcement under strict safeguards.
Alongside AI applications, the Belgian government also aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. The government’s approach to digitalisation also includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.
The government has outlined a significant digital strategy that seeks to balance technological advancements with strong privacy and legal protections. As part of this, they are working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.
The new OpenBusiness information system launched on Monday, replacing the previous NotifyBusiness system, which is now accessible only in a read-only format. The Greek Ministry of Development highlighted that OpenBusiness streamlines business procedures, significantly cutting costs, installation time, and startup delays for both private and public sector enterprises.
Minister Takis Theodorikakos praised the system, stating that it simplifies processes, reduces costs and time for starting economic activities, and enhances public administration efficiency.
OpenBusiness supports the licensing of 57 key economic activities and covers around 2,500 codes, offering businesses a more modern and accessible platform for their operations. It is designed to reduce bureaucracy, improve transparency, and foster a better business environment.
Bengaluru-based startup Presentations.ai has raised $3 million in a seed round led by Accel to enhance its AI-powered platform for creating business presentations. The company, which launched in 2019, saw rapid growth after the emergence of ChatGPT, gaining over a million users within three months of its beta release. Now, with over 5 million users worldwide, it aims to become the go-to AI tool for generating high-quality presentation decks.
The Indian platform uses advanced language models to streamline the presentation-making process, offering features like automated slide design, brand-aligned templates, and real-time collaboration. It also integrates text-to-image AI models, allowing users to generate custom visuals effortlessly. With a freemium model introduced in 2024, the startup has attracted tens of thousands of paying users, further solidifying its market presence.
With backing from key investors, including entrepreneurs from Paytm, CRED, and Freshworks, Presentations.ai is now working on an AI-powered assistant that can generate slides within any application. The company is also expanding its enterprise sales team to target businesses looking for more efficient ways to create presentations.
Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems due to security concerns. The Frontier AI Framework categorises AI models into ‘high-risk’ and ‘critical-risk’ groups, with the latter referring to those capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as a critical risk, Meta will suspend its development until safety measures can be implemented.
The company’s evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta’s belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.
By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI’s potential misuse, especially as open-source models gain wider adoption.
With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.
CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including in Germany. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties, such as the AfD, by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies like AI to detect disinformation, and greater public awareness and education.
The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.
Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.