Google has started rolling out its AI-powered Scam Detection feature for the Pixel Phone app, initially available only in the beta version for US users. First announced during Google I/O 2024, the feature uses onboard AI to help users identify potential scam calls. Currently, the update is accessible to Pixel 6 and newer models, with plans to expand to other Android devices in the future.
Scam Detection analyses the audio from incoming calls directly on the device, issuing alerts if suspicious activity is detected. For example, if a caller claims to be from a bank and pressures the recipient to transfer funds urgently, the app provides visual and audio warnings. The processing occurs locally on the phone, utilising the Gemini Nano model on the Pixel 9 or comparable on-device machine learning models on earlier Pixel versions, ensuring no data is sent to the cloud.
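To illustrate the shape of this kind of on-device pipeline, here is a minimal Python sketch. Everything in it is hypothetical: the phrase list, function names, and keyword heuristic stand in for Google's unpublished models, and the point is only that the transcript is scored and discarded locally, with nothing sent off the device.

```python
# Hypothetical sketch of an on-device scam-call alert flow.
# The phrase list and scoring are illustrative stand-ins for a
# real local ML model; none of this reflects Google's actual code.

SCAM_PHRASES = [
    "transfer funds urgently",
    "verify your account immediately",
    "gift cards",
]

def classify_transcript(transcript: str) -> float:
    """Return a crude scam score in [0, 1] from phrase matches.

    A production system would run an on-device model (e.g. a small
    LLM or audio classifier) rather than keyword matching.
    """
    text = transcript.lower()
    hits = sum(phrase in text for phrase in SCAM_PHRASES)
    return min(1.0, hits / 2)

def maybe_alert(transcript: str, threshold: float = 0.5) -> bool:
    """Decide locally whether to surface a warning.

    The transcript is processed in place and never uploaded,
    mirroring the privacy model described above.
    """
    return classify_transcript(transcript) >= threshold

print(maybe_alert("This is your bank, please transfer funds urgently."))
print(maybe_alert("Hi, just calling about dinner plans."))
```

The design choice worth noting is that the alert decision is a pure local function of the call audio, which is what lets the feature claim that no data reaches the cloud.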
This feature is part of Google’s ongoing efforts to tackle digital fraud, as the rise in generative AI has made scam calls more sophisticated. It joins the suite of security tools on the Pixel Phone app, including Call Screen, which uses a bot to screen calls before involving the user. Google’s on-device approach aims to keep users’ information secure while enhancing their safety.
Currently, Scam Detection requires manual activation through the app’s settings, as it isn’t enabled by default. Google is seeking feedback from early adopters to refine the feature further before a wider release to other Android devices.
The Guardian has announced its departure from X, citing concerns over harmful content, such as racist and conspiracy-based posts. The decision marks a significant retreat for one of the UK’s prominent news outlets from the social media platform, which Elon Musk acquired in 2022. According to an editorial, the Guardian stated that the downsides of remaining on X now outweigh any potential benefits.
With over 10.7 million followers, the Guardian’s exit reflects rising concerns about X’s moderation policies. Critics argue that Musk’s relaxed approach has fostered an environment that tolerates misinformation and hate speech. Musk responded to the Guardian’s decision by dismissing the publication as “irrelevant” on X.
The Guardian’s move comes as other high-profile users, including former CNN anchor Don Lemon, also announce plans to leave X. Lemon expressed disappointment in the platform, saying it no longer supports meaningful debate. The UK has seen an increase in concerns about X’s impact, with British police, charities, and public health organisations also reconsidering their use of the platform.
The British government, however, still maintains a presence on X, though it refrains from paid promotions. Instead, it directs advertising efforts towards platforms like Instagram and Facebook. Observers note that the Guardian’s exit may prompt other media outlets to evaluate their stance on social media engagement.
A report by the Alan Turing Institute warns that AI has fuelled harmful narratives and spread disinformation during a major year for elections. Conducted by the Institute’s Centre for Emerging Technology and Security (CETaS), the study explores how generative AI tools, including deepfake technology and bot farms, have been used to amplify conspiracy theories and sway public opinion. While no concrete evidence links AI directly to changes in election outcomes, the study points to growing concerns over AI’s influence on voter trust.
Researchers observed AI-driven bot farms that mimicked genuine voters and used fake celebrity endorsements to spread conspiracies during key elections. These tactics, they argue, have eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. Lead author Sam Stockwell noted that while evidence remains limited on AI changing electoral results, the urgent need for transparency and better access to social media data is clear.
The Institute has outlined steps to counteract AI’s potential threats to democracy, suggesting stricter deterrents against disinformation, enhanced detection of deepfake content, improved media guidance, and stronger societal defences against misinformation. These recommendations aim to create a safer information environment as AI technology continues to advance.
In response to AI’s growing presence, major AI companies, including those behind ChatGPT and Meta AI, have tightened security to prevent misuse. However, some startups, like Haiper, still lag behind, with fewer safeguards in place, leading to concerns over potentially harmful AI content reaching the public.
Meta Platforms announced it will soon give Instagram and Facebook users in Europe the option to receive less personalised ads. The decision comes in response to pressure from EU regulators and aims to address concerns about data privacy and targeted advertising. Instead of highly tailored ads, users will be shown adverts based on general factors like age, gender, and location, as well as the content they view in a given session.
The move aligns with the European Union’s push to regulate major tech companies, supported by legislation like the Digital Markets Act (DMA), which was introduced earlier this year to promote fair competition and enhance user privacy. Additionally, Meta will offer a 40% price reduction on ad-free subscriptions for European customers.
The changes follow a recent ruling by Europe’s highest court, which supported privacy activist Max Schrems and ruled that Meta must limit the use of personal data from Facebook for advertising purposes. Meanwhile, the European Union is set to fine Apple under these new antitrust rules, marking a significant step in the enforcement of stricter regulations for Big Tech.
The incoming European Commissioner for Tech Sovereignty, Security, and Democracy, Henna Virkkunen, expressed dissatisfaction with the limited action taken by EU member states to exclude high-risk telecom suppliers, such as China’s Huawei and ZTE, from critical infrastructure. During her confirmation hearing in the European Parliament, Virkkunen noted that although the European Commission adopted 5G security measures in 2020, fewer than half of the EU member states have implemented restrictions on these suppliers. She indicated that this issue will be addressed in the planned revision of the Cyber Security Act next year and stressed the need for more serious action from national governments.
Virkkunen also pointed out that while the EU had adopted the 5G Cybersecurity Toolbox to protect telecom networks, only 11 of the 27 member states have fully implemented measures, including bans and restrictions on high-risk vendors. In addition to her efforts to strengthen cybersecurity, Virkkunen plans to propose a Digital Networks Act in 2025 to overhaul telecom regulations and boost investment and connectivity. On the topic of US Big Tech compliance with EU rules, she reaffirmed the importance of cooperation but emphasised that all companies must adhere to EU regulations, including those set out in the Digital Services Act.
The UK government is considering fines of up to £10,000 for social media executives who fail to remove illegal knife advertisements from their platforms. This proposal is part of Labour’s effort to halve knife crime in the next decade by addressing the ‘unacceptable use’ of online spaces to market illegal weapons and promote violence.
Under the plans, police would have the power to issue warnings to online companies and require the removal of specific content, with further penalties imposed on senior officials if action is not taken swiftly. The government also aims to tighten laws around the sale of ninja swords, following the tragic case of 16-year-old Ronan Kanda, who was killed with a weapon bought online.
Home Secretary Yvette Cooper stated that these new sanctions are part of a broader mission to reduce knife crime, which has devastated many communities. The proposals, backed by a coalition including actor Idris Elba, aim to ensure that online marketplaces take greater responsibility in preventing the sale of dangerous weapons.
Instagram may soon let users create AI-generated profile pictures directly within the app, according to new findings by developer Alessandro Paluzzi. A screenshot Paluzzi shared on Threads suggests users will see an option to ‘Create an AI profile picture’ while updating their profile image. This addition hints at Instagram’s push toward integrating AI more closely with user experiences.
Meta appears to be exploring similar AI-powered features across its platforms, including WhatsApp and Facebook. The company has made strides with its Llama AI models, designed to generate creative images from text prompts. Meta AI’s capabilities are already visible on WhatsApp, where a test feature has allowed some users to create images from scratch, though its rollout has been slow.
For now, Instagram users are limited to using avatars generated from actual images. An AI-generated option would offer a more creative and flexible way to personalise their profiles, adding a fresh layer of expression through custom images generated by prompts.
Meta has not confirmed any launch date for this feature on Instagram or other apps. While the latest Instagram beta does not yet include it, more updates are expected, and users could soon find themselves with a new tool for designing unique profile pictures.
Ecosia, the Berlin-based eco-conscious search engine, and Qwant, France’s privacy-focused search platform, are teaming up to build a European search index. The joint venture, named European Search Perspective (EUP), seeks to reduce reliance on tech giants like Google and Microsoft, whose search APIs have become increasingly costly. This collaboration is set to foster innovation, particularly in integrating generative AI technologies into search experiences.
Both companies currently rely on Big Tech for their search backends but are determined to develop a sustainable alternative that aligns with their unique values. EUP’s index, expected to launch in early 2025, will serve traffic in France before expanding to Germany and other European languages. The partnership will enable Qwant and Ecosia to retain their distinct user experiences while benefiting from shared resources and investment.
Privacy and data sovereignty are at the heart of the initiative. Unlike major competitors, EUP’s index won’t personalise results based on user data, maintaining a privacy-first approach. This move aligns with Europe’s growing emphasis on strategic autonomy in technology, especially as AI advances create both opportunities and risks. As the first step toward a more independent tech ecosystem, EUP represents a significant shift in Europe’s search market, challenging the dominance of US tech giants and laying the groundwork for a more diverse, innovative digital future.
India’s financial crime agency is intensifying its probe into Flipkart and Amazon over alleged violations of foreign investment laws, with plans to summon executives from both companies after recent raids on their sellers. The Enforcement Directorate (ED) seized documents in last week’s raids, which a senior government source claims substantiate violations of India’s foreign investment laws. Under these laws, foreign e-commerce companies are restricted to operating as marketplaces without holding inventory, though the ED alleges that both Amazon and Flipkart have been exerting control over certain sellers.
This investigation adds to the growing regulatory scrutiny faced by the two e-commerce giants, which hold significant market shares in India’s $70 billion e-commerce sector. Previous findings from India’s antitrust authority suggested that both companies favour select sellers, allowing them to bypass marketplace-only regulations. One prominent Amazon seller, Appario, was reportedly raided and found to receive exclusive support from Amazon, including reduced fees and advanced retail tools.
The ED’s latest actions follow a pattern of increased regulatory focus on large e-commerce and delivery platforms, with recent antitrust findings indicating similar preferential treatment by food delivery services Zomato and Swiggy. As India’s retail landscape continues to expand, regulatory bodies are pushing for stricter compliance to ensure fair competition and protect smaller businesses.
Australian Prime Minister Anthony Albanese announced a groundbreaking proposal on Thursday to implement a social media ban for children under 16. The proposed legislation would require social media platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms that fail to comply would face substantial fines, while users or their parents would not face penalties for violating the law. Albanese emphasised that this initiative aims to protect children from the harmful effects of social media, stressing that parents and families could count on the government’s support.
The bill would not allow exemptions for children whose parents consent to their use of social media, and it would not ‘grandfather’ existing users who are underage. Social media platforms such as Instagram, TikTok, Facebook, X, and YouTube would be directly affected by the legislation. Minister for Communications, Michelle Rowland, mentioned that these platforms had been consulted on how the law could be practically enforced, but no exemptions would be granted.
While some experts have voiced concerns about the blanket nature of the proposed ban, suggesting that it might not be the most effective solution, social media companies, including Meta (the parent company of Facebook and Instagram), have expressed support for age verification and parental consent tools. Last month, over 140 international experts signed an open letter urging the government to reconsider the approach. This debate echoes similar discussions in the US, where there have been efforts to restrict children’s access to social media for mental health reasons.