Publishers set to earn from Comet Plus, Perplexity’s new initiative

Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.

Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon founder Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.

The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.

The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Netflix limits AI use in productions with new rules

Netflix has issued detailed guidance for production companies on the approved use of generative AI. The guidelines allow AI tools for early ideation tasks such as moodboards or reference images, but stricter oversight applies beyond that stage.

The company outlined five guiding principles. These include ensuring generated content does not replicate copyrighted works, maintaining the security of inputs, avoiding AI use in final deliverables, and prohibiting the storage or reuse of production data by AI tools.

Enterprises or vendors working on Netflix content must pass the platform’s AI compliance checks at every stage.

Netflix has already used AI to reduce VFX costs on projects like The Eternaut, but has moved to formalise boundaries around how and when the technology is applied.

Malicious apps on Google Play infected 19 million users with banking trojan

Security researchers from Zscaler’s ThreatLabz team uncovered 77 malicious Android applications on the Google Play Store, collectively downloaded over 19 million times, that distributed the Anatsa banking trojan, TeaBot, and other malware families.

Anatsa, active since at least 2020, has evolved to target over 831 banking, fintech and cryptocurrency apps globally, including platforms in Germany and South Korea. These campaigns now use direct payload installation with encrypted runtime strings and device checks to evade detection.

Posing as decoy tools, often document readers, the apps silently downloaded malicious code after installation. The trojan automatically gained accessibility permissions to display overlays, capture credentials, log keystrokes, and intercept messages. Additional malware, including Joker, its variant Harly, and adware, was also detected.

Following disclosure, Google removed the identified apps from the Play Store. Users are advised to enable Google Play Protect, review app permissions carefully, limit downloads to trusted developers, and consider using antivirus tools to stay protected.

Travellers claim ChatGPT helps cut flight costs by hundreds of pounds

ChatGPT is increasingly used as a travel assistant, with some travellers claiming it can save hundreds of pounds on flights. Finance influencer Casper Opala shares cost-saving tips online and said the AI tool helped him secure a flight for £70 that initially cost more than £700.

Opala shared a series of prompts that allow ChatGPT to identify hidden routes, budget airlines not listed on major platforms, and potential savings through alternative airports or separate bookings. He also suggested using the tool to monitor prices for several days or compare one-way fares with return tickets.

While many money-saving tricks have existed for years, ChatGPT condenses the process, collecting results in seconds. Opala says this efficiency is a strong starting point for cheaper travel deals.

Experts, however, warn that ChatGPT is not connected to live flight booking systems. TravelBook’s Laura Pomer noted that the AI can sometimes present inaccurate or outdated fares, meaning users should always verify results before booking.

Vietnam accelerates modernization of foreign affairs through technology and AI

The Ministry of Foreign Affairs of Vietnam is spearheading an extensive digital transformation initiative in line with the Politburo’s Resolution No. 57-NQ/TW, issued in December 2024. This resolution highlights the necessity of advancements in science, technology, and national digital transformation.

Under the guidance of Deputy Prime Minister and Minister Bui Thanh Son, the Ministry is committed to modernising its operations and improving efficiency, reflecting Vietnam’s broader digital evolution strategy across all sectors.

Key implementations of this transformation include the creation of three major digital platforms: an electronic information portal providing access to foreign policies and online public services, an online document management system for internal digitalisation, and an integrated data-sharing platform for connectivity and multi-dimensional data exchange.

The Ministry has digitised 100% of its administrative procedures, linking them to a national-level system, showcasing a significant stride towards administrative reform and efficiency. Additionally, the Ministry has fully adopted social media channels, including Facebook and Twitter, indicating its efforts to enhance foreign information dissemination and public engagement.

A central component of this initiative is the ‘Digital Literacy for All’ movement, inspired by President Ho Chi Minh’s historic ‘Popular Education’ campaign. This movement focuses on equipping diplomatic personnel with essential digital skills, transforming them into proficient ‘digital civil servants’ and ‘digital ambassadors.’ The Ministry aims to enhance its diplomatic functions in today’s globally connected environment by advancing its ability to navigate and utilise modern technologies.

The Ministry plans to develop its digital infrastructure further, strengthen data management, and integrate AI for strategic planning and predictive analysis.

Establishing a digital data warehouse for foreign information and enhancing human resources by nurturing technology experts within the diplomatic sector are also on the agenda. These actions reflect a strong commitment to fostering a professional and globally adept diplomatic service, poised to safeguard national interests and thrive in the digital age.

YouTube under fire for AI video edits without creator consent

Anger is growing after YouTube was found to be secretly altering some uploaded videos using machine learning. The company admitted it had been experimenting with automated edits that sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although these changes were not produced by generative tools such as ChatGPT or Gemini, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three AI chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.
