Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.
Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.
Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.
Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.
To complement these features, Facebook has added new metrics showing the number of Top Fans on a Page. These insights help creators measure the impact of their engagement efforts and reward their most dedicated followers.
The tools are now available to eligible creators worldwide.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.
The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.
Parents can also manage individual features such as voice mode, memory and image generation, and set quiet hours during which ChatGPT cannot be accessed.
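The quiet-hours idea is straightforward to illustrate: given a parent-configured window, access is denied inside it. Below is a minimal Python sketch of that logic; the function name and window format are assumptions for illustration, not OpenAI's actual implementation.

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the quiet-hours window.

    Handles windows that cross midnight (e.g. 21:00-07:00),
    which is the typical shape of an overnight restriction.
    """
    if start <= end:
        return start <= now < end
    # Window wraps past midnight: block late evening and early morning.
    return now >= start or now < end

# A 21:00-07:00 window blocks late-night use but allows daytime access.
assert in_quiet_hours(time(23, 30), time(21, 0), time(7, 0)) is True
assert in_quiet_hours(time(12, 0), time(21, 0), time(7, 0)) is False
```

The midnight-wrapping branch is the part real scheduling code most often gets wrong, which is why it is called out explicitly here.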
A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.
To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.
OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.
On 12 September 2025, the European Data Protection Board (EDPB) adopted draft guidelines detailing how online platforms should reconcile requirements under the GDPR and the Digital Services Act (DSA). The draft is now open for public consultation through 31 October.
The guidelines address key areas of tension, including proactive investigations, notice-and-action systems, deceptive design, recommender systems, age safety and transparency in advertising. They emphasise that DSA obligations must be implemented in ways consistent with GDPR principles.
For instance, the guidelines suggest that proactive investigations of illegal content should generally be grounded on ‘legitimate interests’, include safeguards for accuracy, and avoid automated decisions with legal effects.
The guidance also clarifies that the DSA does not override the GDPR. Platforms subject to both must ensure lawful, fair and transparent processing while integrating risk analysis and privacy by design. The draft guidelines include practical examples and cross-references to existing EDPB documents.
Hackers have targeted up to two million Cisco devices using a newly disclosed vulnerability in the company’s networking software. The flaw, tracked as CVE-2025-20352, affects all supported versions of Cisco IOS and IOS XE, which power many routers and switches.
Cisco confirmed that attackers have exploited the weakness in the wild, crashing systems, implanting malware, and potentially extracting sensitive data. The campaign builds on previous activity by the same threat group, which has also exploited Cisco Adaptive Security Appliance devices.
Attackers gained access after local administrator credentials were compromised, allowing them to implant malware and execute commands. The company’s Product Security Incident Response Team urged customers to upgrade immediately to fixed software releases to secure their systems.
The Canadian Centre for Cyber Security has warned organisations about sophisticated malware exploiting flaws in outdated Cisco ASA devices, urging immediate patching and stronger defences to protect critical systems.
Apple is internally testing its upcoming Siri upgrade with a chatbot-style tool called Veritas, according to a report by Bloomberg. The app enables employees to experiment with new capabilities and provide structured feedback before a public launch.
Veritas enables testers to type questions, engage in conversations, and revisit past chats, making it similar to ChatGPT and Gemini. Apple is reportedly using the feedback to refine Siri’s features, including data search and in-app actions.
The tool remains internal and is not planned for public release. Its purpose is to make Siri’s upgrade process more efficient and guide Apple’s decision on future chatbot-like experiences.
Apple executives have said they prefer integrating AI into daily tasks instead of offering a separate chatbot. Craig Federighi confirmed at WWDC that Apple is focused on natural task assistance rather than a standalone product.
Bloomberg reports that the new Siri will use Apple’s own AI models alongside external systems like Google’s Gemini, with a launch expected next spring.
Google is pushing AI deeper into its services, with AI Overviews already reaching billions of users and AI Mode now added to Search. Chrome is also being rebranded as an AI-first browser.
Not all users welcome these changes. Concerns remain about accuracy, intrusive design and Google’s growing control over how information is displayed. Unlike other features, AI elements in Search cannot be turned off directly, leaving users reliant on third-party solutions.
One such solution is the new ‘Bye Bye, Google AI’ extension, which hides AI-generated results and unwanted blocks such as sponsored links, shopping sections and discussion forums.
The extension works across Chromium-based browsers, though it relies on CSS and may break when Google updates its interface.
The debate reflects wider unease about AI in Search. While Google claims these features improve the user experience, critics argue they risk spreading false information and keeping traffic within Google's ecosystem rather than directing users to original publishers.
A new malware campaign targets WordPress sites, using steganography and persistent backdoors to maintain unauthorised admin access. The attack relies on two components that work together to retain control.
The attack begins with malicious files disguised as legitimate WordPress components. These files are heavily obfuscated, create administrator accounts with hardcoded credentials, and bypass traditional detection tools, ensuring attackers retain access even after security teams respond.
Researchers say the malware exploits WordPress plugin infrastructure and user management functions to set up redundant access points. It then communicates with command-and-control servers, exfiltrating system data and administrator credentials to attacker-controlled endpoints.
This campaign can allow threat actors to inject malicious code, redirect site visitors, steal sensitive data, or deploy additional payloads. Its persistence and stealth tactics make it difficult to detect, leaving websites vulnerable for long periods.
The main component poses as a fake plugin called ‘DebugMaster Pro’ with realistic metadata. Its obfuscated code creates admin accounts, contacts external servers, and conceals its activity from known admin IP addresses to evade detection.
Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint alleges OpenAI targeted former xAI staff to steal source code, training methods, and data centre strategies.
The lawsuit claims OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Notable incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files repeatedly via AirDrop.
Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.
Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.
xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case with Apple over alleged market dominance.
Ant International has introduced AI SHIELD, a security framework to protect AI systems used in financial services. The toolkit aims to reduce risks such as fraud, bias, and misuse in AI applications like fraud detection, payment authorisation, and customer chatbots.
At the centre of AI SHIELD is the AI Security Docker, which applies safeguards throughout development and deployment. The framework includes authentication of AI agents, continuous monitoring to block threats in real time, and ongoing adversarial testing.
Ant said the system will support over 100 million merchants and 1.8 billion users worldwide across services like Alipay+, Antom, Bettr, and WorldFirst. It will also defend against deepfake attacks and account takeovers, with the firm claiming its EasySafePay 360 tool can cut such incidents by 90%.
The initiative is part of Ant’s wider role in setting industry standards, including its work with Google on the Agent Payments Protocol, which defines how AI agents transact securely with user approval.
Hitachi has unveiled a global AI Factory built on NVIDIA’s reference architecture to accelerate the development of physical AI solutions.
The new platform uses Hitachi iQ systems powered by NVIDIA Blackwell GPUs, alongside the Spectrum-X networking platform, to deliver unified AI infrastructure for research and deployment.
Hitachi said the AI Factory will enhance its HMAX family of AI-enabled solutions across energy, mobility, industry, and technology sectors. It will allow models to interpret data from sensors and cameras, make decisions, and act in real-world environments.
The facility integrates NVIDIA AI Enterprise software and Omniverse libraries, enabling simulation and digital twin capabilities. Both firms describe the initiative as a key driver of social innovation, combining advanced AI computing with industrial applications.