Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

The trend is especially worrying because teenagers increasingly use social media both as a main source of news and as a search tool. Despite this heavy usage, young people often lack the skills needed to spot false information.

In a 2022 Ofcom study, only 11% of 11- to 17-year-olds could consistently identify genuine posts online.

Research involving 11 to 14-year-olds revealed that many wrongly believed misinformation only related to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while believing their parents were better at spotting false content than they themselves were. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer, a pen, in under a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. An aspirant for India’s UPSC civil service examinations recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.

New AI device brings early skin cancer diagnosis to remote communities

A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.

The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.

Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.

The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
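The matching step described above, comparing a new image against a library of preloaded, labelled examples, can be sketched as a nearest-neighbour lookup. The example below is a toy illustration only: the Heriot-Watt device’s actual model is not public, the feature vectors and labels are invented, and a real system would extract features with a trained image-recognition network rather than use hand-picked numbers.

```python
import math

# Hypothetical preloaded examples: (feature_vector, label) pairs.
# A real device would hold thousands of these, derived from images.
EXAMPLES = [
    ((0.9, 0.2, 0.1), "benign"),
    ((0.3, 0.8, 0.7), "suspicious"),
    ((0.2, 0.9, 0.8), "suspicious"),
]

def classify(features):
    """Label a lesion by its nearest preloaded example (Euclidean distance)."""
    _, label = min(EXAMPLES, key=lambda ex: math.dist(features, ex[0]))
    return label
```

Because the lookup needs only the preloaded examples, it runs entirely on-device, which is what lets the system work without internet access.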

The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.

The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.

The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.

Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.

Perplexity CEO predicts AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.

DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.
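Domain-blocklist filtering of the kind described above can be sketched in a few lines. The code below is an illustration, not DuckDuckGo’s implementation: the blocklist entries are made-up placeholders, and real lists such as those maintained for uBlock Origin and uBlacklist contain thousands of entries. The matching rule, blocking a domain and all of its subdomains, follows the style uBlacklist-type lists use.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries for illustration only.
BLOCKLIST = {"ai-image-mill.example", "genart.example"}

def is_blocked(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

def filter_results(urls):
    """Drop search results whose domains appear on the blocklist."""
    return [u for u in urls if not is_blocked(u)]
```

Suffix matching is what makes the filter imperfect in the way DuckDuckGo acknowledges: AI content hosted on a domain that is not on the list slips through.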

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, or more, making it harder to distinguish authentic content from fake.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.

Irish hospital turns to AI for appointment management

Beaumont Hospital in Dublin plans to deploy AI to predict patient no-shows and late cancellations, aiming to reduce wasted resources.

Instead of relying solely on reminders, the hospital will pilot AI software costing up to €110,000, using patient data to forecast missed appointments. Currently, no-shows account for 15.5% of its outpatient slots.

The system will integrate with Beaumont’s existing two-way text messaging service. Rather than sending uniform reminders, the AI model will tailor messages based on the likelihood of attendance while providing hospital staff with real-time insights to better manage clinic schedules.
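A risk model that drives tailored reminders, as described above, can be sketched as follows. This is purely illustrative: Beaumont’s actual software, features, and weights are not public, so the logistic weights, feature names, and message tiers below are all invented for demonstration.

```python
import math

# Made-up weights for a toy logistic model; a real system would learn
# these from historical attendance data.
WEIGHTS = {"prior_no_shows": 0.9, "days_notice": 0.05, "bias": -2.0}

def no_show_probability(prior_no_shows: int, days_notice: int) -> float:
    """Toy logistic estimate of the chance a patient misses an appointment."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["prior_no_shows"] * prior_no_shows
         + WEIGHTS["days_notice"] * days_notice)
    return 1 / (1 + math.exp(-z))

def reminder_plan(p: float) -> str:
    """Tailor the messaging effort to the predicted no-show risk."""
    if p >= 0.6:
        return "phone call plus two-way text"
    if p >= 0.3:
        return "two-way text with confirm/cancel reply"
    return "single standard text"
```

The point of the tiering is the one the article makes: rather than sending uniform reminders, effort is concentrated on the appointments most likely to be missed.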

The pilot is expected to begin in late 2025 or early 2026, potentially expanding into a full €1.2 million contract.

The move forms part of Beaumont Hospital’s strategic plan through 2030 to reduce outpatient non-attendance. It follows the broader adoption of AI in Irish healthcare, including Mater Hospital’s recent launch of an AI and Digital Health centre designed to tackle clinical challenges using new technologies.

Instead of viewing AI as a future option, Irish hospitals now increasingly treat it as an immediate solution to operational inefficiencies, hoping it will transform healthcare delivery and improve patient service.

Library cuts across Massachusetts deepen digital divide

Massachusetts libraries face sweeping service reductions as federal funding cuts threaten critical educational and digital access programmes. Local and major libraries are bracing for the loss of key resources including summer reading initiatives, online research tools, and English language classes.

The Massachusetts Board of Library Commissioners (MBLC) said it has already lost access to 30 of 34 databases it once offered. Resources such as newspaper archives, literacy support for the blind and incarcerated, and citizenship classes have also been cancelled due to a $3.6 million shortfall.

Communities unable to replace federal grants with local funds will be disproportionately affected. With over 800 library applications for mobile internet hot spots now frozen, officials warn that students and jobseekers may lose vital lifelines to online learning, healthcare and employment.

The cuts are part of broader efforts by the Trump administration to shrink federal institutions, targeting what it deems anti-American programming. Legislators and library leaders say the result will widen the digital divide and undercut libraries’ role as essential pillars of equitable access.

Mistral’s chatbot Le Chat takes on ChatGPT with major upgrade

France-based AI startup Mistral has rolled out a major update to Le Chat, its AI chatbot, introducing new features aimed at challenging rivals like ChatGPT, Gemini and Claude. The update includes Deep Research, voice interaction, reasoning capabilities and a refreshed image editor.

According to the company’s latest blog post, the new Deep Research mode transforms Le Chat into a structured assistant that can clarify needs, search sources and deliver summarised findings. The tool enables users to receive comprehensive responses in a neatly formatted report.

In addition, Mistral unveiled Vocal mode, allowing users to speak to the chatbot as if they were talking to a person. The feature is powered by the firm’s voice input model, Voxtral, which handles voice recognition in real time.

The company also introduced Think mode, based on its Magistral reasoning model. Designed for multilingual and complex tasks, the feature provides thoughtful and clear responses, even when answering legal or professional queries in languages like Spanish or Japanese.

For users juggling multiple conversations or tasks, the new Projects tool groups related chats into separate spaces. Each project includes a dedicated Library for storing files and content, while also remembering individual tools and settings.

Users can upload documents directly into Projects and revisit past chats or references. Content from the Library can also be pulled into the active conversation, supporting a more seamless and personalised experience.

A revamped image editor rounds out the update, offering users the ability to tweak AI-generated visuals while maintaining consistency in character design and fine details. Mistral says the upgrade helps improve image customisation without compromising visual integrity.

All features are now available through Le Chat’s web platform at ‘chat.mistral.ai’ or via the company’s mobile apps on Android and iOS. The update reflects Mistral’s growing ambition to differentiate itself in the increasingly competitive AI assistant market.

Hackers hide malware using DNS TXT records

Hackers are increasingly exploiting DNS records to deliver malware undetected, according to new research from DomainTools.

Instead of relying on typical delivery methods such as emails or downloads, attackers now hide malicious code within DNS TXT records, a part of Domain Name System traffic that security tools often overlook.

The method involves converting malware into hexadecimal code, splitting it into small segments, and storing each chunk in the TXT record of subdomains under domains like whitetreecollective.com.

Once attackers gain limited access to a network, they retrieve these chunks via ordinary-looking DNS queries, reassembling them into functioning malware without triggering antivirus or firewall alerts.

The rising use of encrypted DNS protocols like DNS-over-HTTPS and DNS-over-TLS makes detecting such queries harder, especially without in-house DNS resolvers equipped for deep inspection.

Researchers also noted that attackers are using DNS TXT records not only to stage malware but also to embed harmful text designed to manipulate AI systems through prompt injection.

Ian Campbell of DomainTools warns that even organisations with strong security measures struggle to detect such DNS-based threats due to the hidden nature of the traffic.

Instead of focusing solely on traditional defences, organisations are advised to monitor DNS traffic closely, log and inspect queries through internal resolvers, and restrict DNS access to trusted sources. Educating teams on these emerging threats remains essential for maintaining robust cybersecurity.
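The inspection advice above can be made concrete with a simple heuristic: since the technique DomainTools describes stores payloads as hexadecimal chunks in TXT answers, a resolver log scanner can flag TXT values that are long and purely hexadecimal, something legitimate TXT records (SPF policies, verification tokens) rarely are. The sketch below is illustrative only; the length threshold is not tuned, and the log format is an assumed (query name, TXT value) pairing rather than any particular resolver’s output.

```python
import string

HEX_CHARS = set(string.hexdigits)

def looks_like_hex_chunk(txt: str, min_len: int = 64) -> bool:
    """Heuristic red flag: a long TXT value made entirely of hex digits.
    The 64-character threshold is illustrative, not tuned."""
    txt = txt.strip().strip('"')
    return len(txt) >= min_len and all(c in HEX_CHARS for c in txt)

def flag_suspicious(dns_log):
    """Scan (query_name, txt_value) pairs from internal resolver logs and
    return the names whose TXT answers resemble encoded payload chunks."""
    return [name for name, value in dns_log if looks_like_hex_chunk(value)]
```

A heuristic like this only works where queries are visible, which is why the advice to route traffic through internal resolvers matters: DNS-over-HTTPS and DNS-over-TLS hide the same queries from passive network monitoring.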
