Researchers at the Marine Biological Laboratory in Massachusetts are using AI and advanced visualisation tools to study how memories are formed in the human brain. Their work focuses on understanding how experiences produce lasting biological changes linked to long-term memory.
The project is led by Andre Fenton of New York University and Abhishek Kumar of the University of Wisconsin–Madison. Using NVIDIA RTX GPUs and HP Z workstations, the team analyses large-scale brain imaging data with custom AI tools and the syGlass virtual reality platform.
The team is concentrating on the hippocampus, a brain structure central to memory. Scientists are examining specific protein markers in neurons to reveal how memories are encoded, even though these markers represent only a small fraction of the brain’s overall protein landscape.
High-resolution 3D imaging previously created a major data bottleneck. AI-supported workflows now allow researchers to capture, inspect, and store terabytes of volumetric data, enabling more detailed analysis of brain cell structure and function.
Researchers say understanding memory at a molecular level could support earlier insights into neurological and psychiatric conditions. The tools are also being used for education, allowing students to explore brain data interactively while contributing to ongoing research.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.
Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.
Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.
Researchers say AI chatbots are currently used as a supplement rather than a replacement for religious teaching.
However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.
Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.
OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.
ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.
Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.
Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.
The new tone controls continue broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.
The European Commission has welcomed Apple’s latest interoperability updates in iOS 26.3, crediting the Digital Markets Act for compelling the company to open its ecosystem.
The new features are currently in beta and allow third-party accessories to integrate more smoothly with iPhones and iPads, instead of favouring Apple’s own devices.
Proximity pairing will let headphones and other accessories connect through a simplified one-tap process, similar to AirPods. Notification forwarding to non-Apple wearables will also become available, although alerts can only be routed to one device at a time.
Apple is providing developers with the tools needed to support the features, which apply only within the EU.
The DMA classifies Apple as a gatekeeper and requires fairer access for rivals, with heavy financial penalties for non-compliance.
Apple has repeatedly warned that the rules risk undermining security and privacy, yet the company has already introduced DMA-driven changes such as allowing alternative app stores and opening NFC access.
Analysts expect the moves to reduce ecosystem lock-in and increase competition across the EU market. iOS 26.3 is expected to roll out fully across Europe from 2026 following the beta cycle, while further regulatory scrutiny may push Apple to extend interoperability even further.
Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.
A recent survey by Guardz found that more than four in ten SMBs have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.
Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.
Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.
Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.
However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.
More small firms are now turning to managed service providers instead of trying to cope alone.
The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.
OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.
Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.
The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.
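OpenAI has not published the internal design, but the reported behaviour, loading a skill's instructions only when a task needs them rather than keeping everything in the context window, can be sketched as a simple lazy-loading registry. The skill names, loader functions, and `SkillRouter` class below are all hypothetical illustrations, not ChatGPT's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Skill:
    """A hypothetical standalone ability with lazily loaded instructions."""
    name: str
    load: Callable[[], str]  # returns the skill's instruction text
    _cached: Optional[str] = field(default=None, repr=False)

    def instructions(self) -> str:
        # Load on first use, so unused skills never consume context.
        if self._cached is None:
            self._cached = self.load()
        return self._cached

class SkillRouter:
    """Combines only the skills a given task requires into one prompt."""
    def __init__(self, skills: list) -> None:
        self.skills = {s.name: s for s in skills}

    def build_prompt(self, task: str, needed: list) -> str:
        # Expand just the requested skills; everything else stays unloaded.
        parts = [self.skills[n].instructions() for n in needed]
        return "\n".join(parts + [task])

# Hypothetical skills; real loaders might read files or fetch remotely.
sql = Skill("sql", lambda: "[SQL skill instructions]")
viz = Skill("viz", lambda: "[Charting skill instructions]")
router = SkillRouter([sql, viz])
print(router.build_prompt("Plot monthly sales.", ["viz"]))
```

The point of the sketch is the portability claim: because each skill is self-describing and loaded on demand, the same registry could back a web client, desktop app, or API without duplicating configuration.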
Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.
Industry observers note similarities to Anthropic’s Claude, which already benefits from a skill-like structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.
Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.
Italy’s competition authority has ordered Meta to halt restrictions limiting rival AI chatbots on WhatsApp. Regulators say the measures may distort competition as Meta integrates its own AI services.
The Italian watchdog argues Meta’s conduct risks restricting market access and slowing technical development. Officials warned that continued enforcement could cause lasting harm to competition and consumer choice.
Meta rejected the ruling and confirmed plans to appeal, calling the decision unfounded. The company stated that WhatsApp Business was never intended to serve as a distribution platform for AI services.
The case forms part of a broader European push to scrutinise dominant tech firms. Regulators are increasingly focused on the integration of AI across platforms with entrenched market power.
A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.
The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.
Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.
Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.
Experts also highlight the lack of strong safeguards in South Korea against malicious litigation compared with the US, where plaintiffs must prove fault by journalists.
The controversy reflects deeper public scepticism toward South Korean media and long-standing reporting practices that sometimes relay statements without sufficient verification, suggesting that structural reform may be needed instead of rapid, punitive legislation.
US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information; however, the company initially did not specify the number of individuals affected.
In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.
A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.
Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.
The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.
AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.
A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.
The AI system achieved area-under-curve scores above 0.8 in both internal and external tests. Saliency maps showed predictions focused near the diaphragm, where part of the liver appears on chest X-rays. Results suggest that reliable signal extraction can be achieved from routine imaging.
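An AUC (area under the ROC curve) above 0.8 has a concrete interpretation: given a random patient with hepatic steatosis and a random patient without it, the model ranks the affected patient higher more than 80% of the time. A minimal illustration of that pairwise definition, using made-up scores rather than the study's data:

```python
def auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs for 8 X-rays (1 = steatosis present).
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.35, 0.7, 0.6, 0.2, 0.1, 0.3]
print(round(auc(labels, scores), 3))  # 0.938 -- one pos/neg pair misranked
```

In practice a validated library implementation would be used; the hand-rolled version here just makes the ranking interpretation of the reported 0.8+ scores explicit.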
Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method adds clinical value without increasing imaging costs or radiation exposure.
Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.