MIT researchers tackle antimicrobial resistance with AI and synthetic biology

A pioneering research initiative at MIT is deploying AI and synthetic biology to combat the escalating global crisis of antimicrobial resistance, which has been fuelled by decades of antibiotic overuse and misuse.

The $3 million, three-year project, led by Professor James J. Collins at MIT’s Department of Biological Engineering, centres on developing programmable antibacterials designed to target specific pathogens.

The approach uses AI to design small proteins that turn off specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, offering a more precise alternative to traditional antibiotics.

Antimicrobial resistance affects low- and middle-income countries most severely, where limited diagnostic infrastructure causes treatment delays. Drug-resistant infections continue to rise globally, whilst the development of new antibacterial tools has stagnated.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

Cisco warns AI agents need checks before joining workforces

The US-based networking giant Cisco is promoting a future in which AI agents work alongside employees rather than operating as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.

Patel described a shift to spec-driven development, in which smaller human teams work with digital agents instead of relying on larger groups of developers.

Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.

Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.

Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.

Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.

He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.

South Korea launches labour–government body to address AI automation pressures

A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.

The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.

The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.

Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.

Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.

The debate intensified as new labour statistics showed a sharp decline in employment within professional, scientific, and technical services, where AI deployment is suspected of reducing demand for new hires.

KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.

BlockFills freezes withdrawals as Bitcoin drops below $65,000

BlockFills, an institutional digital asset trading and lending firm, has suspended client deposits and withdrawals, citing market volatility as Bitcoin experiences significant declines.

A notice sent to clients last week stated the suspension was intended ‘to further the protection of our clients and the firm.’ The Chicago-based company serves approximately 2,000 institutional clients and provides crypto-backed lending to miners and hedge funds.

Clients were informed they could continue trading under certain restrictions, though positions requiring additional margin could be closed.

The suspension comes as Bitcoin fell below $65,000 last week, down roughly 25% in 2026 and approximately 45% from its October peak near $120,000. In the digital asset industry, withdrawal halts are often interpreted as warning signs of potential liquidity constraints.

Several crypto firms, including FTX, BlockFi, and Celsius, imposed similar restrictions during prior downturns before entering bankruptcy proceedings.

BlockFills has not specified how long the suspension will last. A company spokesperson said the firm is ‘working hand in hand with investors and clients to bring this issue to a swift resolution and to restore liquidity to the platform.’

BlockFills was founded in 2018 with backing from Susquehanna and CME Group, and there is currently no public evidence that the firm is insolvent.

AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results instead of lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching left workers with a persistent sense of juggling responsibilities, which lowered the quality of their focus.

The researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

AI tool accelerates detection of foodborne bacteria

Researchers have advanced an AI system designed to detect bacterial contamination in food, dramatically improving accuracy and speed. The upgraded tool distinguishes bacteria from microscopic food debris, reducing diagnostic errors in automated screening.

Traditional testing relies on culturing bacterial samples, which takes days and requires specialist laboratory expertise. The deep learning model analyses images of bacterial microcolonies, enabling reliable detection within about three hours.

Accuracy gains stem from expanded model training. Earlier versions, trained solely on bacterial datasets, misclassified food debris as bacteria in more than 24% of cases.

Adding debris imagery to training eliminated misclassifications and improved detection reliability across food samples. The system was tested on pathogens including E. coli, Listeria, and Bacillus subtilis, alongside debris from chicken, spinach, and cheese.

Researchers say faster, more precise early detection could reduce foodborne outbreaks, protect public health, and limit costly product recalls as the technology moves toward commercial deployment.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

Young Europeans lead surge in generative AI adoption

Generative AI tools saw significant uptake among young Europeans in 2025, with usage rates far outpacing the broader population. Data shows that 63.8% of individuals aged 16–24 across the EU engaged with generative AI, nearly double the 32.7% recorded among citizens aged 16–74.

Adoption patterns indicate that younger users are embedding AI into everyday routines at a faster pace. Private use led the trend, with 44.2% of young people applying generative AI in personal contexts, compared with 25.1% of the general population.

Educational deployment also stood out, reaching 39.3% among youth, while only 9.4% of the wider population reported similar academic use.

Professional use showed the narrowest gap between age groups. Around 15.8% of young users reported workplace use of generative AI tools, closely aligned with 15.1% among the overall population, reflecting the fact that many young people are still transitioning into the labour market.

Country-level data highlights notable regional differences. Greece (83.5%), Estonia (82.8%), and Czechia (78.5%) recorded the highest youth adoption rates, while Romania (44.1%), Italy (47.2%), and Poland (49.3%) ranked lowest.

The findings coincide with Safer Internet Day, observed on 10 February, underscoring the growing importance of digital literacy and online safety as AI usage accelerates.
