Space Forge, a UK company, has successfully activated a compact factory in orbit, proving its onboard furnace can operate at temperatures of around 1,000C. The breakthrough represents a major advance for space-based manufacturing.
The microwave-sized satellite was launched earlier this year and is operated remotely from mission control in Cardiff. Engineers have been monitoring its systems to validate manufacturing processes in space conditions.
Microgravity and vacuum environments allow semiconductor atoms to align more precisely than on Earth. These conditions produce significantly purer materials for electronics used in networks, electric vehicles and aerospace systems.
The company plans to build a larger orbital factory capable of producing materials for thousands of chips. Future missions will also test a heat shield designed to return manufactured products safely to Earth.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without network delays, instead of relying on web-based interfaces.
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A ransomware attack has disrupted the Oltenia Energy Complex, Romania’s largest coal-based power producer, after hackers encrypted key IT systems in the early hours of 26 December.
The state-controlled company confirmed that the Gentlemen ransomware strain locked corporate files and disabled core services, including ERP platforms, document management tools, email and the official website.
The organisation isolated affected infrastructure and began restoring services from backups on new systems instead of paying a ransom. Operations were only partially impacted and officials stressed that the national energy system remained secure, despite the disruption across business networks.
A criminal complaint has been filed. Additionally, both the National Directorate of Cyber Security of Romania and the Ministry of Energy have been notified.
Investigators are still assessing the scale of the breach and whether sensitive data was exfiltrated before encryption. The Gentlemen ransomware group has not yet listed the energy firm on its dark-web leak site, a sign that negotiations may still be underway.
An attack that follows a separate ransomware incident that recently hit Romania’s national water authority, underlining the rising pressure on critical infrastructure organisations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI dictation reached maturity during the years after many attempts of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Moscow court has dismissed a class action lawsuit filed against Russia’s state media regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.
The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.
The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.
Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.
Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
European governments are intensifying their efforts to safeguard satellites from cyberattacks as space becomes an increasingly vital front in modern security and hybrid warfare. Once seen mainly as technical infrastructure, satellites are now treated as strategic assets, carrying critical communications, navigation, and intelligence data that are attractive targets for espionage and disruption.
Concerns intensified after a 2022 cyberattack on the Viasat satellite network coincided with Russia’s invasion of Ukraine, exposing how vulnerable space systems can be during geopolitical crises. Since then, the EU institutions have warned of rising cyber and electronic interference against satellites and ground stations, while several European countries have flagged growing surveillance activities linked to Russia and China.
To reduce risks, Europe is investing in new infrastructure and technologies. One example is a planned satellite ground station in Greenland, backed by the European Space Agency, designed to reduce dependence on the highly sensitive Arctic hub in Svalbard. That location currently handles most European satellite data traffic but relies on a single undersea internet cable, making it a critical point of failure.
At the same time, the EU is advancing with IRIS², a secure satellite communication system designed to provide encrypted connectivity and reduce reliance on foreign providers, such as Starlink. Although the project promises stronger security and European autonomy, it is not expected to be operational for several years.
Experts warn that technology alone is not enough. European governments are still clarifying who is responsible for defending space systems, while the cybersecurity industry struggles to adapt tools designed for Earth-based networks to the unique challenges of space. Better coordination, clearer mandates, and specialised security approaches will be essential as space becomes more contested.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The presidency of the Council of the European Union next year is expected to see Ireland lead a European drive for ID-verified social media accounts.
Tánaiste Simon Harris said the move is intended to limit anonymous abuse, bot activity and coordinated disinformation campaigns that he views as a growing threat to democracy worldwide.
A proposal that would require users to verify their identity instead of hiding behind anonymous profiles. Harris also backed an Australian-style age verification regime to prevent children from accessing social media, arguing that existing digital consent rules are not being enforced.
Media Minister Patrick O’Donovan is expected to bring forward detailed proposals during the presidency.
The plan is likely to trigger strong resistance from major social media platforms with European headquarters in Ireland, alongside criticism from the US.
However, Harris believes there is growing political backing across Europe, pointing to signals of support from French President Emmanuel Macron and UK Prime Minister Keir Starmer.
Harris said he wanted constructive engagement with technology firms rather than confrontation, while insisting that stronger safeguards are now essential.
He argued that social media companies already possess the technology to verify users and restrict harmful accounts, and that European-level coordination will be required to deliver meaningful change.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.
Officials said the mobile operator used identical authentication certificates across femtocells and allowed them to stay valid for ten years, meaning any device that accessed the network once could do so repeatedly instead of being re-verified.
More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.
Investigators also discovered that ninety-four KT servers were infected with over one hundred types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.
The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.
Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings and will soon set out compensation arrangements and further security upgrades instead of disputing the conclusions.
A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.
The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Protecting AI agents from manipulation has become a top priority for OpenAI after rolling out a major security upgrade to ChatGPT Atlas.
The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions inside emails, documents or webpages attempt to redirect the agent’s behaviour instead of following the user’s commands.
Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.
Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.
OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.
When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.
These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.
The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.
A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.
Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.
Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.
Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.
Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!