DualEntry raises $90m to scale AI-first ERP platform

New York ERP startup DualEntry has emerged from stealth with $90 million in Series A funding, co-led by Lightspeed and Khosla Ventures. Other investors include GV, Contrary, and Vesey Ventures, bringing total funding to more than $100 million within 18 months of the company’s founding.

The capital will accelerate the growth of its AI-native ERP platform, which has processed $100 billion in journal entries. The platform targets mid-market finance teams, aiming to automate up to 90% of manual tasks and scale without external IT support or add-ons.

Early adopters include fintech firm Slash, which runs its $100M+ ARR operation with a single finance employee. DualEntry offers a comprehensive ERP suite that covers general ledger, accounts receivable, accounts payable, audit controls, FP&A, and live bank connections.

The company’s NextDay Migration tool enables complete onboarding within 24 hours, securely transferring all data, including subledgers and attachments. With more than 13,000 integrations across banking, CRM, and HR systems, DualEntry establishes a centralised source of accounting information.

Founded in 2024 by Benedict Dohmen and Santiago Nestares, the startup positions itself as a faster, more flexible alternative to legacy systems such as NetSuite, Sage Intacct, and Microsoft Dynamics, while supporting starter tools like QuickBooks and Xero.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey said the technology is designed to give deputies more time in the field. He noted that reports that once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts. At least 20 percent of each report must be manually adjusted to ensure accuracy. Huey explained that the system deliberately inserts minor errors to ensure officers remain engaged with the content.

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also gave FRA an opportunity to update participants on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is hidden and does not persist. The court said Meta must make the setting accessible on the homepage and in the Reels section and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Oracle systems targeted in unverified data theft claims, Google warns

Google has warned that hackers are emailing company executives, claiming to have stolen sensitive data from Oracle business applications. The group behind the campaign identifies itself as affiliated with the Cl0p ransomware gang.

In a statement, Google said the attackers target executives at multiple organisations with extortion emails linked to Oracle’s E-Business Suite, adding that it lacks sufficient evidence to verify the claims or confirm whether any data has been taken.

Neither Cl0p nor Oracle responded to requests for comment. Google did not provide additional information about the scale or specific campaign targets.

The Cl0p ransomware gang has been involved in several high-profile extortion cases, often using claims of data theft to pressure organisations into paying ransoms, even when breaches remain unverified.

Google advised recipients to treat such messages cautiously and report any suspicious emails to security teams while investigations continue.

China’s new K visa sparks public backlash

China’s new K visa, aimed at foreign professionals in science and technology, has sparked heated debate and online backlash. The scheme, announced in August and launched this week, has been compared by Indian media to the US H-1B visa.

Tens of thousands of social media users in China have voiced fears that the programme will worsen job competition in an already difficult market. Comments also included xenophobic remarks, particularly directed at Indian nationals.

State media outlets have stepped in, defending the policy as a sign of China’s openness while stressing that it is not a simple work permit or immigration pathway. Officials say the visa is designed to attract graduates and researchers from top institutions in STEM fields.

The government has yet to clarify whether the visa allows foreign professionals to work, adding to uncertainty. Analysts note that language barriers, cultural differences, and China’s political environment may pose challenges for newcomers despite Beijing’s drive to attract global talent.

NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends blocklists for breached or commonly used passwords, salted and hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains the key factor: short strings are easily cracked, so users should be allowed to create long passphrases. NIST recommends capping length only where extremely long passwords would slow down hashing.

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.
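As a rough illustration of this approach, a verifier following the guidance checks length and a breach blocklist but imposes no composition rules. The sketch below is illustrative only: the blocklist contents are stand-ins for a real breached-password corpus, and the 8/64-character bounds reflect the commonly cited minimums in NIST SP 800-63B rather than a full policy.

```python
# Illustrative NIST-style password check: enforce length, screen
# against a breached-password blocklist, and apply no symbol/digit/
# case composition rules. COMMON_PASSWORDS is a hypothetical stand-in
# for a real breach corpus.

COMMON_PASSWORDS = {"password", "12345678", "qwertyuiop", "letmein123"}

def is_acceptable(password: str, min_len: int = 8, max_len: int = 64) -> bool:
    """Accept any password that is long enough and not known-breached."""
    if not (min_len <= len(password) <= max_len):
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True  # no composition requirements, per the guidance
```

Note that the check deliberately accepts long multi-word passphrases that older complexity rules would reject, which is the core shift in the updated guidelines.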

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit disguised as a PDF attachment to bypass Gmail’s defences. Known as MatrixPDF, the technique blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.
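On the defensive side, one simple screening step is to flag PDFs that contain the action and script markers such toolkits abuse. The sketch below is a hypothetical illustration that scans raw bytes for standard PDF action keys (`/JavaScript`, `/OpenAction`, and similar); a production scanner would also need to decompress object streams and handle name obfuscation.

```python
# Hypothetical defensive sketch: flag PDFs whose raw bytes contain
# action or script markers of the kind abused by phishing toolkits.
# This only inspects uncompressed bytes and is not a full parser.

RISKY_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch", b"/URI"]

def suspicious_markers(pdf_bytes: bytes) -> list[str]:
    """Return the risky PDF markers found in the raw byte stream."""
    return [m.decode() for m in RISKY_MARKERS if m in pdf_bytes]

# Minimal fabricated sample: a PDF fragment with an auto-run script action.
sample = (
    b"%PDF-1.7\n1 0 obj\n"
    b"<< /OpenAction << /S /JavaScript /JS (app.alert('hi')) >> >>\n"
    b"endobj"
)
```

A file that triggers no markers is not necessarily safe, but any hit on `/OpenAction` or `/JavaScript` in an emailed document is a strong signal to quarantine it for deeper inspection.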

Cyberattack halts Asahi beer production in Japan

Japanese beer maker Asahi Group Holdings has halted production at its main plant following a cyberattack that caused major system failures. Orders, shipments, and call centres were suspended across the company’s domestic operations, affecting most of its 30 breweries in Japan.

Asahi said it is still investigating the cause, believed to be a ransomware infection. The company confirmed there was no external leakage of personal information or employee data, but did not provide a timeline for restoring operations.

The suspension has raised concerns over possible shortages, as beer has a short shelf life and cannot be stockpiled due to freshness requirements. Restaurants and retailers are expected to feel pressure if shipments continue to be disrupted.

The impact has also spread to other beverage companies such as Kirin and Sapporo, which share transport networks. Industry observers warn that supply chain delays could ripple across the food and drinks sectors in Japan.

In South Korea, the effect remains limited for now. Lotte Asahi Liquor, the official importer, declined to comment, but industry officials noted that if the disruption continues, import schedules could also be affected.
