UN report: Telegram used by Southeast Asian crime syndicates

Criminal networks in Southeast Asia are increasingly exploiting Telegram for large-scale illicit activities, according to a new report from the United Nations Office on Drugs and Crime (UNODC). The encrypted messaging app is used to trade hacked data, including credit card details and passwords, across sprawling, poorly moderated channels. The report also notes that unlicensed cryptocurrency exchanges on the platform provide money laundering services.

Fraud tools, such as deepfake software and data-stealing malware, are widely sold, enabling organised crime syndicates to innovate and expand their operations. One vendor, advertising in Chinese, reportedly claimed to move millions of dollars in stolen cryptocurrency daily. Southeast Asia has become a hub for these activities, with criminal groups targeting victims worldwide and generating up to $36.5 billion annually.

The controversy surrounding Telegram escalated when its founder, Pavel Durov, was arrested in Paris over allegations that the platform allowed criminal activity. Durov, who is now out on bail, has since announced steps to cooperate with law enforcement, including sharing users’ information in response to legally valid requests and removing certain features used for illegal purposes.

As the UNODC report warns, the widespread use of Telegram for underground markets places consumers’ data at heightened risk. Criminals are not only exploiting technology like artificial intelligence but are also leveraging the platform’s ease of use to target victims globally.

Meta revamps Facebook to engage young adults

Facebook, once the go-to platform for connecting with family and friends, is shifting its focus to attract younger users, according to Tom Alison, head of Facebook at Meta. With younger generations favouring apps like Instagram and TikTok, Meta aims to revitalise Facebook by helping users expand their networks and make new connections, aligning with how young adults use the platform today.

To achieve this, Facebook is testing two new tabs, Local and Explore, aimed at helping users find nearby events, community groups, and content tailored to their interests. The push builds on Meta’s efforts to compete with TikTok, which has 150 million US users; Facebook introduced its own short-form video feature, Reels, in 2021. Data reveals that young adults on Facebook spend 60% of their time watching videos, with over half engaging with Reels daily.

Facebook also reported a 24% increase in conversations initiated through its dating feature among young adults in the US and Canada. At a recent event in Austin, Texas, the platform promoted its new direction with the slogan ‘Not your mom’s Facebook,’ emphasising its push to attract a younger audience.

Ireland launches EU-wide investigation into Ryanair’s use of facial recognition technology

Ireland’s Data Protection Commission (DPC) has launched an EU-wide investigation into Ryanair’s use of facial recognition technology for customers booking through some third-party websites. The probe aims to determine whether the practice violates EU privacy laws. The DPC’s action follows complaints from Ryanair customers across Europe about the airline’s additional verification process for bookings made through online travel agents (OTAs) rather than directly with Ryanair.

Ryanair, the largest airline in Europe by passenger numbers, has welcomed the investigation, emphasising that the verification process protects customers from unverified OTAs that may provide inaccurate contact or payment information. According to the airline’s website, these additional identity checks are part of its safety and security protocols. Passengers who wish to avoid facial recognition can either arrive at the airport two hours before departure or undergo a manual verification process, which may take up to seven days to complete.

Ryanair stated that verification is not required for bookings made directly on its website, mobile app, or through OTAs that have entered into commercial agreements with the airline. Since the beginning of the year, Ryanair has established 14 such partnerships. The airline asserts that both its biometric and manual verification methods are fully compliant with the EU’s General Data Protection Regulation (GDPR).

X must pay fine over child protection dispute

An Australian court has upheld a ruling requiring Elon Musk’s X, previously known as Twitter, to pay a $418,000 fine. The fine was issued for failing to cooperate with a request from the eSafety Commissioner regarding anti-child-abuse measures on the platform.

X had contested the penalty, arguing that it was no longer bound by regulatory obligations following a corporate restructure under Musk’s ownership. However, the court ruled that the platform was still required to respond to the request made by the Australian internet safety regulator.

The eSafety Commissioner stated that accepting X’s argument could have set a worrying precedent for foreign companies merging to avoid regulatory responsibilities. Civil proceedings against X have also begun due to its noncompliance.

Musk’s platform has clashed with authorities in Australia before, notably in a case where X refused to remove content showing a stabbing incident. The company claimed that one country should not dictate global online content.

TikTok faces lawsuit in Texas over child privacy breach

Texas Attorney General Ken Paxton has filed a lawsuit against TikTok, accusing the platform of violating children’s privacy laws. The lawsuit alleges that TikTok shared personal information of minors without parental consent, in breach of Texas’s Securing Children Online through Parental Empowerment Act (SCOPE Act).

The legal action seeks an injunction and civil penalties, with fines up to $10,000 per violation. Paxton claims TikTok failed to provide adequate privacy tools for children and allowed data to be shared from accounts set to private. Targeted advertising to children was also a concern raised in the lawsuit.

The lawsuit also names TikTok’s parent company, ByteDance, accusing it of prioritising profits over child safety. Paxton stressed the importance of holding large tech companies accountable for their role in protecting minors online.

The case was filed in Galveston County court; TikTok has yet to comment on the matter. The lawsuit reflects broader concerns about protecting children’s online privacy in the digital age.

EU questions YouTube, TikTok, and Snapchat over algorithms

The European Commission has requested information from YouTube, Snapchat, and TikTok regarding the algorithms used to recommend content to users. Concerns have been raised about the influence of these systems on issues like elections, mental health, and protecting minors. The inquiry falls under the Digital Services Act (DSA), aiming to address potential systemic risks, including the spread of illegal content such as hate speech and drug promotion.

TikTok faces additional scrutiny about measures to prevent bad actors from manipulating the platform, especially during elections. These platforms must provide detailed information on their systems by 15 November. Failure to comply could result in further action, including potential fines.

The DSA mandates that major tech companies take greater responsibility for tackling illegal and harmful content. The EU has previously initiated similar non-compliance proceedings against other tech giants, including Meta, AliExpress, and TikTok, over content regulation.

This latest request reflects the EU’s ongoing efforts to ensure greater accountability from social media platforms. The focus remains on protecting users and maintaining a fair and safe digital environment.

BMO names new chief AI and data officer

Bank of Montreal (BMO) has appointed Kristin Milchanowski as its chief AI and data officer, effective 15 October. Formerly with EY, Milchanowski will lead the bank’s AI initiatives, focusing on data, robotics, and analytics. This new role builds on BMO’s ongoing investments in AI, aiming to enhance data management and governance while fostering a culture of innovation.

The financial sector views AI as a major opportunity, with potential uses like streamlining compliance tasks and enhancing customer service. However, integrating AI brings challenges, especially for firms managing sensitive data. Analysts suggest that AI-driven solutions could simplify processes and improve data-driven decision-making across the industry, offering significant benefits to financial services.

As AI adoption expands, US regulators seek public feedback to ensure these technologies foster fair and equitable access to financial services. Earlier this year, Morgan Stanley emphasised AI’s transformative potential, noting it could save financial advisers up to 15 hours of work per week, highlighting the significant impact AI could have on the industry.

EU GPAI Code of Practice drafting sparks disagreements

The European Commission has revealed ongoing disagreements between general-purpose AI providers and other stakeholders during the first Code of Practice plenary on 30 September. The Code will play a key role in interpreting the EU AI Act’s risk and transparency requirements until formal standards are finalised in 2026.

Nearly 1,000 participants, including representatives from industry, civil society, and academia, attended the virtual plenary. Feedback from a multi-stakeholder consultation and workshops will guide the drafting of the Code, with the first AI provider workshop scheduled for mid-October. A draft of the Code is expected by early November.

Key disagreements include how much data transparency should be required. Non-provider stakeholders support disclosing data sources such as licensed content and scraped data, while AI providers are less inclined to share information about open datasets. Differences also emerged on strict risk measures like third-party audits.

Given the large number of participants, the drafting process will need careful management to ensure smooth progress. The final version of the Code of Practice is expected in April 2025.

Indian government redefines ministry roles in telecom and cybersecurity

The Indian government has recently redefined the roles of key ministries concerning telecom network security, cybersecurity, and cybercrime through amendments to its business allocation rules. The reorganisation assigns each ministry clear responsibilities, streamlining the management of these vital areas.

The roles have been clearly delineated to enhance governance. The Ministry of Communications is responsible for telecom security under the Telecommunications Act of 2023, which enables authorities to access traffic data, including from over-the-top (OTT) services like WhatsApp. Meanwhile, cybersecurity falls under the Ministry of Electronics and Information Technology (MeitY), as outlined in the IT Act of 2000, with strategic guidance provided by the National Security Council Secretariat.

Furthermore, the Ministry of Home Affairs (MHA) oversees cybercrime, working closely with the Department of Telecommunications to address fraud and utilising tools such as Pratibimb to track mobile numbers involved in cybercriminal activities.

There is an ongoing debate on regulating OTT communication services. While telecom companies continue to push for these services to be regulated under the Telecom Act, the Indian government has reiterated that OTT services like WhatsApp and Telegram fall under the Information Technology Act. This distinction reflects the broader scope of the IT Act in handling digital communication services, even as pressure mounts for more stringent telecom-specific regulations.

AI startup claims to enhance chatbot capabilities

Augmented Intelligence, a new AI startup, has emerged from stealth with $44 million in funding and a bold claim that its AI platform, Apollo, can outperform traditional chatbots by combining symbolic AI and neural networks. While neural networks excel at language generation, symbolic AI uses task-specific rules to solve complex problems. Apollo, the company says, uses both approaches to power more efficient and “agentic” chatbots capable of not just answering questions but performing tasks like booking flights.

CEO Ohad Elhelo argues that most AI models, like OpenAI’s ChatGPT, struggle when they need to take actions or rely on external tools. In contrast, Apollo integrates seamlessly with a company’s systems and APIs, eliminating the need for extensive setup. Unlike many competitors, it doesn’t require training on a company’s private data, appealing to businesses concerned about data security. Augmented Intelligence also touts its explainability, offering logs that help companies understand and improve the AI’s performance.
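The division of labour the company describes can be sketched in a few lines. This is an illustrative toy only, not Apollo’s actual design: the `classify_intent` keyword heuristic stands in for the neural (language-model) component, and `BOOKING_RULES` stands in for the symbolic, task-specific rules that gate an action such as booking a flight.

```python
# Toy "neuro-symbolic" agent sketch. All names here are hypothetical
# illustrations, not part of any real product or API.

def classify_intent(message: str) -> str:
    """Stand-in for the neural component: a real system would use an
    LLM to map free text to an intent; here a keyword check suffices."""
    text = message.lower()
    if "book" in text and "flight" in text:
        return "book_flight"
    return "chat"

# Symbolic component: explicit rules that must pass before acting.
# Each rule returns True on success, or an error string on failure.
BOOKING_RULES = [
    lambda req: True if "origin" in req else "missing origin",
    lambda req: True if "destination" in req else "missing destination",
]

def handle(message: str, request: dict) -> str:
    intent = classify_intent(message)        # neural step (mocked)
    if intent != "book_flight":
        return "LLM reply: " + message       # fall back to generation
    for rule in BOOKING_RULES:               # symbolic step: check rules
        result = rule(request)
        if result is not True:
            return f"Cannot book: {result}"
    return f"Booked flight {request['origin']} -> {request['destination']}"

print(handle("Please book a flight", {"origin": "DUB", "destination": "STN"}))
```

The point of the rule layer is the explainability the company touts: when an action is refused, the failing rule names the reason, producing the kind of inspectable log a purely neural model cannot guarantee.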

The company has already secured a partnership with Google Cloud and claims its technology outperforms purely neural network-based models. While some of Elhelo’s claims, such as eliminating AI ‘hallucinations’, remain unproven, Augmented Intelligence’s novel approach and recent $350 million valuation highlight growing interest in AI solutions that blend symbolic reasoning with neural processing.