Taiwan’s new rules to combat telecom fraud

Taiwan’s government is taking decisive action to combat telecom fraud through new regulations proposed by the Ministry of Digital Affairs. These regulations focus on the stringent management of four-digit telephone numbers beginning with ‘19’, typically allocated to government agencies and charitable organisations.

The primary goal is to safeguard these critical numbers from misuse. To this end, the government plans to impose penalties on telecom operators who breach the Fraud Hazard Prevention Act, including limiting the number of phone numbers they can receive. This measure aims to deter fraudulent activities effectively. Furthermore, organisations in Taiwan will need to obtain government approval before making any changes to the use of these numbers and must return them if their usage changes. To ensure compliance, the Ministry will conduct random inspections to monitor the proper use of these numbers.

Taiwan’s government is also enhancing its anti-fraud efforts by proposing amendments to the Subsidy, Reward, and Assistance Regulations for Promoting Industry Innovation. These changes will allow the Ministry to offer financial support, including subsidies and rewards, to digital industries developing technologies to prevent fraud. By encouraging technological innovation in this field, the government aims to strengthen fraud prevention measures and protect individuals and organisations against telecom-related fraud.

Australian police arrest alleged crime app mastermind

Australian authorities have charged a Sydney man with creating and managing an encrypted messaging app, Ghost, allegedly used by global crime networks. The man, 32, was arrested in western Sydney and appeared in court on Wednesday, facing multiple charges related to the platform’s role in organised crime. Ghost is said to have been used by syndicates from Australia, the Middle East, and South Korea for drug trafficking and contract killings.

Police, in collaboration with international forces, carried out extensive raids across Australia and beyond, with searches also conducted in Italy, Ireland, Sweden, and Canada. Up to 50 Australians allegedly involved with Ghost are now facing charges, with significant prison terms expected. More arrests are anticipated in both Australia and abroad.

Authorities achieved a breakthrough by cracking Ghost’s encryption, which they say prevented the death or serious injury of 50 individuals in Australia. This marks the first time an Australian has been accused of running a global criminal messaging platform, a major milestone in the country’s fight against organised crime.

The Australian Federal Police Deputy Commissioner highlighted the complex nature of dismantling encrypted communication platforms. The success in accessing evidence from Ghost represents a major achievement in efforts to disrupt global criminal activity.

Tanzania embraces AI to tackle rising cybercrime

Tanzanian President Samia Suluhu Hassan has called for the integration of AI into the strategies of the Tanzania Police Force to address the escalating threat of cybercrime. Speaking at the 2024 Annual Senior Police Officers’ Meeting and the 60th Anniversary of the Tanzania Police Force, President Samia emphasised that in today’s digital age, leveraging advanced technology is crucial for effectively combating online threats. She highlighted the necessity for the police to adapt technologically to stay ahead of sophisticated cybercriminals, underlining the importance of embracing these advancements.

In her address, President Samia also drew attention to a troubling surge in cybercrime, with incidents increasing by 36.1% from 2022 to 2023. She noted that crimes such as fraud, false information dissemination, pornography distribution, and harassment have become more prevalent, with offenders frequently operating from outside Tanzania. The President’s remarks underscore the urgency of adopting advanced technological tools to address these growing challenges effectively and to enhance the police’s capability to counteract such threats.

Furthermore, President Samia emphasised the need to maintain peace and stability during the upcoming local government and general elections. She tasked the police with managing election-related challenges, including defamatory statements and misinformation, without resorting to internet shutdowns. Stressing the importance of preserving national peace amid political activity, she underscored that while elections are temporary, a stable environment is essential for ongoing development and progress.

Mistral AI lowers prices and launches free developer features

Mistral AI has launched a new free tier for developers to fine-tune and test apps using its AI models and has significantly reduced prices for API access to those models, the startup announced on Tuesday. The Paris-based company, valued at $6 billion, is introducing these updates to remain competitive with industry giants such as OpenAI and Google, which also offer limited free tiers for developers. Mistral’s free tier, accessible through its platform ‘la Plateforme,’ enables developers to test its AI models at no cost. However, paid access is required for commercial production.

Mistral has reduced the prices of its AI models, including Mistral NeMo and Codestral, by over 50% and cut the cost of its largest model, Mistral Large, by 33%. This decision reflects the increasing commoditisation of AI models in the developer space, with providers vying to offer more advanced tools at lower prices.

Mistral has integrated image processing into its consumer AI chatbot, le Chat, through its new multimodal model, Pixtral 12B. This model allows users to scan, analyse, and search image files alongside text, marking another advancement in the startup’s expanding AI capabilities.

Slack to transform into AI-powered work operating system

Slack is undergoing a major transformation as it integrates AI features into its platform, aiming to evolve from a simple messaging service to a ‘work operating system.’ CEO Denise Dresser said Slack will now serve as a hub for AI applications from companies like Salesforce, Adobe, and Anthropic. New, pricier features include AI-generated summaries of conversations and the ability to interact with AI agents for tasks such as data analysis, web searches, and image generation.

This shift follows Salesforce’s 2021 acquisition of Slack and its broader move toward AI-driven solutions. Slack’s AI integration seeks to enhance productivity by offering tools to catch up on team discussions, analyse business data, and create branded content, all within the chat environment. However, questions remain about whether users will embrace and pay for these premium features and how this change aligns with Slack’s core identity as a workplace communication tool.

Concerns around data privacy have also surfaced as Slack leans further into AI. The company faced criticism earlier this year over its handling of customer data for training purposes, but it maintains that it does not use the content of user messages to train its AI models. As Slack continues integrating AI, it must address growing scepticism about how customer data is managed and safeguarded.

New Google update will identify AI-edited images

Google is planning to roll out new features that will enable the identification of images that have been generated or edited using AI in search results. This update will highlight such photos in the ‘About this image’ section across Google Search, Google Lens, and the Circle to Search feature on Android. In the future, this disclosure feature may also be extended to other Google platforms like YouTube.

To achieve this, Google will utilise C2PA metadata developed by the Coalition for Content Provenance and Authenticity. This metadata tracks an image’s history, including its creation and editing process. However, the adoption of C2PA standards is limited, and metadata can be altered or removed, which may impact the reliability of this identification method.
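The provenance idea behind C2PA — binding a content hash and an edit log to a signature so that later tampering is detectable — can be sketched in a few lines of Python. This is an illustrative toy under simplified assumptions: an HMAC with a shared demo key stands in for C2PA’s actual certificate-based signing and JUMBF container format, and the field names here are hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical key; real C2PA uses X.509 certificates


def make_manifest(image_bytes: bytes, actions: list[str]) -> dict:
    """Record the image's hash and edit history, then sign the record."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "actions": actions,  # e.g. ["created", "ai_edited"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record


def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the image bytes are unchanged."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and record["content_hash"] == hashlib.sha256(image_bytes).hexdigest())


original = b"\x89PNG...fake image bytes"
manifest = make_manifest(original, ["created", "ai_edited"])
print(verify(original, manifest))               # unchanged image: True
print(verify(original + b"tamper", manifest))   # altered image: False
```

The sketch also makes the caveat above concrete: verification detects altered bytes, but if the manifest itself is stripped from the file, there is nothing left to check.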

Despite the challenges, Google’s action addresses the increasing concerns about deepfakes and AI-generated content. There have been reports of a significant rise in scams involving such content, and losses related to deepfakes are expected to increase dramatically in the coming years. As public concern about deepfakes and AI-driven misinformation grows, Google’s initiative aims to provide more transparency in digital media.

AI-powered fact-checking tech in development by NEC

Japanese technology corporation NEC (Nippon Electric Company) is developing an AI technology designed to analyse and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organisations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.

The AI uses large language models (LLMs) to assess different types of content, such as text, images, video, and audio, detecting whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, looking for inconsistencies and checking sources. The resulting reports allow for user-driven adjustments, such as removing unreliable information or adding new details, helping organisations streamline their verification processes.

As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organizations, addressing the growing problem of misinformation online.

Meta introduces new Instagram teen accounts

Meta is set to overhaul Instagram’s privacy settings for users under 18, introducing stricter controls to protect young users. Accounts for teenagers will now be private by default, ensuring only approved connections can message or tag them. The move comes amid growing concerns over the negative impact of social media on youth, with studies highlighting links to mental health issues such as depression and anxiety.

Parents will have more authority over their children’s accounts, including monitoring who they engage with and setting restrictions on app usage. Teens under 16 will need parental permission to change default settings. The update also includes new features such as a 60-minute daily usage reminder and a default ‘sleep mode’ that mutes notifications overnight.

Social media platforms, including Meta’s Instagram, have faced numerous lawsuits, with critics arguing that these apps have addictive qualities and contribute to rising mental health problems in teenagers. Recent US legislation seeks to hold platforms accountable for their effects on young users, pushing Meta to introduce these changes.

The rollout will take place in the US, UK, Canada, and Australia within the next two months, with European Union users following later. Global adoption of the new teen accounts is expected by January next year.

TikTok faces legal battle over potential US ban

TikTok and its parent company ByteDance are locked in a high-stakes legal battle with the US government to prevent a looming ban on the app, used by 170 million Americans. The legal confrontation revolves around a US law that mandates that ByteDance divest its US assets by 19 January or face a complete ban. Lawyers for TikTok argue that the law violates free speech and is an unprecedented move that contradicts America’s tradition of fostering an open internet. A federal appeals court in Washington recently heard arguments from both sides, with TikTok’s legal team pushing for an injunction to halt the law’s implementation.

The US government, represented by the Justice Department, contends that TikTok’s Chinese ownership poses a significant national security threat, citing the potential for China to access American user data or manipulate the flow of information. This concern is at the core of the new legislation passed by Congress earlier this year, highlighting the risks of having a popular social media platform under foreign control. The White House, while supportive of curbing Chinese influence, has stopped short of advocating for an outright ban.

ByteDance maintains that divesting TikTok is neither technologically nor commercially feasible, casting uncertainty over the app’s future as it faces potentially severe consequences amid a politically charged environment.

The case comes at a pivotal moment in the US political landscape, with both presidential candidates, Donald Trump and Kamala Harris, actively using TikTok to engage younger voters. The judges expressed concerns over the complexities involved, especially with monitoring the massive codebase that powers TikTok, making it difficult to assess risks in real time. As the legal wrangling continues, a ruling is expected by 6 December, and the case may eventually reach the US Supreme Court.

23andMe to pay $30 million in data breach settlement

American personal genomics and biotechnology company 23andMe has agreed to a $30 million settlement after a data breach exposed the personal information of 6.9 million users. The breach, which occurred last year, compromised sensitive data, including DNA Relatives profiles and Family Tree information. Affected users will receive financial compensation and three years of security monitoring under the Privacy & Medical Shield + Genetic Monitoring program.

The lawsuit also accused 23andMe of failing to inform customers of Chinese and Ashkenazi Jewish descent that they were specifically targeted in the breach. The stolen information was later found for sale on the dark web. A federal judge must now approve the proposed settlement, which the company considers fair and beneficial for its users.

Despite its financial challenges, the company expects to cover $25 million of the settlement with cyber insurance. The breach, which began in April 2023 and lasted five months, affected nearly half of the company’s 14.1 million customers at the time. 23andMe disclosed the incident in an October 2023 blog post.

The company, led by co-founder Anne Wojcicki, is also facing financial difficulties. It posted a significant quarterly loss and has been attempting to go private. Shares of 23andMe have been trading below $1 since December 2023, a sharp drop from its original public offering price.