Government entities in Australia to assess foreign control risks in tech

Australia has instructed all government entities to review their technology assets for risks of foreign control or influence. The directive aims to address increasing cyber threats from hostile states and financially motivated attacks. The Australian Signals Directorate (ASD) recently warned of state-sponsored Chinese hacking targeting Australian networks.

The Department of Home Affairs has issued three legally binding instructions requiring over 1,300 government entities to identify Foreign Ownership, Control or Influence (FOCI) risks in their technology, including hardware, software, and information systems. The organisations in question must report their findings by June 2025.

Additionally, government entities must audit all internet-facing systems and services and develop specific security risk management plans for them. They must also engage with the ASD for threat intelligence sharing by the end of the month, improving visibility and strengthening cybersecurity.

The new cybersecurity measures are part of the Protective Security Policy Framework, following Australia’s ban on TikTok from government devices in April 2023 due to security risks. The head of the Australian Security Intelligence Organisation (ASIO) has highlighted the growing espionage and cyber sabotage threats, emphasising the interconnected vulnerabilities in critical infrastructure.

National blockchain ‘Nigerium’ aims to boost Nigeria’s tech security

The Nigerian Government has announced the development of a locally made blockchain called ‘Nigerium’, designed to secure national data and enhance cybersecurity. The National Information Technology Development Agency (NITDA) is leading this initiative to address concerns about reliance on foreign blockchain technologies, such as Ethereum, which may not align with Nigeria’s interests.

NITDA Director General Kashifu Abdullahi introduced the ‘Nigerium’ project during a visit from the University of Hertfordshire Law School delegation in Abuja. He highlighted the need for a blockchain under Nigeria’s control to maintain data sovereignty and position the country as a leader in the competitive global tech landscape. The project, proposed by the University of Hertfordshire, aims to create a blockchain tailored to Nigeria’s unique requirements and regulatory framework.

The indigenous blockchain is expected to offer several advantages, including enhanced security, data control, and economic growth. By managing its own blockchain, Nigeria could safeguard sensitive information, improve cyber defence capabilities, and promote trusted transactions within its digital economy. Collaboration between the private and public sectors will be crucial to the success of ‘Nigerium’, which marks a significant step towards technological autonomy.

If successful, ‘Nigerium’ could place Nigeria at the forefront of blockchain technology in Africa, ensuring a secure and prosperous digital future. This initiative represents a strategic move towards maintaining data sovereignty and fostering innovation, positioning Nigeria to better control its technological destiny.

Macau government websites hit by cyberattack

Several Macau government websites were hacked, prompting a criminal investigation, Chinese state media reported on Wednesday. The hacked sites included those of the office of the secretary for security, the public security police, the fire services department, and the security forces services bureau, causing service disruptions.

Security officials in Macau’s Special Administrative Region believe the cyberattack originated from overseas. However, no further details have been disclosed at this time.

In response, authorities collaborated with telecommunications operators to restore the affected services as quickly as possible. The investigation into the source of the intrusion is ongoing.

Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfake technology is used predominantly for malicious purposes, with 98% of deepfake videos online being pornographic. The FBI has identified a rise in “sextortion schemes,” where altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but many victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

Bumble fights AI scammers with new reporting tool

With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’, alongside other reporting options such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to curb the misuse of AI in creating misleading profiles.

In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Following this measure, Bumble has witnessed a 45% overall reduction in reported spam and scams. Another notable feature of Bumble is its ‘Private Detector’ AI tool, which blurs unsolicited nude photos.

Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’

FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. A regulatory push like this one aims to address growing concerns about the impact of social media on children’s mental health.

US authorities disrupt Russian AI-powered disinformation campaign

Authorities from multiple countries have issued warnings about a sophisticated disinformation campaign backed by Russia that leverages AI-powered software to spread false information both in the US and internationally. The operation, known as Meliorator, is reportedly being carried out by affiliates of RT (formerly Russia Today), a Russian state-sponsored media outlet, to create fake online personas and disseminate misleading content. Since at least 2022, Meliorator has been employed to spread disinformation targeting the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel, as detailed in a joint advisory released by US, Canadian, and Dutch security services.

Meliorator is designed to create fake social media profiles that appear to be real individuals, primarily from the US. These bots can generate original posts, follow users, like, comment, repost, and gain followers. They are capable of mirroring and amplifying existing Russian disinformation narratives. The identities of these bots are crafted based on specific parameters like location, political ideologies, and biographical data. Meliorator can also group bots with similar ideologies to enhance their personas.

Moreover, most bot accounts had over 100,000 followers to avoid detection and followed genuine accounts aligned with their fabricated political leanings. As of June 2024, Meliorator was only operational on X, but there are indications that its functionality might have expanded to other social media networks.

The US Justice Department (DOJ) announced the seizure of two domain names and the search of nearly a thousand social media accounts used by Russian actors to establish an AI-enhanced bot farm with Meliorator’s assistance. The bot farm operators registered fictitious social media accounts using private email servers linked to the seized domain names. The FBI took control of these domains, while social media platform X (formerly Twitter) voluntarily suspended the remaining identified bot accounts for violating terms of service.

FBI Director Christopher Wray emphasised that this marks a significant step in disrupting a Russian-sponsored AI-enhanced disinformation bot farm. The goal of the bot farm was to use AI to scale disinformation efforts, undermining partners in Ukraine and influencing geopolitical narratives favouring the Russian government. These accounts commonly posted pro-Kremlin content, including videos of President Vladimir Putin and criticism of the Ukrainian government.

US authorities have linked the development of Meliorator, which began in early 2022, to a former deputy editor-in-chief at RT. RT viewed the bot farm as an alternative means of distributing information beyond its television broadcasts, especially after going off the air in the US in early 2022. The Kremlin approved and financed the bot farm, with Russia’s Federal Security Service (FSB) having access to the software to advance its goals.

The DOJ highlighted that the use of US-based domain names by the FSB violates the International Emergency Economic Powers Act, and the associated payments breach US money laundering laws. Deputy Attorney General Lisa Monaco stated that the DOJ and its partners will not tolerate the use of AI by Russian government actors to spread disinformation and sow division among Americans.

Why does it matter?

The disruption of the Russian operation comes just four months before the US presidential election, a period during which security experts anticipate heightened hacking and covert social media influence attempts by foreign adversaries. Attorney General Merrick Garland noted that this is the first public accusation against a foreign government for using generative AI in a foreign influence operation.

AI-powered workplace innovation: Tech Mahindra partners with Microsoft

Tech Mahindra has partnered with Microsoft to enhance workplace experiences for over 1,200 customers and more than 10,000 employees across 15 locations by adopting Copilot for Microsoft 365. The collaboration aims to boost workforce efficiency and streamline processes through Microsoft’s trusted cloud platform and generative AI capabilities. Additionally, Tech Mahindra will deploy GitHub Copilot for 5,000 developers, anticipating a productivity increase of 35% to 40%.

Mohit Joshi, CEO and Managing Director of Tech Mahindra, highlighted the transformative potential of the partnership, emphasising the company’s commitment to shaping the future of work with cutting-edge AI technology. Tech Mahindra plans to extend Copilot’s capabilities with plugins to leverage multiple data sources, enhancing creativity and productivity. The focus is on increasing efficiency, reducing effort, and improving quality and compliance across the board.

As part of the initiative, Tech Mahindra has launched a dedicated Copilot practice to help customers unlock the full potential of AI tools, including workforce training for assessment and preparation. The company will offer comprehensive solutions to help customers assess, prepare, pilot, and adopt business solutions using Copilot for Microsoft 365, providing a scalable and personalised user experience.

Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft, remarked that the collaboration would empower Tech Mahindra’s employees with new generative AI capabilities, enhancing workplace experiences and increasing developer productivity. The partnership aligns with Tech Mahindra’s ongoing efforts to enhance workforce productivity using GenAI tools, demonstrated by the recent launch of a unified workbench on Microsoft Fabric to accelerate the adoption of complex data workflows.

Basel Committee of banking regulators proposes principles to reduce risk from third-party tech firms

The Basel Committee of banking regulators, which brings together regulators from the G20 and other nations, has proposed 12 principles for banks. It emphasised that the board of directors holds ultimate responsibility for overseeing third-party arrangements, and that banks must assume full responsibility for outsourced services and document their risk management strategies for service outages and disruptions.

Banks’ increasing reliance on third-party tech companies like Microsoft, Amazon, and Google for cloud computing services raises regulatory concerns about the potential financial sector impact if a widely used provider experiences downtime. Moreover, increased dependence on third-party services has led to heightened scrutiny due to frequent cyberattacks that threaten banks’ operational resilience and can potentially disrupt customer services. As such, banks should implement strong business continuity plans to ensure operations during disruptions.

In the consultative document, the committee also highlighted the importance of maintaining documentation for critical decisions in banks’ records, such as third-party strategies and board minutes.

Why does this matter?

As the financial sector becomes increasingly reliant on technology and tech companies to provide financial services, it grows more susceptible to cyberattacks and incidents that could ripple through the wider economy. As such, there is a growing worldwide need to improve the financial sector’s digital resilience. Europe’s Digital Operational Resilience Act (DORA), due to apply from January next year, also recognises this issue.

ChatGPT vs Google: The battle for search dominance

OpenAI’s ChatGPT, launched in 2022, has revolutionised the way people seek answers, shifting from traditional methods to AI-driven interactions. This AI chatbot, along with competitors like Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, has made AI a focal point in information retrieval. Despite these advancements, traditional search engines like Google remain dominant.

Google’s profits surged by nearly 60% due to increased advertising revenue from Google Search, and its global market share reached 91.1% in June, even as ChatGPT’s web visits declined by 12%.

Google is not only holding its ground but also leveraging AI technology to enhance its services. Analysts at Bank of America credit Gemini, Google’s AI, with contributing to the growth in search queries. By integrating Gemini into products such as Google Cloud and Search, Google aims to improve their performance, blending traditional search capabilities with cutting-edge AI innovations.

However, Google’s dominance faces significant legal challenges. The US Department of Justice has concluded its arguments in a major antitrust case against Google, accusing the company of monopolising the digital search market, with a verdict expected by late 2024.

Additionally, Google is contending with another antitrust lawsuit filed by the US government over alleged anticompetitive behaviour in the digital advertising space. These legal challenges could reshape the digital search landscape, potentially providing opportunities for AI chatbots and other emerging technologies to gain a stronger foothold in the market.