Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.
The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.
The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The Australian government aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.
Minister for Finance and the APS, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.
T-Mobile has been fined $60 million by a US committee focused on national security for failing to prevent and report unauthorised access to sensitive data. The penalty, imposed by the Committee on Foreign Investment in the US (CFIUS), is linked to violations of a mitigation agreement T-Mobile signed during its 2020 acquisition of Sprint Corp.
The data breach occurred in 2020 and 2021, during the integration of Sprint into T-Mobile’s operations. T-Mobile, controlled by Deutsche Telekom, explained that technical issues affected a small number of law enforcement data requests, but emphasised that the information never left the law enforcement community and was swiftly addressed.
The $60 million fine is the largest ever imposed by CFIUS, signalling a stronger approach to enforcement. Officials noted that the transparency of the penalty is intended to deter future violations, highlighting the committee’s commitment to holding companies accountable.
In the past 18 months, CFIUS has issued six penalties, including the one against T-Mobile, far surpassing the number of fines levied in previous decades. The delay in T-Mobile’s reporting hampered the committee’s efforts to investigate and mitigate potential risks to US national security.
A recent report by the Center for Countering Digital Hate (CCDH) has revealed that Instagram failed to remove abusive comments directed at female politicians who may run in the 2024 US elections. The study examined over half a million comments on posts by prominent female figures from the Democratic and Republican parties, including Vice President Kamala Harris and Senator Marsha Blackburn.
Over 20,000 comments were flagged as ‘toxic,’ with a significant number containing sexist and racist abuse, and even death and rape threats. Despite violating Instagram’s community standards, 93% of the harmful comments remained on the platform.
Meta, the parent company of Instagram, highlighted the tools available to users to filter out offensive content but acknowledged the need to review the CCDH report and promised to act on any content that breaches their policies. The report further emphasised that women of colour were particularly vulnerable to online abuse during the 2020 election and criticised social media algorithms for amplifying harmful content. Advocacy groups are increasingly calling on social media platforms to better enforce their safety guidelines to protect users from targeted abuse.
Enzo Biochem has agreed to pay $4.5 million to settle claims that it failed to protect sensitive patient data, leading to a significant cyberattack in April 2023. The breach compromised the personal and health information of approximately 2.4 million patients, including Social Security numbers and health histories. The settlement, announced by New York Attorney General Letitia James, involves payments to New York, New Jersey, and Connecticut.
The attack was made possible by shared login credentials among Enzo employees, including one password that hadn’t been updated in ten years. The attackers installed malware on the company’s systems, which went undetected for several days due to insufficient monitoring. The company has since taken steps to enhance its security measures, such as enforcing stronger passwords, implementing two-factor authentication, and improving its response plan for future incidents.
Enzo began notifying affected patients in June 2023. The breach impacted 1.46 million New Yorkers, including 405,000 whose Social Security numbers were compromised. New York will receive $2.8 million from the settlement. Attorney General James emphasised the importance of protecting patient information, particularly in the context of medical services.
Enzo Biochem has not commented on the settlement. The company previously exited the clinical lab testing business in August of the previous year. The settlement marks a significant reminder of the importance of robust cybersecurity protocols in protecting sensitive data.
Dutch copyright enforcement group BREIN has successfully taken down a large language dataset used to train AI models without proper permission. The dataset contained information gathered from tens of thousands of books, news sites, and Dutch-language subtitles from numerous films and TV series. BREIN’s Director, Bastiaan van Ramshorst, noted the difficulty in determining whether and how extensively AI companies had already used the dataset.
The removal comes as the EU prepares to enforce its AI Act, requiring companies to disclose the datasets used in training AI models. The person responsible for offering the Dutch dataset complied with a cease and desist order and removed it from the website where it was available.
Why does this matter?
BREIN’s action follows similar moves in other countries, such as Denmark, where a copyright protection group took down a large dataset called ‘Books3’ last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.
Companies from the US and China are leading the race in AI research, with Alphabet, the parent company of Google, at the forefront. A recent study from Georgetown University revealed that Alphabet has published the most frequently cited AI academic papers over the past decade. Seven of the top ten positions are held by US companies, including Microsoft and Meta, reflecting their dominance in the field.
Chinese firms are not far behind, with Tencent, Alibaba, and Huawei securing spots within the top ten. These companies have shown remarkable growth, particularly in the number of papers accepted at major conferences. Huawei has outpaced its competitors with a 98.2% annual growth rate in this area, followed by Alibaba at 53.5%.
The competition extends beyond academic publications to patents. Baidu, a leading Chinese tech firm, topped the list of patent applications with over 10,000 submissions from 2013 to 2023. Baidu’s growth has been particularly striking, with a 228% increase in patent applications year-on-year in 2020. US companies hold three spots in the top ten for patents, with IBM making the list.
Samsung Electronics is the only Korean company to make the top 100, ranking No. 14 for highly cited AI articles and No. 4 for patents. However, Samsung’s growth in these areas has been slower compared to other global leaders, with modest increases in conference paper acceptances in recent years.
Polish billionaire Rafal Brzoska and his wife plan to take legal action against Meta, the parent company of Facebook and Instagram, due to fake advertisements circulating on these platforms. These ads falsely feature Brzoska’s image and spread misinformation about his wife. The couple has yet to decide where to file the lawsuit, which is part of a broader effort to hold Meta accountable for allowing such ads to persist even after being alerted to the issue.
Brzoska, known for founding the Polish parcel locker company InPost, stated that he first notified Meta about the problem in early July but has yet to see a resolution. He and his wife are considering various legal jurisdictions, including possibly filing a lawsuit in the United States if they don’t see action in Europe. They intend to demand that Meta cease profiting from misleading content that infringes on their rights and seek substantial compensation, which they plan to donate to charity.
The situation has prompted action from the President of the Personal Data Protection Office in Poland, who recently mandated that Meta Platforms Ireland Limited stop displaying false advertisements featuring the Brzoskas on Facebook and Instagram in Poland for three months.
A Meta spokesperson responded that the company removes false ads when discovered and collaborates with local authorities to combat scammers. They acknowledged the ongoing challenge of scammers who constantly adapt to evade detection, reaffirming their commitment to working with businesses, local governments, and law enforcement to address these issues.
The Competition Commission of India (CCI) issued a confidential order on 7 August, requiring all parties involved in its antitrust case against Apple to return the investigation reports. The CCI emphasised the need to maintain the confidentiality of sensitive information to prevent unauthorised disclosures. Although the order did not specify what information Apple was concerned about, a source indicated that Apple was worried about disclosing revenue figures from its India app store and its market share.
The reports from the CCI’s antitrust investigations unit in 2022 and 2024 concluded that Apple had exploited its dominant position in the iOS app store market. The recall of these reports, now involving revisions to remove confidential information, will affect other parties, such as Match Group and the Indian startup group ADIF, which includes financial giant Paytm.
Why does it matter?
The CCI’s decision to recall the reports follows a private complaint by Apple, which argued that the versions shared with parties contained its confidential business information. The recall is rare and is expected to delay the proceedings by two to three months, according to lawyers familiar with the CCI’s processes.
Globally, Apple is under scrutiny for its market practices. In June, European Union antitrust regulators accused Apple of violating tech rules, potentially leading to substantial fines. Apple also faces an inquiry regarding new fees imposed on app developers. Despite these allegations, Apple maintains that it is a minor player in India’s smartphone market, where Google’s Android system dominates. By mid-2024, iOS powered just 3.5% of India’s 690 million smartphones, although Apple’s presence in the country has grown significantly over the past five years.
In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.
The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.
Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.
An Austrian advocacy group, NOYB, has filed a complaint against the social media platform X, owned by Elon Musk, accusing the company of using users’ data to train its AI systems without their consent. The complaint, led by privacy activist Max Schrems, was lodged with authorities in nine European Union countries, pressuring Ireland’s Data Protection Commission (DPC), the primary EU regulator for major US tech firms because their EU operations are based in Ireland.
Notably, NOYB’s complaint focuses primarily on X’s lack of cooperation and the inadequacy of its mitigation measures rather than questioning the legality of the data processing itself. Schrems emphasised the need for X to fully comply with EU law by obtaining user consent before using their data. X has yet to respond to the latest complaint but intends to work with the DPC on AI-related issues.
In a related case, Meta, Facebook’s parent company, delayed the launch of its AI assistant in Europe after the Irish DPC advised against it, following similar complaints from NOYB regarding the use of personal data for AI training.