According to a Meta security report, Russia's use of generative AI in online deception campaigns has so far gained little traction. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some productivity and content-generation gains, they have not significantly advanced influence operations. Despite growing concerns about generative AI being used to manipulate elections, Meta has successfully disrupted such operations to date.
The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, with expectations that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, as influence operations often span multiple online platforms.
Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly reduced its content moderation efforts, making it a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, being a platform for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.
Elon Musk's social media platform X announced last Saturday that it would cease operations in Brazil immediately, citing 'censorship orders' from Brazilian judge Alexandre de Moraes. According to X, de Moraes threatened to arrest one of the company's legal representatives in Brazil if it did not comply with orders to remove certain content from the platform. X shared images of a document purportedly signed by the judge, stating that the representative, Rachel Nova Conceicao, would face a daily fine and possible arrest if the platform did not comply.
Due to demands by “Justice” @Alexandre in Brazil that would require us to break (in secret) Brazilian, Argentinian, American and international law, 𝕏 has no choice but to close our local operations in Brazil.
In response, X decided to close its operations in Brazil to protect its staff, although the service remains available to Brazilian users. The Brazilian Supreme Court, where de Moraes serves, declined to comment on the authenticity of the document shared by X.
TikTok has contested claims made by the US Department of Justice in a federal appeals court, asserting that the government has inaccurately characterised the app’s ties to China. The company is challenging a law that mandates its Chinese parent company, ByteDance, to divest TikTok’s US assets by January 19 or face a ban. TikTok argues that the app’s content recommendation engine and user data are securely stored in the US, with content moderation conducted domestically.
The law, signed by President Joe Biden in April, reflects concerns over potential national security risks, with accusations that TikTok allows Chinese authorities to access American data and influence content. TikTok, however, contends that the law infringes on free speech rights, arguing that its content curation should be protected by the US Constitution.
The legislation also impacts app stores and internet hosting services, barring support for TikTok unless it is sold. The swift passage of the measure in Congress highlights ongoing fears regarding data security and espionage risks associated with the app.
Google is intensifying its AI initiatives in India, with a focus on addressing language barriers and improving agricultural efficiency. Abhishek Bapna, Director of Product Management at Google DeepMind, emphasised the economic importance of breaking language barriers, particularly in areas like healthcare and banking. Google's AI chatbot, Gemini, supports over 40 languages globally, including nine Indian languages, and aims to enhance language quality further.
In collaboration with the Indian Institute of Science, Google’s Project Vaani provides over 14,000 hours of speech data from 80 districts, empowering developers to create more efficient AI models for India’s multilingual environment. Additionally, the IndicGenBench benchmark helps fine-tune language models for Indian languages. These efforts are crucial to improving the accuracy and reach of AI in the country.
Google is also piloting its Agricultural Landscape Understanding (ALU) Research API in Telangana, designed to boost farm yields and enhance market access. The initiative aligns with Google’s broader goals of improving livelihoods and addressing climate change, offering granular data-driven insights at the farm field level.
These initiatives are expected to not only assist farmers but also attract end users like banks and insurance companies. Once the pilot program is completed, Google plans to scale the project to work with state governments across India.
Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.
The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.
The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The government in Australia aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.
Minister for Finance and the APS, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.
The US Federal Trade Commission (FTC) has finalised a rule prohibiting companies from buying or selling fake online reviews. The new rule allows the FTC to impose fines of up to $51,744 per violation, targeting deceptive practices that harm consumers and distort competition.
The rule addresses various forms of manipulation, including fake reviews from non-existent customers, company insiders, or AI. It also bans purchasing fabricated views or followers on social media and using intimidation to remove negative reviews. While the rule does not require platforms to verify consumer reviews, it represents a significant step towards a more honest online marketplace.
Trade groups and businesses like Google, Amazon, and Yelp have supported the rule. Yelp’s General Counsel, Aaron Schur, stated that enforcing the rule would improve the review landscape and promote fair competition among businesses.
Consumer advocates, such as Teresa Murray from the US Public Interest Research Group, praised the rule as essential protection for online shoppers. The hope is that the fear of penalties will encourage companies to adhere to ethical practices, benefiting both consumers and businesses.
WhatsApp is expanding its sticker options, offering users more ways to express themselves through its platform. Despite the availability of hundreds of emojis and sticker packs, many users may struggle to find the perfect expression for their emotions. To address this, WhatsApp has integrated AI and GIPHY, enhancing the experience.
Users can now access an extensive collection by tapping the sticker icon and searching with text or emojis. Additionally, WhatsApp allows users to create custom stickers from their existing images: on opening a photo, the app automatically removes the background, leaving only the subject for further customisation.
Stickers can be cropped, drawn upon, and decorated before being saved automatically to the sticker section. WhatsApp now also lets users preview and reorganise packs by dragging them within the sticker tray, offering greater control over their collection.
A recent report by the Center for Countering Digital Hate (CCDH) has revealed that Instagram failed to remove abusive comments directed at female politicians who may run in the 2024 US elections. The study examined over half a million comments on posts by prominent female figures from the Democratic and Republican parties, including Vice President Kamala Harris and Senator Marsha Blackburn.
Over 20,000 comments were flagged as 'toxic', with a significant number containing sexist or racist abuse and even death and rape threats. Despite violating Instagram's community standards, 93% of the harmful comments remained on the platform.
Meta, the parent company of Instagram, highlighted the tools available to users to filter out offensive content but acknowledged the need to review the CCDH report and promised to act on any content that breaches their policies. The report further emphasised that women of colour were particularly vulnerable to online abuse during the 2020 election and criticised social media algorithms for amplifying harmful content. Advocacy groups are increasingly calling on social media platforms to better enforce their safety guidelines to protect users from targeted abuse.
YouTube has shut down the video channel of the Portuguese ultranationalist group Grupo 1143 for violating its hate speech policies. The action came after the New York Times contacted the platform while investigating how online hate speech can incite real-world violence, using Portugal as a case study. YouTube stated that it prohibits content glorifying hateful supremacist propaganda and took down the channel linked to the group, which is led by neo-Nazi activist Mario Machado.
Machado, who has a criminal history including charges of assault and racial discrimination, criticised the shutdown on X, claiming it was an attempt by the ‘global Left’ to silence his nationalist organisation. Despite the YouTube ban, Grupo 1143’s accounts on X and Telegram remain active. The group, known for organising anti-immigration and anti-Islam protests, is currently under investigation by Portuguese authorities for its possible connection to violent attacks on migrants earlier this year, although it denies any involvement.
YouTube’s hate speech policy strictly bans content promoting violence or hatred based on attributes like immigration status, nationality, or religion. In the first quarter of 2024, YouTube removed over 157,000 videos worldwide for violating these policies.
Meta Platforms announced that access to Instagram in Turkey has been restored after a nine-day block. The social media giant expressed its satisfaction with the platform being operational again and confirmed ongoing discussions with Turkish authorities regarding content that violates its policies. Meta emphasised its commitment to removing content related to dangerous organisations and individuals while applying allowances for newsworthy content where appropriate.
The Turkish government had blocked Instagram on 2 August, citing the platform’s failure to comply with local laws. The ban was linked to allegations that Instagram had restricted posts expressing condolences over the assassination of Ismail Haniyeh, a leader of the Palestinian militant group Hamas. The block led to significant protests from users and businesses reliant on the platform. Meta clarified that it did not change its policies but agreed to reassess its actions concerning policy-violating content in Turkey.
Turkey, which ranks fifth globally in Instagram usage with over 57 million users, saw the platform reinstated after Meta’s assurances to cooperate with Turkish authorities. Despite the platform’s return, tensions remain, highlighted by the recent arrest of a woman who criticised the Instagram ban.