Donald Trump has shared AI-generated images on social media that appear to show Taylor Swift fans endorsing his presidential campaign. The images, which are clearly fake, have sparked controversy, particularly since Swift has not publicly supported any candidate in the 2024 US election.
Trump, however, embraced the images, responding with ‘I accept!’ on his platform. The posts were also shared by an account that reposts his content on X (formerly Twitter). Despite their obvious fabrication, the posts have drawn significant attention online.
Taylor Swift, who endorsed Joe Biden in the last election, has not commented on the fake images. Her history with AI-generated content has been fraught: explicit deepfakes of the singer once led X to temporarily block searches for her name.
Whether Swift will pursue legal action against AI content providers remains an open question. The source of the recent fake posts is unknown, raising concerns about the use of AI in political propaganda.
Amid the heated political landscape in the United States, major companies like Google and Netflix are facing calls for boycotts due to alleged political affiliations. These online campaigns, mainly driven by false information, suggest that these companies support Kamala Harris in the upcoming election. However, these claims are baseless and have been debunked by fact-checkers.
The boycott calls have gained traction on platforms like X, owned by Elon Musk, who has shown support for Donald Trump. Fake accounts on X have broadly spread these false narratives, leading to widespread calls for users to cancel their Netflix subscriptions and avoid Google’s services. Despite Netflix’s clarification that any donations were personal and not connected to the company, the misinformation has continued to spread, illustrating the vulnerability of brands in today’s politically charged environment.
The disinformation campaigns highlight how quickly false information can manipulate public opinion and consumer behaviour, especially in the lead-up to an election. Musk’s influence on X and his criticisms of companies like Google have fueled these misleading narratives.
Surveys indicate that many consumers prefer companies to stay neutral in political matters, yet the polarised environment makes this difficult. The controversy has also led to a decline in advertising on X as brands seek to distance themselves from platforms that enable disinformation.
The impact of these boycotts and the broader disinformation campaigns underscores the challenges companies face in maintaining their reputation and trust in an increasingly divided society. As the election approaches, the risk of such campaigns influencing public opinion and consumer actions remains high.
According to a Meta security report, Russia's use of generative AI in online deception campaigns has so far delivered only limited results. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some productivity and content-generation gains, they have not significantly advanced these influence efforts. Despite growing concerns about generative AI being used to manipulate elections, Meta says it has successfully disrupted such influence operations.
The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, with expectations that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, as influence operations often span multiple online platforms.
Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly reduced its content moderation efforts, making it a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, being a platform for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.
Elon Musk’s social media platform X announced last Saturday that it would cease operations in Brazil immediately, citing ‘censorship orders’ from Brazilian judge Alexandre de Moraes. According to X, de Moraes threatened to arrest one of the company’s legal representatives in Brazil if it did not comply with orders to remove certain content from the platform. X shared images of a document purportedly signed by the judge, stating that the representative, Rachel Nova Conceicao, would face a daily fine and possible arrest if the platform did not comply.
Due to demands by “Justice” @Alexandre in Brazil that would require us to break (in secret) Brazilian, Argentinian, American and international law, 𝕏 has no choice but to close our local operations in Brazil.
In response, X decided to close its operations in Brazil to protect its staff, although the service remains available to Brazilian users. The Brazilian Supreme Court, where de Moraes serves, declined to comment on the authenticity of the document shared by X.
TikTok has contested claims made by the US Department of Justice in a federal appeals court, asserting that the government has inaccurately characterised the app’s ties to China. The company is challenging a law that mandates its Chinese parent company, ByteDance, to divest TikTok’s US assets by January 19 or face a ban. TikTok argues that the app’s content recommendation engine and user data are securely stored in the US, with content moderation conducted domestically.
The law, signed by President Joe Biden in April, reflects concerns over potential national security risks, with accusations that TikTok allows Chinese authorities to access American data and influence content. TikTok, however, contends that the law infringes on free speech rights, arguing that its content curation should be protected by the US Constitution.
The legislation also impacts app stores and internet hosting services, barring support for TikTok unless it is sold. The swift passage of the measure in Congress highlights ongoing fears regarding data security and espionage risks associated with the app.
Google is intensifying its AI initiatives in India, with a focus on addressing language barriers and improving agricultural efficiency. Abhishek Bapna, Director of Product Management at Google DeepMind, emphasised the economic importance of breaking language barriers, particularly in areas like healthcare and banking. Google’s AI chatbot, Gemini, supports over 40 languages globally, including nine Indian languages, and aims to enhance language quality further.
In collaboration with the Indian Institute of Science, Google’s Project Vaani provides over 14,000 hours of speech data from 80 districts, empowering developers to create more efficient AI models for India’s multilingual environment. Additionally, the IndicGenBench benchmark helps fine-tune language models for Indian languages. These efforts are crucial to improving the accuracy and reach of AI in the country.
Google is also piloting its Agricultural Landscape Understanding (ALU) Research API in Telangana, designed to boost farm yields and enhance market access. The initiative aligns with Google’s broader goals of improving livelihoods and addressing climate change, offering granular data-driven insights at the farm field level.
These initiatives are expected to not only assist farmers but also attract end users like banks and insurance companies. Once the pilot program is completed, Google plans to scale the project to work with state governments across India.
Government agencies in Australia must disclose their use of AI within six months under a new policy effective from 1st September. The policy mandates that agencies prepare a transparency statement detailing their AI adoption and usage, which must be publicly accessible. Agencies must also designate a technology executive responsible for ensuring the policy’s implementation.
The transparency statements, updated annually or after significant changes, will include information on compliance, monitoring effectiveness, and measures to protect the public from potential AI-related harm. Although staff training on AI is strongly encouraged, it is not a mandatory requirement under the new policy.
The policy was developed in response to concerns about public trust, recognising that a lack of transparency and accountability in AI use could hinder its adoption. The Australian government aims to position itself as a model of safe and responsible AI usage by integrating the new policy with existing frameworks and legislation.
Minister for Finance and the APS, Katy Gallagher, emphasised the importance of the policy in guiding agencies to use AI responsibly, ensuring Australians’ confidence in the government’s application of these technologies.
The US Federal Trade Commission (FTC) has finalised a rule prohibiting companies from buying or selling fake online reviews. The new rule allows the FTC to impose fines of up to $51,744 per violation, targeting deceptive practices that harm consumers and distort competition.
The rule addresses various forms of manipulation, including fake reviews from non-existent customers, company insiders, or AI. It also bans buying fake views or followers on social media and using intimidation to suppress negative reviews. While the rule does not require platforms to verify consumer reviews, it represents a significant step towards a more honest online marketplace.
Trade groups and businesses like Google, Amazon, and Yelp have supported the rule. Yelp’s General Counsel, Aaron Schur, stated that enforcing the rule would improve the review landscape and promote fair competition among businesses.
Consumer advocates, such as Teresa Murray from the US Public Interest Research Group, praised the rule as essential protection for online shoppers. The hope is that the fear of penalties will encourage companies to adhere to ethical practices, benefiting both consumers and businesses.
WhatsApp is expanding its sticker options, offering users more ways to express themselves on the platform. Despite the availability of hundreds of emojis and sticker packs, many users still struggle to find the perfect expression for their emotions. To address this, WhatsApp has integrated AI and GIPHY, enhancing the sticker experience.
Users can now access an extensive collection by tapping the sticker icon and searching with text or emojis. WhatsApp also allows users to create custom stickers from their existing images: when a photo is opened, the app automatically removes the background, leaving only the subject for further customisation.
Stickers can be cropped, drawn on, and decorated before being saved automatically to the sticker section. WhatsApp now also lets users preview packs and reorganise them by dragging within the sticker tray, offering greater control over their collections.
A recent report by the Center for Countering Digital Hate (CCDH) has revealed that Instagram failed to remove abusive comments directed at female politicians who may run in the 2024 US elections. The study examined over half a million comments on posts by prominent female figures from the Democratic and Republican parties, including Vice President Kamala Harris and Senator Marsha Blackburn.
Over 20,000 comments were flagged as ‘toxic,’ with a significant number containing sexist or racist abuse, and even death and rape threats. Despite violating Instagram’s community standards, 93% of the harmful comments remained on the platform.
Meta, the parent company of Instagram, highlighted the tools available to users to filter out offensive content but acknowledged the need to review the CCDH report and promised to act on any content that breaches their policies. The report further emphasised that women of colour were particularly vulnerable to online abuse during the 2020 election and criticised social media algorithms for amplifying harmful content. Advocacy groups are increasingly calling on social media platforms to better enforce their safety guidelines to protect users from targeted abuse.