Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns centre on the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, the terms of use for GeoGPT prohibit generating content that undermines national security or incites subversion, in line with Chinese law, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The law, signed by President Biden in April, requires ByteDance to divest TikTok’s US assets or face a ban, a demand the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly targets TikTok while it ‘ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.’ The company argues that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia will hear oral arguments in the case on 16 September, and its decision could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a ‘kill switch’ allowing the government to suspend TikTok if necessary. The company also made public a 100-plus page draft national security agreement to protect US TikTok user data and claims it has spent more than $2 billion on the effort. It believes, however, that the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserted that it addresses national security concerns appropriately. Moreover, the case follows a similar attempt by former President Trump to ban TikTok, which was blocked by the courts in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

TikTok’s fate in US to be decided before election

A US appeals court has scheduled oral arguments for 16 September to address legal challenges against a new law requiring ByteDance, the China-based parent company of TikTok, to divest its US assets by 19 January or face a ban. The law, signed by President Joe Biden on 24 April, aims to eliminate Chinese ownership of TikTok due to national security concerns. TikTok, ByteDance, and a group of TikTok creators have filed lawsuits to block the law, arguing that it significantly impacts American life, with 170 million Americans using the app.

The hearing will coincide with the final weeks of the 2024 presidential election, and both parties are seeking a ruling by 6 December to allow for a potential Supreme Court review. The law also prohibits app stores like Apple and Google from offering TikTok and bars internet hosting services from supporting it unless ByteDance divests. Such a measure reflects US lawmakers’ fears that China could use TikTok to access American data or conduct espionage.

Pope Francis to address AI ethics at G7 summit

Pope Francis is set to make history at the upcoming G7 summit in Italy’s Puglia region by becoming the first pope to address the gathering’s discussions on AI. His participation underscores his commitment to ensuring that AI development aligns with human values and serves the common good. The 87-year-old pontiff recognises the potential of AI for positive change but also emphasises the need for careful regulation to prevent its misuse and safeguard against potential risks.

At the heart of the pope’s message is the call for an ethical framework to guide AI development and usage. Through initiatives like the ‘Rome Call for AI Ethics’, the Vatican seeks to promote transparency, inclusion, responsibility, and impartiality in AI endeavours. Notably, major tech companies like Microsoft, IBM, Cisco Systems, and international organisations have endorsed these principles.

During the G7 summit, Pope Francis is expected to advocate for international cooperation in AI regulation. He emphasises the importance of addressing global inequalities in access to technology and mitigating threats like AI-controlled weapons and the spread of misinformation. His presence at the summit signifies a proactive engagement with contemporary issues, reflecting his vision of a Church actively involved in shaping the world’s future.

The pope’s decision to address AI at the G7 summit follows concerns about the rise of ‘deepfake’ technology, exemplified by manipulated images of himself circulating online. He recognises the transformative potential of AI in the 21st century and seeks to ensure its development aligns with human dignity and social justice. Through his participation, Pope Francis aims to contribute to the creation of an ethical and regulatory framework that promotes the responsible use of AI for the benefit of all humanity.

Australia drops legal challenge against Musk’s X over violent video removal

Australia’s cyber safety regulator has decided to drop its legal challenge against Elon Musk-owned X (formerly Twitter) concerning the removal of videos depicting the stabbing of an Assyrian church bishop in Sydney. The decision follows a setback in May when a federal court judge rejected a request to extend a temporary order for X to block the videos, which Australian authorities deemed a terrorist attack.

eSafety Commissioner Julie Inman Grant highlighted the issue of graphic material being accessible online, especially to children, and criticised X’s initial refusal to remove the violent content globally. Grant emphasised the original intent to prevent the footage from going viral, which could incite further violence and harm the community, defending the regulator’s actions despite the legal outcome.

Why does it matter?

The incident, which involved a 16-year-old boy charged with a terrorism offence, also led to a public clash between Musk and Australian officials, including Prime Minister Anthony Albanese. Musk’s criticisms of the regulatory order as censorship sparked controversy, while other major platforms like Meta, TikTok, Reddit, and Telegram complied with removal requests. X had opted to geo-block the content in Australia, a solution deemed ineffective by the regulator due to users employing virtual private networks.

Former Meta engineer sues over Gaza post suppression

A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court for discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating the use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.

Why does it matter?

The lawsuit reflects ongoing criticism by human rights groups of Meta’s content moderation regarding Israel and the Palestinian territories. These concerns were amplified following the conflict that erupted in Gaza after Hamas’s attack on Israel and Israel’s subsequent offensive.

Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.

Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups, calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

Earlier this month, a public statement from Meta employees followed a 2023 internal petition with over 450 signatures, whose author faced an HR investigation for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.

EU investigates disinformation on X after Slovakia PM shooting

The EU enforcers responsible for overseeing the Digital Services Act (DSA) are intensifying their scrutiny of disinformation campaigns on X, formerly known as Twitter and owned by Elon Musk, in the aftermath of the recent shooting of Slovakia’s prime minister, Robert Fico. X has been under formal investigation since December over the dissemination of disinformation and the efficacy of its content moderation tools, particularly its ‘Community Notes’ feature. Despite the ongoing investigation, no penalties have been imposed thus far.

Elon Musk’s personal involvement in amplifying a post by right-wing influencer Ian Miles Cheong linking the shooting to Robert Fico’s purported rejection of the World Health Organization’s pandemic prevention plan has drawn further attention to X’s role in spreading potentially harmful narratives. In response to inquiries during a press briefing, EU officials confirmed they are closely monitoring content on the platform to assess the effectiveness of X’s measures in combating disinformation.

In addition to disinformation concerns, X’s introduction of its generative AI chatbot, Grok, in the EU has raised regulatory eyebrows. Grok, known for its politically incorrect responses, has been delayed in certain aspects until after the upcoming European Parliament elections due to perceived risks to civic discourse and election integrity. The EU is in close communication with X regarding the rollout of Grok, indicating the regulatory scrutiny surrounding emerging AI technologies and their potential impact on online discourse and democratic processes.

Social media users block celebrities in solidarity with Palestine

Every year, the Met Gala courts controversy, but in 2024, a TikTok audio track ignited outrage before the event even began. Influencer Haley Kalil faced backlash for using a snippet from the film ‘Marie Antoinette,’ featuring the infamous line, ‘Let them eat cake,’ as she showcased her lavish attire. Critics likened the spectacle to ‘The Hunger Games,’ criticising the disconnect between opulence and global suffering, particularly in light of ongoing conflicts like the Gaza crisis.

Social media platforms have become battlegrounds for shaping public discourse, especially concerning the Israel-Palestine conflict. With audiences primarily exposed to the issue through digital channels, platforms like Instagram and TikTok have become outlets for frustration and activism. Simultaneously, a grassroots movement known as ‘Blockout 2024’ emerged, urging users to block celebrities to diminish their influence and redirect attention to pressing global issues.

The Blockout movement has gained momentum, with thousands participating in calls to action on social media. Alongside blocking celebrities, efforts to provide direct aid to Gaza have intensified. Influencers are facing pressure to promote fundraising initiatives like Operation Olive Branch, highlighting the role of social media in mobilising support for humanitarian causes amidst geopolitical conflicts.

While the impact of social media activism remains uncertain, the Blockout movement signals a shift towards digital solidarity and accountability. As the conflict unfolds through short-form videos and Instagram posts, the conversation sparked by events like the Met Gala indicates a growing intersection between celebrity culture, social media activism, and global crises.

Malaysia condemns Meta for removing posts on prime minister’s meeting with Hamas leader

Malaysia’s communications minister has criticised Meta Platforms for removing Facebook posts by local media covering Prime Minister Anwar Ibrahim’s meeting with a Hamas leader in Qatar. Anwar clarified that while he has diplomatic relations with Hamas’s political leadership, he is not involved in its military activities.

Expressing Malaysia’s support for the Palestinian cause, the government has asked Meta to explain the removal of posts by two media outlets about Anwar’s meeting. Additionally, a Facebook account covering Palestinian issues was closed.

Communications Minister Fahmi Fadzil condemned Meta’s actions, noting the posts’ relevance to the prime minister’s official visit to Qatar. He emphasised concerns about Meta’s disregard for media freedom.

Last October, Fahmi warned of potential action against Meta and other social media platforms if they obstructed pro-Palestinian content, noting that Malaysia consistently advocates for a two-state solution to the Israel-Palestine conflict.