Rep. Jim Jordan, Chairman of the US House Judiciary Committee, has subpoenaed Alphabet, the parent company of Google, demanding documents that show whether YouTube removed content due to requests from the Biden administration.
Jordan has long argued that Big Tech companies, including Google, have collaborated with the US government to suppress conservative speech. He believes that these actions constitute unlawful censorship, with YouTube allegedly playing a role.
The subpoena follows the Committee’s investigation into Meta, during which the company admitted it had bowed to pressure from the Biden administration, adjusted its content moderation policies, and promised to restore free speech on its platforms.
Jordan is now pushing Alphabet to follow Meta’s lead and provide transparency on its own content moderation practices.
Google has responded by stating that its content policies are enforced independently, asserting its commitment to free expression.
However, the company has yet to respond in detail to Jordan’s subpoena or to the claims of governmental influence. The ongoing investigation signals that scrutiny of Big Tech’s role in content moderation is far from over.
For more information on these topics, visit diplomacy.edu.
Amazon Prime Video is introducing AI-powered dubbing for select movies and series in English and Spanish, aiming to expand its reach and enhance accessibility.
The feature, launching on Wednesday, will initially be available on 12 licensed titles that currently lack dubbing support.
With over 200 million customers worldwide, Prime Video’s adoption of AI technology follows a growing trend among media companies using artificial intelligence to enhance viewer experiences.
Other firms, such as Disney’s ESPN, have also explored AI-driven solutions to personalise content and attract younger audiences.
The integration of AI-assisted dubbing reflects a broader industry shift towards technology-driven innovation in content distribution.
By using AI to bridge language barriers, Prime Video seeks to engage a wider audience and improve the global accessibility of its library.
A Russian court has fined Google 3.8 million roubles (£32,600) for hosting YouTube videos that allegedly instructed Russian soldiers on how to surrender. The ruling is part of Moscow’s ongoing crackdown on content it deems illegal, particularly regarding the war in Ukraine. Google has not yet responded to the decision.
Authorities in Russia have frequently ordered foreign tech companies to remove content they claim spreads misinformation. Critics argue that the government is deliberately slowing YouTube’s download speeds to limit access to material critical of President Vladimir Putin. Moscow denies the accusation, blaming Google for failing to upgrade its infrastructure.
President Putin has previously accused Google of being used by Washington to serve political interests. The latest fine is one of many imposed on the company as part of Russia’s broader control over digital platforms.
The UK government has partnered with AI startup Anthropic to explore the use of its chatbot, Claude, in public services. The collaboration aims to improve access to public information and streamline interactions for citizens.
The initiative aligns with Prime Minister Keir Starmer’s ambition to establish the UK as a leader in AI and enhance public service efficiency through innovative technologies.
Technology minister Peter Kyle highlighted the importance of this partnership, emphasising its role in positioning the UK as a hub for advanced AI development.
Claude has already been employed by the European Parliament to simplify access to its archives, demonstrating its potential to reduce the time needed for document retrieval and analysis.
This step underscores Britain’s commitment to leveraging cutting-edge AI for the benefit of individuals and businesses nationwide.
Amazon has removed references to ‘inclusion and diversity’ from its latest annual report, signalling a shift away from diversity, equity and inclusion (DEI) initiatives. The change follows an internal memo from December, in which Amazon announced it was winding down certain DEI programmes by the end of 2024. Instead of maintaining separate initiatives, the company plans to integrate DEI efforts into broader corporate processes.
Tech giants such as Meta and Google have also been scaling back diversity programmes, facing pressure from conservative groups threatening legal action. Disney has similarly adjusted its DEI approach, removing mentions of its ‘Reimagine Tomorrow’ programme while introducing an initiative to hire US military veterans. The trend reflects a broader corporate retreat from diversity-focused policies that gained traction after the 2020 protests against racial injustice.
Political opposition to DEI has grown, with President Donald Trump’s administration vowing to eliminate diversity policies in the private sector. In response, attorneys general from twelve US states, including New York and California, have reaffirmed their commitment to enforcing civil rights protections against workplace discrimination. The debate over DEI’s future remains contentious as businesses and lawmakers continue to clash over its role in corporate America.
Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.
US authorities are considering whether to add Chinese online retailers Shein and Temu to the Department of Homeland Security’s forced labour list, according to a Semafor report. The Trump administration has not reached a final decision and may opt against the move, sources said.
Both companies strongly denied any involvement in forced labour. Shein stated it complies fully with the US Uyghur Forced Labor Prevention Act, while Temu emphasised its strict prohibition of involuntary labour through its Third-Party Code of Conduct.
Discussions on the retailers’ status come as tensions between the US and China escalate. Beijing recently imposed targeted tariffs on US imports and warned companies such as Google about possible sanctions, responding to the latest trade measures introduced by Washington.
The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers’ emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU’s Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on certain practices, apply from February 2025.
The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on biometric data. AI-powered facial recognition CCTV for law enforcement will be heavily restricted, except under strict conditions. The EU has given member states until August to designate authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company’s global revenue.
Europe’s approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China’s model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.
South Sudan has lifted a temporary ban on Facebook and TikTok, imposed following the spread of graphic videos allegedly showing the killings of South Sudanese nationals in Sudan. The National Communications Authority confirmed on 27 January that the disturbing content, which had sparked violent protests and retaliatory killings across South Sudan, has been removed from the platforms.
The videos, which documented ethnically targeted attacks in Sudan’s El Gezira state, had led to widespread outrage. Rights groups blamed the Sudanese army and its allies for the violence, while the army denounced the incidents as isolated violations. South Sudanese authorities urged a balanced approach that addresses online incitement while protecting the public’s rights.
The unrest highlights the volatile relationship between social media and violence in the region. Authorities continue to call for action to address the root causes of such content while promoting accountability and safety.
A robotic puppy named ‘Jennie’ is offering a new way to provide companionship to people living with dementia, anxiety, and other mental health challenges. Developed by Tombot, Jennie is an AI-powered pet designed to mimic the comfort and emotional support of a real dog without the difficulties of pet care. Inspired by Tombot CEO Tom Stevens’ personal experience with his mother’s Alzheimer’s diagnosis, the robotic puppy was created to help reduce loneliness and distress.
Jennie stands out with her lifelike design, a collaboration with Jim Henson’s Creature Shop, best known for the Muppets. Equipped with advanced touch sensors and voice command technology, Jennie responds naturally to petting and verbal instructions, creating a realistic experience for users. Her sound effects, crafted from recordings of Labrador puppies, and an all-day battery life make her a practical and emotionally engaging alternative to traditional pets.
Research supports Jennie’s role in easing symptoms like agitation and hallucinations in dementia patients while also helping reduce anxiety and loneliness in broader mental health contexts. With over 7,500 preorders already received, Jennie’s impact is growing as Tombot explores registering her as a medical device, potentially expanding her reach to hospitals and care facilities worldwide.
Priced around $1,500, Jennie offers an accessible solution for those unable to care for live animals due to health or housing constraints. The US-based company continues to improve Jennie’s capabilities with software updates, ensuring this robotic puppy remains a dynamic source of comfort for years to come.