Man uses AI avatar in New York court

A 74-year-old man representing himself in a New York State appeal has apologised after using an AI-generated avatar during court proceedings.

Jerome Dewald submitted a video featuring a youthful digital figure to deliver part of his legal argument, prompting confusion and criticism from the judges. One justice described the move as misleading, expressing frustration over the lack of prior disclosure.

Dewald later explained he intended to ease his courtroom anxiety and present his case more clearly, not to deceive.

In a letter to the judges, he acknowledged that transparency should have taken priority and accepted responsibility for the confusion caused. His case, a contract dispute with a former employer, remains under review by the appellate court.

The incident has reignited debate over the role of AI in legal settings. Recent years have seen several high-profile cases where AI-generated content introduced errors or false information, highlighting the risks of using generative technology without proper oversight.

Legal experts say such incidents are becoming increasingly common as AI tools become more accessible.

For more information on these topics, visit diplomacy.edu.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach teens via Messenger or see their stories, and tagging and mentions are similarly limited. 

Changing these settings requires parental approval, and teens under 16 need a parent’s consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to grow as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.


Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, emphasising maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.


Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to TASS (Russian state news agency).

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling. 

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France following his 2024 arrest linked to accusations that Telegram was used in connection with fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.

Senator Warner warns TikTok deal deadline extension breaks the law

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, has criticised President Donald Trump’s recent move to extend the deadline for ByteDance to divest TikTok’s US operations. 

Warner argued that the 75-day extension violates the law passed in 2024, which mandates a complete separation between TikTok’s American entity and its Chinese parent company due to national security concerns.

The deal currently under consideration would allow ByteDance to retain a significant equity stake and maintain an operational role in the new US-based company. 

According to Warner, this arrangement fails to satisfy the legal requirement of eliminating Chinese influence over TikTok’s US operations. 

He emphasised that any legitimate divestiture must include a complete technological and organisational break, preventing ByteDance from accessing user data or source code.

The White House and TikTok have not issued statements in response to Warner’s criticism. The Trump administration, now in its second term, has said it is in contact with four groups regarding a potential TikTok acquisition. 

However, no agreement has been finalised, and China has yet to publicly support a sale of TikTok’s US assets, one of the primary obstacles to completing the deal.

Under the 2024 law, ByteDance was required to divest TikTok’s US business by 19 January or face a ban.

Trump, who retook office on 20 January, chose not to enforce the ban immediately and instead signed an executive order extending the deadline. 

The Justice Department further complicated the issue when it told Apple and Google that the law would not be enforced, allowing the app to remain available for download.

As the deadline extension continues to stir controversy, lawmakers like Warner insist that national security and legislative integrity are at stake.


Copyright lawsuits against OpenAI and Microsoft combined in AI showdown

Twelve copyright lawsuits filed against OpenAI and Microsoft have been merged into a single case in the Southern District of New York.

The US Judicial Panel on Multidistrict Litigation decided to consolidate the cases despite objections from many plaintiffs, who argued their claims were too distinct.

The lawsuits claim that OpenAI and Microsoft used copyrighted books and journalistic works without consent to train AI tools like ChatGPT and Copilot.

The plaintiffs include high-profile authors—Ta-Nehisi Coates, Sarah Silverman, Junot Díaz—and major media outlets such as The New York Times and Daily News.

The panel justified the centralisation by citing shared factual questions and the benefits of unified pretrial proceedings, including streamlined discovery and avoidance of conflicting rulings.

OpenAI has defended its use of publicly available data under the legal doctrine of ‘fair use.’

A spokesperson stated the company welcomed the consolidation and looked forward to proving that its practices are lawful and support innovation. Microsoft has not yet issued a comment on the ruling.

The authors’ attorney, Steven Lieberman, countered that this is about large-scale theft. He emphasised that both Microsoft and OpenAI have, in their view, infringed on millions of protected works.

Some of the same authors are also suing Meta, alleging the company trained its models using books from the shadow library LibGen, which houses over 7.5 million titles.

Simultaneously, Meta faced backlash in the UK, where authors protested outside the company’s London office. The demonstration focused on Meta’s alleged use of pirated literature in its AI training datasets.

The Society of Authors has called the actions illegal and harmful to writers’ livelihoods.

Amazon also entered the copyright discussion this week, confirming its new Kindle ‘Recaps’ feature uses generative AI to summarise book plots.

While Amazon claims accuracy, concerns have emerged online about the reliability of AI-generated summaries.

In the UK, lawmakers are also reconsidering copyright exemptions for AI companies, facing growing pressure from creative industry advocates.

The debate over how AI models access and use copyrighted material is intensifying, and the decisions made in courtrooms and parliaments could radically change the digital publishing landscape.


Sam Altman’s AI cricket post fuels India speculation

A seemingly light-hearted social media post by OpenAI CEO Sam Altman has stirred a wave of curiosity and scepticism in India. Altman shared an AI-generated anime image of himself as a cricket player dressed in an Indian jersey, which quickly went viral among Indian users.

While some saw it as a fun gesture, others questioned the timing and motives, speculating whether it was part of a broader strategy to woo Indian audiences. This isn’t the first time Altman has publicly praised India.

In recent weeks, he lauded the country’s rapid adoption of AI technology, calling it ‘amazing to watch’ and even said it was outpacing the rest of the world. His comments marked a shift from a more dismissive stance during a 2023 visit when he doubted India’s potential to compete with OpenAI’s large-scale models.

However, during his return visit in February 2025, he expressed interest in collaborating with Indian authorities on affordable AI solutions. The timing of Altman’s praise coincides with a surge in Indian users on OpenAI’s platforms, now the company’s second-largest market.

Meanwhile, OpenAI faces a legal tussle with several Indian media outlets over the alleged misuse of their content. Despite this, the potential of India’s booming AI market—projected to hit $8 billion by 2025—makes the country a critical frontier for global tech firms.

Experts argue that Altman’s overtures are more about business than sentiment. With increasing competition from rival AI models like DeepSeek and Gemini, maintaining and growing OpenAI’s Indian user base has become vital. As technology analyst Nikhil Pahwa said, ‘There’s no real love; it’s just business.’


TikTok deal stalled amid US-China trade tensions

Negotiations to divest TikTok’s US operations have been halted following China’s indication that it would not approve the deal. The development came after President Donald Trump announced increased tariffs on Chinese imports.

The proposed arrangement involved creating a new US-based company to manage TikTok’s American operations, with US investors holding a majority stake and ByteDance retaining less than 20%. This plan had received approvals from existing and new investors, ByteDance, and the US government.

In response to the stalled negotiations, President Trump extended the deadline for ByteDance to sell TikTok’s US assets by 75 days, aiming to allow more time for securing necessary approvals.

He emphasised the desire to continue collaborating with TikTok and China to finalise the deal, expressing a preference to avoid shutting the app in the US.

The future of TikTok in the US remains unpredictable as geopolitical tensions and trade disputes continue to influence the negotiations.

On one side, such a reaction from the Chinese government could have been expected in response to the increase in US tariffs on Chinese products. On the other, by extending the deadline, Trump can maintain his protectionist policy while winning sympathy from the app’s 170 million US users, in whose eyes TikTok now appears a victim facing a potential ban if the US-China trade war does not ease and a resolution is not reached within the extended timeframe.


European Commission targets end-to-end encryption and proposes expanding Europol’s powers into an EU-level FBI equivalent

The European Commission announced ProtectEU, a new internal security strategy that sets out the broad priorities it intends to pursue in the coming years in response to evolving security challenges. While the document outlines strategic objectives, it does not include specific legislative proposals.

The Commission highlighted the need to revisit the European Union’s approach to internal security, citing what it described as ‘a changed security environment and an evolving geopolitical landscape.’ Among the identified challenges are hybrid threats from state and non-state actors, organised crime, and increasing levels of online criminal activity.

One of the key elements of the strategy is the proposed strengthening of Europol’s operational role. The Commission suggests developing Europol into a truly operational police agency to reinforce support to member states, with the capacity to assist in cross-border, large-scale, and complex investigations that present serious risks to the Union’s internal security.

That would bring Europol closer in function to agencies such as the US Federal Bureau of Investigation. The strategy also notes the Commission’s intention to develop roadmaps on ‘lawful and effective access to data for law enforcement’ and encryption.

The strategy aims to ‘identify and assess technological solutions that would enable law enforcement authorities to access encrypted data lawfully, safeguarding cybersecurity and fundamental rights.’ These issues continue to be the subject of technical and legal discussion across jurisdictions.

Other aspects of the strategy address long-standing challenges within the EU’s security framework, including limited situational awareness and coordination at the executive level. The strategy proposes enhancing intelligence-sharing through the EU’s Single Intelligence Analysis Capacity, a mechanism for the voluntary sharing of intelligence by member states, which is currently supported by open-source analysis.

The report further emphasised that the effectiveness of any reforms in this area would depend on the commitment of member states, citing ongoing challenges related to differing national priorities and levels of political alignment. In addition, the Commission announced its intention to propose a new Cybersecurity Act and new measures to secure cloud and telecom services and develop technological sovereignty.


Singapore issues new guidelines to strengthen resilience and security of cloud services and data centres

The Infocomm Media Development Authority (IMDA) has issued new Advisory Guidelines (AGs) intended to support the resilience and security of Cloud Services and Data Centres (DCs) in Singapore. The guidelines set out best practices for Cloud Service Providers (CSPs) and DC operators, aiming to reduce service disruptions and limit their potential impact on economic and social functions.

A wide range of digital services—including online banking, ride-hailing, e-commerce, and digital identity systems—depend on the continued availability of cloud infrastructure and data centre operations, so interruptions to that infrastructure can disrupt their delivery.

The AGs encourage service providers to adopt measures that improve their ability to recover from outages and maintain operational continuity. The AGs recommend various practices to address risks associated with technical misconfigurations, physical incidents, and cybersecurity threats.

Key proposals include conducting risk and business impact assessments, establishing business continuity arrangements, and strengthening cybersecurity capabilities. For Cloud Services, the guidelines outline seven measures to reinforce security and resilience.

These cover security testing, access controls, data governance, and disaster recovery planning. Concerning Data Centres, the AGs provide a framework for business continuity management to minimise operational disruptions and maintain high service availability.

That involves the implementation of relevant policies, operational controls, and ongoing review processes. The development of the AGs forms part of wider national efforts led by the inter-agency task force on the Resilience and Security of Digital Infrastructure and Services.

These guidelines are intended to complement regulatory initiatives, including planned amendments to the Cybersecurity Act and the introduction of the Digital Infrastructure Act (DIA), which will establish requirements for critical digital infrastructure providers such as major CSPs and DC operators. To inform the guidelines, the IMDA conducted consultations with a broad range of stakeholders, including CSPs, DC operators, and end-user enterprises across sectors such as banking, healthcare, and digital platforms.

The AGs will be updated periodically to reflect technological developments, incident learnings, and further industry input. A coordinated approach is encouraged across the digital services ecosystem. Businesses that provide digital services are advised to assess operational risks and establish appropriate business continuity plans to support service reliability.

The AGs also refer to international standards, including IMDA’s Multi-Tier Cloud Security Standard, the Cloud Security Alliance Cloud Controls Matrix, ISO 27001, and ISO 22301. Providers are encouraged to designate responsible personnel to oversee resilience and security efforts.

These guidelines form part of Singapore’s broader strategy to strengthen its digital infrastructure. The government will continue to engage with sectoral regulators and stakeholders to promote resilience, cybersecurity awareness, and preparedness across industries and society.

As digital systems evolve, sustained attention to infrastructure resilience and security remains essential. The AGs are intended to support organisations in maintaining reliable services while aligning with recognised standards and best practices.
