Elevate for Educators, launched by Microsoft, is a global programme designed to help teachers build the skills and confidence to use AI tools in the classroom. The initiative provides free access to training, credentials, and professional learning resources.
The programme connects educators to peer networks, self-paced courses, and AI-powered simulations. The aim is to support responsible AI adoption while improving teaching quality and classroom outcomes.
New educator credentials have been developed in partnership with ISTE and ASCD. Schools and education systems can also gain recognition for supporting professional development and demonstrating impact in classrooms.
AI-powered education tools within Microsoft 365 have been expanded to support lesson planning and personalised instruction. New features help teachers adapt materials to different learning needs and provide students with faster feedback.
College students will also receive free access to Microsoft 365 Premium and LinkedIn Premium Career for 12 months. The offer includes AI tools, productivity apps, and career resources to support future employment.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs’ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.
Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.
Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.
Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Khan said entry-level roles may disappear fastest, limiting opportunities for young people.
The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.
Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.
Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.
McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.
The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.
Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.
San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.
The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.
The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.
Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.
Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.
The team includes researchers and engineers with experience across AI research, design platforms and financial media.
Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.
Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.
xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.
Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.
UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.
Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.
WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.
The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.
Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.
Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.
The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.
Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.
Meta continues to argue that general-purpose AI chatbots place technical strain on systems designed for business messaging, rather than serving as an open distribution platform for AI services.
More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.
The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.
Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material, rather than serving legitimate creative uses.
In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.
While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.
Further political debate on the issue is expected in the coming days.
Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.
Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.
Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.
Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.
eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.
Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.
Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.
Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.
California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.
Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.
Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.
‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.
The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.
OpenAI has agreed to purchase up to 750 megawatts of computing power from AI chipmaker Cerebras over the next three years. The deal, announced on 14 January, is expected to be worth more than US$10 billion and will support ChatGPT and other AI services.
Cerebras will provide cloud services powered by its wafer-scale chips, which are designed to run large AI models more efficiently than traditional GPUs. OpenAI plans to use the capacity primarily for inference and reasoning models that require high compute.
Cerebras will build or lease data centres filled with its custom hardware, with computing capacity coming online in stages through 2028. OpenAI said the partnership would help improve the speed and responsiveness of its AI systems as user demand continues to grow.
The deal is also significant for Cerebras as it prepares for a second attempt at a public listing, following a 2025 IPO that was postponed. Diversifying its customer base beyond major backers such as UAE-based G42 could strengthen its financial position ahead of a potential 2026 flotation.
The agreement highlights the wider race among AI firms to secure vast computing resources, as investment in AI infrastructure accelerates. However, some analysts have warned that soaring valuations and heavy spending could resemble past technology bubbles.