Amazon CEO Andy Jassy has signalled that more job cuts are likely as the company embraces AI to streamline its operations. In a letter to staff, he said the adoption of generative AI is driving major shifts in roles, especially within corporate functions.
Jassy described generative AI as a once-in-a-lifetime technology and highlighted its growing role across Amazon services, including Alexa+, shopping tools and logistics. He pointed to smarter assistants and improved fulfilment systems as early benefits of AI investments.
While praising the efficiency gains AI delivers, Jassy admitted some roles will no longer be needed, and others will be redefined. The long-term outcome remains uncertain, but fewer corporate roles are expected as AI adoption continues.
He encouraged staff to embrace the technology by learning, experimenting and contributing to AI-related innovations. Workshops and team brainstorming were recommended as Amazon looks to reinvent itself with leaner, more agile teams.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Canva has launched a new tool powered by Google’s Veo 3 model, allowing users to generate short cinematic video clips using simple text prompts. Known as ‘Create a Video Clip’, the feature produces eight-second videos with sound directly inside the Canva platform.
This marks one of the first commercial uses of Veo 3, which debuted last month. The AI tool is available to Canva Pro, Teams, Enterprise and Nonprofit users, who can generate up to five clips per month initially.
Danny Wu, Canva’s head of AI products, said the feature simplifies video creation with synchronised dialogue, sound effects and editing options. Users can integrate the clips into presentations, social media designs or other formats via Canva’s built-in video editor.
Canva is also extending the tool to users of Leonardo.Ai, a related image generation service. The feature is protected by Canva Shield, a content moderation and indemnity framework aimed at enterprise-level security and trust.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.
AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.
Google Pay alone averted financial fraud worth ₹13,000 crore in 2024, while Google Play Protect stopped nearly 6 crore high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.
The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new AI-powered heart test could significantly improve early detection of cardiovascular disease, especially in high-risk patients without symptoms.
Developed in Germany and evaluated in a UK study led by Dr Simon Rudland, the Cardisio test uses five electrodes—four on the chest, one on the back—to record 3D heart data. Unlike a traditional 2D ECG, this method captures electrical signals in more dimensions and uses AI to analyse rhythm, structure, and blood flow.
The quick 10-minute test returns a colour-coded result: green (normal), amber (borderline), or red (high risk). The study, published in BJGP Open, tested 628 individuals and found a positive predictive accuracy of 80% and a negative accuracy of 90.4%, with fewer than 2% test failures.
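To make the reported accuracy figures concrete: positive and negative predictive values come straight from a study's confusion matrix. The sketch below uses hypothetical counts (the study's actual breakdown of true/false positives and negatives is not given here) chosen so that a 628-person cohort reproduces the reported 80% and 90.4% figures.

```python
# Illustrative only: how PPV and NPV are derived from confusion-matrix
# counts. The counts below are hypothetical, not the published Cardisio
# breakdown; they merely total 628 and reproduce the reported figures.

def predictive_values(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Return (PPV, NPV) as fractions.

    PPV: of all positive (red/amber) results, the share that were truly diseased.
    NPV: of all negative (green) results, the share that were truly healthy.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical split of a 628-person cohort
tp, fp, tn, fn = 152, 38, 396, 42
ppv, npv = predictive_values(tp, fp, tn, fn)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 80.0%, NPV = 90.4%
```

Note that both values depend on disease prevalence in the tested group, so the same test can show different predictive values in a higher- or lower-risk population.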
Dr Rudland called the findings ‘exciting,’ noting that the technology could streamline referrals, improve diagnosis in primary care, and reduce hospital waiting lists. He added that a pilot rollout may begin soon in Suffolk or north Essex, targeting high-risk women.
AI’s ability to process complex cardiac data far exceeds human capacity, making it a promising tool in preventative medicine. This research supports the NHS’s broader push to integrate AI for faster, smarter healthcare.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.
The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.
Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.
By using data previously considered unusable, scientists hope to improve our understanding of black hole dynamics.
However, not all physicists are confident in the results.
Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.
The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A growing threat of AI-generated media is reshaping workplace harassment, with deepfakes used to impersonate colleagues and circulate fabricated explicit content in the US. Recent studies found that almost all deepfakes were sexually explicit by 2023, often targeting women.
Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.
Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.
Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
T-Mobile is expanding its support for emergency response teams by combining 5G, AI and drone technologies to boost disaster recovery operations. Its T-Priority service, launched last year, offers dedicated network slices to ensure fast, low-latency data access during crises.
US first responders in disaster-hit regions like Southern California and North Carolina have already used the system to operate body cams, traffic monitoring tools and mapping systems. T-Mobile deployed hundreds of 5G routers and hotspot devices to aid efforts during the Palisades wildfire and recent hurricanes.
AI and drone technologies are key in reconnaissance, damage assessment and real-time communication. T-Mobile’s self-organising network adapts to changing conditions using live data, ensuring stable connectivity throughout emergency operations.
Public-private collaboration is central to the initiative, with T-Mobile working alongside FEMA, the Department of Defense and local emergency centres. The company has also signed a major deal to provide New York City with a dedicated public safety network.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.
Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.
By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.
The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.
The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.
NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.
Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.
Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.
Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.
He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
ChatGPT was trounced in a chess match by a 1979 video game running on an Atari 2600 emulator. Citrix engineer Robert Caruso set up the match using Video Chess to test how the AI would perform against vintage gaming software.
The result was unexpectedly lopsided. ChatGPT confused rooks for bishops, forgot piece positions and made repeated beginner mistakes, eventually asking for the match to be restarted. Even when standard chess notation was used, its performance failed to improve.
Caruso described the 90-minute session as full of basic blunders, saying the AI would have been laughed out of a primary school chess club. His post highlighted the limitations of ChatGPT’s architecture, which is built for language understanding, not strategic board gameplay.
While the experiment doesn’t mean ChatGPT is entirely useless at chess, it suggests users are better off discussing the game with the bot than challenging it. OpenAI has not yet responded to the light-hearted but telling critique.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!