Microsoft to support UAE investment analytics with responsible AI tools

The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.

Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.

Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.

The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.

The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. Chief executive Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful conversational dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Tokens-at-scale with Intel’s Crescent Island and Xe architecture

Intel has unveiled the ‘Crescent Island’ data-centre GPU at the Open Compute Project (OCP) Global Summit, targeting real-time inference everywhere with high memory capacity and energy-efficient performance for agentic AI.

Intel’s Sachin Katti said that scaling complex inference workloads requires heterogeneous systems and an open, developer-first software stack; Intel positions its Xe-architecture GPUs to deliver efficient headroom as token volumes surge.

Intel’s approach spans the AI PC, the data centre, and the edge, pairing Xeon 6 processors with GPUs under workload-centric orchestration to simplify deployment, scaling, and developer continuity.

Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.

Key features include the Xe3P microarchitecture for performance-per-watt gains, 160 GB of LPDDR5X memory, broad data-type support for ‘tokens-as-a-service’, and a unified software stack already proven on the Arc Pro B-Series; customer sampling is slated for the second half of 2026.

Salesforce invests $15 billion in San Francisco’s AI future

The US cloud-software company Salesforce has announced a $15 billion investment in San Francisco over the next five years, aiming to strengthen the city’s position as the world’s AI capital.

The funding will support a new AI Incubator Hub on the company’s campus, workforce training programmes, and initiatives to help businesses transform into ‘Agentic Enterprises’.

The move coincides with the company’s annual Dreamforce conference, which is expected to generate $130 million in local revenue and create 35,000 jobs.

Chief Executive Marc Benioff said the investment demonstrates Salesforce’s deep commitment to San Francisco, aiming to boost AI innovation and job creation.

Dreamforce, now in its 23rd year, is billed as the world’s largest AI event, attracting nearly 50,000 participants and millions more online. Benioff described the company’s goal as leading a new technological era in which humans and AI collaborate to drive progress and productivity.

Founded in 1999 as an online CRM service, Salesforce has evolved into a global leader in enterprise AI and cloud computing. It is now San Francisco’s largest private employer and continues to expand through acquisitions of local AI firms such as Bluebirds, Waii, and Regrello.

The company’s new AI Incubator Hub will support early-stage startups, while its Trailhead learning platform has already trained more than five million people for the AI-driven workplace.

Salesforce remains one of the city’s most active corporate philanthropists. Its 1-1-1 model has inspired thousands of companies worldwide to dedicate a share of equity, product, and employee time to social causes.

With an additional $39 million pledged to education and healthcare, Salesforce and the Benioffs have now donated over $1 billion to the Bay Area.

OpenAI and Broadcom unite to deploy 10 gigawatts of AI accelerators

The US firm OpenAI has announced a multi-year collaboration with Broadcom to design and deploy 10 gigawatts of custom AI accelerators.

The partnership will combine OpenAI’s chip design expertise with Broadcom’s networking and Ethernet technologies to create large-scale AI infrastructure. The deployment is expected to begin in the second half of 2026 and be completed by the end of 2029.

The collaboration enables OpenAI to integrate insights from its frontier models directly into the hardware, enhancing efficiency and performance.

Broadcom will develop racks of AI accelerators and networking systems across OpenAI’s data centres and those of its partners. The initiative is expected to meet growing global demand for advanced AI computation.

Executives from both companies described the partnership as a significant step toward the next generation of AI infrastructure. OpenAI CEO Sam Altman said it would help deliver the computing capacity needed to realise the benefits of AI for people and businesses worldwide.

Broadcom CEO Hock Tan called the collaboration a milestone in the industry’s pursuit of more capable and scalable AI systems.

The agreement strengthens Broadcom’s position in AI networking and underlines OpenAI’s move toward greater control of its technological ecosystem. By developing its own accelerators, OpenAI aims to boost innovation while advancing its mission to ensure artificial general intelligence benefits humanity.

Why DC says no to AI-made comics

DC Comics president Jim Lee rejects generative AI for DC storytelling, pledging no AI writing, art, or audio under his leadership. He framed AI alongside other overhyped threats, arguing that predictions falter while human craft endures. DC, he said, will keep its focus on creator-led work.

Lee rooted the stance in the value of imperfection and intent. Smudges, rough lines, and hesitation signal authorship, not flaws. Fans, he argued, sense authenticity and recoil from outputs that feel synthetic or aggregated.

Concerns ranged from shrinking attention spans to characters nearing the public domain. The response, Lee said, is better storytelling and world-building. Owning a character differs from understanding one, and DC’s universe supplies the meaning that endures.

Policy meets practice in DC’s recent moves against suspected AI art. In 2024, variant covers were pulled after high-profile allegations of AI-generated content. The episode showed a willingness to enforce standards rather than merely announce them.

Lee positioned 2035 and DC’s centenary as a waypoint, not a finish line. Creative evolution remains essential, but without yielding authorship to algorithms. The pledge: human-made stories, guided by editors and artists, for the next century of DC.

Google cautions Australia on youth social media ban proposal

Google, the US tech giant and owner of YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, noting that the company contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Grok to get new AI video detection tools, Musk says

Elon Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. The Grok account itself added that the system will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.
