Gemini Live screen sharing now free for all Android users

Google has announced that Gemini Live’s screen and camera sharing capabilities will now be free for all Android users through the Gemini app.

The AI feature, which enables the app to interpret and respond to real-time visuals from a device’s screen or camera, had initially launched exclusively for Pixel 9 and Samsung Galaxy S25 users.

The company originally planned to restrict wider access to those subscribed to Gemini Advanced, but has now reversed that decision following strong user feedback. Google confirmed the broader rollout is beginning today and will continue over the coming weeks.

A promotional video released by the company demonstrates the feature in action, with a user pointing their phone camera at an aquarium while Gemini provides information about the marine life.

In a similar move, Microsoft has launched its own AI tool, Copilot Vision, for free via the Edge browser.


OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.
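
OpenAI has not published the monitor’s internals, but the pattern it describes, a classifier that screens prompts before the main model answers, can be sketched roughly as follows. The classifier logic, names, and threshold below are illustrative assumptions, not OpenAI’s implementation.

```python
# Hypothetical sketch of a prompt-screening pipeline in the spirit of
# the monitor described above; not OpenAI's actual code.
RISK_THRESHOLD = 0.5  # assumed cutoff; in practice tuned on red-team data


def classify_biorisk(prompt: str) -> float:
    """Toy stand-in for a trained safety classifier.

    In production this would be a reasoning model scoring the prompt;
    keyword matching here just keeps the sketch self-contained.
    """
    flagged = ("synthesise a pathogen", "nerve agent", "weaponise a virus")
    return 1.0 if any(term in prompt.lower() for term in flagged) else 0.0


def answer_with_monitor(prompt: str, generate) -> str:
    """Route every prompt through the monitor before the model replies."""
    if classify_biorisk(prompt) >= RISK_THRESHOLD:
        return "I can't help with that request."  # withhold harmful advice
    return generate(prompt)
```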

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users who simply retry with different wording, a gap the company says will continue to be covered by human oversight rather than automation alone.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the absence of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.


AMD warns of financial hit from US AI chip export ban

AMD has warned that new US government restrictions on exporting AI chips to China and several other countries could materially affect its earnings.

The company said it may face charges of up to $800 million related to unsold inventory, purchase commitments, and reserves if it fails to secure export licences for its MI308 GPUs, now subject to strict control measures.

In a filing to the US Securities and Exchange Commission, AMD confirmed it would seek the necessary licences but admitted there is no guarantee they will be granted.

The move follows broader export restrictions aimed at protecting national security interests, with US officials arguing that unrestricted access to advanced chips would erode the country’s strategic lead in AI.

AMD’s stock dropped around 6% following the announcement. Competitors are also feeling the impact. Nvidia expects charges of $5.5 billion from similar restrictions, and Intel’s Gaudi hardware line has reportedly been affected as well.

The US Commerce Department has defended the move as necessary to safeguard economic and national interests.


Businesses face Meta account lockouts

Small businesses are increasingly falling victim to scams targeting their Instagram and Facebook accounts, with many reporting long and frustrating recovery processes.

Wedding dress designer Catherine Deane, whose Instagram account was hacked through a fake verification link, described the experience as ‘devastating’ and said it took four months and persistent efforts to regain access.

Despite repeated emails to Meta, the issue was only resolved after a team member contacted someone within the company directly.

Cybersecurity experts say such cases are far from isolated. Jonas Borchgrevink, head of US-based firm Hacked.com, said thousands of business accounts are compromised every day, with some clients paying for help after months of failed recovery attempts.

Scammers often pose as Meta support, using convincing branding and AI-generated messages to trick victims into revealing passwords or verifying accounts on fake websites. These tactics allow them to gain control of business profiles and demand ransoms or post fraudulent content.

Meta has declined to disclose the full scale of the problem but says it encourages users to enable security features such as two-factor authentication and to review their account security regularly. Some businesses, however, report being locked out despite never having been hacked.

Others say Meta has wrongly removed pages without notice, with limited recourse or explanation. Calls are growing for the company to improve its support systems and take faster action to help affected businesses recover access to their vital online platforms.


xAI pushes Grok forward with memory update

Elon Musk’s AI venture, xAI, has introduced a new ‘memory’ feature for its Grok chatbot in a bid to compete more closely with established rivals like ChatGPT and Google’s Gemini.

The update allows Grok to remember details from past conversations, enabling it to provide more personalised responses when asked for advice or recommendations, instead of offering generic answers.

Grok can now ‘learn’ a user’s preferences over time, provided it is used frequently enough. The move mirrors similar features from competitors: ChatGPT already references full chat histories, and Gemini uses persistent memory to shape its replies.

According to xAI, the memory is fully transparent. Users can view what Grok has remembered and choose to delete specific entries at any time.
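
xAI has not detailed how memories are stored, but the transparency model described, where every remembered item can be listed and individually deleted, implies an interface along these lines. The class and method names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MemoryEntry:
    entry_id: int
    text: str  # what the bot remembered about the user
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class TransparentMemory:
    """Illustrative store matching the behaviour described above:
    the user can list everything remembered and delete any single entry."""

    def __init__(self) -> None:
        self._entries: dict[int, MemoryEntry] = {}
        self._next_id = 0

    def remember(self, text: str) -> int:
        self._next_id += 1
        self._entries[self._next_id] = MemoryEntry(self._next_id, text)
        return self._next_id

    def list_memories(self) -> list[MemoryEntry]:
        return list(self._entries.values())  # fully visible to the user

    def forget(self, entry_id: int) -> None:
        self._entries.pop(entry_id, None)  # delete one specific memory
```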

The memory function is currently available in beta on Grok’s website and mobile apps, although not yet accessible to users in the EU or UK.

The feature is enabled by default but can be turned off in the settings menu under Data Controls. Deleting individual memories is also possible via the web chat interface, with Android support expected shortly.

xAI has confirmed it is working on adding memory support to Grok’s version on X, an expansion aimed at deepening the bot’s integration with users’ digital lives rather than limiting the experience to one platform.


Quantum spin breakthrough at room temperature

South Korean researchers have discovered a way to generate much stronger spin currents at room temperature, potentially transforming the future of electronics.

By using a mechanism called longitudinal spin pumping and a special iron-rhodium material, the team showed that quantum magnetisation dynamics, once thought to occur only at extremely low temperatures, can take place in everyday conditions.

These currents were found to be 10 times stronger than those created through traditional methods, offering a major boost for low-power, high-performance devices.

Instead of relying on the movement of electric charge, spintronics makes use of the electron’s spin, which reduces energy loss and heat generation. This advancement could be particularly beneficial for Magnetoresistive Random Access Memory (MRAM), a type of memory that depends on spin currents to function.

Researchers believe their findings may significantly cut power consumption in MRAM, which is already being explored by companies like Samsung for next-generation AI computing systems.

The study, carried out by teams at KAIST and Sogang University, used a combination of ultrafast measurement experiments and theoretical analysis to validate the discovery. Experts say the results could lead to a new era of energy-efficient memory and processor technologies.

Building on these results, the researchers now plan to develop novel spintronic device architectures and explore other quantum-based mechanisms to push the limits of what modern electronics can achieve.


Hamburg Declaration champions responsible AI

The Hamburg Declaration on Responsible AI for the Sustainable Development Goals (SDGs) is a new global initiative jointly launched by the United Nations Development Programme (UNDP) and Germany’s Federal Ministry for Economic Cooperation and Development (BMZ).

The Declaration seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.

The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.

In addition to the declaration itself, the initiative also features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums like the Internet Governance Forum and the AI for Good Global Summit, reflecting its growing significance in shaping global AI governance.

The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. By establishing shared principles and promoting collaboration across sectors and regions, it sets a foundation for responsible AI that serves both people and the planet.


Microsoft unveils powerful lightweight AI model for CPUs

Microsoft researchers have introduced the largest 1-bit AI model to date, called BitNet b1.58 2B4T, designed to run efficiently on standard CPUs instead of relying on GPUs. This ‘bitnet’ model, now openly available under the MIT license, can even operate on Apple’s M2 chips.

Bitnets use extreme weight quantisation, storing each weight as only -1, 0, or 1, roughly 1.58 bits of information (hence the ‘b1.58’ in the name), which makes them far more memory- and compute-efficient than most conventional models.
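
For illustration, here is a minimal sketch of the kind of ternary (absmean-style) quantisation the BitNet papers describe; it is a simplification, not Microsoft’s bitnet.cpp implementation.

```python
import numpy as np


def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantise a weight matrix to {-1, 0, 1} plus one float scale.

    Follows the absmean recipe from the BitNet papers: divide by the
    mean absolute weight, then round and clip. A sketch, not bitnet.cpp.
    """
    scale = np.abs(w).mean() + eps  # per-tensor scale factor
    w_q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_q, scale


def ternary_matmul(x: np.ndarray, w_q: np.ndarray, scale: float) -> np.ndarray:
    # Multiplying by {-1, 0, 1} reduces to additions and subtractions,
    # which is why such models run well on plain CPUs.
    return (x @ w_q) * scale


# Rough memory arithmetic: 2B weights at ~1.58 bits each is ~0.4 GB,
# versus ~4 GB for the same weights stored as 16-bit floats.
w = np.random.randn(4, 4).astype(np.float32)
w_q, s = ternary_quantize(w)
y = ternary_matmul(np.random.randn(2, 4).astype(np.float32), w_q, s)
```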

With 2 billion parameters and trained on 4 trillion tokens, roughly the equivalent of 33 million books, BitNet b1.58 2B4T outperforms several similarly sized models in key benchmarks.

Microsoft claims it beats Meta’s Llama 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B on tasks like grade-school maths and physical reasoning. It also runs up to twice as fast while using significantly less memory, offering a potential edge for lower-end or energy-constrained devices.

The main limitation lies in its dependence on Microsoft’s custom bitnet.cpp framework, which supports only select hardware and does not yet work with GPUs.

That reliance on a narrower infrastructure, rather than broad compatibility with existing AI tooling, is a hurdle that may limit adoption despite the model’s promise for lightweight deployment.


New Apple AI model uses private email comparisons

Apple has outlined a new approach to improving its AI features by privately analysing user data with the help of synthetic data. The move follows criticism of the company’s AI products, especially notification summaries, which have underperformed compared to competitors.

The new method relies on ‘differential privacy,’ where Apple generates synthetic messages that resemble real user data without containing any actual content.

These messages are used to create embeddings, abstract representations of message characteristics, which are then compared with real emails on the devices of users who have opted in to share analytics.

Devices send back signals indicating which synthetic data most closely matches real content, without sharing the actual messages with Apple.
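
Apple describes this only at a high level; a rough sketch of the on-device matching step might look like the following. The embedding function, vote aggregation, and noise scale are assumptions for illustration, not Apple’s published design.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder embedding; Apple would use a trained on-device encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)


def pick_closest_synthetic(real_messages, synthetic_messages, epsilon=4.0):
    """On-device step: find which Apple-supplied synthetic message best
    matches the user's real mail, then report only a noisy index signal,
    never the messages themselves."""
    synth_emb = np.stack([embed(m) for m in synthetic_messages])
    votes = np.zeros(len(synthetic_messages))
    for msg in real_messages:
        dists = np.linalg.norm(synth_emb - embed(msg), axis=1)
        votes[np.argmin(dists)] += 1
    # Local differential privacy: Laplace noise keeps any one device's
    # report from revealing which messages it actually holds.
    noisy = votes + np.random.laplace(scale=1.0 / epsilon, size=votes.shape)
    return int(np.argmax(noisy))
```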

Apple said the technique is already being used to improve its Genmoji models and will soon be applied to other features, including Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence.

The company also confirmed plans to improve email summaries using the same privacy-focused method, aiming to refine its AI tools while maintaining a strong commitment to user data protection.


Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, followed by India, where 2.9 million accounts were taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over its enforcement decisions and says it is updating its messaging so that advertisers understand more clearly the reasons behind account actions.
