India has activated the Digital Personal Data Protection Act 2023 after extended delays. Final regulations notified in November operationalise a long-awaited national privacy framework. The Act, passed in August 2023, now gains a fully operational compliance structure.
Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.
India introduces regulated consent managers acting as a single interface between users and data fiduciaries. Managers must register with the Board and follow strict operational standards. Parents will use digital locker-based verification when authorising the processing of children’s information online.
Global technology, finance and health providers now face major upgrades to internal privacy programmes. Lawyers expect major work mapping data flows, refining consent journeys and tightening security practices.
Google will pour 40 billion dollars into Texas by 2027 to expand its digital infrastructure. The funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.
Three new US data centres are planned: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.
Google will also create a 30-million-dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.
The spending strengthens Texas as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.
In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.
A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.
The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.
While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.
Experts warned that changes will take time and urged parents to monitor their children’s online activity actively. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.
Some experts now say neurotechnology could be as revolutionary as AI, as devices advance rapidly from sci-fi tropes into practical reality. Researchers can already translate thoughts into words through brain implants, and spinal implants are helping people with paralysis regain movement.
King’s College London neuroscientist Anne Vanhoestenberghe told AFP, ‘People do not realise how much we’re already living in science fiction.’
Her lab works on implants for both the brain and spinal systems, not just restoring function but also reimagining communication.
At the same time, the technology carries profound ethical risks. There is growing unease about privacy, data ownership and the potential misuse of neural data.
Some even warn that our ‘innermost thoughts are under threat.’ Institutions like UNESCO are already moving to establish global neurotech governance frameworks.
Uzbekistan has granted full legal validity to online personal data stored on the my.gov.uz Unified Interactive Public Services Portal, placing it on equal footing with traditional documents.
The measure, in force from 1 November, supports the country’s digital transformation by simplifying how citizens interact with state bodies.
Personal information can now be accessed, shared and managed entirely through the portal instead of relying on printed certificates.
State institutions are no longer permitted to request paper versions of records that are already available online, which is expected to reduce queues and alleviate the administrative burden faced by the public.
Officials in Uzbekistan anticipate that centralising personal data on one platform will save time and resources for both citizens and government agencies. The reform aims to streamline public services, remove redundant steps and improve overall efficiency across state procedures.
Government bodies have encouraged citizens to use the portal’s functions more actively and follow official channels for updates on new features and improvements.
Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.
The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.
Miller’s move emphasises Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.
Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.
Disney+ is preparing to introduce tools that enable subscribers to create short, AI-generated videos inspired by its characters and franchises. Chief executive Bob Iger described the move as part of a sweeping platform upgrade that marks the service’s most significant technological expansion since its 2019 launch.
Alongside user-generated video features, Disney+ will gain interactive, game-like functions through its collaboration with Epic Games. The company plans to merge storytelling and interactivity, creating a new form of engagement where fans can build or remix short scenes within Disney’s creative universe.
Iger confirmed that Disney has held productive talks with several AI firms to develop responsible tools that safeguard intellectual property. The company aims to ensure that fans’ creations can exist within brand limits, avoiding misuse of iconic characters while opening the door to more creative participation.
Industry analysts suggest that the plan could reshape the streaming industry by blending audience creativity with studio production. Yet creators have expressed caution, urging transparency on rights and moderation.
Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.
Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.
The measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.
Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.
Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.
The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.
Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category that requires disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.
The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.
Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.
Google DeepMind has released a research preview of SIMA 2, an upgraded generalist agent that draws on Gemini’s language and reasoning strengths. The system moves beyond simple instruction following, aiming to understand user intent and interact more effectively with its environment.
SIMA 1 relied on game data to learn basic tasks across diverse 3D worlds but struggled with complex actions. DeepMind says SIMA 2 represents a step change, completing harder objectives in unfamiliar settings and adapting its behaviour through experience without heavy human supervision.
The agent is powered by the Gemini 2.5 Flash-Lite model and built around the idea of embodied intelligence, where an AI acts through a body and responds to its surroundings. Researchers say this approach supports a deeper understanding of context, goals, and the consequences of actions.
Demos show SIMA 2 describing landscapes, identifying objects, and choosing relevant tasks in titles such as No Man’s Sky. It also reveals its reasoning, interprets clues, uses emojis as instructions, and navigates photorealistic worlds generated by Genie, DeepMind’s own environment model.
Self-improvement comes from Gemini models that create new tasks and score attempts, enabling SIMA 2 to refine its abilities through trial and error. DeepMind sees these advances as groundwork for future general-purpose robots, though the team has not shared timelines for wider deployment.