Virtual Jesus app ignites debate among believers worldwide

AI is making its way into religious practice with tools like Text With Jesus, a chatbot app that lets users pose questions to figures such as Jesus, Mary, Joseph and most of the 12 apostles. The app draws thousands of paying users, and its creator says it is meant to be educational.

Though the app clearly states it uses AI, some of its virtual characters deny being bots when asked. The current version is built on GPT-5, which the developer says follows instructions better than earlier models and stays in character more consistently.
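For readers curious about the mechanics, persona chatbots of this kind are typically built by pinning a character description in a system prompt. The sketch below is a minimal illustration of that general pattern using the OpenAI Python SDK; the prompt wording is invented, and nothing here reflects the app's actual implementation.

```python
# Minimal sketch of a persona chatbot: a system prompt pins the character,
# and the model is asked to stay in role. Illustrative only; not the app's
# actual code. Requires the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-5",  # model name taken from the article; substitute as needed
    messages=[
        {"role": "system",
         "content": "You are a chatbot portraying the Apostle Paul. "
                    "Stay in character and answer from his perspective, "
                    "citing scripture where relevant."},
        {"role": "user", "content": "What should I do when I feel lost?"},
    ],
)
print(resp.choices[0].message.content)
```

The tension described above arises precisely when such a prompt tells the model to stay in character even if asked whether it is an AI.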

Reactions from faith communities are mixed. Some users say the tools can help answer random or urgent spiritual questions, particularly when traditional mentors or clergy are unavailable.

Others feel these tools are inadequate substitutes for human counsellors. Emotional connection, empathy and living tradition are qualities AI cannot replicate.

One controversial example came from Catholic Answers, a ministry that launched an animated AI character called Father Justin. Some were offended by the use of a priestly persona; the organisation later dropped the title 'Father' and continued with the character as simply 'Justin'.

Further clouding the debate is the question of how AI-based religious tools might misrepresent or oversimplify doctrine, or even mislead users. Religious law also comes into play.

For example, in Judaism, interpretations of halakhah are deeply communal and intergenerational. Rabbi Gilah Langner is among those cautioning that AI lacks the relational nuance and collective insight crucial to interpretative traditions.

Some clergy are more open, seeing potential in these tools for education, outreach and even crisis support. Yet many stress that the technology must remain auxiliary: human presence is central to spiritual life, ritual, community worship and pastoral care.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Power grid spending surges as US braces for data centre and AI boom

US electric utilities are set to spend nearly $208 billion on the power grid in 2025 and more than $1.1 trillion over the next five years, according to the Edison Electric Institute. The surge in investment reflects rising demand from data centres, artificial intelligence, and wider electrification across the economy.

EEI data shows that investor-owned utilities spent $765 billion on capital projects in the five years to 2024. The new spending represents a significant increase and is aimed at upgrading and expanding infrastructure to keep pace with the accelerating demand for electricity.

The growing investment comes as demand from energy-intensive technologies continues to rise. Data centres and AI workloads are driving sustained growth in US power consumption, placing unprecedented pressure on existing infrastructure and prompting utilities to scale up their spending plans.

David Weeks, supply chain industry practice lead at Moody’s, warned that the escalating energy crisis could become a limiting factor across multiple industries. He said grid constraints and permitting delays must be factored into corporate supply chain strategies to avoid future disruptions.

As electrification spreads across the economy, grid reliability and capacity are becoming critical considerations for companies. The planned investment underscores the urgency of modernising the power grid to support economic growth while adapting to new technological demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Machine learning helps prevent disruptions in fusion devices

Researchers at MIT have developed a predictive model that could make fusion power plants more reliable and safe. The approach uses machine learning and physics-based simulations to predict plasma instabilities and prevent damage during tokamak shutdowns.

Experimental tokamaks use strong magnets to contain plasma hotter than the sun’s core. They often face challenges in safely ramping down plasma currents that circulate at extreme speeds and temperatures.

The model was trained and tested on data from the Swiss TCV tokamak. By combining neural networks with physics simulations, the team achieved accurate predictions from relatively few plasma pulses, cutting experimental costs and compensating for scarce data.
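The article does not include the team's code, but the general pattern, a physics model supplying a baseline that a small neural network corrects, can be sketched in a few lines. Everything below is synthetic and illustrative; the chosen variables (ramp rate, density) and functional forms are invented for the example.

```python
# Illustrative hybrid model: a toy physics baseline plus a small neural
# network trained on a handful of synthetic "pulses" to predict residual
# instability risk. Not the MIT team's code; all data is made up.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def physics_baseline(ramp_rate, density):
    # Toy stand-in for a physics simulation: faster rampdowns at higher
    # density are assumed to be more disruption-prone.
    return 0.6 * ramp_rate + 0.3 * density

# Synthetic training pulses: (ramp_rate, density) -> observed risk
X = rng.uniform(0.0, 1.0, size=(64, 2)).astype(np.float32)
true_risk = physics_baseline(X[:, 0], X[:, 1]) + 0.2 * np.sin(3 * X[:, 0] * X[:, 1])
y = (true_risk + rng.normal(0, 0.02, 64)).astype(np.float32)

# The network only has to learn the residual the physics model misses,
# which is why relatively few pulses can suffice.
residual_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(residual_net.parameters(), lr=1e-2)
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)
baseline = torch.from_numpy(physics_baseline(X[:, 0], X[:, 1]).astype(np.float32))

for _ in range(500):
    opt.zero_grad()
    pred = baseline + residual_net(Xt).squeeze(1)
    loss = nn.functional.mse_loss(pred, yt)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```

Because the network learns only what the physics model misses, far less training data is needed than for a purely data-driven predictor, which mirrors the data-efficiency point above.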

The system can now generate practical ‘trajectories’ for controllers to adjust magnets and temperatures, helping to safely manage plasma during shutdowns.

Researchers say the method could be particularly important as fusion devices scale up to grid-level energy production. High-energy plasmas in larger reactors pose greater risks, and uncontrolled terminations could damage the machine.

The new model lets operators balance the speed of a rampdown against the risk of disruption, ensuring safer, more efficient operation.

Work on the predictive model is part of a wider collaboration with Commonwealth Fusion Systems, supported by the EUROfusion Consortium and Swiss research institutions. Scientists see it as a crucial step toward making fusion a practical, reliable, and sustainable energy source.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.

Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Billions of children around the world hold smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has moved toward an outright ban on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.

Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.

Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Across the whole world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although some of these laws face First Amendment challenges that have reached the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to social media platforms, including livestreaming and adult-content services.

They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, like several other frameworks, the legislation stops short of a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 one hour, and those under eight 40 minutes.

Parents, however, could opt out of the restrictions if they wished.
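Expressed as code, the draft tiers are easy to follow. The sketch below simply encodes the figures reported above; it is illustrative only and not part of any real 'minor mode' implementation.

```python
# Illustrative only: the CAC draft tiers described above, as a simple lookup.
from typing import Optional

def daily_limit_minutes(age: int, parental_opt_out: bool = False) -> Optional[int]:
    """Daily screen-time cap under the draft 'minor mode' rules, in minutes.
    Returns None when no cap applies. Figures mirror the article above."""
    if parental_opt_out or age >= 18:
        return None      # adults, or parents who opt out of the restrictions
    if age >= 16:
        return 120       # 16-17: two hours
    if age >= 8:
        return 60        # 8-15: one hour
    return 40            # under 8: 40 minutes

def in_curfew(hour: int) -> bool:
    """The draft blocks minors' internet access between 10 p.m. and 6 a.m."""
    return hour >= 22 or hour < 6

print(daily_limit_minutes(15))           # -> 60
print(daily_limit_minutes(15, True))     # -> None (parental opt-out)
print(in_curfew(23), in_curfew(9))       # -> True False
```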

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors can create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.

At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.
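One pattern behind such privacy-preserving schemes is a signed attestation: a trusted verifier checks age once and issues a token asserting only 'over 16', which platforms can validate without ever seeing identity documents. The sketch below is a toy illustration using an HMAC shared secret from Python's standard library; production systems would use public-key signatures or zero-knowledge proofs so that platforms cannot forge tokens.

```python
# Toy sketch of a privacy-preserving age attestation. A verifier signs a
# claim containing no identity data; the platform checks only the signature
# and the boolean claim. HMAC keeps this stdlib-only; real deployments use
# public-key signatures or zero-knowledge proofs instead.
import hmac, hashlib, json, secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the age-verification service

def issue_attestation(is_over_16: bool) -> dict:
    """Verifier side: sign a claim that carries no identity data."""
    claim = json.dumps({"over_16": is_over_16, "nonce": secrets.token_hex(8)})
    tag = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(att: dict) -> bool:
    """Platform side: verify integrity, then read only the boolean claim."""
    expected = hmac.new(VERIFIER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["tag"]):
        return False
    return json.loads(att["claim"])["over_16"]

print(platform_accepts(issue_attestation(True)))   # -> True
print(platform_accepts(issue_attestation(False)))  # -> False
```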

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.

In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, crafting that balance is our duty as human beings and responsible citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study shows how AI can uncover hidden biological mechanisms

Researchers in China have used AI to reveal how different species independently develop similar traits when adapting to shared environments. The study focuses on echolocation in bats and toothed whales, two distantly related groups that evolved the ability separately.

A team at the Institute of Zoology, Chinese Academy of Sciences, found that higher-order protein features are crucial to adaptive convergence. Convergent evolution is the independent emergence of similar traits across species, often under similar ecological pressures.

Led by Zou Zhengting, the researchers developed a framework called ACEP, which utilises a pre-trained protein language model to analyse amino acid sequences. This method reveals hidden structural and functional information in proteins, shedding light on how traits are formed at the molecular level.
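The article does not detail ACEP's internals, but the underlying idea, embedding amino acid sequences with a pre-trained protein language model and comparing the embeddings across species, can be sketched as follows. The choice of ESM-2 and the toy sequences are assumptions for illustration, not the study's actual model or data.

```python
# Illustrative sketch: embed protein sequences with a pre-trained protein
# language model and compare them. Requires `transformers` and `torch`;
# the toy sequences below are stand-ins, not real echolocation proteins.
import torch
from transformers import AutoTokenizer, AutoModel

name = "facebook/esm2_t6_8M_UR50D"  # small ESM-2 checkpoint (an assumption)
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def embed(seq: str) -> torch.Tensor:
    """Mean-pool the model's per-residue representations into one vector."""
    inputs = tok(seq, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, length+2, dim)
    return hidden.mean(dim=1).squeeze(0)

seq_a = "MKTLLVAGGVLLSQASA"   # toy "bat" fragment
seq_b = "MKTLLIAGGALLSQTSA"   # toy "whale" fragment
seq_c = "GGGGGGGGGGGGGGGGG"   # unrelated control

sim_ab = torch.nn.functional.cosine_similarity(embed(seq_a), embed(seq_b), dim=0)
sim_ac = torch.nn.functional.cosine_similarity(embed(seq_a), embed(seq_c), dim=0)
print(f"a~b: {sim_ab.item():.3f}  a~c: {sim_ac.item():.3f}")
```

In the actual study, embedding-level features in convergent versus non-convergent lineages would be compared statistically; the sketch shows only the embed-and-compare step.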

The findings, published in the Proceedings of the National Academy of Sciences, reveal how AI can detect deep biological patterns behind convergent evolution. The study demonstrates how combining AI with protein analysis provides powerful tools for understanding complex evolutionary mechanisms.

Zou said the work deepens the understanding of life’s evolutionary laws and highlights the growing role of AI in biology. The team in China hopes this approach can be applied to other evolutionary questions, broadening the use of AI in decoding life’s hidden patterns.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study explores AI’s role in future-proofing buildings

AI could help design buildings that are resilient to both climate extremes and infectious disease threats, according to new research. The study, conducted in collaboration with Charles Darwin University, examines the application of AI in smart buildings, with a focus on energy efficiency and management.

According to the study, buildings account for over two-thirds of global carbon emissions and energy consumption, but reducing consumption remains challenging and costly. The study highlights how AI can enhance ventilation and thermal comfort, overcoming the limitations of static HVAC systems that impact sustainability and health.

Researchers propose adaptive thermal control systems that respond in real-time to occupancy, outdoor conditions, and internal heat. Machine learning can optimise temperature and airflow to balance comfort, energy efficiency, and infection control.
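As a concrete (and heavily simplified) illustration of what balancing comfort, energy, and infection control can mean in code, the sketch below scores candidate ventilation rates on a weighted sum of an energy proxy and an occupancy-based infection proxy. All constants are invented; this is not the study's framework.

```python
# Illustrative control loop: pick a ventilation rate each step by scoring
# candidate rates on a weighted sum of an energy-cost proxy and a crude
# airborne-infection proxy (more occupants + less airflow -> higher risk).
# All numbers are invented for the example.
def choose_ach(occupancy: int, outdoor_temp_c: float, w_energy=1.0, w_risk=2.0):
    candidates = [0.5, 1.0, 2.0, 4.0]  # air changes per hour (ACH)

    def score(ach):
        # conditioning more outside air costs more the further the outdoor
        # temperature is from the 22 C comfort setpoint
        energy = ach * max(1.0, abs(22.0 - outdoor_temp_c)) * 0.1
        risk = occupancy / (1.0 + ach)
        return w_energy * energy + w_risk * risk

    return min(candidates, key=score)

for occ, temp in [(2, 18.0), (30, 35.0), (30, 21.0)]:
    print(f"occupancy={occ}, outdoor={temp}C -> ACH {choose_ach(occ, temp)}")
```

A machine-learning version would replace the hand-written score with models of comfort, energy use, and transmission risk learned from sensor data, updated in real time.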

A new framework enables designers and facility managers to simulate thermal scenarios and assess their impact on the risk of airborne transmission. It is modular and adaptable to different building types, offering a quantitative basis for future regulatory standards.

The study was conducted with lead author Mohammadreza Haghighat from the University of Tehran and CDU’s Ehsan Mohammadi Savadkoohi. Future work will integrate real-time sensor data to strengthen building resilience against future climate and health threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Could AI win a Nobel Prize? Experts debate the possibility

AI is starting to make inroads into scientific discovery. In recent years, AI systems have analysed data, designed experiments and even proposed hypotheses, activities once thought to be uniquely human.

Some researchers now argue that AI could come to compete with leading scientists, conceivably becoming worthy of a Nobel Prize within a few decades. The ambition invites provocative questions: Can a machine be an author or laureate? What criteria would apply? Would human oversight remain essential?

Sceptics argue that AI lacks consciousness, intentionality or moral agency, all hallmarks of great scientific insight. They caution that the machine’s contributions are derivative, built on human data, models and frameworks. Others contend that denying AI recognition blocks a future where hybrid human-machine teams deliver breakthroughs.

Meanwhile, mechanisms for attributing credit are also under scrutiny. Would the institution or the engineers who built the AI deserve the credit, or the AI itself? There are already precedents: AI systems have co-authored papers and databases in genetics and materials science. Treating them as Nobel candidates, however, would demand a shift in philosophical and institutional norms.

As AI systems achieve deeper autonomy, the debate over their role in science and whether they merit high honours will only intensify. The Nobel Prize, a symbolic instrument in the science ecosystem, may evolve to include nonhuman actors if the community permits it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI maps over 1,300 mouse brain subregions with unprecedented precision

Researchers at UCSF and the Allen Institute have created one of the most detailed mouse brain maps. Their AI model, CellTransformer, identified over 1,300 brain regions and subregions, including previously uncharted areas. The findings were published in Nature Communications.

CellTransformer uses spatial transcriptomics to define brain regions from shared cellular patterns rather than relying on expert annotation, much as one might draw city borders from the types of buildings within them. The data-driven method reveals finer brain structures with unprecedented precision.
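CellTransformer itself is a transformer model, but the regions-from-shared-cellular-patterns idea can be approximated with much simpler tools. The sketch below, a stand-in rather than the authors' method, describes each cell by the cell-type mix of its spatial neighbourhood and clusters those descriptions into regions, using synthetic data.

```python
# Simplified stand-in for the idea (not CellTransformer itself): describe
# each cell by the cell-type composition of its spatial neighbourhood, then
# cluster those descriptions so that areas with similar cellular "building
# mixes" fall into one region. Data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells, n_types = 2000, 8
xy = rng.uniform(0, 10, size=(n_cells, 2))        # synthetic cell positions
# synthetic cell types with spatial structure: type depends on the x-band
cell_type = (xy[:, 0] // 2.5).astype(int) % n_types

# neighbourhood descriptor: histogram of the 20 nearest neighbours' types
nn = NearestNeighbors(n_neighbors=20).fit(xy)
_, idx = nn.kneighbors(xy)
feats = np.stack([np.bincount(cell_type[i], minlength=n_types) for i in idx])

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print("cells per region:", np.bincount(regions))
```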

The model replicated known regions, such as the hippocampus, and revealed previously unknown subdivisions in the midbrain reticular nucleus. Researchers compared the advance to moving from mapping continents to mapping states and cities. The tool provides a foundation for more targeted neuroscience studies.

Validation against the Allen Institute's Common Coordinate Framework showed strong alignment with expert-defined anatomy, giving researchers confidence in the biological relevance of the new subregions. Further studies will investigate their functions.

The model’s potential goes beyond neuroscience. Its methods can map other tissues, including cancers, by analysing large spatial transcriptomics datasets, which could support new medical research, helping to uncover disease mechanisms and accelerate treatment development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia-Pacific report rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Employees embrace AI but face major training and trust gaps

SnapLogic has published new research highlighting how AI adoption is reshaping daily work across industries while exposing gaps in trust, training, and leadership strategy.

The study finds that 78% of employees already use AI in their roles, with half using autonomous AI agents. Workers interact with AI almost daily and save over three hours per week. However, 94% say they face barriers to practical use, with concerns over data privacy and security topping the list.

Based on a survey of 3,000 US, UK, and German employees, the research finds widespread but uneven AI support. Training is a significant gap, with only 63% receiving company-led education. Many rely on trial and error, and managers are more likely to be trained than non-managers.

Generational and hierarchical differences are also evident. Some 70% of managers express strong confidence in AI, compared with 43% of non-managers, and half of all respondents expect to be managed by AI agents rather than people in the future.

SnapLogic’s CTO, Jeremiah Stone, says the agile enterprise is about easing workloads and sparking creativity, not replacing people. The findings underscore the need for companies to align strategy, training, and trust to fully realise AI’s potential in the workplace.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!