Major semiconductor companies in Tokyo have reported strong profit growth for the April-December period, buoyed by rising demand for AI-related chips. Several firms also raised their full-year forecasts as investment in AI infrastructure accelerates.
Kioxia expects net profit to climb sharply for the year ending in March, citing demand from data centres in Tokyo and devices equipped with on-device AI. Advantest and Tokyo Electron also upgraded their outlooks, pointing to sustained orders linked to AI applications.
Industry data suggest the global chip market will continue expanding, with World Semiconductor Trade Statistics projecting record revenues in 2026. Growth is being driven largely by spending on AI servers and advanced semiconductor manufacturing.
In Tokyo, Rapidus has reportedly secured significant private investment as it prepares to develop next-generation chips. However, not all companies in Japan share the optimism, with Screen Holdings forecasting lower profits due to upfront capacity investments.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Growing numbers of students are using AI chatbots such as ChatGPT to guide their college search, reshaping how institutions attract applicants. Surveys show nearly half of high school students now use artificial intelligence tools during the admissions process.
Unlike traditional search engines, generative AI provides direct answers rather than website links, keeping users within conversational platforms. That shift has prompted universities to focus on ‘AI visibility’, ensuring their information is accurately surfaced by chatbots.
Institutions are refining website content through answer engine optimisation to improve how AI systems interpret their programmes and values. Clear, updated data is essential, as generative models can produce errors or outdated responses.
College leaders see both opportunity and risk in the trend. While AI can help families navigate complex choices, advisers warn that trust, accuracy and the human element remain critical in higher education decision-making.
Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. States and federal authorities continue to contest oversight, but accountability for how AI is used in underwriting, servicing, marketing, and fraud detection already rests with lenders.
Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.
Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.
Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.
Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.
Mounting anxiety is reshaping the modern workplace as AI alters job expectations and career paths. Pew Research indicates more than a third of employees believe AI could harm their prospects, fuelling tension across teams.
Younger workers feel particular strain, with 92% of Gen Z saying it is vital to speak openly about mental health at work. Communicators and managers must now deliver reassurance while coping with their own pressure.
Leadership expert Anna Liotta points to generational intelligence as a practical way to reduce friction and improve trust. She highlights how tailored communication can reduce misunderstanding and conflict.
Her latest research connects neuroscience, including the role of the vagus nerve, with practical workplace strategies. By combining emotional regulation with thoughtful messaging, she suggests that organisations can calm anxiety and build more resilient teams.
Rising investment in AI is reshaping public services worldwide, yet citizen satisfaction remains uneven. Research across 14 countries shows that nearly 45% of residents believe digital government services still require improvement.
Employee confidence is also weakening, with empowerment falling from 87% three years ago to 73% today. Only 35% of public bodies provide structured upskilling for AI-enabled roles, limiting workforce readiness.
Trust remains a growing concern for public authorities adopting AI. Only 47% of residents say they believe their government will use AI responsibly, exposing a persistent credibility gap.
The study highlights an ‘experience paradox’, in which the automation of legacy systems outpaces meaningful service redesign. Leading nations such as the UAE, Saudi Arabia and Singapore rank highly for proactive AI strategies, but researchers argue that leadership vision and structural reform, not funding alone, determine long-term credibility.
The AI market in India has expanded from roughly $2.97 billion in 2020 to $7.63 billion in 2024 and is projected to reach $131.31 billion by 2032, a compound annual growth rate (CAGR) of about 42.2%.
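The projection above can be sanity-checked with the standard CAGR formula, rate = (end / start)^(1/years) − 1. A minimal Python sketch using the article's figures (note that a direct 2024-to-2032 calculation lands slightly above the quoted ~42.2%, which likely reflects a different base year or rounding in the original study):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, returned as a decimal fraction."""
    return (end / start) ** (1 / years) - 1

# India AI market: ~$7.63bn (2024) -> projected ~$131.31bn (2032)
rate = cagr(7.63, 131.31, 2032 - 2024)
print(f"{rate:.1%}")  # prints "42.7%"
```

The small gap between this back-of-envelope result and the cited figure is expected for rounded market-size inputs.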
The growth outlook is underpinned by systematic progress across five layers of AI architecture: models, applications, chips, infrastructure and energy. Strong foundations, such as data centres and widespread internet connectivity, are enabling cloud adoption and data-driven services across sectors.
India’s acceleration in AI adoption aligns with broader digital trends and policy pushes, with readiness indices and talent penetration indicating that the nation is better positioned than many emerging economies to scale AI across industries.
Departures from Elon Musk’s AI startup xAI have reached a symbolic milestone, with two more co-founders announcing exits within days of each other. Yuhuai Tony Wu and Jimmy Ba both confirmed their decisions publicly, marking a turning point for the company’s leadership.
Losses now total six out of the original 12 founding members, signalling significant turnover in less than three years. Several prominent researchers had already moved on to competitors, launched new ventures, or stepped away for personal reasons.
The timing coincides with major developments, including SpaceX's acquisition of xAI and preparations for a potential public listing. Financial opportunities and intense demand for AI expertise are encouraging senior talent to pursue independent projects or new roles.
Challenges surrounding the Grok chatbot, including technical issues and controversy over its harmful content, have added internal pressure. Growing competition from OpenAI and Anthropic means retaining skilled researchers will be vital to sustaining investor confidence and future growth.
Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
A block on Google would disrupt essential digital services instead of encouraging the company to resolve ongoing legal disputes involving unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.
Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.
Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.
London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.
Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.
Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.
Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.
Centre and liberal groups take a different position by promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.
They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.
Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.
Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.
The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.
The Parliament has yet to take a clear stance, and the path toward agreement is far from assured.