Human work roles shift alongside AI

Reporting by The Korea Herald highlights that AI is increasingly reshaping workplace expectations, with employees adapting how they approach tasks and productivity. The shift reflects broader changes in how work is organised and delivered.

The article indicates that workers are using AI tools to improve efficiency while also reassessing workloads and job design. This is leading to a growing focus on balancing automation with human input.

At the same time, organisations are being pushed to rethink management structures, accountability and skills development. The integration of AI is influencing both individual roles and wider organisational strategies.

The Korea Herald suggests that long-term success will depend on how effectively businesses align AI adoption with workforce needs and sustainable work practices globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Armenia plans AI road scanning system

Armenpress reports that the Government of the Republic of Armenia plans to acquire an AI-powered road-scanning device to improve infrastructure maintenance. The system is intended to assess road conditions and guide repair decisions.

According to the Ministry of Territorial Administration and Infrastructure of the Republic of Armenia, the device will scan roads and use AI to determine the type and depth of repairs required. This includes identifying whether partial repairs or full reconstruction are needed.

Minister of Territorial Administration and Infrastructure of the Republic of Armenia, Davit Khudatyan, stated that the AI technology will provide a detailed analysis by passing over road surfaces. The system is expected to improve planning and maintenance efficiency.

The project is estimated to cost between 500 and 600 million drams and forms part of broader efforts to modernise infrastructure management in Armenia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI launches child safety framework to address AI risks

A new framework has been introduced by OpenAI to address risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI Code of Practice (GPAI), focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems to identify similarities between generated outputs and source material. Such a method aims to support fair remuneration and strengthen accountability within AI development.
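As a rough illustration of the kind of similarity check such an attribution system might build on, the sketch below compares word n-grams between a generated output and a licensed source text. This is a minimal, hypothetical example; the article does not describe any specific implementation, and real systems would use far more sophisticated matching.

```python
# Naive n-gram overlap between generated text and a licensed source.
# Illustrative only; not based on any method described at the GPAI meeting.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(generated: str, source: str, n: int = 3) -> float:
    """Fraction of the generated text's n-grams that also appear in the source."""
    gen = ngrams(generated, n)
    src = ngrams(source, n)
    return len(gen & src) / len(gen) if gen else 0.0

source = "the quick brown fox jumps over the lazy dog near the river bank"
generated = "the quick brown fox jumps over the fence and runs away quickly"
score = overlap_score(generated, source)  # high score flags possible reuse
```

A high score would flag an output for attribution or review; in practice, systems of this kind typically combine lexical matching with embedding-based similarity.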

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Corning and Meta start construction on North Carolina AI cable facility

Corning Incorporated and Meta Platforms have begun construction on a major expansion of Corning’s optical cable manufacturing facility in Hickory, North Carolina. The project will support advanced AI data centres using US-developed technology.

The initiative is part of a multiyear, up to $6 billion agreement between the two companies to accelerate the deployment of high-performance data centres. Under the agreement, Corning will supply Meta with new optical fibre, cable, and connectivity solutions.

Meta will act as the anchor customer for the Hickory expansion, which will produce optical cable critical for AI infrastructure. The expansion is expected to strengthen domestic manufacturing and create additional skilled jobs in North Carolina.

Corning currently employs more than 5,000 people in the state and plans to increase its workforce by 15 to 20 percent. Executives emphasised the partnership’s role in advancing US innovation and supporting the next generation of AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe launches a free AI learning tool for students

US software company Adobe has introduced Student Spaces, a free AI study tool within Acrobat designed to help students generate learning materials efficiently.

Users can create flashcards, quizzes, mind maps, podcasts, and editable presentations from PDFs, Docs, PowerPoint, Excel, URLs, and handwritten notes.

The tool builds on Acrobat’s AI features, now allowing students to interact with a chat assistant grounded in their uploaded documents, which helps reduce errors.

Adobe tested the tool with 500 students from universities including Harvard, Berkeley, and Brown, and emphasises convenience, letting students generate study materials without constantly moving files.

The goal is to simplify study workflows and support learning across multiple document types.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Transparency push for automated recruitment in the UK

The UK’s Information Commissioner’s Office has issued new guidance on the growing use of AI in recruitment, warning that jobseekers may be unaware of how automated systems influence hiring decisions. The regulator says greater transparency is needed as adoption accelerates.

Automated decision-making tools are increasingly used to screen applications, analyse CVs and rank candidates. While this can improve efficiency, some applicants may be rejected before any human review takes place.

The regulator highlights risks including bias, lack of clarity and potential unfair treatment if appropriate safeguards around the use of AI are not applied. Employers are expected to monitor systems for discrimination and clearly explain how decisions are made.

Jobseekers are entitled to know when automation is used, to challenge outcomes, and to request human review. The guidance aims to ensure fair and lawful hiring practices as AI becomes increasingly embedded in UK recruitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

China sets standards for AI ethics review and algorithm accountability

The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CNN develops agent infrastructure for AI media trading

CNN is developing an internal agent infrastructure as part of a plan to begin AI-driven media trading by early 2027. The company aims to complete protocol scoping by the end of the second quarter before moving into testing phases later in the year.

Testing will focus on how properties are interpreted by large language models and how buyers allocate budgets to agent-based systems. Executives say the timeline may change as the technology and market conditions continue to evolve.

The initiative combines in-house development with external technology partners, while aligning with industry frameworks to ensure compatibility. CNN is also working with standards bodies to ensure agent communication produces accurate outcomes for buyers.

Agentic protocols enable systems to exchange information, negotiate pricing, and manage tasks autonomously between buyers and sellers. The company is prioritising consistent communication to support efficient and reliable transactions.
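To illustrate the negotiation step in the abstract, the sketch below shows a toy exchange in which a seller agent lowers its asking price until a buyer agent accepts or the seller's floor is reached. CNN has not published its protocol, so every name and value here is invented for illustration.

```python
# Hypothetical agent-to-agent price negotiation over ad inventory.
# Not CNN's protocol; a generic sketch of how agentic negotiation can work.

def negotiate(seller_floor: float, buyer_ceiling: float,
              start: float, step: float = 0.5):
    """Seller lowers its asking CPM until the buyer accepts or the floor is hit.

    Returns the agreed CPM, or None if the agents' limits never overlap.
    """
    ask = start
    while ask >= seller_floor:
        if ask <= buyer_ceiling:   # buyer agent accepts the current ask
            return ask
        ask -= step                # seller agent concedes and re-offers
    return None                    # no deal within both agents' limits

# The seller opens at $12 CPM; the buyer will pay at most $10.
price = negotiate(seller_floor=8.0, buyer_ceiling=10.0, start=12.0)
```

Real agentic protocols layer structured messages, authentication and audit trails on top of this kind of loop, which is why the article stresses consistent communication and accurate outcomes.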

Early efforts are centred on learning and experimentation, even without immediate revenue generation. Initial use cases are expected to focus on performance-driven campaigns before expanding into broader advertising activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot