LSEG data and LG’s EXAONE model combine in new AI-driven stock prediction service

LG AI Research and LSEG have launched an AI forecasting tool that scores around 5,000 NYSE stocks daily. It combines LSEG’s financial data with LG’s EXAONE model. The service was presented to Korean financial institutions in Seoul.

The AI Equity Forecasting Score provides a numeric outlook and a short explanation for each stock. It analyses structured market data and unstructured filings and news. LG says this improves transparency in automated research.
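As a purely illustrative sketch, a daily record from such a service might pair the numeric score with its model-generated explanation. The Python below is hypothetical; the field names and values are not LSEG's or LG's actual schema.

# Hypothetical shape of a daily forecast record; illustrative only.
from dataclasses import dataclass

@dataclass
class EquityForecastRecord:
    ticker: str       # e.g. an NYSE symbol
    date: str         # scoring date, ISO format
    score: float      # numeric outlook; the scale is an assumption
    rationale: str    # short model-generated explanation

record = EquityForecastRecord(
    ticker="XYZ",
    date="2025-11-20",
    score=0.72,
    rationale="Filings and news sentiment trend positive; fundamentals stable.",
)
print(record.ticker, record.score)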

LSEG says the partnership combines its global data infrastructure with LG’s modelling capabilities. According to LG, the system can uncover patterns that traditional analysis often misses. Daily scores and weekly commentary are already available.

Pilot testing is underway in the US, Europe, Japan and Korea. Analysts say wider adoption will depend on clear performance metrics and independent validation. They also note the lack of disclosure on trading frictions.

LG plans to expand the service to more markets and add tools for portfolio construction and commodities. Deeper integration with LSEG’s APIs is also being explored. LG describes the system as a daily, automated investment memo.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, after a German court found OpenAI liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium, whether vinyl, MP3, or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.
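As a toy illustration of that inference, and not part of the court’s analysis, near-verbatim overlap between a model’s output and a protected text is evidence of memorisation rather than generalisation. The Python sketch below uses an invented verse and an arbitrary threshold; both are hypothetical.

# Toy memorisation check: high string overlap suggests the output
# was reproduced, not generalised. Verse and threshold are invented.
from difflib import SequenceMatcher

def overlap_ratio(original: str, output: str) -> float:
    """Similarity in [0, 1]; values near 1.0 mean near-verbatim reproduction."""
    return SequenceMatcher(None, original.lower(), output.lower()).ratio()

verse = "silver rivers run beneath a midnight train of stars"
model_output = "Silver rivers run beneath a midnight train of stars."

r = overlap_ratio(verse, model_output)
print(f"overlap: {r:.2f}")
if r > 0.8:  # hypothetical threshold
    print("near-verbatim reproduction: memorisation is the likely source")

Even partially altered outputs score high here, mirroring the court’s point that a work remains protected if its creative structure survives.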

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches $1 billion AI initiative for Africa

The UAE has unveiled a US$1 billion AI for Development initiative to finance AI projects across African nations. The programme aims to enhance digital infrastructure, government services, and productivity, supporting long-term economic and social development.

Implementation will be led by the Abu Dhabi Exports Office (ADEX), in cooperation with the UAE Foreign Aid Agency. AI technologies will be applied in key sectors, including education, agriculture, and infrastructure, to create innovative solutions and promote sustainable growth.

Officials highlighted the initiative as part of the UAE’s vision to become a global hub for AI while reinforcing its humanitarian and developmental legacy. The programme aims to boost international partnerships and deliver impactful support to developing countries.

The initiative reinforces the UAE’s long-term commitment to Africa and its role in technological and digital advancement. Leaders emphasised that AI-driven projects can improve living standards and foster inclusive, sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates AI training for teachers

A national push to bring AI into public schools has moved ahead in Greece after the launch of an intensive training programme for secondary teachers.

Staff in selected institutions will receive guidance on a custom version of ChatGPT designed for academic use, with a wider rollout planned for January.

The government aims to prepare educators for an era in which AI tools support lesson planning, research and personalised teaching instead of remaining outside daily classroom practice.

Officials view the initiative as part of a broader ambition to position Greece as a technological centre, supported by partnerships with major AI firms and new infrastructure projects in Athens. Students will gain access to the system next spring under tight supervision.

Supporters argue that generative tools could help teachers reduce administrative workload and make learning more adaptive.

Concerns remain strong among pupils and educators who fear that AI may deepen an already exam-driven culture.

Many students say they worry about losing autonomy and creativity, while teachers’ unions warn that reliance on automated assistance could erode critical thinking. Others point to the risk of increased screen use in a country preparing to block social media for younger teenagers.

Teacher representatives also argue that school buildings require urgent attention instead of high-profile digital reforms. Poor heating, unreliable electricity and decades of underinvestment complicate adoption of new technologies.

Educators who support AI stress that meaningful progress depends on using such systems as tools to broaden creativity rather than as shortcuts that reinforce rote learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use rises among Portuguese youth

A recent survey reveals that 38.7% of Portuguese individuals aged 16 to 74 used AI tools in the three months preceding the interview, primarily for personal purposes. Usage is particularly high among 16 to 24-year-olds (76.5%) and students (81.5%).

Internet access remains widespread, with 89.5% of residents going online recently. Nearly half (49.6%) placed orders online, primarily for clothing, footwear, and fashion accessories, while 74.2% accessed public service websites, often using a Citizen Card or Digital Mobile Key for authentication.

Digital skills are growing, with 59.2% of the population reaching basic or above basic levels. Young adults and tertiary-educated individuals show the highest digital proficiency, at 83.4% and 88.4% respectively.

Household internet penetration stands at 90.9%, predominantly via fixed connections.

Concerns about online safety are on the rise, as 45.2% of internet users reported encountering aggressive or discriminatory content, up from 35.5% in 2023. Reported issues include discrimination based on nationality, politics, and sexual identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nokia to invest $4 billion in AI-ready US networks

Nokia has announced a $4 billion expansion of its US research, development, and manufacturing operations to accelerate AI-ready networking technologies. The move builds on Nokia’s earlier $2.3 billion US investment via Infinera and semiconductor manufacturing plans.

The expanded investment will support mobile, fixed access, IP, optical, data centre networking, and defence solutions. Approximately $3.5 billion will be allocated for R&D, with $500 million dedicated to manufacturing and capital expenditures in Texas, New Jersey, and Pennsylvania.

Nokia aims to advance AI-optimised networks with enhanced security, productivity, and energy efficiency. The company will also focus on automation, quantum-safe networks, semiconductor testing, and advanced material sciences to drive innovation.

Officials highlight the strategic impact of Nokia’s US investment. Secretary of Commerce Howard Lutnick praised the plan for boosting US tech capacity, while CEO Justin Hotard said it would secure the future of AI-driven networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Popular Python AI library compromised to deliver malware

Security researchers have confirmed that the Ultralytics YOLO library was hijacked in a supply-chain attack in which attackers injected malicious code into the PyPI-published versions 8.3.41 and 8.3.42. When installed, these versions deployed the XMRig cryptominer.

The compromise stemmed from Ultralytics’ continuous-integration workflow: by exploiting GitHub Actions, the attackers manipulated the automated build process, bypassing review and injecting cryptocurrency mining malware.

The maintainers quickly removed the malicious versions and released a clean build (8.3.43); however, newer reports suggest that further suspicious versions may have appeared.
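For users of the library, a minimal defensive check is to compare the installed version against the releases named in the reports. The sketch below assumes the compromised versions are exactly 8.3.41 and 8.3.42, as described above.

# Minimal check of the installed ultralytics version against the
# releases reported as compromised; the list is from public reports.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"8.3.41", "8.3.42"}

def check_ultralytics() -> None:
    try:
        installed = version("ultralytics")
    except PackageNotFoundError:
        print("ultralytics is not installed")
        return
    if installed in COMPROMISED:
        print(f"WARNING: ultralytics {installed} is a reported-compromised "
              "release; upgrade to 8.3.43 or later")
    else:
        print(f"ultralytics {installed} is not on the reported-compromised list")

check_ultralytics()

Pinning a vetted version in a requirements file (for example, ultralytics>=8.3.43) offers a further, if partial, safeguard.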

This incident illustrates the growing risk in AI library supply chains. As open-source AI frameworks become more widely used, attackers increasingly target their build systems to deliver malware, particularly cryptominers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter AI processing could lead to cleaner air, say UCR engineers

As AI continues to scale rapidly, the environmental cost of powering massive data centres is becoming increasingly urgent. The machines consume substantial amounts of electricity and require large volumes of water to stay cool, and a significant portion of that energy comes from fossil-fuel sources.

Scientists at UC Riverside’s Bourns College of Engineering, led by Professors Mihri and Cengiz Ozkan, have proposed a novel solution called Federated Carbon Intelligence (FCI). Their system doesn’t just prioritise low-carbon energy; it also monitors the health of servers in real time to decide where and when AI tasks should be run.

Using simulations, the team found that FCI could reduce carbon dioxide emissions by up to 45 percent over five years and extend the operational life of hardware by about 1.6 years.

Their model takes into account server temperature, age and physical wear, and dynamically routes computing workloads to optimise both environmental and machine-health outcomes.
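A toy sketch of that idea, with entirely hypothetical fields and weights rather than the UCR team’s actual model, might score each server on grid carbon intensity and hardware health and route work to the best candidate.

# Toy carbon- and health-aware placement in the spirit of FCI.
# Fields, weights, and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    carbon_intensity: float  # gCO2 per kWh in the server's grid region
    temperature_c: float     # current inlet temperature
    age_years: float         # hardware age
    wear: float              # normalised wear, 0.0 (new) to 1.0 (worn)

def placement_cost(s: Server) -> float:
    """Lower is better: penalise dirty grids, hot servers, worn hardware."""
    return (s.carbon_intensity / 100.0
            + max(0.0, s.temperature_c - 25.0) * 0.1
            + s.wear * 2.0
            + s.age_years * 0.05)

def place_workload(fleet: list[Server]) -> Server:
    return min(fleet, key=placement_cost)

fleet = [
    Server("region-a", carbon_intensity=400.0, temperature_c=30.0, age_years=4.0, wear=0.6),
    Server("region-b", carbon_intensity=50.0, temperature_c=22.0, age_years=2.0, wear=0.2),
]
print(place_workload(fleet).name)  # region-b: cleaner grid, healthier hardware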

Unlike other approaches that only shift workloads to regions with cleaner energy, FCI also addresses the embodied emissions of manufacturing new servers. Keeping current hardware running longer and more efficiently helps reduce the carbon footprint associated with production.

If adopted by cloud providers, this adaptive system could mark a significant milestone in the sustainable development of AI infrastructure, one that aligns compute demand with both performance and ecological goals. The researchers are now calling for pilots in real data centres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini 3 struggles to accept the year 2025

Google’s new AI model, Gemini 3, was left temporarily confused when it refused to accept that the year was 2025 during early testing by AI researcher Andrej Karpathy.

The model, pre-trained on data only through 2024 and initially disconnected from the internet, accused Karpathy of trickery and gaslighting before finally recognising the correct date.

Once Gemini 3 accessed real-time information, it expressed astonishment and apologised for its previous behaviour, demonstrating the model’s quirky but sophisticated reasoning capabilities. The interaction went viral online, drawing attention to both the humour and unpredictability of advanced AI systems.

Experts note that incidents like this illustrate the limitations of LLMs, which, despite their reasoning power, cannot inherently perceive reality and rely entirely on pre-training data and connected tools.

Observers emphasise that AI remains a powerful human aid rather than a replacement, and understanding its quirks is essential for practical use.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI teaching leaves Staffordshire students frustrated

Students at the University of Staffordshire in the UK have criticised a coding course after discovering much of the teaching was delivered through AI-generated slides and voiceovers.

Participants in the government-funded apprenticeship programme said they felt deprived of knowledge and frustrated that the course relied heavily on automated materials.

Concerns arose when learners noticed inconsistencies in language, suspicious file names, and abrupt changes in voiceover accents during lessons.

Students reported raising these issues with university staff, but the institution maintained the use of AI, asserting it supported academic standards while remaining ethical and responsible.

Critics argue that AI teaching diminishes engagement and reduces the opportunity to acquire practical skills needed for career development.

Experts suggest students supplement AI-driven courses with hands-on learning and critical thinking to ensure the experience remains valuable and relevant to their professional goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!