Italy is testing AI-assisted learning tools in selected schools to close the nation’s significant digital skills gap. Prime Minister Giorgia Meloni’s government has introduced the initiative in 15 classrooms across four regions, aimed at supporting both students and teachers through virtual assistants.
The AI tools are designed to tailor education to individual needs, providing an improved learning environment. Though few details have been provided, officials remain optimistic that the experiment will offer insights into a potential wider rollout. Education Minister Giuseppe Valditara emphasised the importance of these digital advancements for future generations.
Italy currently lags behind most EU countries in basic digital skills, ranking near the bottom of the bloc. The government has also introduced a ban on mobile phones in classrooms, a move aimed at reducing distractions and promoting focus.
The trial will be carefully monitored throughout the year to assess its effectiveness and inclusiveness, with the hope of addressing past struggles to digitalise Italy’s education system.
Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans after regulatory concerns from the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.
Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.
From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training. Users will also be informed on how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.
Earlier in June, Meta’s AI plans faced opposition from privacy advocacy groups like NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.
Dubai has introduced a pioneering AI security policy through the Dubai Electronic Security Center, led by H.E. Amer Sharaf. This landmark initiative is designed to address the unique challenges and vulnerabilities associated with AI. The policy focuses on three critical pillars: data integrity, protection of critical infrastructure, and ethical AI usage.
By establishing robust guidelines and best practices, Dubai aims to ensure that AI systems are resilient against emerging threats and operate securely. This comprehensive approach not only sets a high standard for AI security but also positions Dubai as a global leader in digital innovation in accordance with the UAE National Strategy for Artificial Intelligence 2031.
The policy is also central to Dubai’s broader digital-transformation strategy and its ambition to become a leading global digital city. Integrating advanced security measures into its AI initiatives allows Dubai to mitigate risks while creating an environment conducive to innovation. The policy underpins ambitious projects such as self-driving vehicles and smart health systems, underscoring Dubai’s commitment to a secure and dynamic digital landscape that aligns with its forward-looking vision.
A new task force has been launched by the White House to address the growing demands of AI infrastructure. Led by the National Economic Council and the National Security Council, the group aims to balance AI development with national security, economic, and environmental goals. Senior US officials and executives from major technology companies, including OpenAI and Google, took part in the meeting on Thursday.
The focus of the discussion was on the power requirements for advanced AI systems. Leaders explored how to meet clean energy targets and infrastructure needs, particularly in the face of increasing demand from data centres. AI has raised both hopes for efficiency gains and concerns over potential misuse, with its energy consumption being a significant challenge.
The Biden administration is pushing tech firms to invest in eco-friendly power solutions. The AI industry’s energy needs could complicate the government’s ambition to decarbonise the power grid by 2035. Representatives from major agencies, including Energy Secretary Jennifer Granholm, were part of the conversation on tackling these issues.
AI infrastructure plays a crucial role in the future of the US economy, according to OpenAI. The company emphasised the importance of expanding data centres domestically, not only to support industrial growth but also to ensure that AI’s benefits reach all corners of society.
A new $330 million data centre investment is poised to boost Greece’s digital economy. French company Data4 has announced plans to build a state-of-the-art AI hub in Paiania, near Athens. This development is expected to strengthen the country’s digital infrastructure.
Data4, which already manages data centres across six European nations, aims to collaborate with Greek banks to finance the project. CEO Olivier Micheli highlighted the significant contribution this data centre would bring to Greece’s economy and digital ecosystem. The hub may expand further, with potential investments of €200 million to add two more centres.
Greece is rapidly emerging as a key data hub in Southeast Europe. With a growing number of data centres, including upcoming investments from global players like Microsoft and Google, the country is positioned to become a digital gateway between Europe, Asia, and Africa. Recent telecoms infrastructure, including high-speed cables, further boosts this role.
Market research shows the data centre sector in Greece is expected to grow by 9% annually through 2028. The country’s digital transformation is being propelled by government support and the increasing adoption of AI and cloud services. Greece could soon become the second-largest data hub in the Mediterranean.
Oprah Winfrey aired a special titled ‘AI and the Future of Us,’ featuring guests like OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI director Christopher Wray. The discussion was largely focused on the potential risks and ethical concerns surrounding AI. Winfrey highlighted the need for humanity to adapt to AI’s rapid development, while Altman emphasised the importance of safety regulations.
Altman defended AI’s learning capabilities but acknowledged the need for government involvement in safety testing. However, his company has opposed California’s AI safety bill, which experts believe would provide essential safeguards. He also discussed the dangers of deepfakes and urged caution as AI technology advances.
Wray pointed out AI’s role in rising cybercrimes like sextortion and disinformation. He warned of its potential to be exploited for election interference, urging the public to remain vigilant in the face of increasing AI-generated content.
For balance, Bill Gates expressed optimism about AI’s positive impact on education and healthcare. He envisioned AI improving medical transcription and classroom learning, though concerns about bias and misuse remain.
China Telecom Global has recently inaugurated two significant centres in Hong Kong: the Artificial Intelligence Innovation Center and the Security Business Innovation Center. That development marks a crucial step in China Telecom’s strategy to enhance its high-quality development, cloud capabilities, and digital transformation initiatives.
By establishing these centres, the company aims to leverage China’s high-level opening-up policy to strengthen its international presence and drive global business growth through advanced technology and innovation. Furthermore, these centres are designed to optimise the business structure, integrate internal and external resources, and accelerate the global deployment of China Telecom’s capabilities in AI and security, thereby reinforcing its position as a leading global telecom player.
Additionally, China Telecom Global is placing a strong emphasis on research and collaboration. The focus is advancing cutting-edge technology and fostering partnerships between industry, academia, and research institutions. As a result, these centres are poised to become central hubs for developing AI and security talent, which will support Hong Kong’s evolution into an international centre of innovation and technology.
OpenAI’s latest version of ChatGPT, o1 (a name signalling a reset of the version counter to 1), together with its less costly mini variant, represents a watershed moment in the company’s LLM lineup. Designed to approach superhuman-level intelligence, the models can already answer questions far faster than humans. This series differs from previous ones: in responding to queries, the models use a human-like ‘chain of thought’ process combined with reinforcement learning on specialised datasets and optimisation algorithms.
The model outperforms its predecessors by a significant margin. For example, when tested against GPT-4o on International Mathematics Olympiad problems, it scored 83 percent to GPT-4o’s 13 percent. What is unique about the model is its ability not only to provide step-by-step reasoning for its outputs but also to show human-like patterns of hesitation along the way, such as ‘I’m curious about…’, ‘Ok, let me see’, or ‘Oh, I’m running out of time, let me get to an answer quickly’. The new design has also reduced the occurrence of hallucinations. Yet, despite their many strengths, the models have limitations: they cannot browse the internet, have limited world knowledge, and cannot process files or images.
According to the lead researcher on the project, Jerry Tworek, the next milestone is for the models to perform on par with PhD students on challenging benchmark tasks in areas such as physics, chemistry, and biology. He stresses that the intention is not to equate AI with human thinking but rather to illustrate the models’ capacity for deep, deliberate reasoning. For the company, reasoning is a step up from pattern recognition, the design paradigm of previous versions. Ultimately, OpenAI aims to develop a product that can make decisions and take action on behalf of humans, a venture estimated to cost a further US$150 billion. Removing the current kinks in the system would let the models work on the complex global problems we face today in areas such as engineering and medicine.
Further breakthroughs should also mean reduced access costs for developers and users. According to Chief Research Officer Bob McGrew, developer access to o1-preview currently costs $15 per 1 million input tokens (chunks of text parsed by the model) and $60 per 1 million output tokens, whereas GPT-4o costs $5 per 1 million input tokens and $15 per 1 million output tokens.
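To make the pricing concrete, here is a minimal sketch of how a developer might estimate API costs from token counts. This is not an official OpenAI tool; the function name and rate table are illustrative, with the per-million-token rates taken from the figures quoted above.

```python
# Per-million-token rates in USD, as quoted above (illustrative table).
RATES = {
    "o1-preview": {"input": 15.0, "output": 60.0},
    "gpt-4o": {"input": 5.0, "output": 15.0},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a request from its token counts."""
    rate = RATES[model]
    return (input_tokens / 1_000_000) * rate["input"] + \
           (output_tokens / 1_000_000) * rate["output"]

# A workload of 2M input tokens and 0.5M output tokens:
print(estimate_cost("o1-preview", 2_000_000, 500_000))  # 60.0
print(estimate_cost("gpt-4o", 2_000_000, 500_000))      # 17.5
```

At these rates, the same workload costs roughly 3.4 times more on o1-preview than on GPT-4o, which is why reduced access costs matter for developers.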
Meta’s decision to change how it labels AI-modified content on Instagram, Facebook, and Threads marks another shift in the company’s approach to generative AI. For content edited with AI tools, the ‘AI info’ label now moves to the post’s menu, reducing the visibility of AI’s involvement and making it easier for users to overlook or miss the AI editing details in such posts.
However, for content fully generated by AI, Meta will continue to prominently display the label beneath the user’s name, ensuring that posts created entirely by AI prompts remain visibly marked. The distinction Meta is making here seems to reflect the varying degrees of AI involvement in content creation.
Meta aims to increase transparency about content labelling by specifying whether the AI designation comes from industry signals or from user self-disclosure. The effort follows complaints and confusion over the previous ‘Made with AI’ label, particularly from photographers concerned that their real photos were being misrepresented.
This change may raise concerns about the potential for users to be misled, especially as AI editing tools become more sophisticated and the line between human and AI-created content continues to blur. It highlights the need for continued transparency as AI technology integrates more deeply into content creation across platforms.
Taiwan is now using AI to track and predict the path of tropical storms, including the approaching storm Bebinca. AI-powered models, such as those from Nvidia and other tech companies, are outperforming traditional methods. The Central Weather Administration (CWA) has found these tools especially useful, providing more accurate forecasts that give forecasters greater confidence in predicting storm paths.
In July, AI models helped Taiwan predict Typhoon Gaemi’s path and impact, delivering early warnings eight days before landfall. The technology significantly outperformed conventional methods, accurately forecasting record rainfall and giving authorities more time to prepare. The AI-based system also allowed Taiwan to anticipate a rare loop in Gaemi’s path, which prolonged its effects on the island.
While AI weather forecasting models have delivered impressive results, experts say more time is needed for the technology to fully surpass traditional methods in predicting typhoon strength and wind speeds. AI has already proven its worth in predicting storm tracks and could revolutionise weather forecasting globally.
Despite some limitations, AI’s increasing role in weather prediction is promising. Taiwan’s weather service forecasters hope ongoing partnerships with companies like Nvidia will enhance these tools, potentially leading to even more accurate predictions in the future.