Meta Platforms is preparing to introduce a new audio feature for its AI chatbot, which will allow users to select voices from five celebrities, including Judi Dench and John Cena. As part of its efforts to enhance user engagement, Meta will offer the voice options across its platforms, including Facebook, Instagram, and WhatsApp.
The announcement is expected at Meta’s annual Connect conference, where the company is also set to unveil augmented-reality glasses and provide updates on its Ray-Ban Meta smart glasses. These developments reflect Meta’s push to integrate AI more deeply into everyday interactions through its various products.
Celebrity voices are set to roll out this week in the US and other English-speaking markets. Meta hopes that this new feature will appeal to users seeking a more personalised experience with its AI chatbot, positioning itself in competition with AI giants like Google and OpenAI.
As part of its broader AI strategy, Meta has shifted focus towards integrating celebrity voices after earlier text-based characters saw limited success. The company is committed to making its chatbot a core feature across its platforms, striving to stay ahead in the competitive AI landscape.
UAE-based AI firm G42 has announced a partnership with US chipmaker Nvidia to focus on developing advanced climate technology. A new operational base and lab will be established in Abu Dhabi to create AI solutions that improve global weather forecasting. This collaboration comes as the UAE seeks to diversify its economy away from oil through heavy investment in AI technologies.
UAE’s government-backed G42 has been making strides in the AI sector, forging agreements with various US firms. Recently, G42 and Microsoft revealed plans to open two AI centres in Abu Dhabi, further expanding the Gulf nation’s capabilities in AI research. These developments align with Abu Dhabi’s broader goals of advancing technological cooperation with the US.
While the UAE builds on AI collaboration, concerns have emerged in Washington about US technology potentially reaching China. To counter this, the US government has introduced stricter export controls on AI chips to the region. However, this has not hampered the UAE’s ambitions for AI growth and strategic partnerships.
Next week, UAE President Sheikh Mohammed bin Zayed Al Nahyan will visit the White House, marking a historic moment. Discussions will centre around regional security and technological cooperation in AI with President Joe Biden, reinforcing the strategic relationship between the two nations.
Demetris Skourides, Cyprus's Chief Scientist, spoke at the Learning Innovation Summit 2024, stressing the significance of ethical AI development. He emphasised the EU AI Act’s role in establishing trustworthy AI systems that focus on ethics, transparency, and accountability. Skourides advocated for AI’s application in education, pointing out its ability to personalise learning, automate tasks, and enhance teaching environments.
He praised rapid AI advancements in Cyprus, with more than 50 companies leveraging the technology across key industries like healthcare and finance. Skourides highlighted the country’s commitment to upholding the EU AI Act, ensuring that AI systems meet the highest standards of accountability and ethics. The Chief Scientist also noted how Cyprus could generate new job opportunities through this AI revolution.
The potential for AI to transform education was a central theme. Skourides discussed the benefits of adaptive learning platforms, which can tailor lessons to individual students’ strengths, enabling each learner to reach their full potential. He urged educators to embrace AI, foreseeing a shift from rote memorisation to fostering creativity, critical thinking, and collaboration in the classroom.
Finally, Skourides called for a balanced approach to AI development. By equipping future generations with digital skills and ensuring that ethics remain central, AI’s power can be harnessed to drive both economic growth and innovation. He reaffirmed his commitment to advancing AI in education and collaborating with industry leaders to create an empowering learning environment.
African perspectives are vital for developing AI solutions tailored to the continent’s unique challenges, according to US officials. At the Global Inclusivity and AI: Africa Conference, the acting Special Envoy for Critical and Emerging Technology, Dr Seth Center, and Deputy Assistant Secretary of State for African Affairs, Joy Basu, emphasised the importance of African representation in shaping global AI policies.
The event focused on fostering deeper conversations about AI’s potential role in Africa’s development. Basu praised the diverse voices from across the continent and stressed the need for African leaders to influence AI’s future applications, especially in sectors like agriculture and healthcare. The conference marked a pivotal step in increasing African engagement in critical technology discussions, which are already being supported in global forums like the G20 and the United Nations.
AI could help Africa achieve its Sustainable Development Goals, addressing key challenges across agriculture, healthcare, and education, according to Dr Seth Center. He noted the transformative role AI can play in boosting economic development, reducing poverty, and improving healthcare access. However, collaboration, both regional and global, will be essential to ensuring that AI is developed responsibly.
Startups and entrepreneurs will play a significant role in shaping Africa’s AI landscape, with many countries already crafting national AI strategies. The African Union is also working on governance frameworks to enable cross-border collaboration. These efforts will help unlock opportunities for innovation, ensuring AI’s benefits reach all parts of the continent.
When planning his summer trip to Amsterdam and Ireland, Jason Brown opted for ChatGPT over traditional travel resources. The founder of People Movers used the AI tool to design a detailed itinerary for his family, outlining activities in Dublin and Galway. He described the experience as ‘fantastic,’ noting how quickly ChatGPT generated organised suggestions for each day. While he implemented many of the AI’s recommendations, he also appreciated personal connections for uncovering local treasures.
The growing influence of generative AI in travel planning is clear, with tools like Google’s Gemini and Microsoft’s Copilot becoming increasingly popular. A recent survey found that one in ten Britons has turned to AI for travel arrangements, with many showing interest in using it again. However, challenges persist, as many users reported receiving generic or inaccurate information. Experts stress the need to verify AI-generated content with trusted sources, such as residents or travel agents, to ensure accuracy.
Sardar Bali, co-founder of the AI travel planner Just Ask Layla, stresses the need for accuracy in AI-generated content. His team uses a two-step verification process to enhance reliability, though he admits that errors can still happen. Meanwhile, major companies like Expedia are incorporating AI into their services to simplify complex travel planning by offering personalised suggestions.
However, not all experiences with AI in travel planning have been positive. Freelance writer Rebecca Crowe faced challenges with AI-generated itineraries that were often impractical and outdated, especially when looking for gluten-free dining options. She recommends using AI mainly for inspiration, while also cross-referencing information with trusted blogs and travel guides to ensure accuracy and save time.
California has introduced three new laws aimed at reducing AI-generated deepfakes ahead of the 2024 election. The legislation, signed by Governor Gavin Newsom, is designed to combat election misinformation and protect the public from deceptive political ads. One law requires online platforms like X to remove false materials and empowers individuals to sue over election-related deepfakes.
However, two of these laws are now facing a legal challenge. A creator of parody videos featuring Kamala Harris claims the legislation violates free speech rights. The lawsuit, filed in Sacramento, accuses California of censoring content, despite assurances from Newsom’s office that the laws do not target satire or parody.
Supporters of the laws argue they are necessary to prevent erosion of trust in US elections, as AI-generated disinformation becomes an increasing threat. Critics, including free speech advocates, believe the legislation overreaches and could be ineffective due to slow court processes, limiting its impact.
Despite the debate, California’s laws could serve as a deterrent to potential violations. Legislators hope the rules will prompt platforms to act quickly in identifying and removing misleading content.
As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient regulations to minimise the risk of multi-billion dollar fines. The AI Act, agreed upon in May, is the world’s first comprehensive legislation governing AI. However, the details on how general-purpose AI systems like ChatGPT will be regulated remain unclear. The EU has opened the process to companies, academics, and other stakeholders to help draft the accompanying codes of practice, receiving a surge of interest with nearly 1,000 applications.
A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act mandates companies to disclose summaries of the data they use, businesses are divided over how much detail to include, with some advocating for protecting trade secrets. In contrast, others demand transparency from content creators. Major players like Google and Amazon have expressed their commitment to the process, but there are growing concerns about transparency, with some accusing tech giants of trying to avoid scrutiny.
The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.
The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance. Companies will have until August 2025 to meet the new standards, with non-profits and startups also playing a role in drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.
Taiwan is now using AI to track and predict the path of tropical storms, including the approaching storm Bebinca. AI-powered models, such as those from Nvidia and other tech companies, are outperforming traditional methods. The Central Weather Administration (CWA) has found these tools especially useful, providing more accurate forecasts that give forecasters greater confidence in predicting storm paths.
In July, AI models helped Taiwan predict Typhoon Gaemi’s path and impact, delivering early warnings eight days before landfall. The technology significantly outperformed conventional methods, accurately forecasting record rainfall and giving authorities more time to prepare. The AI-based system allowed Taiwan to anticipate a rare loop in Gaemi’s path, which prolonged its effects on the island.
While AI weather forecasting models have delivered impressive results, experts say more time is needed for the technology to fully surpass traditional methods in predicting typhoon strength and wind speeds. AI has already proven its worth in predicting storm tracks and could revolutionise weather forecasting globally.
Despite some limitations, AI’s increasing role in weather prediction is promising. Taiwan’s weather service forecasters hope ongoing partnerships with companies like Nvidia will enhance these tools, potentially leading to even more accurate predictions in the future.
Researchers at Western University have developed an AI model that detects strawberry diseases and predicts ripeness with nearly 99% accuracy. The system, designed by Joshua Pearce and Soodeh Nikan, could significantly enhance crop quality and reduce waste. Tested in a controlled hydroponic environment, the technology aims to extend Canada’s strawberry growing season while improving fruit quality.
The model is free and open-source, enabling farmers to tailor it to their needs. It can notify them via email or phone when diseases are detected or fruit is ripe. This adaptable AI system could prove crucial for increasing agricultural efficiency.
By minimising food waste and lowering production costs, the AI model has the potential to reduce grocery prices for consumers. Researchers hope the technology will support food security and help farmers meet growing demands for fresh produce.
Future plans involve testing the AI outdoors, possibly with drones monitoring larger fields. The innovation could bring smarter, more sustainable farming to outdoor environments, further boosting efficiency in agriculture.
Researchers are currently developing AI tools to help predict and manage future pandemics, which some experts believe will likely occur within the next decade. Teams from UC Irvine and UCLA, part of the US National Science Foundation’s Predictive Intelligence for Pandemic Prevention grant programme, are working on an AI-based early warning system that analyses social media posts to detect early signs of outbreaks. They aim to track billions of posts on platforms like X (formerly Twitter) to identify public health trends and assess the potential outcomes of public health policies. However, the reliance on specific platforms and US-focused data limits its global application. Researchers are working to expand its reach.
Harvard Medical School and the University of Oxford have created a tool called EVEScape, which predicts virus mutations. This tool helps in developing vaccines and treatment strategies. Pharmaceutical companies such as AstraZeneca are also utilising AI to accelerate the discovery of antibodies, which could potentially reduce the response time to new viral threats. These initiatives demonstrate how AI can enhance pandemic response by providing faster and more accurate data for decision-making.
Despite its potential, experts warn that the effectiveness of AI depends on the quality of the data it receives. Biases or misrepresentations in the data could lead to skewed results, and there are ethical and fairness concerns. Although AI can improve preparedness and response times, human judgement, trust, and collaboration are essential for effectively managing future pandemics.