At the IGF 2024 preparatory session, stakeholders discussed the critical challenges surrounding digital sovereignty in developing countries, particularly in Africa. The dialogue, led by AFICTA and global experts, explored balancing data localisation with economic growth, infrastructure constraints, and regulatory policies.
Jimson Olufuye and Ulandi Exner highlighted the financial and technical hurdles of establishing local data centres, including unreliable electricity supplies and limited expertise. Nigeria’s Kashifu Inuwa Abdullahi stressed the importance of data classification, advocating for clear regulations that differentiate sensitive government data from less critical commercial information.
The conversation extended to renewable energy’s role in powering local infrastructure. Jimson Olufuye pointed to successful solar-powered centres in Nigeria, while Kossi Amessinou noted the need for governments to utilise data effectively for economic development. Participants, including Martin Koyabe and Mary Uduma, underscored the importance of harmonised regional policies to streamline cross-border data flows without compromising security.
Speakers like Melissa Sassi and Dr Toshikazu Sakano argued for public-private partnerships to foster skills development and job creation. The call for capacity building remained a recurring theme, with Rachael Shitanda and Melissa Sassi urging governments to prioritise technical training while retaining talent in their countries.
The discussion concluded on an optimistic note, acknowledging that solutions, such as renewable energy integration and smart regulations, can help achieve digital sovereignty. Speakers emphasised the need for continued collaboration to overcome economic, technical, and policy challenges while fostering innovation and growth.
Elon Musk’s AI startup, xAI, revealed on Saturday that the latest version of its Grok-2 chatbot will be available for free to all users of the social media platform X. The new version of Grok-2 is part of xAI’s continued efforts to integrate AI technology into the platform, providing users with more advanced and efficient tools for interaction.
While the chatbot will be free for everyone, Premium and Premium+ users will benefit from higher usage limits and will be the first to experience new features as they are rolled out. This tiered approach ensures that paying users receive an enhanced experience, with priority access to future updates and capabilities.
xAI has been quietly testing the new Grok-2 model for several weeks, fine-tuning its performance and features in preparation for the public release. The improved version is expected to offer better capabilities and user interactions, marking a significant step forward in AI development for social media platforms.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors, some as young as 11, into carrying out bombings and shootings, contributing to Sweden’s status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strommer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
Google’s DeepMind has introduced GenCast, a cutting-edge AI weather prediction model that outperforms the European Centre for Medium-Range Weather Forecasts’ (ECMWF) ENS, widely regarded as the global leader in operational forecasting. A study in Nature highlighted GenCast’s superior accuracy: it outperformed ENS on 97.2% of the evaluation targets in a comparative analysis of 2019 data.
Unlike earlier deterministic models, which produce a single best guess, GenCast captures a probability distribution over potential weather scenarios by generating an ensemble of 50 or more forecasts per run. This ensemble approach provides a nuanced picture of possible weather trajectories, elevating predictive reliability.
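To make the ensemble idea concrete, here is a minimal, hypothetical sketch of how event probabilities and uncertainty bands can be derived from many stochastic model runs. It does not reproduce GenCast itself, which is a learned diffusion-based model; the sample_forecast perturbation below is only a stand-in for one stochastic rollout.

```python
import numpy as np

# Toy illustration of ensemble forecasting (NOT GenCast's implementation):
# run a stochastic forecast many times, then read probabilities off the ensemble.
rng = np.random.default_rng(seed=0)

def sample_forecast(initial_state: np.ndarray, noise_scale: float = 1.0) -> np.ndarray:
    """Stand-in for one stochastic model rollout: perturbs the initial
    state to yield a single plausible future temperature field."""
    return initial_state + rng.normal(scale=noise_scale, size=initial_state.shape)

# Current temperature (degrees C) at a handful of grid points.
initial_state = np.array([12.0, 15.5, 9.8, 21.3])

# Generate an ensemble of 50 forecasts, mirroring GenCast's 50+ members.
ensemble = np.stack([sample_forecast(initial_state) for _ in range(50)])

# The ensemble yields a distribution rather than a single answer:
mean_forecast = ensemble.mean(axis=0)                  # best single estimate
p10, p90 = np.percentile(ensemble, [10, 90], axis=0)   # uncertainty band
prob_above_20 = (ensemble > 20.0).mean(axis=0)         # event probability per point

print(f"mean forecast: {mean_forecast.round(1)}")
print(f"10-90% band:   {p10.round(1)} .. {p90.round(1)}")
print(f"P(T > 20C):    {prob_above_20}")
```

The design point is that downstream users receive calibrated probabilities and uncertainty ranges, such as the chance a temperature threshold is exceeded, instead of one deterministic number.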
Google is integrating GenCast into platforms such as Search and Maps, while also planning to make real-time and historical AI-powered forecasts accessible for public and research use. With this advancement, the tech giant aims to revolutionise weather forecasting and its applications worldwide.
Google’s newest AI, the PaliGemma 2 model, has drawn attention for its ability to interpret emotions in images, a feature unveiled in a recent blog post. Unlike basic image recognition, PaliGemma 2 offers detailed captions and insights about people and scenes. However, its emotion detection capability has sparked heated debates about ethical implications and scientific validity.
Critics argue that emotion recognition is fundamentally flawed, relying on outdated psychological theories and subjective visual cues that fail to account for cultural and individual differences. Studies have shown that such systems often exhibit biases, with one report highlighting how similar models assign negative emotions more frequently to certain racial groups. Google says it performed extensive testing on PaliGemma 2 for demographic biases, but details of these evaluations remain sparse.
Experts also worry about the risks of releasing this AI technology to the public, citing potential misuse in areas like law enforcement, hiring, and border control. While Google emphasises its commitment to responsible innovation, critics like Oxford’s Sandra Wachter caution that without robust safeguards, tools like PaliGemma 2 could reinforce harmful stereotypes and discriminatory practices. The debate underscores the need for a careful balance between technological advancement and ethical responsibility.
Meta Platforms has reported that generative AI had limited influence on misinformation campaigns across its platforms in 2023. According to Nick Clegg, Meta’s president of global affairs, coordinated networks spreading propaganda struggled to gain traction on Facebook and Instagram, and AI-generated misinformation was promptly flagged or removed.
Clegg noted, however, that some of these operations have migrated to other platforms or standalone websites with fewer moderation systems. Meta dismantled around 20 covert influence campaigns this year. The company aims to refine content moderation while maintaining free expression.
Meta also reflected on its overly strict moderation during the COVID-19 pandemic, with CEO Mark Zuckerberg expressing regret over certain decisions influenced by external pressure. Looking forward, Zuckerberg intends to engage actively in policy debates on AI under President-elect Donald Trump’s administration, underscoring AI’s critical role in US technological leadership.
World Labs, the startup co-founded by AI pioneer Fei-Fei Li, has introduced groundbreaking technology that transforms single images into interactive 3D environments. Unlike existing tools, these AI-generated scenes can be explored and modified directly within a browser, offering a dynamic and engaging experience.
The startup’s system leverages a category of AI known as ‘world models,’ which simulate 3D environments with improved consistency and physical realism. While the technology is still in its early stages, it aims to revolutionise industries like gaming, filmmaking, and design by providing accessible and cost-effective tools for creating virtual worlds.
Backed by $230M in funding from prominent investors, including Andreessen Horowitz and Intel Capital, World Labs is valued at over $1B. The company plans to refine its system further and release its first product in 2025, marking a significant step in the evolution of interactive AI applications.
The US Supreme Court has decided to allow a class-action lawsuit against Meta, Facebook’s parent company, to move forward. The case stems from the Cambridge Analytica scandal, where the political consulting firm accessed personal data from 87M Facebook users and used it for voter targeting in the 2016 US presidential election. Meta had sought to block the lawsuit, but the court dismissed its appeal.
Investors claim Meta failed to fully disclose the risks of data misuse, leading to two major drops in its stock price in 2018 when the extent of the privacy breach became public. Meta has already paid a $5.1B fine and a $725M settlement with users over related allegations.
The lawsuit is one of several legal challenges facing big tech firms. A separate case against Nvidia is under review, as investors allege the company misled them about its dependency on cryptocurrency mining.
OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.
The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringed material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.
Plaintiffs are now forced to restart the time-intensive search, leading to concerns over OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to be intentional, lawyers argue that OpenAI is best equipped to perform searches and verify its use of copyrighted material. OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers like the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.
Elon Musk’s social media platform X is testing a free version of its AI chatbot, Grok, which was previously exclusive to premium subscribers. Over the weekend, reports surfaced from users and researchers indicating that some free accounts in regions like New Zealand now have access to the AI tool. While usage is capped at 10 queries every two hours for the Grok-2 model, this marks a significant expansion of the technology’s reach.
Grok, developed by Musk’s company xAI, launched earlier this year with advanced features like image understanding and image generation, the latter powered by Black Forest Labs’ FLUX.1 model. The chatbot was previously available only to paying users; the decision to extend limited access to free accounts may reflect xAI’s strategy to grow its user base and gather more feedback for refining its technology.
To use Grok for free, accounts must be at least seven days old and linked to a phone number. This move positions xAI to compete with rival offerings such as OpenAI’s ChatGPT and Google’s Gemini, while also potentially bolstering its valuation, which reportedly reached $40B in recent funding discussions. This test of free access could accelerate Grok’s development cycle and further establish xAI in the competitive AI market.