Nigeria launches first multilingual large language model for inclusive AI development 

The Nigerian government, through the Ministry of Communications, Innovation and Digital Economy, has unveiled Nigeria’s first Multilingual Large Language Model (LLM). This initiative marks a critical step towards integrating AI technology with the country’s diverse linguistic heritage.

The LLM, announced by Dr. Bosun Tijani, the Communications Minister, is part of a broader effort to position Nigeria as a leader in AI development across the continent. The model was launched following a four-day AI workshop held in Abuja, which drew over 120 AI experts. The development of the LLM is a collaboration between the National Information Technology Development Agency (NITDA), the National Centre for AI and Robotics (NCAIR), Awarritech, a Nigerian AI company, and DataDotOrg, a global tech company.

Designed to support five low-resource languages and accented English, the LLM aims to improve the representation of these languages in AI applications, promoting more inclusive and effective technology solutions. This approach is expected to enhance local content and provide a foundation for more nuanced AI-driven applications and services.

The project will also be reinforced by significant educational support, with over 7,000 fellows from the Three Million Technical Talent (3MTT) program participating. These efforts are backed by $3.5 million in seed funding contributed by international and local partners, including UNDP, UNESCO, Meta, Google, and Microsoft.

Infrastructure developments have also been announced, including the acquisition of GPUs funded by 21st Century Technologies to enhance the country’s national computing capacity. This infrastructure will support local researchers, startups, and government entities involved in critical AI projects, and will be housed at the GBB Data Centre in the Federal Capital Territory.

This strategic development positions Nigeria to leverage AI for local benefits and sets the stage for the country to play a significant role in the global AI landscape.

UK Government Introduces £8 Million Fund to Enhance AI Integration in the Maritime Sector

The UK government has introduced an £8 million Smart Shipping Acceleration Fund, overseen by Innovate UK, which seeks to propel the industry forward by supporting AI integration in various maritime operations, from autonomous vessels to advanced port logistics.

This strategic fund is part of the larger UK Shipping Office for Reducing Emissions (UK SHORE) programme, which commenced in March 2022 with a substantial allocation of £206 million. UK SHORE’s broader objectives include decarbonizing the maritime industry, enhancing safety standards, and contributing to economic growth through technological innovation. 

Maritime Minister Lord Davies emphasised the transformative potential of AI during his visit to Ocean Infinity, a Southampton-based company known for its advancements in marine robotics. According to Lord Davies, employing cutting-edge technologies to streamline ship operations and port activities is crucial for reducing the sector’s carbon footprint, improving safety for seafarers, and fostering economic expansion.

The fund will notably support feasibility studies and the development of new technologies, covering a wide array of applications from self-driving boats to automated systems that improve the efficiency and sustainability of port operations.

Industry experts have been enthusiastic about the introduction of the Smart Shipping Acceleration Fund. Eleanor Watson, an AI ethics engineer and faculty member at Singularity University, said: “It’s ultimately in the interest of businesses to embrace this. AI is advancing at a tremendous rate, and its power or potential is now far clearer to the public. The technology’s vast applicability opens up so many opportunities, and organisations cannot adapt quickly enough to new developments.”

Microsoft unveils Phi-3 Mini for AI innovation

Microsoft has introduced Phi-3 Mini, the latest addition to its lineup of lightweight AI models and the first instalment of a planned trio of small-scale models. With 3.8 billion parameters, Phi-3 Mini offers a streamlined alternative to larger language models like GPT-4, catering to users’ diverse needs. Available on Azure, Hugging Face, and Ollama, it represents a significant step towards democratising AI technology.
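As a practical illustration of that availability, the model can be pulled straight from the Hugging Face hub with the transformers library. The following is a minimal sketch, not Microsoft’s documented quickstart, and the model identifier is an assumption that should be checked against the hub.

# A minimal, hypothetical sketch of loading Phi-3 Mini from Hugging Face.
# The model id is assumed, not taken from the article; depending on the
# transformers version, trust_remote_code=True may also be required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in two sentences why small language models suit on-device use."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))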

Compared to its predecessors, Phi-3 boasts improved performance, with Microsoft touting its ability to rival larger models while maintaining a compact form factor. According to Eric Boyd, corporate vice president of Microsoft Azure AI Platform, Phi-3 Mini is on par with LLMs like GPT-3.5, offering comparable capabilities in a smaller package. This advancement underscores Microsoft’s commitment to enhancing accessibility and efficiency in AI development.

Small AI models like Phi-3 are gaining traction for their cost-effectiveness and suitability for personal devices, offering optimal performance without compromising functionality. Microsoft’s strategic focus on lightweight models aligns with industry trends, as evidenced by the company’s development of Orca-Math, a specialised model for solving mathematical problems. With Phi-3, Microsoft aims to empower developers with versatile tools catering to various applications.

Why does it matter?

As the AI landscape evolves, companies increasingly turn to tailored solutions like Phi-3 for their specific needs. With its refined capabilities in coding and reasoning, Phi-3 represents a significant milestone in Microsoft’s AI journey. While it may not match the breadth of larger models like GPT-4, Phi-3’s adaptability and affordability make it a compelling choice for custom applications and resource-conscious organisations.

Japan to launch new international AI dialogue framework

Japan is considering creating a new framework for dialogue involving like-minded nations to discuss international regulations on the appropriate use of generative AI technology. Prime Minister Fumio Kishida is expected to make the announcement at the upcoming OECD ministerial council meeting, which is scheduled to take place in Paris from 2 May to 3 May, revealed a government source.  

Kishida plans to introduce the concept of a ‘Friends’ meeting specifically focused on AI issues, which aims to extend the discussions initiated by last year’s Hiroshima AI Process. This initiative, launched at the G7 summit chaired by Japan in Hiroshima, seeks to develop global regulations to guide the development and application of AI technologies, such as ChatGPT, and to mitigate risks such as the spread of misinformation, which could threaten political stability and democracy.

The Hiroshima AI process includes a comprehensive policy framework comprising guiding principles and a code of conduct for both developers and users of advanced AI systems. These guidelines are designed to ensure safety, security and trustworthiness in AI deployment.

Given the urgency of setting universal standards, Japan views the OECD gathering as a strategic platform to draw attention to the significance of the Hiroshima AI Process and to secure broad support from the public and private sectors, according to the source.

OpenAI hires first employee in India for AI policy

OpenAI, the company behind ChatGPT, has appointed Pragya Misra, its first employee in India, to lead government relations and public policy affairs. The move comes as India prepares for a new administration that will shape AI regulations in one of the world’s largest and fastest-growing tech markets. Previously with Truecaller AB and Meta Platforms Inc., Misra brings a wealth of experience navigating policy issues and partnerships within the tech industry.

The hiring reflects OpenAI’s strategic efforts to advocate for favourable regulations amid the global push for AI governance. Given its vast population and expanding economy, India presents a significant growth opportunity for tech giants. However, regulatory complexities in India have posed challenges, with authorities aiming to protect local industries while embracing technological advancements.

Why does it matter?

OpenAI’s engagement in India mirrors competition from other tech giants like Google, which is developing AI models tailored for the Indian market to address linguistic diversity and expand internet access beyond English-speaking urban populations. OpenAI’s CEO, Sam Altman, emphasised the need for AI research to enhance government services like healthcare, underscoring the importance of integrating emerging technologies into public sectors.

During Altman’s visit to India last year, he highlighted the country’s early adoption of OpenAI’s ChatGPT. Altman has advocated for responsible AI development, calling for regulations to mitigate potential harms from AI technologies. While current AI versions may not require major regulatory changes, Altman believes that evolving AI capabilities will soon necessitate comprehensive governance.

UK union proposes bill to protect workers from AI risks

The Trade Union Congress (TUC) in the UK has proposed a bill to safeguard workers from the potential risks posed by AI-powered decision-making in the workplace. The government has maintained a relatively light-touch approach to regulating AI, preferring to rely on existing laws and regulatory bodies. The TUC’s proposal seeks to prompt the government to adopt a firmer stance on regulating AI and ensuring worker protection.

According to the TUC, the bill addresses the risks associated with AI deployment and advocates for trade union rights concerning employers’ use of AI systems. Mary Towers, a TUC policy officer, emphasised the urgency of action, stating that while AI rapidly transforms society and work, there are currently no specific AI-related laws in the UK. The proposed bill aims to fill this legislative gap and ensure everyone benefits from AI opportunities while being shielded from potential harm.

Why does it matter?

While the UK government has outlined an approach to AI regulation based on harms rather than risks, the TUC argues for more comprehensive legislation akin to the EU’s AI Act, which is the world’s first legislation to address AI risks. The TUC’s efforts, including forming an AI task force and the proposed AI bill, underscore the pressing need for legislation to protect workers’ rights and ensure that AI advancements benefit all members of society, not just a few.

Meta launches Llama 3 to challenge OpenAI

Meta Platforms has launched its latest large language model, Llama 3, and a real-time image generator designed to update pictures as users type prompts, in a bid to catch up with generative AI market leader OpenAI. The models are set to be integrated into Meta’s virtual assistant, Meta AI, which the company claims is the most advanced among its free-to-use counterparts, with performance comparisons highlighting its reasoning, coding, and creative writing capabilities against competitors like Google and Mistral AI.

Meta is giving prominence to its updated Meta AI assistant within its various platforms, positioning it to compete more directly with OpenAI’s ChatGPT. The assistant will feature prominently in Meta’s Facebook, Instagram, WhatsApp, and Messenger apps, along with a standalone website offering various functionalities, from creating vacation packing lists to providing homework help.

The development of Llama 3 is part of Meta’s efforts to challenge OpenAI’s leading position in generative AI. The company has openly released its Llama models for developers, aiming to disrupt rivals’ revenue plans with powerful free options. However, critics have raised safety concerns about the potential misuse of such openly available models by unscrupulous actors.
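For developers, the open release means the weights can be used with standard tooling. The sketch below assumes the Hugging Face transformers pipeline and an assumed, licence-gated repository id; it is an illustration rather than Meta’s official example.

# Minimal sketch: text generation with an openly released Llama 3 checkpoint.
# The repo id is an assumption; access is gated behind Meta's licence terms
# on the Hugging Face hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed, gated repo id
)

result = generator(
    "Draft a three-item packing list for a weekend hiking trip.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])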

While Llama 3 currently outputs only text, future versions will incorporate multimodal capabilities, generating text and images. Meta CEO Mark Zuckerberg emphasised the performance of Llama 3 versions against other free models, indicating a growing performance gap between free and proprietary models. The company aims to address previous issues with understanding context by leveraging high-quality data and significantly increasing the training data for Llama 3.

Baidu’s chatbot Ernie Bot surpasses 200 million users

Baidu, a leading Chinese technology company, announced that its AI chatbot, ‘Ernie Bot,’ has reached 200 million users. The announcement, made by CEO Robin Li, comes after the chatbot’s introduction in March 2023 and subsequent public release in August of that year. Ernie Bot’s application programming interface (API) is utilized 200 million times daily, indicating its widespread use. Moreover, Ernie Bot’s enterprise client base has reached 85,000.

Facing competition from domestic rivals like Moonshot AI’s ‘Kimi’ chatbot, backed by Alibaba, Ernie Bot navigates a dynamic market. Last month, Ernie Bot recorded 14.9 million visits compared to Kimi’s 12.6 million, reflecting a tightly contested AI landscape. The regulatory environment in China, requiring government approval for public AI releases, adds a layer of complexity, affecting the speed at which new technologies can enter the market.

China hosts 130 large language models (LLMs), accounting for 40% of the global total and second only to the United States. Despite these figures, Chinese AI technologies, including Ernie Bot, have yet to close the gap with global leaders like OpenAI’s ChatGPT, which recorded 1.86 billion visits last month.

This development highlights the rapid adoption and potential of Chinese AI technologies, with China aiming to become a global leader in the AI industry by 2030 and an AI market projected to reach $104.7 billion.

FDA approves new AI tool for sepsis detection

The US Food and Drug Administration (FDA) has approved the Sepsis ImmunoScore, a diagnostic tool based on artificial intelligence (AI) and machine learning, developed by Prenosis. This is the first approval for an AI tool dedicated to the early detection and prediction of sepsis, a condition resulting from a harmful immune response to infection.

Created by the Chicago-based firm, the Sepsis ImmunoScore integrates into hospital electronic health records to provide immediate diagnostic and predictive insights. The tool analyzes 22 biomarkers and clinical data to evaluate the risk of a patient developing sepsis within 24 hours of admission to an emergency department or hospital.

With its capability to generate a risk score and categorize patients into four distinct levels of risk, the Sepsis ImmunoScore aids healthcare providers in making informed treatment decisions swiftly, which is crucial for high-risk patients.
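To make the banding idea concrete, here is a purely hypothetical sketch of mapping a continuous risk score to four categories; the thresholds and labels are invented for illustration and are not the ImmunoScore’s actual, unpublished logic.

# Hypothetical illustration only: mapping a 0-100 risk score to four
# risk bands. Thresholds and labels are invented and do NOT reflect the
# Sepsis ImmunoScore's proprietary scoring.
def risk_category(score: float) -> str:
    """Return one of four illustrative risk bands for a 0-100 score."""
    if score < 25:
        return "low"
    if score < 50:
        return "medium"
    if score < 75:
        return "high"
    return "very high"

for example_score in (12.0, 48.5, 91.3):
    print(example_score, "->", risk_category(example_score))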

The approval came through the FDA’s De Novo pathway, reflecting the tool’s novelty and the moderate risk it presents. This pathway is used for innovative devices that offer a new approach to medical treatment or diagnosis.

The urgency for advanced diagnostic tools like this stems from the substantial impact of sepsis, which affects over 1.7 million adults in the U.S. each year, with about 350,000 resulting fatalities, according to the Centers for Disease Control and Prevention (CDC). Prompt diagnosis and treatment are critical, as the mortality rate increases significantly with delays in sepsis detection.

Sepsis diagnosis has historically been complicated due to the non-specific nature of its early symptoms, such as fever and increased heart rate, which can resemble those of other conditions. The ImmunoScore addresses these challenges by providing a quantifiable assessment of a patient’s risk for sepsis, supporting more targeted and timely interventions.

To develop this tool, Prenosis utilized its extensive biobank and dataset comprising more than 100,000 blood samples from over 25,000 patients. This research was crucial in identifying the biomarkers and patient parameters that are predictive of sepsis, enhancing the tool’s accuracy.

The approval of the Sepsis ImmunoScore by the FDA is a notable development in the field of medical diagnostics and represents a significant step towards the application of AI technologies in healthcare.

Israeli startup Bridgewise raises $21 million for AI investment research

Bridgewise, a Tel Aviv-based startup harnessing AI for investment research, has secured $21 million in funding amidst a surge in technology adoption within the finance industry. Founded in 2019, the company employs machine-learning algorithms trained on historical data to offer tailored investment analysis to brokerages, wealth advisors, and major exchanges like Nasdaq and the London Stock Exchange. Their robo-advisor, Bridget, can provide personalised financial insights akin to a conversational ChatGPT experience.

The latest funding round, led by SIX Group alongside Group 11 and L4 Venture Builder, aims to propel Bridgewise’s global expansion and the development of new AI tools. With 50 clients and a presence on 35 trading platforms, Bridgewise caters to users in 22 languages across 15 countries, including Australia, Japan, Singapore, and the US. Their research covers over 36,000 equities and 14,500 exchange-traded or mutual funds, continuously updated with real-time data sourced from news reports and social media.

Gaby Diamant, Bridgewise’s CEO, emphasised the company’s mission to simplify investment analysis regardless of a user’s language or familiarity with financial jargon. Bridgewise’s innovative approach extends to its micro language model (MLM), which delivers conversational analysis in the user’s preferred language. While larger language models like OpenAI’s GPT-4 boast trillion-parameter capabilities, Bridgewise’s MLM optimises for efficiency and expertise by focusing on specific datasets, a strategy the company expects to evolve into larger models tailored to specific markets and customer needs.