Apple explores AI partnerships for iPhones in China

According to sources familiar with the matter, Apple is in early talks with Tencent and ByteDance to integrate their AI models into iPhones sold in China. This comes as Apple rolls out OpenAI’s ChatGPT in other markets, while regulatory restrictions prevent the chatbot’s availability in China. To comply with local rules and counter its declining market share, Apple is exploring partnerships with Chinese firms that already have government-approved AI models.

Potential partners include ByteDance’s Doubao and Tencent’s Hunyuan, part of a growing field of AI services in China. Although Apple previously discussed using Baidu’s Ernie model, reports suggest technical disagreements halted progress. Baidu’s shares dropped following news of these challenges, while Tencent’s stock saw a boost.

Apple faces increasing pressure in China’s competitive smartphone market, where domestic rivals like Huawei are surging ahead. Huawei’s recent AI-equipped models have attracted consumers, contributing to a 42% spike in sales. In contrast, Apple’s third-quarter sales dipped slightly, underscoring the need for a successful AI integration strategy to regain momentum in China.

UN discusses ethical tech and inclusion at IGF 2024

Speakers at IGF 2024 highlighted digital innovation within the United Nations system, demonstrating how emerging technologies are enhancing services and operational efficiency. Representatives from UNHCR, UNICEF, the UN Pension Fund, and UNICC shared their organisations’ progress and collaborative efforts.

Michael Walton, Head of Digital Services at UNHCR, detailed initiatives supporting refugees through digital tools. These include mobile apps for services and efforts to counter misinformation. Walton stressed the importance of digital inclusion and innovation to bridge gaps in education and access for vulnerable groups.

Fui Meng Liew, Chief of Digital Center of Excellence at UNICEF, emphasised safeguarding children’s data rights through a comprehensive digital resilience framework. UNICEF’s work also involves developing digital public goods, with a focus on accessibility for children with disabilities and securing data privacy.

Dino Cataldo Dell’Accio from the UN Pension Fund presented a blockchain-powered proof-of-life system that uses biometrics and AI to support e-government services for the ageing population. This system ensures beneficiaries’ security and privacy while streamlining verification processes. Similarly, Sameer Chauhan of UNICC showcased digital solutions like AI chatbots and cybersecurity initiatives supporting UN agencies.

The session’s collaborative tone extended into discussions of the UN Digital ID project, which links multiple UN agencies. Audience members raised questions on accessibility, with Nancy Marango and Sary Qasim suggesting broader use of these solutions to support underrepresented communities globally.

Efforts across UN organisations reflect a shared commitment to ethical technology use and digital inclusion. The panellists urged collaboration and transparency as key to addressing challenges such as data protection and equitable access while maintaining focus on innovation.

Democratising AI: the promise and pitfalls of open-source LLMs

At the Internet Governance Forum 2024 in Riyadh, the session ‘Democratising Access to AI with Open-Source LLMs’ explored a transformative vision: a world where open-source large language models (LLMs) democratise AI, making it accessible, equitable, and responsive to local needs. However, this vision remains a double-edged sword, revealing immense promise and critical challenges.

Panellists, including global experts from India, Brazil, Africa, and the Dominican Republic, championed open-source AI to prevent monopolisation by large tech companies. Melissa Muñoz Suro, Director of Innovation in the Dominican Republic, showcased Taina, an AI project designed to reflect the nation’s culture and language. ‘Open-source means breaking the domino effect of big tech reliance,’ she noted, emphasising that smaller economies could customise AI to serve their unique priorities and populations.

Yet, as Muñoz Suro underscored, resource constraints are a significant obstacle. Training open-source models requires computational power, infrastructure, and expertise, which are luxuries many Global South nations lack. Abraham Fifi Selby, a Global South AI expert, echoed this, calling for ‘public-private partnerships and investment in localised data infrastructure’ to bridge the gap. He highlighted the significance of African linguistic representation, emphasising that AI trained in local dialects is essential to addressing regional challenges.

The debate also brought ethical and governance concerns into sharp focus. Bianca Kremer, a researcher and activist from Brazil, argued that regulation is indispensable to combat monopolies and ensure AI fairness. She cited Brazil’s experience with algorithmic bias, pointing to an incident where generative AI stereotypically portrayed a Brazilian woman from a favela (urban slum) as holding a gun. ‘Open-source offers the power to fix these biases,’ Kremer explained, but insisted that robust regulation must accompany technological optimism.

Despite its potential, open-source AI risks misuse and dwindling incentives for large-scale investments. Daniele Turra from ISA Digital Consulting proposed redistributing computational resources—suggesting mechanisms like a ‘computing tax’ or infrastructure sharing by cloud giants to ensure equitable access. The session’s audience also pushed for practical solutions, including open datasets and global collaboration to make AI development truly inclusive.

While challenges persist, trust, collaboration, and local capacity-building remain critical to open-source AI’s success. As Muñoz Suro stated, ‘Technology should make life simpler, happier, and inclusive, and open-source AI, if done right, is the key to unlocking this vision.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Experts at IGF 2024 address the dual role of AI in elections, emphasising empowerment and challenges

At IGF 2024, panellists explored AI’s role in elections, its potential for both empowerment and disruption, and the challenges it poses to democratic processes. Moderator Tapani Tarvainen led the discussion with contributions from Ayobangira Safari Nshuti, Roxana Radu, Babu Ram Aryal, and other experts.

Speakers noted that AI had primarily been used for self-promotion in campaigns, helping smaller candidates compete with limited resources. Roxana Radu highlighted AI’s positive role in voter outreach in India but warned of risks such as disinformation and public opinion manipulation. Ayobangira Safari Nshuti pointed to algorithmic biases and transparency issues in platforms as critical concerns, citing a recent case in Romania where AI-enabled manipulation disrupted an election.

Accountability of social media platforms became a focal point. Platforms increasingly rely on AI for content moderation, but their effectiveness in languages with limited online presence remains inadequate. Babu Ram Aryal stressed the need for stronger oversight, particularly in multilingual nations, while Dennis Redeker underscored the challenges of balancing regulation with free speech.

Panellists called for holistic solutions to safeguard democracy. Suggestions included enhancing platform transparency, implementing robust digital literacy programmes, and addressing social factors like poverty that exacerbate misinformation. Nana, an AI ethics specialist, advocated for proactive governance to adapt electoral institutions to technological realities.

The session concluded with a recognition that AI’s role in elections will continue to evolve. Panellists urged collaborative efforts between governments, civil society, and technology companies to ensure election integrity and maintain public trust in democratic systems.

Basis lands $34 million to revolutionise accounting

Basis, an AI startup, has secured $34 million in a Series A funding round to develop its AI-powered accounting automation product. The round, led by Khosla Ventures, attracted a diverse group of investors, including NFDG (the AI-focused fund managed by former GitHub CEO Nat Friedman and ex-Apple executive Daniel Gross), OpenAI board members Larry Summers and Adam D’Angelo, and Google’s chief scientist Jeff Dean.

The New York-based company is part of a growing group of AI startups creating autonomous agents—systems capable of performing tasks independently. Basis’ product, designed specifically for accounting firms, can handle various workflows such as entering transactions, verifying data accuracy, and integrating with popular ledger systems like QuickBooks and Xero. The product has already shown promising results, with large firms like Wiss reporting a 30% reduction in time spent on manual accounting tasks. Basis functions similarly to a junior accountant, allowing staff to focus on reviewing the AI’s work rather than completing tasks themselves.

Basis also aims to address the critical shortage of accountants in the US, exacerbated by retiring baby boomers and a decline in younger generations entering the profession. According to the Bureau of Labor Statistics, the accounting sector employs over 3 million people, but the number of candidates sitting for the CPA exam fell by 33% between 2016 and 2021. The shortage has led many firms to outsource work to countries like India. Moreover, with AI’s potential to automate tasks traditionally performed by accountants, the sector is expected to experience significant disruption. A 2023 OpenAI paper suggested that automation powered by large language models could eventually impact all accountant and auditor roles.

Partnership aims to advance AI in electric vehicles

Synopsys and SiMa.ai, two Silicon Valley-based companies, have announced a partnership to accelerate the development of energy-efficient AI chips designed for automotive applications. Synopsys, a leader in chip-design software, will collaborate with SiMa.ai, a startup known for its low-power hardware and software tailored for diverse AI functions.

The collaboration aims to meet the increasing demand for advanced AI technologies in electric vehicles, where efficient energy use is critical. SiMa.ai’s technology supports a range of applications, from driver-assistance systems that improve safety to voice assistants enabling hands-free commands. These tools often require different types of hardware, and the partnership allows automakers to simulate and select the best combinations for their needs.

The companies see this as a step towards integrating features like voice assistants into cars within the next three years. SiMa.ai’s CEO, Krishna Rangasayee, highlighted the importance of adapting data centre-level AI performance into power-efficient solutions for vehicles, ensuring both high performance and minimal energy consumption.

Inclusive AI governance: Perspectives from the Global South

At the 2024 Internet Governance Forum (IGF) in Riyadh, the Data and AI Governance coalition convened a panel to explore the challenges and opportunities of AI governance from the perspective of the Global South. The discussion delved into AI’s impacts on human rights, democracy, and economic development, emphasising the need for inclusive and region-specific frameworks.

Towards inclusive frameworks

Ahmad Bhinder, representing the Digital Cooperation Organization, stressed the importance of regional AI strategies. He highlighted the development of a self-assessment tool for AI readiness, designed to guide member states in governance and capacity development.

Similarly, Melody Musoni, Policy Officer at ECDPM, pointed to the African Union’s continental strategy as a promising example of unified AI governance. Elise Racine, a doctoral candidate at the University of Oxford, proposed reparative algorithmic impact assessments, underscoring the need to address historical inequities and providing a blueprint for more equitable AI systems.

Ethics, rights, and regional challenges

The ethical dimensions of AI took centre stage, with Bianca Kremer, a member of the board of CGI.br and a professor at FGV Law School Rio, highlighting algorithmic bias in Brazil, where ‘90.5% of those arrested through facial recognition technologies are black and brown.’ This stark statistic underscored the urgent need to mitigate AI-driven discrimination.

Guangyu Qiao Franco from Radboud University emphasised the underrepresentation of Global South nations in AI arms control discussions, advocating for an inclusive approach to global AI governance.

Labour, economy, and sustainability

The panel explored AI’s economic and environmental ramifications. Avantika Tewari, PhD candidate at the Center for Comparative Politics and Political Theory at Jawaharlal Nehru University in New Delhi, discussed the exploitation of digital labour in AI development, urging fair compensation for workers in the Global South.

Rachel Leach raised concerns about the environmental costs of AI technologies, including embodied carbon, and criticised the lack of sustainability measures in current AI development paradigms.

Regional and global collaboration

Speakers highlighted the necessity of cross-border cooperation. Sizwe Snail ka Mtuze and Rocco Saverino, PhD candidate at the Free University of Brussels, examined region-specific approaches in Africa and Latin America, stressing the importance of tailored frameworks.

Observations on Brazil from Luca Belli, Professor at FGV Law School and Director of the Center for Technology and Society, revealed gaps between AI regulation and implementation, emphasising the need for pragmatic, context-sensitive policies.

Actionable pathways forward

The discussion concluded with several actionable recommendations: fostering inclusive AI governance frameworks, implementing reparative assessments, addressing environmental and labour impacts, and prioritising digital literacy and regional collaboration.

‘Inclusive governance is not just a moral imperative but a practical necessity,’ Bhinder remarked, encapsulating the panel’s call to action. The session underscored the critical need for global cooperation to ensure AI serves humanity equitably.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Parliamentary panel at IGF discusses ICTs and AI in counterterrorism efforts

At the 2024 Internet Governance Forum (IGF) in Riyadh, a panel of experts explored how parliaments can harness information and communication technologies (ICTs) and AI to combat terrorism while safeguarding human rights. The session, titled ‘Parliamentary Approaches to ICT and UN SC Resolution 1373,’ emphasised the dual nature of these technologies—as tools for both law enforcement and malicious actors—and highlighted the pivotal role of international collaboration.

Legislation and oversight in a digital era

David Alamos, Chief of the UNOCT programme on Parliamentary Engagement, set the stage by underscoring the responsibility of parliaments to translate international frameworks like UN Security Council Resolution 1373 into national laws. ‘Parliamentarians must allocate budgets and exercise oversight to ensure counterterrorism efforts are both effective and ethical,’ Alamos stated.

Akvile Giniotiene of the UN Office of Counterterrorism echoed this sentiment, emphasising the need for robust legal frameworks to empower law enforcement in leveraging new technologies responsibly.

Opportunities and risks in emerging technologies

Panellists examined the dual role of ICTs and AI in counterterrorism. Abdelouahab Yagoubi, a member of Algeria’s National Assembly, highlighted AI’s potential to enhance threat detection and predictive analysis.

At the same time, Jennifer Bramlette from the UN Counterterrorism Committee stressed the importance of digital literacy in fortifying societal resilience. By contrast, Kamil Aydin and Emanuele Loperfido of the OSCE Parliamentary Assembly cautioned against the misuse of these technologies, pointing to risks like deepfakes and cybercrime-as-a-service that enable terrorist propaganda and disinformation campaigns.

The case for collaboration

The session spotlighted the critical need for international cooperation and public-private partnerships to address the cross-border nature of terrorist threats. Giniotiene called for enhanced coordination mechanisms among nations, while Yagoubi praised the Parliamentary Assembly of the Mediterranean for fostering knowledge-sharing on AI’s implications.

‘No single entity can tackle this alone,’ Alamos remarked, advocating for UN-led capacity-building initiatives to support member states.

Balancing security with civil liberties

A recurring theme was the necessity of balancing counterterrorism measures with the protection of human rights. Loperfido warned against the overreach of security measures, noting that ethical considerations must guide the development and deployment of AI in law enforcement.

An audience query on the potential misuse of the term ‘terrorism’ further underscored the importance of safeguarding civil liberties within legislative frameworks.

Looking ahead

The panel concluded with actionable recommendations, including updating the UN Parliamentary Handbook on Resolution 1373, investing in digital literacy, and ensuring parliamentarians are well-versed in emerging technologies.

‘Adapting to the rapid pace of technological advancement while maintaining a steadfast commitment to the rule of law is paramount,’ Alamos said, encapsulating the session’s ethos. The discussion underscored the indispensable role of parliaments in shaping a global counterterrorism strategy that is both effective and equitable.

NeurIPS conference showcases AI’s rapid growth

The NeurIPS conference, AI’s premier annual gathering, drew over 16,000 computer scientists to British Columbia last week, highlighting the field’s rapid growth and transformation. Once an intimate meeting of academic outliers, the event has evolved into a showcase for technological breakthroughs and corporate ambitions, featuring major players like Alphabet, Meta, and Microsoft.

Industry luminaries like Ilya Sutskever and Fei-Fei Li discussed AI’s evolving challenges. Sutskever emphasised AI’s unpredictability as it learns to reason, while Li called for expanding beyond 2D internet data to develop ‘spatial intelligence’. The conference, delayed a day to avoid clashing with a Taylor Swift concert, underscored AI’s growing mainstream prominence.

Venture capitalists, sponsors, and tech giants flooded the event, reflecting AI’s lucrative appeal. The number of research papers accepted has surged tenfold in a decade, and discussions focused on tackling the costs and limitations of scaling AI models. Notable attendees included Meta’s Yann LeCun and Google DeepMind’s Jeff Dean, who advocated for ‘modular’ and ‘tangly’ AI architectures.

In a symbolic moment of AI’s widening reach, 10-year-old Harini Shravan became the youngest ever to have a paper accepted, illustrating how the field now embraces new generations and diverse ideas.

Meta enhances Ray-Ban smart glasses with AI video and translation

Meta Platforms has introduced significant upgrades to its Ray-Ban Meta smart glasses, adding AI video capabilities and real-time language translation. The updates, announced during Meta’s Connect conference in September, are now available through the v11 software rollout for Early Access Program members.

The new AI video feature lets the smart glasses process visuals and answer user queries in real time. Additionally, the glasses can now translate speech between English and Spanish, French, or Italian, providing translations via open-ear speakers or as text on a connected phone.

Meta also integrated the Shazam music identification app into the glasses, enhancing their functionality for users in the US and Canada. Earlier AI upgrades, such as setting reminders and scanning QR codes via voice commands, continue to expand the glasses’ utility.