AR studio closed as Meta prioritises AI and metaverse

Meta Platforms has announced plans to shut down its augmented reality studio, Meta Spark, which allowed third-party creators to design custom effects for Instagram and Facebook. The platform will close on 14 January, removing third-party AR effects such as filters, masks, and 3D objects created with the studio. However, Meta's first-party AR effects will remain on its platforms, including Instagram, Facebook, and Messenger.

The decision aligns with Meta's broader strategy to prioritise investments in AI and the metaverse, a virtual environment the company views as the future of the internet. In a blog post, the company confirmed that resources would now focus on developing the next generation of experiences, particularly in new form factors like AR glasses. The shift in strategy has left many third-party creators who relied on Meta Spark searching for alternatives.

Many creators have expressed disappointment at the platform’s closure, with some considering moving to other AR creation tools like Snapchat’s Lens Studio or Unity. Despite the discontinuation, the tech giant reassured users that existing reels and stories featuring third-party AR effects will remain accessible. However, the Meta Spark Hub and studio files will no longer be available after the shutdown.

In recent months, the company has also announced the phasing out of other projects, such as its work-focused Workplace app, which will cease operating for customers by June 2026. The company's strategic focus on AI and emerging technologies reflects its ongoing efforts to redefine its core business in an increasingly competitive tech landscape.

Zuckerberg apologises for Facebook photo error involving Trump

Former President Donald Trump revealed that Meta CEO Mark Zuckerberg apologised to him after Facebook mistakenly labelled a photo of Trump as misinformation. The photo, which showed Trump raising a fist after surviving an assassination attempt at a rally in Butler, Pennsylvania, was initially flagged by Meta’s AI system. Trump disclosed the apology during an interview with FOX Business’ Maria Bartiromo, stating that Zuckerberg called him twice to express regret and praise his response to the event.

Meta Vice President of Global Policy Joel Kaplan clarified that the error occurred due to similarities between a doctored image and the real photo, leading to an incorrect fact-check label. Kaplan explained that the AI system misapplied the label due to subtle differences between the two images. Meta’s spokesperson Andy Stone reiterated that Zuckerberg has not endorsed any candidate for the 2024 presidential election and that the labelling error was not due to bias.

The incident highlights ongoing challenges for Meta as it navigates content moderation and political neutrality, especially ahead of the 2024 United States election. Additionally, the assassination attempt on Trump has sparked various online conspiracy theories. Meta’s AI chatbot faced criticism for initially refusing to answer questions about the shooting, a decision attributed to the overwhelming influx of information during breaking news events. Google’s AI chatbot Gemini similarly refused to address the incident, sticking to its policy of avoiding responses on political figures and elections.

Both Meta and Google have faced scrutiny over their handling of politically sensitive content. Meta’s recent efforts to shift away from politics and focus on other areas, combined with Google’s cautious approach to AI responses, reflect the tech giants’ strategies to manage the complex dynamics of information dissemination and political neutrality in an increasingly charged environment.

Meta’s AI bots aim to support content creators

Meta CEO Mark Zuckerberg has proposed a vision where AI bots assist content creators with audience engagement, aiming to free up their time for more crucial tasks. In an interview with internet personality Rowan Cheung, Zuckerberg discussed how these AI bots could capture the personalities and business objectives of creators, allowing fans to interact with them as if they were the creators themselves.

Zuckerberg's optimism aligns with many in the tech industry who believe AI can significantly enhance the impact of individuals and organisations. However, there are concerns about whether creators, whose audiences value authenticity, will embrace generative AI. Meta's initial rollout of AI-powered bots earlier this year faced issues, including bots making false claims and providing misleading information, raising questions about the technology's reliability.

Meta claims improvements with its latest AI model, Llama 3.1, but challenges such as hallucinations and planning errors persist. Zuckerberg acknowledges the need to address these concerns and build trust with users. Despite these hurdles, Meta continues to focus on integrating AI into its platforms while also pursuing its metaverse ambitions and competing in the tech space.

Meta's plans to introduce generative AI to its apps date back to 2023, and its increased focus on AI amid its metaverse ambitions highlights the company's broader strategic vision. However, convincing creators to rely on AI bots for fan interaction remains a significant challenge.

Meta’s new strategy: AI-powered gaming experiences

Meta is set to integrate more generative AI technology into its virtual, augmented, and mixed-reality games, aiming to boost its struggling metaverse strategy. According to a recent job listing, the company plans to create new gaming experiences that change with each playthrough and follow unpredictable paths. The initiative will initially focus on Horizon, Meta’s suite of metaverse games and applications, but could extend to other platforms like smartphones and PCs.

These developments are part of Meta’s broader effort to enhance its metaverse offerings and address the financial challenges faced by Reality Labs, the division responsible for its metaverse projects. Despite selling millions of Quest headsets, Meta has struggled to attract users to its Horizon platform and mitigate substantial operating losses. Recently, the company began allowing third-party manufacturers to license Quest software features and increased investment in metaverse gaming, spurred by CEO Mark Zuckerberg’s growing interest in the field.

Meta's interest in generative AI is not new. In 2022, Zuckerberg demonstrated a prototype called Builder Bot, which allowed users to create virtual worlds with simple prompts. Additionally, Meta's CTO, Andrew Bosworth, has highlighted the potential of generative AI tools to democratise content creation within the metaverse, likening their impact to that of Instagram on personal content creation.

Generative AI is already making waves in game development, with companies like Disney-backed Inworld using the technology to enhance game dialogues and narratives. While some game creators are concerned about the impact on their jobs, Meta is committed to significant investments in generative AI, even though CEO Zuckerberg cautioned that it might take years for these investments to become profitable.

Zuckerberg critiques closed-source AI development

In a recent interview, Mark Zuckerberg positioned Meta as a leading advocate for open-source AI, critiquing competitors for their closed-source approaches. Speaking on the YouTube channel Kallaway, Zuckerberg expressed his belief that individual companies should not monopolise AI technology to create singular products. Instead, he envisions a future with diverse AI options, supported by open-source principles.

Zuckerberg highlighted Meta's commitment to open-source AI, emphasising the importance of empowering developers and users to contribute to and innovate within the AI ecosystem. However, some experts question Meta's open-source claims. Amanda Brock, CEO of OpenUK, argued that Meta's Llama model is only partially open-source due to certain commercial stipulations. Similarly, Gartner analyst Arun Chandrasekaran noted that competitive constraints limit the openness of Meta's models.

Meta is not alone in promoting open-source AI. French startup Mistral AI and Databricks have also made strides in this area, though their offerings include restrictions. The Linux Foundation has announced the Open Platform for Enterprise AI (OPEA) to standardise open-source definitions in AI, reflecting a broader industry movement towards clarity and true openness in AI development.

China’s top prosecutor warns cybercriminals are exploiting blockchain and metaverse projects

China’s Supreme People’s Procuratorate (SPP) is ramping up efforts to combat cybercrime by targeting criminals who use blockchain and metaverse projects for illegal activities. The SPP is alarmed by the recent surge in online fraud, cyber violence, and personal information infringement. Notably, the SPP has observed a significant rise in cybercrimes committed on blockchains and within the metaverse, with criminals increasingly relying on cryptocurrencies for money laundering, making it challenging to trace their illicit wealth.

Ge Xiaoyan, the Deputy Prosecutor-General of the SPP, highlights a 64% year-on-year increase in charges related to telecom fraud, while charges linked to internet theft have risen nearly 23%, and those related to online counterfeiting and sales of inferior goods have surged by almost 86%. Procuratorates pressed charges against 280,000 individuals involved in cybercrime cases between January and November, reflecting a 36% year-on-year increase and constituting 19% of all criminal offences.

The People's Bank of China (PBoC) acknowledges the importance of regulating cryptocurrency and decentralised finance in its latest financial stability report. The PBoC emphasises the necessity of international cooperation in regulating the industry.

Despite the ban on most crypto transactions and cryptocurrency mining, mainland China remains a significant hub for crypto-mining activities.

G7 digital and tech ministers discuss AI, data flows, digital infrastructure, standards, and more

On 29-30 April 2023, G7 digital and tech ministers met in Takasaki, Japan, to discuss a wide range of digital policy topics, from data governance and artificial intelligence (AI) to digital infrastructure and competition. The outcomes of the meeting – which was also attended by representatives of India, Indonesia, Ukraine, the Economic Research Institute for ASEAN and East Asia, the International Telecommunication Union, the Organisation for Economic Co-operation and Development, the UN, and the World Bank Group – include a ministerial declaration and several action plans and commitments to be endorsed at the upcoming G7 Hiroshima Summit.

During the meeting, G7 digital and tech ministers committed to strengthening cooperation on cross-border data flows, and operationalising Data Free Flow with Trust (DFFT) through an Institutional Arrangement for Partnership (IAP). IAP, expected to be launched in the coming months, is dedicated to ‘bringing governments and stakeholders together to operationalise DFFT through principles-based, solutions-oriented, evidence-based, multistakeholder, and cross-sectoral cooperation’. According to the ministers, focus areas for IAP should include data location, regulatory cooperation, trusted government access to data, and data sharing.

The ministers further noted the importance of enhancing the security and resilience of digital infrastructures. In this regard, they committed to strengthening cooperation – within G7 and with like-minded partners – to support and enhance network resilience through measures such as ensuring and extending secure and resilient routes of submarine cables. Moreover, the group endorsed the G7 Vision of the future network in the Beyond 5G/6G era, and is committed to enhancing cooperation on research, development, and international standards setting towards building digital infrastructure for the 2030s and beyond. These commitments are also reflected in a G7 Action Plan for building a secure and resilient digital infrastructure.

In addition to expressing a commitment to promoting an open, free, global, interoperable, reliable, and secure internet, G7 ministers condemned government-imposed internet shutdowns and network restrictions. When it comes to global digital governance processes, the ministers expressed support for the UN Internet Governance Forum (IGF) as the ‘leading multistakeholder forum for Internet policy discussions’ and proposed that the upcoming Global Digital Compact reinforce, build on, and contribute to the success of the IGF and the World Summit on the Information Society (WSIS) process. Also included in the internet governance section is a commitment to protecting democratic institutions and values from foreign threats, including foreign information manipulation and interference, disinformation, and other forms of foreign malign activity. These issues are further detailed in an accompanying G7 Action Plan for an open, free, global, interoperable, reliable, and secure internet.

On matters related to emerging and disruptive technologies, the ministers acknowledged the need for ‘agile, more distributed, and multistakeholder governance and legal frameworks, designed for operationalising the principles of the rule of law, due process, democracy, and respect for human rights, while harnessing the opportunities for innovation’. They also called for the development of sustainable supply chains and agreed to continue discussions on developing collective approaches to immersive technologies such as the metaverse.

With AI high on the meeting agenda, the ministers stressed the importance of international discussions on AI governance and interoperability between AI governance frameworks, and expressed support for the development of tools for trustworthy AI (e.g. (non)regulatory frameworks, technical standards, assurance techniques) through multistakeholder international organisations. The role of technical standards in building trustworthy AI and in fostering interoperability across AI governance frameworks was highlighted both in the ministerial declaration and in the G7 Action Plan for promoting global interoperability between tools for trustworthy AI.

When it comes to AI policies and regulations, the ministers noted that these should be human-centric, based on democratic values, risk-based, and forward-looking. The opportunities and challenges of generative AI technologies were also tackled, as ministers announced plans to convene future discussions on issues such as governance, safeguarding intellectual property rights, promoting transparency, and addressing disinformation. 

On matters of digital competition, the declaration highlights the importance of both using existing competition enforcement tools and developing and implementing new or updated competition policy or regulatory frameworks ‘to address issues caused by entrenched market power, promote competition, and stimulate innovation’. A summit related to digital competition for competition authorities and policymakers is planned for the fall of 2023.

Regulating digital games

Regulating digital games has come to the forefront recently as gaming technology advances. The US Congress pushed the games industry to set up the Entertainment Software Rating Board to determine age ratings, and this was followed in Europe by the Pan European Game Information (PEGI) rating in 2003. These rating systems are similar to those used for films.

As online games have become more impactful, the question of content regulation has grown more important. The issue came into focus after the Christchurch shootings in 2019, when users of Roblox, an online gaming platform, began re-enacting the event. Since the incident, Roblox has employed ‘thousands’ of human moderators alongside artificial intelligence to check user-submitted games and police chat among its 60m daily users, who have an average age of about 13.

This has prompted debates on how to regulate social media-like conversations. Politicians have argued that the US constitution should protect in-game chat, as it is considered similar to one-to-one conversation.

Game makers are doing their best to design out bad behaviour before it occurs. For example, when users of Horizon Worlds complained of being virtually groped, a minimum distance between avatars was introduced.

Similar to content moderation on social media, online gaming will trigger new policy and governance challenges.

Africa Tech Festival 2023

Africa Tech Festival will be held from 14 until 16 November 2023 in Cape Town, South Africa.

During the three-day festival, two main events will aim to unite Africa's tech ecosystem and industry verticals.

The first is AfricaCom, which will focus on topics of Connectivity Infrastructure and Digital Inclusion, with an emphasis on: Connecting Africa's Next Billion; Digital Infrastructure Investment; Telcos of Tomorrow; Sustainable Development & Green ICT; and Future Visions: Web3, The Metaverse & Beyond, among others.

The second event, called AfricaTech, will focus on topics of Enterprise Transformation and Emerging Technologies, with an emphasis on: AHUB: Africa's Start-up Scene; AI, IoT & Disruptive Tech; and Cybersecurity & Data Protection, among others.

For more information, please visit the event page.

Indian University launches MBA programme in the metaverse and web 3.0

India-based private university UPES School of Business has announced the launch of an MBA programme on the metaverse and web 3.0, aiming to provide students with a comprehensive understanding of the metaverse, blockchain, and web 3.0 ecosystems. The two-year programme is intended to offer theoretical knowledge of the metaverse, practical experience with companies working in the metaverse arena, and immersive experiences through Meta labs.

The programme’s designers promise to offer students a hands-on experience to better understand the metaverse and its associated technologies, and to equip them with the necessary skills for the future metaverse market.