ChatGPT: A year in review

Within a mere two months of its launch, ChatGPT amassed 100 million monthly users. Fast forward a year, and the user base has surged to an impressive 200 million. This remarkable growth surprised not only major tech firms but also policymakers. Below, we delve into the trends that have shaped both industries and regulatory frameworks.


As ChatGPT turns one, the significance of its impact cannot be overstated. What started as a pioneering step in AI has swiftly evolved into a ubiquitous presence, transforming abstract notions of AI into an everyday reality for many, or at least into a topic on everyone’s lips.

While ChatGPT and similar large language models (LLMs) have so far unveiled only glimpses of the possibilities within AI, they are already the pillars of a new technological revolution. Predictions agree that these models will become increasingly personalised and context-specific, leveraging proprietary data for refined model training and industry-specific automation.


Important milestones throughout the year

[Timeline: key ChatGPT milestones during its first year]

Source: https://www.globalxetfs.com/

Since its public launch in November 2022, ChatGPT has undergone substantial evolution. Initially, it operated solely as a text generator, limited to responses derived from training data gathered up to September 2021. It also tended to fabricate information when it lacked answers, introducing the term ‘hallucination’ into the discourse around AI.

Today, the evolved iteration of ChatGPT, trained on data up to April 2023, boasts expanded capabilities. It harnesses Microsoft’s Bing search engine and internet resources to access more current information. Moreover, it has become a product platform, allowing users to include images or documents in their queries and to converse with it through spoken language.

Tech race for AI dominance


In January 2023, ChatGPT achieved 100 million monthly users. The sudden surge in interest in generative AI has taken major tech companies by surprise. In addition to ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google’s Bard, have been released. These developments are reshaping the technological terrain. Tech giants are pouring resources into what they perceive as pivotal future technological infrastructure, each hoping to shape the narrative of the AI revolution. However, a significant challenge looming ahead is the potential dominance of only a select few players in this landscape.

Venture capitalists invested almost five times as much in generative AI firms in the first half of 2023 as during the same period in 2022. Even excluding the $10 billion investment by Microsoft unveiled in January, VC funding is still up nearly 58% compared with the first half of 2022.

The anticipated economic impact is substantial, with PwC forecasting that AI could add over $15 trillion to the global economy by 2030. The two largest economies, the US and China, are at the forefront of this new ‘AI arms race.’

According to the 2023 AI Index Report, the United States and China have consistently dominated AI investment, with the US in the lead since 2013, accumulating close to $250 billion across 4,643 companies. The momentum shows no signs of slowing: in 2022, the US witnessed the emergence of 524 new AI startups, which drew in an impressive $47 billion in non-government funding. China saw substantial investment as well, with 160 newly established AI startups securing an average of $71 million each in 2022.

Many of these new startups are leveraging the ChatGPT API to build specific use-case scenarios for users.
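To illustrate the pattern, here is a minimal sketch of such a use-case wrapper built on the ChatGPT API. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the legal-summary use case and the model choice are illustrative, not drawn from any particular startup.

```python
# Minimal sketch: wrapping the ChatGPT API in a domain-specific assistant.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def legal_summary(text: str) -> str:
    """Toy use-case wrapper: summarise a legal clause in plain English."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You summarise legal clauses in plain English."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


print(legal_summary("The party of the first part shall indemnify..."))
```

The value such startups add typically lies in the system prompt, the domain data, and the workflow built around the call, rather than in the call itself.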


AI governance – to regulate or not to regulate

In the midst of AI’s incredible advancements, there’s a shadow of concern. The worry about AI generating misleading or inappropriate content, often referred to as ‘hallucinating’, remains a significant challenge. The fear of AI also extends to broader societal implications, such as biases, job displacement, data privacy, the spread of disinformation, and AI’s impact on decision-making processes.

The meteoric rise of OpenAI was one of the main reasons for the swift action from policymakers on AI regulation. OpenAI CEO Sam Altman appeared before the US Congress and the European Commission during negotiations on new AI regulatory frameworks in the United States and the European Union.

The United States

The global landscape of AI regulation is gradually taking shape. On 30 October, President Biden issued an executive order mandating that AI developers provide the federal government with evaluations of the data used to train and test their AI applications, their performance measurements, and their vulnerability to cyberattacks. The Biden-Harris administration is also making progress on domestic AI regulation through the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the voluntary commitments from AI companies to manage the risks posed by the technology. This amounts to a US-government-endorsed self-regulation approach, and it was welcomed by the industry.

In Congress, there are several bipartisan proposals. Just last week, prominent Senators Amy Klobuchar and John Thune and their colleagues introduced the bipartisan ‘AI Research, Innovation, and Accountability Act’ to boost innovation while increasing transparency, accountability, and security for high-risk AI applications.

European Union

The tiered approach, as currently envisioned in the EU AI Act, would mean categorising AI into different risk bands, with more or less regulation depending on the risk level.
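To make the tiered idea concrete, here is an illustrative sketch that maps example use cases to simplified risk bands. The categories and obligations are paraphrased from the draft Act, and the example mappings are shorthand for illustration, not legal text.

```python
# Illustrative sketch of the EU AI Act's tiered idea: the regulatory burden
# scales with the risk band a system falls into. Simplified, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (conformity assessment, human oversight)"
    LIMITED = "transparency duties (e.g. disclose that a chatbot is AI)"
    MINIMAL = "largely unregulated"


# Hypothetical example mappings, loosely based on examples in the draft Act.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```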

In the EU, two and a half years after the draft rules were proposed, negotiations on the final version hit a significant snag, as France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models. It seems that the EU’s largest economies are moving away from stringent AI regulation and inclining towards a self-regulatory approach akin to the US model. Many speculate that this shift is a consequence of intense lobbying by Big Tech. The three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to retreat from this approach. France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, proposing ‘mandatory self-regulation through codes of conduct’ for foundation models.

China

China was the first country to introduce interim measures on generative AI, which came into effect in August this year.

What is the aim? To solidify China’s role as a key player in shaping global standards for AI regulation. China also unveiled its Global AI Governance Initiative during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums. This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act.


At the international level

Beyond national efforts, initiatives include the establishment of a High-Level Advisory Body on AI by the UN Secretary-General, the agreement by the group of seven wealthy nations (G7) on the Hiroshima guiding principles and an accompanying AI code of conduct for companies, the AI Safety Summit at Bletchley Park, and more.

The UN Security Council on AI

The UN Security Council held its first-ever debate on AI (18 July), delving into the technology’s opportunities and risks for global peace and security. A few experts were also invited to participate in the debate chaired by Britain’s Foreign Secretary James Cleverly. In his briefing to the 15-member council, UN Secretary-General Antonio Guterres promoted a risk-based approach to regulating AI and backed calls for a new UN entity on AI, akin to models such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change.

G7

The G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI. A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain aspects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
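As a toy illustration of the provenance idea (not a real watermarking scheme, which would embed a statistical signal in the generated tokens themselves), the sketch below tags generated text with a keyed HMAC so that a verifier holding the key can confirm origin and detect edits. The signing key and model identifier are hypothetical.

```python
# Toy provenance sketch: bind generated text to its source model with an HMAC.
# This is metadata-level illustration, not a statistical watermark; the
# signing key and model identifier below are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the AI provider


def tag_output(text: str, model_id: str) -> dict:
    """Attach a provenance record binding the text to the generating model."""
    mac = hmac.new(SECRET_KEY, f"{model_id}:{text}".encode(), hashlib.sha256)
    return {"text": text, "model": model_id, "provenance": mac.hexdigest()}


def verify(record: dict) -> bool:
    """Recompute the tag; any edit to the text or model field invalidates it."""
    mac = hmac.new(SECRET_KEY, f"{record['model']}:{record['text']}".encode(),
                   hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), record["provenance"])


record = tag_output("An AI-generated paragraph.", "example-model-v1")
print(verify(record))           # True: untampered output verifies
record["text"] += " (edited)"
print(verify(record))           # False: tampering breaks the tag
```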

UK AI Safety Summit

The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.

The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While the declaration addresses these immediate concerns, its focus shifts to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm.

The 28 signatory countries, plus the EU, include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA. Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. The summit also resulted in an agreement to form an international advisory panel on AI risk.

UN’s High-Level Advisory Body on AI

The UN has taken a unique approach by launching a High-Level Advisory Body on AI comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body plans to publish its first recommendations by the end of this year, with final recommendations expected next year. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.

Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks. 

What can we expect from language models in the future?

If the industry keeps its focus on research and investment, 2024 could bring some massive breakthroughs. At OpenAI, attention is on the Q* project, which can reportedly solve certain maths problems and is alleged to have a higher reasoning capacity, making it a potential step towards artificial general intelligence (AGI). If language models expand their powers in the realm of maths and reasoning, they will reach higher levels of usefulness. Many figures, including Elon Musk, predict that ‘digital superintelligence’ will exist within the next five to ten years.

When it comes to regulation, the spotlight will remain on ensuring the safety of AI usage and removing bias from future datasets, alongside further calls for global collaboration in AI governance and for greater transparency of these models.

Must read

Four seasons of AI:  From excitement to clarity in the first year of ChatGPT – Diplo
ChatGPT was launched by OpenAI on the last day of November 2022. It triggered a lot of excitement. Over the last 12 months, the winter of AI excitement was… Read more.
How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation – Diplo
Our quest for effective AI governance can be informed by the legal wisdom of Valtazar Bogišić, drafter of the Montenegrin civil code (1888). Read more.
How can we deal with AI risks?
In the fervent discourse on AI governance, there’s an oversized focus on the risks from future AI compared to more immediate risks, such as short-term risks that include the protection of intellectual property. In this blog post, Jovan Kurbalija explores how we can deal with AI risks. Read more.
Jua Kali AI: Bottom-up algorithms for a Bottom-up economy – Diplo
This text is about bottom-up AI for the bottom-up economy. Read more.
Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems? – Diplo
We examine the use of AI ‘hallucinations’ in diplomacy, showing how AI analysis of UN speeches can reveal unique insights, and argue that the unexpected outputs of AI could lead… Read more.