Code Llama, an open-source AI tool for coding

Released under a community license, Code Llama is an extension of Llama 2, fine-tuned with code-specific datasets to enhance its coding capabilities.

A person's hand holding Meta's infinity symbol.

Meta has released Code Llama, a large language model (LLM) tailored for coding tasks. It can generate and discuss code based on text prompts, potentially streamlining workflows for developers and aiding coding learners. It is designed to enhance productivity and serve as an educational tool, helping programmers create robust and well-documented software.

It can generate code and natural language explanations in response to code-related prompts, and it supports code completion and debugging in popular programming languages.

Two specialised versions of Code Llama are being released alongside the base model: one tailored for the generation of Python code and another optimised for converting natural language instructions into code. Additionally, three model sizes (7B, 13B, and 34B parameters) are being made available, with the smallest capable of running on a single GPU. Meta notes that Code Llama has been trained on publicly available code.
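Code completion in models of this kind is typically handled via "fill-in-the-middle" (infilling): the model is given the code before and after a gap and asked to generate the missing span. As a rough illustration only, the sketch below assembles such a prompt using the sentinel-token format described in the Code Llama paper; the helper name and exact token spacing are assumptions, and in practice the prompt would be built and consumed by an inference library rather than by hand.

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    # Assemble a fill-in-the-middle prompt using the sentinel tokens
    # (<PRE>, <SUF>, <MID>) described in the Code Llama paper; the
    # model is expected to generate the missing middle after <MID>.
    # Token spacing here is an assumption for illustration.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Hypothetical example: ask the model to fill in a function body,
# given its signature (prefix) and its final return line (suffix).
prompt = build_infill_prompt(
    "def remove_non_ascii(s: str) -> str:\n",
    "\n    return result\n",
)
```

A completion model fine-tuned this way can then suggest the body of the function, which is what powers editor-style autocomplete.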

Why does it matter?

Open-source software matters for AI policy because it makes advanced AI technology accessible to more people, fostering innovation and trust.

The aim of Code Llama and similar language models is to enhance developer productivity by automating repetitive tasks, allowing programmers to focus on more creative aspects of their work. Code Llama continues Meta’s open-source approach, built on the LLaMA models. This, in turn, means that anyone can look under its bonnet to evaluate its capabilities and address vulnerabilities.

Is it really an open-source model?

Code Llama is not made available under conventional open-source software licenses, which typically permit unrestricted commercial usage. Under the licensing terms provided by Meta, users are subject to certain limitations, including a restriction on employing these models in applications or services with a user base exceeding 700 million monthly users.

Along these lines, the Open Source Initiative (OSI) has recently voiced reservations regarding Meta’s use of the term ‘open source.’ It argues that the licensing terms for the LLaMA models, notably LLaMA 2, do not align with the OSI’s open-source definition, because they restrict commercial usage and prohibit certain application domains.

The Open Source Initiative (OSI) is a non-profit organisation based in California. It plays a key role in maintaining the rules that define open-source software, known as the Open Source Definition. OSI serves as a guardian for the open-source movement and is a leading advocate for its principles and policies.

What are its challenges?

These AI models present several challenges. Firstly, how will they affect the workforce, and what are the repercussions for the future of work? Secondly, their potential cybersecurity challenges come to the forefront, particularly whether these models could empower malicious actors to amplify their capabilities. Finally, these models can sometimes be inaccurate, as shown in some studies.

1. Workforce Impact

The first challenge is quite evident: as automation becomes more prevalent, it places pressure on the workforce. Experts’ analyses have already indicated that most jobs will need to adapt to AI technology. One study even revealed that almost half of Americans fear job automation by AI. For those in the IT industry, this leads to concerns about job displacement for workers whose roles primarily involve repetitive coding work.

2. Cybersecurity

AI-powered cyberattacks are on the rise, and their consequences can be devastating. As AI technology advances, there’s a growing concern that cybercriminals might be able to launch attacks that are more sophisticated and harder to detect. Tools like Code Llama could also help automate cyberattacks, including activities such as generating phishing emails, finding vulnerabilities in code, and launching automated malware attacks.

3. Inaccuracies in code

A recent in-depth analysis, the first of its kind, evaluated ChatGPT’s responses to more than five hundred coding questions posted on Stack Overflow. Surprisingly (or not), the study revealed that ChatGPT provided inaccurate answers in over 50% of cases. Consequently, one of the key takeaways from this study underscores the importance of relying on authoritative sources, like reputable websites or experienced professionals, when seeking trustworthy answers to software engineering queries.