AI firm DeepSeek opens up on model deployment tech
Community gains from DeepSeek’s open-source contributions.

Chinese AI startup DeepSeek has announced its intention to share the technology behind its internal inference engine, a move aimed at enhancing collaboration within the open-source AI community.
The company’s inference engine and training framework have played a vital role in accelerating the performance and deployment of its models, including DeepSeek-V3 and R1.
Built on PyTorch, DeepSeek’s training framework is complemented by a modified version of vLLM, an open-source inference engine originally developed at UC Berkeley in the US.
While the company will not release the full source code of its engine, it will contribute its design improvements and select components as standalone libraries.
These efforts form part of DeepSeek’s broader open-source initiative, which began earlier this year with the partial release of its AI model code.
Despite this contribution, DeepSeek’s models do not meet the Open Source Initiative’s definition of open source, as the training data and full framework remain restricted.
The company cited limited resources and infrastructure constraints as reasons for not making the engine entirely open-source. Still, the move has been welcomed as a meaningful gesture towards transparency and knowledge-sharing in the AI sector.