How OpenAI designs Sora’s recommendation feed for creativity and safety
The feed philosophy emphasises balancing safety and expression, filtering harmful content while giving creators room to experiment.
OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.
To achieve these goals, OpenAI says it prioritises creativity over passive consumption. The ranking is steered not simply for engagement, but to encourage active participation. Users can also influence what they see via steerable ranking controls.
Another guiding principle is putting users in control. For instance, parental settings let caretakers turn off feed personalisation or continuous scroll for teen accounts.
OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.
In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.
The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hate, self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.
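The two-layer approach described above, blocking at creation time and filtering again in the feed, can be sketched as a simple pipeline. This is purely illustrative: the category names, thresholds, and field names are assumptions, not OpenAI's actual system.

```python
# Illustrative sketch of layered moderation; categories, thresholds,
# and field names are hypothetical, not OpenAI's implementation.

BLOCKED_CATEGORIES = {"violent", "sexual", "hate", "self_harm"}

def generation_guardrail(prompt_categories):
    """Layer 1: block disallowed content at creation time."""
    return not (BLOCKED_CATEGORIES & set(prompt_categories))

def feed_filter(post):
    """Layer 2: remove or deprioritise flagged posts in the feed."""
    if post["safety_score"] < 0.2:          # hypothetical threshold: remove entirely
        return None
    if post["safety_score"] < 0.5:          # hypothetical threshold: deprioritise
        return dict(post, rank_penalty=0.5)
    return post

# A post that passes creation but is borderline in the feed
print(generation_guardrail(["humour"]))          # allowed at creation
print(feed_filter({"id": 1, "safety_score": 0.4}))  # kept, but deprioritised
```

The point of the sketch is the layering: content blocked at generation never reaches the feed, while the feed layer handles borderline material without over-censoring.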
On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.
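One common way such signals are combined in recommender systems is a weighted score per post. The sketch below is a hypothetical illustration of that idea using the signal types listed above; the weights, field names, and linear form are assumptions, not OpenAI's actual ranking algorithm.

```python
# Hypothetical ranking sketch: combine engagement, connection, and safety
# signals into a single score. Weights and fields are illustrative only.

def rank_score(post):
    weights = {
        "likes": 1.0,
        "comments": 2.0,
        "remixes": 3.0,       # active participation weighted above passive signals
        "known_author": 5.0,  # connection bias toward people you know
        "safety": -100.0,     # strong suppression of flagged content
    }
    return (
        weights["likes"] * post.get("likes", 0)
        + weights["comments"] * post.get("comments", 0)
        + weights["remixes"] * post.get("remixes", 0)
        + weights["known_author"] * (1 if post.get("known_author") else 0)
        + weights["safety"] * post.get("safety_flag", 0.0)
    )

posts = [
    {"id": "a", "likes": 10, "remixes": 2, "known_author": True},
    {"id": "b", "likes": 50, "safety_flag": 1.0},
]
ranked = sorted(posts, key=rank_score, reverse=True)
# Post "a" outranks the flagged post "b" despite fewer likes.
```

The safety term dominating the engagement terms mirrors the stated design: engagement signals feed the ranking, but safety signals can override them.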
OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.