Meta launches new AI model for evaluating AI systems

Meta expands its AI toolkit with new models and datasets, including tools designed to reduce human involvement in AI development.

Meta has released new AI models, including a tool called the Self-Taught Evaluator, which aims to reduce human involvement in the AI development process. The company’s latest batch of models is part of its ongoing efforts to enhance AI accuracy and efficiency across complex fields.

The new tool uses a ‘chain of thought’ technique, similar to one employed by OpenAI, breaking problems into logical steps to improve accuracy on questions in science, coding, and mathematics. Meta trained the evaluator entirely on AI-generated data, eliminating the need for human annotation at that stage.
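In practice, this style of evaluation usually means prompting a judge model to reason step by step before committing to a verdict. The sketch below illustrates the general pattern; the prompt wording, the `Verdict: A/B` convention, and the parsing logic are illustrative assumptions, not Meta's actual prompts or code, and the model call itself is replaced with a canned response.

```python
# Minimal sketch of a chain-of-thought "LLM as judge" prompt and verdict
# parser. The template and verdict format are hypothetical; any real
# text-generation backend would stand in where the canned string is used.
import re

JUDGE_TEMPLATE = """You are evaluating two responses to the same instruction.
Reason step by step about correctness, completeness, and clarity,
then finish with a line of the form "Verdict: A" or "Verdict: B".

Instruction: {instruction}

Response A: {response_a}

Response B: {response_b}
"""


def build_judge_prompt(instruction: str, response_a: str, response_b: str) -> str:
    """Fill the judge template with one instruction and two candidate answers."""
    return JUDGE_TEMPLATE.format(
        instruction=instruction, response_a=response_a, response_b=response_b
    )


def parse_verdict(judgment: str) -> str | None:
    """Extract the final A/B verdict from the judge's reasoning trace."""
    match = re.search(r"Verdict:\s*([AB])", judgment)
    return match.group(1) if match else None


# Demo with a canned judgment in place of a live model call.
canned = "Response A applies the formula correctly; B skips a step.\nVerdict: A"
assert parse_verdict(canned) == "A"
```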

Reliable AI-based evaluation of other AI models could eventually replace costly processes such as Reinforcement Learning from Human Feedback, which depends on human annotators. Meta researchers suggest that self-improving evaluators might eventually outperform human judges, marking progress toward autonomous digital assistants capable of managing complex tasks without supervision.
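The self-improvement idea reduces to a filtering loop: judge pairs where the better response is already known, keep only the reasoning traces whose verdict agrees with that known preference, and train the evaluator on those traces. The sketch below shows that data flow under stated assumptions; `judge`, `generate_worse_variant`, and `fine_tune` are hypothetical stand-ins for a real model backend, not Meta's training code.

```python
# A minimal sketch of a self-training loop for an AI evaluator: keep only
# the judgments that correctly prefer the known-better response, then
# fine-tune on those reasoning traces. All callables are hypothetical.
from typing import Callable


def self_taught_iteration(
    seed_pairs: list[tuple[str, str]],  # (instruction, known_good_response)
    generate_worse_variant: Callable[[str, str], str],
    judge: Callable[[str, str, str], tuple[str, str]],  # -> (trace, verdict)
    fine_tune: Callable[[list[str]], None],
) -> None:
    training_traces: list[str] = []
    for instruction, good in seed_pairs:
        bad = generate_worse_variant(instruction, good)  # synthetic contrast
        trace, verdict = judge(instruction, good, bad)   # chain-of-thought judgment
        if verdict == "A":  # the judge correctly preferred the good response
            training_traces.append(trace)                # keep correct traces only
    fine_tune(training_traces)  # train the evaluator on its own judgments


# Toy demo with trivial stand-ins, just to show the data flow.
self_taught_iteration(
    seed_pairs=[("What is 2+2?", "4")],
    generate_worse_variant=lambda inst, good: "5",
    judge=lambda inst, a, b: ("A gives the correct sum; B is off by one.", "A"),
    fine_tune=lambda traces: print(f"fine-tuning on {len(traces)} traces"),
)
```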

Meta’s latest release also includes upgrades to its Segment Anything model, tools that speed up language model responses, and datasets aimed at discovering new inorganic materials. Unlike competitors such as Google and Anthropic, Meta releases its models for public use, a stance that sets it apart in the industry.