DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable to OpenAI's o1 Model

DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 into open-source Qwen and Llama models and released several sizes of each.
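To give a rough sense of what "group relative" means in GRPO: rather than training a separate value (critic) network, the rewards for a group of completions sampled from the same prompt are normalized against that group's own mean and standard deviation to form advantages. The minimal sketch below illustrates only that normalization step; the tensor shapes and the rule-based reward values are illustrative assumptions, not DeepSeek's actual training code.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantage estimate in the spirit of GRPO.

    rewards: tensor of shape (num_prompts, group_size), one scalar reward
    per sampled completion, grouped by prompt. Each completion's advantage
    is its reward normalized by the mean and standard deviation of its own
    group, so no learned critic is required.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each, with hypothetical
# rule-based rewards (1.0 for a correct final answer, 0.0 otherwise).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

Completions that score above their group's average get positive advantages and are reinforced; those below it are pushed down, which is what steers the policy toward better reasoning traces without a critic model.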