AI Summary: Details the application of Reinforcement Learning from Human Feedback (RLHF) to create InstructGPT, fundamentally bridging the gap between raw text prediction and helpful, conversational AI.
Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. We use reinforcement learning from human feedback (RLHF) to fine-tune GPT-3 to follow a broad class of written instructions. The resulting InstructGPT models are much better at following instructions than GPT-3, while making up facts less often and showing small decreases in toxic output generation.
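The RLHF recipe the abstract refers to has two core ingredients: a reward model trained on pairwise human preferences, and a policy improved against that reward. The toy sketch below is a hypothetical illustration only, not the paper's actual setup: it uses a made-up 1-D `features` score in place of a learned reward network, fits the reward weight with the standard Bradley-Terry logistic loss on preference pairs, and substitutes best-of-n reranking for the PPO fine-tuning used in the paper.

```python
import math
import random

random.seed(0)

def features(response):
    # Hypothetical 1-D feature: fraction of "helpful" tokens in the response.
    return sum(tok == "helpful" for tok in response) / len(response)

def reward(w, response):
    # Scalar reward model: a single learned weight on the feature.
    return w * features(response)

# Simulated preference data: the "human" labeler prefers the response
# with the higher fraction of helpful tokens.
pairs = []
for _ in range(200):
    a = random.choices(["helpful", "filler"], k=5)
    b = random.choices(["helpful", "filler"], k=5)
    if features(a) == features(b):
        continue  # skip ties; no preference signal
    preferred, other = (a, b) if features(a) > features(b) else (b, a)
    pairs.append((preferred, other))

# Fit the reward weight with the Bradley-Terry / logistic preference loss:
# maximize log sigmoid(r(preferred) - r(other)) by gradient ascent.
w = 0.0
lr = 0.5
for _ in range(100):
    for preferred, other in pairs:
        diff = reward(w, preferred) - reward(w, other)
        grad = (1 - 1 / (1 + math.exp(-diff))) * (features(preferred) - features(other))
        w += lr * grad

# "Policy improvement" by best-of-n reranking: sample candidate responses
# and keep the one the learned reward model scores highest (a stand-in
# for the reinforcement-learning fine-tuning step).
candidates = [random.choices(["helpful", "filler"], k=5) for _ in range(8)]
best = max(candidates, key=lambda r: reward(w, r))
```

With a positive learned weight, reranking by reward selects the candidate a labeler would have preferred, which is the mechanism that lets preference data steer generation toward helpful outputs.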