Topic: company:openai-research

This page lists the most relevant public items for company:openai-research, ranked by trend activity and review signal. Use the weekly view for fast-moving changes, the monthly view for more stable patterns, and the all-time view for evergreen picks.



  1. Scaling Laws for Reward Model Overoptimization

    Paper · Oct 19, 2022 · arXiv · Leo Gao, John Schulman, Jacob Hilton

    When optimizing a policy against a learned reward model, the policy eventually exploits errors in the reward model, leading to a decline in the true underlying objective. This phenomenon, known as ...

  2. TruthfulQA: Measuring How Models Mimic Human Falsehoods

    Paper · Sep 8, 2021 · arXiv · Stephanie Lin, Jacob Hilton, Owain Evans

    We propose TruthfulQA, a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including healt...

  3. Zoom In: An Introduction to Circuits

    Paper · Mar 10, 2020 · Distill · Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, Shan Carter

    Neural networks are generally regarded as opaque black boxes. However, if we zoom in and carefully examine the weights and activations of convolutional neural networks, we find highly interpretable...

  4. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments

    Paper · Jun 7, 2017 · arXiv · Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch

    We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inhere...

  5. Hindsight Experience Replay

    Paper · Jul 5, 2017 · arXiv · Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba

    Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay (HER) which allows sample-efficient lear...

  6. Concrete Problems in AI Safety

    Paper · Jun 21, 2016 · arXiv · Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané

    Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper, we discuss one such poten...

  7. Diffusion Models Beat GANs on Image Synthesis

    Paper · May 11, 2021 · arXiv · Prafulla Dhariwal, Alex Nichol

    We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better archi...

  8. Scaling Laws for Autoregressive Generative Modeling

    Paper · Oct 28, 2020 · arXiv · Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, Sam McCandlish

    Building upon previous work establishing scaling laws for language models, we investigate whether similar power-law scaling relationships hold across other data modalities. We train autoregressive ...

  9. Multimodal Neurons in Artificial Neural Networks

    Paper · Mar 4, 2021 · Distill · Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, Chris Olah

    We investigate the internal representations of the CLIP model and discover the presence of 'multimodal neurons'. These neurons fire not only for specific visual features (like a spider) but also fo...

  10. Language models can explain neurons in language models

    Paper · May 9, 2023 · OpenAI · Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, William Saunders

    Understanding the internal mechanisms of massive language models is a critical bottleneck for AI safety and alignment. Given the billions of parameters in modern models, manual human inspection of ...

  11. Let's Verify Step by Step

    Paper · May 31, 2023 · arXiv · Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe

    Large language models often struggle with multi-step logical reasoning, frequently hallucinating incorrect steps that invalidate the final answer. To improve reasoning capabilities, we compare two ...

  12. Generating Long Sequences with Sparse Attention

    Paper · Apr 23, 2019 · arXiv · Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever

    Transformers are powerful sequence models, but their self-attention mechanism scales quadratically with sequence length, making them computationally prohibitive for long inputs like high-resolution...

  13. Emergent Tool Use From Multi-Agent Autocurricula

    Paper · Sep 17, 2019 · arXiv · Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, Igor Mordatch

    We demonstrate that simple multi-agent competition can drive the emergence of highly complex, intelligent behaviors without explicit human design. We train agents using reinforcement learning to pl...

  14. Deep reinforcement learning from human preferences

    Paper · Jun 12, 2017 · arXiv · Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, Dario Amodei

    For many complex real-world tasks, defining a mathematical reward function is difficult, leading to misaligned AI behavior when optimized. We explore a method for solving reinforcement learning tas...

  15. Improving Image Generation with Better Captions

    Paper · Oct 19, 2023 · OpenAI · James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxing Jiao, Aditya Ramesh

    Current text-to-image models often struggle to faithfully follow detailed or complex prompts, frequently ignoring specific attributes or object relationships. We propose that this issue stems from ...

  16. Shap-E: Generating Conditional 3D Implicit Functions

    Paper · May 3, 2023 · arXiv · Heewoo Jun, Alex Nichol

    We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of...

  17. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Video

    Paper · Jun 23, 2022 · arXiv · Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune

    Training agents to perform complex, long-horizon tasks typically requires massive amounts of heavily annotated data or prohibitive amounts of reinforcement learning trial-and-error. We introduce Vi...

Related Topics

cs.LG (22) · cs.CV (14) · cs.CL (10) · Generative AI (9) · Reinforcement Learning (9)