Topic: cs.CL


Short answer

This page shows the most relevant public items for cs.CL, ranked by trend activity and review signal. Use weekly for fast changes, monthly for more stable patterns, and all-time for evergreen picks.



  1. Training language models to follow instructions with human feedback

    Paper · Mar 4, 2022 · arXiv · Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe

    Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not he...

  2. Scaling Language Models: Methods, Analysis & Insights from Training Gopher

    Paper · Dec 8, 2021 · arXiv · Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Eleni Elia, Danilo J. Rezende, Oriol Vinyals, Karen Simonyan

    Language modelling provides a step towards intelligent communication systems by harnessing large datasets and expressive models. We provide an analysis of Transformer-based language model architect...

  3. Linguistics and Human Brain: A Perspective of Computational Neuroscience

    Paper · Feb 10, 2026 · arXiv · Fudong Zhang, Bo Chai, Yujie Wu, Wai Ting Siok, Nizhuan Wang

    This paper provides a comprehensive perspective on the intersection of linguistics and computational neuroscience, exploring how modern language models and neural recording technologies can bridge ...

  4. GLM-5: From Vibe Coding to Agentic Engineering

    Paper · Feb 17, 2026 · arXiv · Zhipu AI Team, Tsinghua University Researchers

    We present GLM-5, a foundation model designed to bridge the gap between human-guided 'vibe coding' and autonomous 'agentic engineering.' GLM-5 introduces DeepSeek-inspired Sparse Attention (DSA) to...


Top Entities In This Topic

Related Topics

FAQ

What does this cs.CL page rank?

It ranks public content for cs.CL using recent discussion, review, and engagement signals so you can triage faster. This guidance is specific to the cs.CL topic page on Attendemia and is written to make sense without reading other sections of the page.

How should I use weekly vs monthly vs all-time?

Use weekly for fast-moving updates, monthly for stable trend confirmation, and all-time for evergreen references.

How can I discover organizations active in cs.CL?

Use the linked entities section to jump to labs, companies, and experts connected to this topic and explore their timelines.

Can I follow this topic for updates?

Yes. Use the follow button on this page to subscribe and track new high-signal activity.