OpenAI at NeurIPS 2020

We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.
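"Few-shot" here means the model is shown a handful of worked examples directly in its prompt and is expected to continue the pattern, with no gradient updates or fine-tuning. As a rough illustration only (not OpenAI's code; the function name and prompt format below are assumptions), a minimal sketch of how such a prompt can be assembled:

```python
# Minimal, illustrative sketch of few-shot prompt construction.
# Not OpenAI's implementation: the model simply sees a few
# input/output demonstrations followed by a new query, and is
# asked to complete the pattern without any parameter updates.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a plain-text prompt: task description, k examples, then the query."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_few_shot_prompt(
        "Translate English to French.",
        [("sea otter", "loutre de mer"), ("cheese", "fromage")],
        "plush giraffe",
    )
    print(prompt)  # This text would be passed to the language model as-is.
```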

Source: https://openai.com/blog/neurips-2020/
