Solving a machine-learning mystery

07-Feb-23

According to a recent study, large language models like GPT-3 can learn a new task from just a few examples, without any further training data. Researchers describe how these models acquire new tasks without updating their parameters, despite never having been trained to perform them. They found that the hidden layers of these large models effectively contain smaller linear models, which the large model can implicitly train to carry out the new task. Large language models, such as OpenAI’s GPT-3, are massive neural networks that can produce human-like text, from poetry to computer code.
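As a rough intuition for that hypothesis, the sketch below is a hypothetical toy (not the researchers' code): it treats the in-context examples from a prompt as a tiny regression dataset and fits a small linear model to them by least squares, which is the kind of computation the study suggests a transformer's forward pass can emulate, with no weights of the large model being changed. All names, dimensions, and the least-squares choice here are illustrative assumptions.

```python
# Hypothetical toy illustration of in-context learning as an implicit linear model.
# Assumption: the "inner" model is fit by ordinary least squares on the prompt
# examples alone; no parameters of any large model are updated.
import numpy as np

rng = np.random.default_rng(0)

# An unseen "task": a random linear map the model was never trained on.
d = 5                                  # input dimension (illustrative choice)
w_task = rng.normal(size=d)            # ground-truth weights of the new task

# A handful of in-context examples, as would appear in a few-shot prompt.
n_examples = 10
X = rng.normal(size=(n_examples, d))
y = X @ w_task + 0.01 * rng.normal(size=n_examples)   # slightly noisy labels

# The hypothesized "inner" linear model: least squares over the prompt examples,
# something the hidden layers could implement implicitly during a forward pass.
w_inner, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict a new query point, as in-context learning does at inference time.
x_query = rng.normal(size=d)
print("true label:     ", x_query @ w_task)
print("in-context pred:", x_query @ w_inner)
```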

Read More…