Could Machines Learn Like Humans?

Deep learning has enabled significant progress in computer perception, natural language understanding and control. However, almost all these successes largely rely on supervised learning, where the machine is required to predict human-provided annotations, or on model-free reinforcement learning, where the machine learns actions to maximize rewards. Supervised learning requires a large number of labeled samples, making it practical only for certain tasks. Reinforcement learning requires a very large number of interactions with the environment (and many failures) to learn even simple tasks. In contrast, animals and humans seem to learn vast amounts of task-independent knowledge about how the world works through mere observation and occasional interactions. Learning new tasks or skills requires very few samples or interactions with the world: we learn to drive and fly planes in about 30 hours of practice with no fatal failures. What learning paradigm do humans and animals use to learn so efficiently?

In this lecture, Yann LeCun will propose the hypothesis that self-supervised learning of predictive world models is an essential missing ingredient of current approaches to AI. With such models, one can predict outcomes and plan courses of action. One could argue that prediction is the essence of intelligence. Good predictive models may be the basis of intuition, reasoning and “common sense,” allowing us to fill in missing information: predicting the future from the past and present or inferring the state of the world from noisy percepts. After a brief presentation of the state of the art in deep learning, he will discuss some promising principles and methods for self-supervised learning.
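The core idea of self-supervised learning described above, deriving the training signal from the data itself rather than from human labels, can be illustrated with a minimal sketch (this example is illustrative and not from the lecture): a linear model learns to predict the next sample of a signal from its recent past, so the "labels" are simply future values of the same signal.

```python
import numpy as np

# Illustrative sketch: self-supervised learning turns raw data into its own
# supervision. A linear model predicts the next sample of a noisy sinusoid
# from the previous k samples; no human annotation is involved.

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.standard_normal(400)

k = 8  # number of past samples used as context
X = np.stack([signal[i : i + k] for i in range(len(signal) - k)])
y = signal[k:]  # target = the sample immediately following each context window

# Fit the predictor by least squares: w = argmin ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
mse = np.mean((pred - y) ** 2)
print(f"mean squared prediction error: {mse:.4f}")
```

The same "predict the missing part from the observed part" recipe underlies masked-prediction methods in language and vision, only with far richer models than a linear regressor.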

TEA: 4:15-5:00pm
LECTURE: 5:00-6:15pm

When: Wed., Feb. 6, 2019 at 5:00 pm - 6:15 pm
Where: Simons Foundation
160 Fifth Ave., 2nd Floor
646-654-0066
Price: Free
