DeepMind just published a mind-blowing paper:

Théo Szymkowiak:

Ever since scientists started building and training neural networks, Transfer Learning has been one of the field's main bottlenecks. Transfer Learning is the ability of an AI system to learn from different tasks and apply that pre-learned knowledge to a completely new task. The expectation is that, armed with this prior knowledge, the system will perform better and train faster than a neural network trained de novo on the new task.
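To make the idea concrete, here is a minimal sketch of transfer learning, assuming a PyTorch setup: a feature extractor standing in for knowledge learned on an earlier task is frozen, and only a new task-specific head is trained. The backbone, layer sizes, and class count are illustrative placeholders, not taken from the paper.

```python
# A minimal transfer-learning sketch, assuming PyTorch. The backbone,
# layer sizes, and NUM_NEW_CLASSES are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_NEW_CLASSES = 10  # assumed number of classes in the new task

# Stand-in for a feature extractor already trained on a previous task.
pretrained_backbone = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)

# Freeze the pre-learned weights: only the new head will be updated.
for param in pretrained_backbone.parameters():
    param.requires_grad = False

# Task-specific head, trained from scratch on the new task.
new_head = nn.Linear(128, NUM_NEW_CLASSES)
model = nn.Sequential(pretrained_backbone, new_head)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.SGD(new_head.parameters(), lr=0.01)

# One illustrative training step on dummy data.
x, y = torch.randn(32, 784), torch.randint(0, NUM_NEW_CLASSES, (32,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```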

DeepMind is on the path to solving this with PathNet. PathNet is a network of neural networks, trained using both stochastic gradient descent and a genetic selection method.
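A rough sketch of that combination might look like the following, assuming PyTorch: a shared grid of small neural-network modules, candidate paths through that grid, SGD applied only to the modules on the path being evaluated, and a tournament-style genetic step in which the losing path is overwritten by a mutated copy of the winner. The layer and module counts, the toy regression task, and all hyperparameters below are assumptions for illustration, not the paper's settings.

```python
# Sketch of the PathNet idea: a shared grid of modules, paths chosen by a
# genetic tournament, and SGD applied only to the modules on each path.
import copy
import random
import torch
import torch.nn as nn

L, M, K = 3, 10, 3  # layers, modules per layer, active modules per layer (assumed)

# Shared grid of modules: L layers, each holding M small networks.
modules = nn.ModuleList(
    nn.ModuleList(nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(M))
    for _ in range(L)
)

def random_path():
    # A genotype: for each layer, the indices of the K active modules.
    return [random.sample(range(M), K) for _ in range(L)]

def forward(path, x):
    # Outputs of a layer's active modules are summed before the next layer.
    for layer, idx in zip(modules, path):
        x = sum(layer[i](x) for i in idx)
    return x

def fitness(path, steps=20):
    # Train only this path's modules with SGD on a toy regression task,
    # then score the path by its negated final loss.
    params = [p for layer, idx in zip(modules, path)
              for i in idx for p in layer[i].parameters()]
    opt = torch.optim.SGD(params, lr=0.01)
    x, target = torch.randn(64, 16), torch.randn(64, 16)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(forward(path, x), target)
        loss.backward()
        opt.step()
    return -loss.item()

# Tournament selection: the loser's genotype is overwritten by a
# mutated copy of the winner's.
population = [random_path() for _ in range(8)]
for _ in range(10):
    a, b = random.sample(range(len(population)), 2)
    winner, loser = (a, b) if fitness(population[a]) >= fitness(population[b]) else (b, a)
    child = copy.deepcopy(population[winner])
    child[random.randrange(L)] = random.sample(range(M), K)  # small mutation
    population[loser] = child
```

In the paper, the path that wins on one task is then frozen, so the modules it uses can be reused when learning the next task; that reuse is where the transfer comes from.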