This project plans to develop the fundamental technology of deep learning, which has become the cornerstone of today's artificial intelligence. We focus on this fundamental technology because we believe that breakthroughs in the fundamentals will revolutionize the area comprehensively.

This project has three objectives.

1. We will study the fundamentals of optimization techniques for deep learning, both empirically and theoretically. We are especially interested in optimization approaches beyond back-propagation, because such approaches may break the limits of back-propagation.

2. We will study the fundamentals of model complexity, both empirically and theoretically. We are especially interested in over-parameterization and model simplification, and in their relationship with optimization and generalization.

3. We will study the fundamentals of network structure in deep learning, both empirically and theoretically, and design new network structures, especially structures that fulfill the first two objectives.

For all three objectives, we have already developed initial models and conducted initial experiments. Among them, the initial results for the first objective have been published in Neural Computation (MIT Press). These initial results show that all three directions deserve further study.