-
Transfer Learning: Standing on the Shoulders of Giants
A guide to achieving state-of-the-art performance on a wide range of tasks with little to no training.
-
Beyond Gradient Descent Optimizer
What are those fancy optimizers such as Adam or RMSprop, and why should I use them instead of the old trusty SGD?