Training examples with reproducible performance.
The word "reproduce" should always mean reproducing performance. With the magic of SGD, wrong code often appears to still work, unless you check its performance numbers. See Unawareness of Deep Learning Mistakes.
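As an illustration of how silent such mistakes can be, here is a minimal, self-contained sketch (plain NumPy on synthetic data; nothing in it comes from tensorpack, and all names are made up): logistic regression trained with SGD, where the only bug is forgetting to normalize the test split with the training statistics. Training behaves identically in both runs; only the evaluation number reveals the bug.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class problem whose features have wildly different scales.
X = rng.normal(size=(2000, 20)) * rng.uniform(1.0, 100.0, size=20)
y = ((X / X.std(axis=0)) @ rng.normal(size=20) > 0).astype(float)
Xtr, Xte, ytr, yte = X[:1500], X[1500:], y[:1500], y[1500:]
mu, sd = Xtr.mean(axis=0), Xtr.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def sgd_test_accuracy(train_x, test_x):
    """Train logistic regression with plain SGD; return test accuracy."""
    w = np.zeros(20)
    for _ in range(3000):
        i = rng.integers(0, len(train_x), size=32)   # sample a minibatch
        p = sigmoid(train_x[i] @ w)
        w -= 0.3 * train_x[i].T @ (p - ytr[i]) / 32  # gradient step
    return ((sigmoid(test_x @ w) > 0.5) == yte).mean()

# Correct: normalize BOTH splits with the training statistics.
print("correct:", sgd_test_accuracy((Xtr - mu) / sd, (Xte - mu) / sd))
# Bug: test split left unnormalized. The training loop is byte-for-byte
# the same; only the evaluation number drops.
print("buggy:  ", sgd_test_accuracy((Xtr - mu) / sd, Xte))
```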
These examples don't have meaningful performance numbers; they are meant as demos.
- An illustrative MNIST example with explanation of the framework
- Tensorpack supports any symbolic library. See the same MNIST example written with tf.layers, tf-slim, and with weights visualization
- A tiny Cifar ConvNet and SVHN ConvNet
- If you've used Keras, check out Keras examples
- A boilerplate file to start with, for your own tasks (a minimal sketch of such a file follows this list)
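To make that boilerplate concrete, here is a hedged sketch built on tensorpack's public ModelDesc / TrainConfig / launch_train_with_config API. It assumes a recent tensorpack with TF1-style tf.layers; the tiny dense model, the MNIST dataset choice, and the hyperparameters are placeholder assumptions for illustration, not an official example.

```python
# Boilerplate sketch: a minimal tensorpack training script.
# Assumptions: a recent tensorpack with the ModelDesc API and TF1-style
# tf.layers; the model itself is a throwaway placeholder.
import tensorflow as tf
from tensorpack import (BatchData, ModelDesc, SimpleTrainer, TrainConfig,
                        launch_train_with_config)
from tensorpack.dataflow import dataset

class Model(ModelDesc):
    def inputs(self):
        # Declare the metadata of the input tensors.
        return [tf.TensorSpec((None, 28, 28), tf.float32, 'input'),
                tf.TensorSpec((None,), tf.int32, 'label')]

    def build_graph(self, image, label):
        # Build the symbolic graph; return the cost tensor to minimize.
        logits = tf.layers.dense(tf.layers.flatten(image), 10)
        return tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=label, logits=logits), name='cost')

    def optimizer(self):
        return tf.train.AdamOptimizer(1e-3)

if __name__ == '__main__':
    dataflow = BatchData(dataset.Mnist('train'), 128)  # any DataFlow works
    config = TrainConfig(model=Model(), dataflow=dataflow, max_epoch=10)
    launch_train_with_config(config, SimpleTrainer())
```

The split mirrors the framework's design: the ModelDesc describes the symbolic graph, the DataFlow feeds data, and the trainer wires the two together.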
### Vision:

Name | Performance |
---|---|
Train ResNet, ShuffleNet and other models on ImageNet | reproduce paper |
Train Faster-RCNN / Mask-RCNN on COCO | reproduce paper |
DoReFa-Net: training binary / low-bitwidth CNN on ImageNet | reproduce paper |
Generative Adversarial Network (GAN) variants, including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
Fully-convolutional Network for Holistically-Nested Edge Detection (HED) | visually reproduce |
Spatial Transformer Networks on MNIST addition | reproduce paper |
Visualize CNN saliency maps | visually reproduce |
Similarity learning on MNIST | |
Single-image super-resolution using EnhanceNet | visually reproduce |
Learn steering filters with Dynamic Filter Networks | visually reproduce |
Load a pre-trained AlexNet, VGG, or Convolutional Pose Machines | |

### Reinforcement Learning:

Name | Performance |
---|---|
Deep Q-Network (DQN) variants on Atari games, including DQN, DoubleDQN, DuelingDQN | reproduce paper |
Asynchronous Advantage Actor-Critic (A3C) on Atari games | reproduce paper |

### Speech / NLP:

Name | Performance |
---|---|
LSTM-CTC for speech recognition | reproduce paper |
char-rnn for fun | fun |
LSTM language model on PennTreebank | reproduce reference code |