Code for the BMVC 2016 paper *Learning local feature descriptors with triplets and shallow convolutional neural networks*.
We provide four variants of the TFeat descriptor, trained with different loss functions, with and without the in-triplet anchor swap. For more details, see the paper.
| network | description |
|---|---|
| tfeat-ratio | ratio loss, without anchor swap |
| tfeat-ratio* | ratio loss, with anchor swap |
| tfeat-margin | margin loss, without anchor swap |
| tfeat-margin* | margin loss, with anchor swap |
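As a rough illustration of the two training options, the margin loss penalizes triplets where the positive descriptor is not closer to the anchor than the negative by at least a margin, and the in-triplet anchor swap replaces the anchor-negative distance with the harder (smaller) of d(a, n) and d(p, n). A minimal NumPy sketch, assuming a batch of precomputed descriptors (the function name and defaults are ours, not the repository's API):

```python
import numpy as np

def triplet_margin_loss(a, p, n, margin=1.0, anchor_swap=False):
    """Margin-based triplet loss over a batch of descriptors.

    a, p, n: (batch, dim) arrays of anchor, positive, and negative
    descriptors. With anchor_swap, the negative distance becomes the
    minimum of d(a, n) and d(p, n), which mines the harder negative
    within each triplet.
    """
    d_pos = np.linalg.norm(a - p, axis=1)   # anchor-positive distance
    d_neg = np.linalg.norm(a - n, axis=1)   # anchor-negative distance
    if anchor_swap:
        d_neg = np.minimum(d_neg, np.linalg.norm(p - n, axis=1))
    # hinge: only triplets violating the margin contribute
    return np.maximum(0.0, margin + d_pos - d_neg).mean()
```

With the swap enabled, a triplet whose positive sits closer to the negative than the anchor does produces a larger loss, which is the source of the improvement reported for the starred variants.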
To download the networks, run the `get_nets.sh` script:

```sh
sh get_nets.sh
```
A trained Caffe model and a Python script for testing can be found here.
An example of how to use and train the network with PyTorch can be found here.
An example of how to use the TFeat descriptor in Torch can be found here. More information and the full training code are available in the pnnet repository.
An example of how to use and train the network with TensorFlow can be found here.
NOTE: the TensorFlow version does not currently converge as expected. We highly recommend using the PyTorch version to reproduce the paper's results.
`tfeat_demo.py` shows how to use the TFeat descriptor with Python and OpenCV.
To use TFeat to detect an object (`object_img.png`) in a video (`input_video.webm`) using feature-point matching, run:

```sh
python tfeat_demo.py nets/tfeat_liberty_margin_star.t7 input_video.webm object_img.png
```
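Feature-point matching of this kind boils down to finding, for each descriptor in the object image, its nearest neighbor among the frame's descriptors and keeping only unambiguous matches. A generic NumPy sketch using Lowe's ratio test (this is an illustration, not necessarily the demo script's exact implementation, which relies on OpenCV's matchers):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with a ratio test.

    desc1, desc2: (n, d) and (m, d) arrays of L2 descriptors.
    Returns (i, j) index pairs where the best match in desc2 is
    clearly better than the second best.
    """
    # pairwise squared L2 distances, shape (n, m)
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # ratio test on squared distances: compare against ratio**2
        if row[best] < ratio ** 2 * row[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test discards keypoints whose two closest candidates are nearly equidistant, which removes most false matches on repetitive texture.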
To use TFeat to just describe patches in an image, run:

```sh
./extract_desciptors_from_hpatch_file.py imgs/ref.png ref.TFEAT
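Before a network like TFeat can describe patches, fixed-size crops must be cut around each keypoint and normalized. A minimal sketch of that preprocessing step, assuming 32x32 patches and per-patch mean/std normalization (the helper name and normalization details are our assumption, not the repository's exact pipeline):

```python
import numpy as np

def extract_patches(image, keypoints, patch_size=32):
    """Crop patch_size x patch_size patches centered on each keypoint.

    image: 2-D grayscale array; keypoints: iterable of (x, y) pixel
    coordinates. Keypoints too close to the image border are skipped.
    Each patch is mean/std normalized before being returned.
    """
    half = patch_size // 2
    h, w = image.shape
    patches = []
    for x, y in keypoints:
        x, y = int(round(x)), int(round(y))
        # skip keypoints whose patch would fall outside the image
        if x - half < 0 or y - half < 0 or x + half > w or y + half > h:
            continue
        patch = image[y - half:y + half, x - half:x + half].astype(np.float32)
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)
        patches.append(patch)
    if not patches:
        return np.empty((0, patch_size, patch_size), dtype=np.float32)
    return np.stack(patches)
```

The resulting (batch, 32, 32) array can then be fed to the network in one forward pass to obtain one descriptor per patch.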