
Was my result correct? #7

Open
Light-- opened this issue Dec 6, 2019 · 8 comments
Light-- commented Dec 6, 2019

After I downloaded the 14.2 GB ActivityNet features (very hard; I had to use a Tencent Weiyun VIP account in my network environment), I ran auto_run.sh and got the following results:

[INIT] Loaded annotations from validation subset.
        Number of ground truth instances: 7292
        Number of proposals: 472700
        Fixed threshold for tiou score: [0.5  0.55 0.6  0.65 0.7  0.75 0.8  0.85 0.9  0.95]
[RESULTS] Performance on ActivityNet proposal task.
        Area Under the AR vs AN curve: 47.82460230389468%
AR@1 is          0.27855183763027974
AR@5 is          0.3704470652770159
AR@10 is         0.4054306088864509
AR@100 is        0.5950630828304992

@lijiannuist
I wonder if the result I got above is correct. Why does it look like the result is not good? Thanks.

@swordlidev
Contributor

Your result is not good.
The Area Under the AR vs AN curve should be about 68%.

@Light--
Author

Light-- commented Dec 6, 2019

Your result is not good.
The Area Under the AR vs AN curve should be about 68%.

@lijiannuist thanks for your quick reply, but why? I didn't modify a single line of your code.
My software environment:
Ubuntu 16.04, Titan Xp ×4, Driver Version: 418.87.00, CUDA Version: 10.1, cuDNN 7.6.4,
tensorflow-gpu 1.9.0, Python 3.6.9. What else do I need to provide for comparison?

I think the AR@100 is also very low, because the result in the paper is 76.65?

Do you have any advice for reproducing your experimental results?

@linchuming
Collaborator

@Light-- Hi, I reran the code and got these results:
Area Under the AR vs AN curve: 68.37602852441032%
Did you try to recompile the proposal generation layer?
We did not compile and test the layer with CUDA 10.1.
Our environment is CUDA 9.0.

@Light--
Author

Light-- commented Dec 9, 2019

We did not compile and test the layer with CUDA 10.1.

@linchuming

  1. What do you mean by "recompile the PFG layer"? Do I need to do anything else before running the code, besides setting up the environment and preparing the dataset?

  2. Sorry, I didn't make myself clear. The CUDA driver version (what nvidia-smi shows) in my environment is 10.1, but the CUDA runtime version (what nvcc -V shows) is 9.0. The outputs of cat /usr/local/cuda/version.txt, nvcc -V, and stat /usr/local/cuda all say my CUDA is 9.0.
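
(Not from the repo — just a generic TensorFlow 1.x sanity check I used to confirm my build sees CUDA and the GPUs; the expected values are assumptions based on my environment above.)

# generic TF 1.x environment check, not part of this repo
import tensorflow as tf
print(tf.VERSION)                    # 1.9.0 in my case
print(tf.test.is_built_with_cuda())  # True if this TF build was compiled against CUDA
print(tf.test.gpu_device_name())     # e.g. '/device:GPU:0' when a GPU is visible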

@linchuming
Collaborator

linchuming commented Dec 9, 2019

@Light-- Run

cd custom_op/src
make

to recompile our proposal feature generation operation.
If the result is still incorrect, you can try modifying custom_op/src/Makefile as below:

TF_CFLAGS:=$(shell python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))')
TF_LFLAGS:=$(shell python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))')
CFLAGS = ${TF_CFLAGS} -fPIC -O2 -std=c++11
LDFLAGS = -shared ${TF_LFLAGS}
CUDA_ARCH = 
all:
	nvcc -std=c++11 -O2 -c -o prop_tcfg_op.cu.o prop_tcfg_op.cu.cc \
			$(TF_CFLAGS) $(LDFLAGS) -I/usr/local -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC \
			-DNDEBUG --expt-relaxed-constexpr -w $(CUDA_ARCH)
	g++ $(CFLAGS) -o ../prop_tcfg.so prop_tcfg_op.cc prop_tcfg_op.cu.o \
			$(LDFLAGS) -L/usr/local/cuda/lib64 -D GOOGLE_CUDA=1 \
			-I/usr/local -I/usr/local/cuda/include -I/usr/local/cuda/targets/x86_64-linux/include \
			-L/usr/local/cuda/targets/x86_64-linux/lib -lcudart 
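
(A quick way to confirm the rebuild worked — only a generic sketch, not part of the repo; the path custom_op/prop_tcfg.so is assumed from the Makefile's "-o ../prop_tcfg.so" output, and the op itself is not invoked here.)

# hypothetical check that the freshly built custom op library loads under the current TF/CUDA
import tensorflow as tf
lib = tf.load_op_library('custom_op/prop_tcfg.so')  # path assumed from the Makefile output above
print(lib)  # a NotFoundError here usually indicates an ABI or CUDA mismatch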

@Light--
Author

Light-- commented Dec 9, 2019

@lijiannuist @linchuming
Many thanks. The result seems correct now after I recompiled the PFG layer and re-ran auto_run.sh:

[INIT] Loaded annotations from validation subset. | 590/590 [02:55<00:00,  3.68it/s]
        Number of ground truth instances: 7292
        Number of proposals: 472700
        Fixed threshold for tiou score: [0.5  0.55 0.6  0.65 0.7  0.75 0.8  0.85 0.9  0.95]
[RESULTS] Performance on ActivityNet proposal task.
        Area Under the AR vs AN curve: 68.37602852441032%
AR@1 is          0.30814591332967634
AR@5 is          0.4914838178826111
AR@10 is         0.5723532638507954
AR@100 is        0.767937465715853

But may I ask what changed after recompiling the PFG layer? Did the layer structure change, or something else?

P.S. The AR@100 I got here is 0.767937465715853 (i.e. 76.79), but the published paper reports 76.65; is my result correct? Since the Area Under the AR vs AN curve result (68.37602852441032%) is also higher than the paper's (68.23), should we ignore the difference after the decimal point by default?

@swordlidev
Contributor

Because of randomness, you can ignore the difference after the decimal point in the result.

@Deicide-PiLi

@Light--

cd custom_op/src
make

to recompile our proposal feature generation operation.

Have you solved the problem?
