
nvBowtie crash with Segmentation fault #1

Open
ctsgh opened this issue Oct 9, 2014 · 2 comments


ctsgh commented Oct 9, 2014

Hi,
I'm trying to evaluate nvBowtie on our workstation against the GCAT test cases, but I get the following crash:

stats : allocated device driver data (2.35 GB - 0.6s)
stats : device has 3051 of 6143 MB free
stats : processing reads in batches of 1024K
stats : allocating alignment buffers... started
stats : estimated: HOST 0 MB, DEVICE 2336 MB)
verbose : allocating 384 MB of string storage
verbose : CIGARs : 128 MB
verbose : MDs : 256 MB
verbose : allocating 315 MB of DP storage
stats : allocating alignment buffers... done
stats : allocated: HOST 0 MB, DEVICE 2651 MB)
stats : ready to start processing: device has 399 MB free
verbose : starting background input thread
Segmentation fault

Our platform: RHEL 5.10, GeForce Titan Black 6 GB.
EL5 ships with gcc 4.4.7, which does not support atomic memory operations, so we also use gcc 4.8.2 from the SLC5 build.

We have tried:
gcc 4.4.7, cuda 6.5, nvbio-gpl release
gcc 4.4.7, cuda 6.0, nvbio-gpl release
gcc 4.8.2, cuda 6.5, nvbio release
gcc 4.8.2, cuda 6.0, nvbio release

All of them produce the same crash.

The typical command we run for nvBowtie is:

./nvBowtie --file-ref hg19 ../se400_small/gcat_set_051_1.fastq test.bam

The FASTQ file is from www.bioplanet.com/gcat.

We don't have a more powerful Tesla card than the M2050 at the moment, so we use the Titan Black, which has more memory :/
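
In case it helps triage: one way to pin down the crash is a host-side backtrace or the CUDA memory checker. This is just a sketch reusing the command above; nothing here is specific to our setup beyond the paths already shown.

# Device-side checker (slow, but catches out-of-bounds accesses):
cuda-memcheck ./nvBowtie --file-ref hg19 ../se400_small/gcat_set_051_1.fastq test.bam

# Host-side backtrace of the segfault:
gdb --args ./nvBowtie --file-ref hg19 ../se400_small/gcat_set_051_1.fastq test.bam
(gdb) run
(gdb) bt

Since the log shows the crash right after the background input thread starts, "thread apply all bt" may be more informative than "bt" alone.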

@achacond

I am in the same situation. I am running a 10 million single-end query dataset, and nvBowtie gives me a segmentation fault when aligning queries of 250, 500, and 1000 nt. With 100 nt queries as input, the issue is not reproducible and the segmentation fault does not appear.
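
To check whether it is really the read length rather than the dataset content, the longer reads could be trimmed down to 100 nt and re-run. A minimal sketch, assuming standard 4-line FASTQ records; the file names are hypothetical:

# Lines 2 and 4 of each record (sequence and quality) are the even lines;
# cut both to their first 100 characters:
awk 'NR % 2 == 0 { print substr($0, 1, 100); next } { print }' reads_250nt.fastq > reads_100nt.fastq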

I checked the issue with two versions of NVBIO (v0.9.9.3 and the current git version). The code was compiled with GCC 4.9.1 and CUDA 6.5. The tests were run on a system with 32 CPU threads and a Tesla K20c GPU.

Can anyone advise me on this issue?
Please let me know if you need the input datasets or more information to replicate the tests.

Thanks in advance

@JeroenMerks

Same here. When the genome gets too big, the segmentation fault pops up.
GCC 4.8 and CUDA 7.5 on a graphics card with (perhaps too little) 3 GB of RAM.
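
A quick way to test the out-of-memory theory (ctsgh's log above shows only 399 MB of device memory free before processing even started) is to watch the card while nvBowtie runs; these are standard nvidia-smi queries:

nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
# refresh every second while nvBowtie is running:
nvidia-smi --query-gpu=memory.used,memory.free --format=csv -l 1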
