@Selecton98
It seems that the HiC graph is larger than the memory available to the LINE graph embedding step. Can you tell me how many nodes your HiC graph has?
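To answer that question, the node count can be read off the graph file itself. Here is a hypothetical sketch (not SCI's own code), assuming the LINE input is an edge list with one edge per line in the form "node_a node_b weight" (e.g. the test_HiC_graph.txt from the log below):

```python
# Hypothetical helper: count distinct nodes in a LINE-style edge list.
# Assumes whitespace-separated lines whose first two columns are node ids.
def count_nodes(path):
    nodes = set()
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2:
                nodes.update(fields[:2])  # first two columns are node ids
    return len(nodes)
```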
Hello, and thank you for developing SCI, which is really innovative for Hi-C research.
This error occurred when I used SCI to analyze the resolution=10000, hg19-annotated GM12878 HIC001 matrix (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM1551550). However, I had successfully analyzed a resolution=40000, mm10-annotated Hi-C matrix before.
My cluster server has rather large memory, and I noticed that this error pops up regardless of the size of the input file. I suppose the reason is that resolution=10000 is too fine for SCI.
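For a sense of scale, the node count of the graph grows as genome_size / resolution, so the jump from 40 kb mm10 bins to 10 kb hg19 bins multiplies the number of nodes several-fold. A back-of-the-envelope sketch, with approximate genome sizes as assumptions:

```python
# Rough arithmetic; genome sizes are approximate assumptions
# (hg19 ~3.1e9 bp, mm10 ~2.7e9 bp).
HG19_BP = 3_100_000_000
MM10_BP = 2_700_000_000

bins_hg19_10kb = HG19_BP // 10_000   # ~310,000 potential nodes
bins_mm10_40kb = MM10_BP // 40_000   # ~67,500 potential nodes
```

So the 10 kb hg19 graph has roughly 4-5x more nodes than the 40 kb mm10 one, and a correspondingly denser edge list.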
Here is the error.
################################################################
writing HiC graph: 100%|##########| 231/231 [8:00:33<00:00, 124.82s/it]
['/home/user/software/sci-master/LINE/line', '-train', 'test_HiC_graph.txt', '-order', '1', '-samples', '1', '-output', 'test_HiC_graph_order_1_samples_1M.embedding']
Traceback (most recent call last):
  File "/home/user/software/sci-master/sci/sci.py", line 113, in <module>
    run_sci()
  File "/home/user/software/sci-master/sci/sci.py", line 109, in run_sci
    oArgs.clusters)
  File "/home/user/software/sci-master/sci/Compartments.py", line 32, in predict_subcompartents
    embedding_files = run_LINE(graphFile, samples, order)
  File "/home/user/software/sci-master/sci/utils.py", line 63, in run_LINE
    tOutput = _LaunchJob(command)
  File "/home/user/software/sci-master/sci/utils.py", line 23, in _LaunchJob
    stderr=subprocess.PIPE)
  File "/home/user/miniconda2/lib/python2.7/subprocess.py", line 394, in __init__
    errread, errwrite)
  File "/home/user/miniconda2/lib/python2.7/subprocess.py", line 938, in _execute_child
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
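The failure point is instructive: subprocess.Popen fork()s the parent Python process before exec()ing the LINE binary. If the parent already holds a large HiC graph in memory and the kernel uses strict overcommit (vm.overcommit_memory=2), the fork can be refused with [Errno 12] even though the child itself would need little memory, and even when free RAM looks plentiful. A minimal sketch of the launch pattern involved (a reconstruction for illustration, not SCI's actual _LaunchJob code):

```python
import subprocess

# Sketch of launching an external binary such as LINE. Popen forks the
# current process first; with a huge parent and strict kernel overcommit,
# the fork itself can raise OSError: [Errno 12] Cannot allocate memory.
def launch_job(command):
    proc = subprocess.Popen(command,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err
```

Two common mitigations, both assumptions worth verifying on your cluster: switch the kernel to heuristic overcommit (vm.overcommit_memory=1), or restructure the pipeline so external helpers are launched before the large graph is built in memory.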