Hello!
I'm running into a weird memory issue. I'm running snap.pp.scrublet on a large list of adatas, and after I get ~66% of the way through, I start seeing the following errors/warnings:
memory allocation of 1179648 bytes failed
memory allocation of 32768 bytes failed
They happen sporadically but don't panic/kill the process. The exact code I am running is:
import os
from glob import glob
import snapatac2 as snap
import pandas as pd
import plotly.io as pio
import numpy as np
from tqdm import tqdm

N_JOBS = 10

# check version of snap
print(snap.__version__)

output_dir = os.path.expandvars("$BRICKYARD/results_analysis/scatlas2/h5ads/")
os.makedirs(output_dir, exist_ok=True)
CELLRANGER_OUTS = os.path.expandvars("$BRICKYARD/results_analysis/scatlas2/fragments/")

# get a list of the fragment files in the data directory
fragment_files = glob(f"{CELLRANGER_OUTS}/*_fragments.tsv.gz")
print(f"Found {len(fragment_files)} fragment files")
outputs = []
for fl in fragment_files:
    name = fl.split("/")[-1].split(".tsv.gz")[0]
    # outputs.append(f"{output_dir}/{name}.h5ad")
    outputs.append(
        os.path.join(output_dir, f"{name}.h5ad")
    )
# def main():
# import the data (process it and save to h5ad files)
adatas = snap.pp.import_fragments(
    fragment_files,
    file=outputs,
    chrom_sizes=snap.genome.hg38,
    min_num_fragments=1000,
    n_jobs=N_JOBS,
    # sorted_by_barcode=False,
    # tempdir=tmp_dir,
)

# checkpoint
# adatas = [
#     snap.read(f) for f in outputs
# ]
snap.pp.add_tile_matrix(adatas, bin_size=5000, n_jobs=N_JOBS)
snap.pp.select_features(adatas, n_jobs=N_JOBS)
snap.pp.scrublet(adatas, n_jobs=N_JOBS)
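The only workaround I've thought of so far is processing one sample at a time so peak memory stays bounded. Here's a minimal sketch of what I mean; it assumes import_fragments accepts a single fragment file/output path per call the same way it accepts lists, and that close() on the backed AnnData flushes it to disk, which is how I read the docs. I haven't tested this at scale:

# hypothetical sequential fallback: process one sample at a time to bound peak memory
for fl, out in zip(fragment_files, outputs):
    adata = snap.pp.import_fragments(
        fl,
        file=out,
        chrom_sizes=snap.genome.hg38,
        min_num_fragments=1000,
    )
    snap.pp.add_tile_matrix(adata, bin_size=5000)
    snap.pp.select_features(adata)
    snap.pp.scrublet(adata)
    adata.close()  # flush the backed file and release it before the next sample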
All in all, it's about 150 fragment files... so it's a lot of data, though my resources are fairly large:
40 cores
256G RAM
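To sanity-check whether I'm actually hitting that 256G ceiling when the messages appear, I've been thinking of logging resident memory around each step. A minimal sketch (psutil is not part of my pipeline above, it's just for monitoring, and it only sees this process, not any worker children):

import os
import psutil

def log_rss(tag):
    # print this process's resident set size in GB (child workers not included)
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
    print(f"[{tag}] resident memory: {rss_gb:.1f} GB")

log_rss("before scrublet")
snap.pp.scrublet(adatas, n_jobs=N_JOBS)
log_rss("after scrublet")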
I am curious if you have any insight? The process continues to run, but it feels like I shouldn't ignore this. What do you think? Thank you!