Hi, thank you for your great work! I already tested SLAM3R on an RTX 4080 SUPER. Installation and running the demos worked flawlessly. However, one of my own demo image sequences exhausted the VRAM, which caused SLAM3R to crash. I'm searching for a SLAM system that runs on resource-constrained hardware, so I tried to install SLAM3R on an NVIDIA Jetson AGX Orin with 64 GB of unified memory. Since this machine has much more memory available, SLAM3R shouldn't crash there, but unfortunately xformers won't be used. Did you get the chance to test your system on a Jetson, and did you experience this issue (facebookresearch/xformers#1193) as well? I already opened an issue in the xformers repository.
I was able to install triton from source and also installed xformers from source. However, I receive the following error message when running `python -m xformers.info`:

```
Traceback (most recent call last):
File "/home/anaconda3/envs/SLAM3R/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/avsjetsonagx1/anaconda3/envs/SLAM3R/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/diffSLAM/dependencies/SLAM3R/thirdparty/xformers/xformers/info.py", line 11, in <module>
from . import __version__, _cpp_lib, _is_opensource, _is_triton_available, ops
File "/home/avsjetsonagx1/diffSLAM/dependencies/SLAM3R/thirdparty/xformers/xformers/ops/__init__.py", line 26, in <module>
from .modpar_layers import ColumnParallelLinear, RowParallelLinear
File "/home/diffSLAM/dependencies/SLAM3R/thirdparty/xformers/xformers/ops/modpar_layers.py", line 15, in <module>
from .seqpar import sequence_parallel_leading_matmul, sequence_parallel_trailing_matmul
File "/home/diffSLAM/dependencies/SLAM3R/thirdparty/xformers/xformers/ops/seqpar.py", line 10, in <module>
from torch.distributed.distributed_c10d import _resolve_process_group
File "/home/anaconda3/envs/SLAM3R/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 22, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
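For context, this `ModuleNotFoundError` typically indicates a PyTorch build without distributed (c10d) support compiled in, which is common for Jetson wheels. Below is a minimal stdlib sketch for probing whether a dotted module path resolves before importing it; the `has_module` helper is hypothetical (not part of xformers or PyTorch) and only illustrates the failure mode, since `find_spec` raises `ModuleNotFoundError` when a parent in the path (such as the C extension `torch._C`) is not a package:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` resolves to an importable module spec.

    find_spec() raises ModuleNotFoundError when a parent in the
    dotted path is a plain module rather than a package, which is
    the same failure mode shown in the traceback above.
    """
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

# Demonstrated here with stdlib modules; on the Jetson one would
# probe "torch.distributed" / "torch._C._distributed_c10d" instead.
print(has_module("json"))         # existing module -> True
print(has_module("sys.no_such"))  # 'sys' is not a package -> False
```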