Pixi eats all my memory and then dies #2458
Comments
Thanks for reporting!
No problem! Some more details:
Implementing a memory limit as suggested here #2214 (comment) would be super helpful!
I discussed that with @baszalmstra the other day, but a memory limit doesn't seem to be something we can technically do. We do need to find a solution for this, though. Turning off or limiting concurrency should be doable.
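This isn't pixi's actual code — just a minimal tokio sketch of the technique under discussion, with a placeholder `solve_environment` and made-up environment names: a semaphore with N permits means at most N solves hold their candidate sets in memory at the same time, so peak memory is bounded by the heaviest N solves rather than all of them.

```rust
// Assumed dependency: tokio = { version = "1", features = ["full"] }
use std::sync::Arc;
use tokio::sync::Semaphore;

// Hypothetical stand-in for a per-environment solve.
async fn solve_environment(name: &str) {
    println!("solving {name}");
}

#[tokio::main]
async fn main() {
    // Allow at most 2 solves in flight at once.
    let semaphore = Arc::new(Semaphore::new(2));
    let mut handles = Vec::new();

    for env in ["default", "test", "lint", "docs"] {
        let semaphore = Arc::clone(&semaphore);
        handles.push(tokio::spawn(async move {
            // Each task waits here until one of the permits frees up.
            let _permit = semaphore.acquire_owned().await.unwrap();
            solve_environment(env).await;
            // The permit is released when `_permit` is dropped.
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
}
```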
On the main point, having robust parallelism for "things pixi does" (be they project-defined …
+1. I experience the same issues. See panel-extensions/copier-template-panel-extension#4 (comment).
I investigated this further and can confirm that the solver uses a lot of memory due to a very high number of possible candidates for some packages. We will fix this in the solver, but it's a relatively large refactor that will take some time.
### Why

Pixi can use a large amount of memory during the solve, and issue many network requests during repodata fetching. While we search for a better/automated solution, we want to let the user escape the issue by capping the number of concurrent jobs.

The related issue is: #2458

### What this PR adds

As a user you can now define the max concurrent solves and max network requests in two ways:

**CLI**

```
pixi install --concurrent-solves 3
pixi install --concurrent-downloads 12
```

**configuration**

```
pixi config set concurrency.solves 1
pixi config set concurrency.downloads 12
```

`config.toml`

```toml
[concurrency]
solves = 2
downloads = 12
```

### TODO

After initial approval of the design I'll add the following:

- [x] Add documentation
- [x] Add basic CLI and configuration tests to the integration tests
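The two flags can presumably also be combined in a single invocation (these are the same flags shown above, just passed together; the values are arbitrary examples):

```
pixi install --concurrent-solves 1 --concurrent-downloads 4
```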
In #2569 we implemented an escape hatch so you can define a maximum number of concurrent solves and downloads.
As an update: huge improvements are on their way. @baszalmstra had some fun over Christmas:
We're currently not actively searching for more solutions to this problem. If you encounter it, please share the environment and the machine you are running it on.
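For anyone hitting this before the fixes land: assuming a Linux machine with systemd available, the memory damage can be confined to pixi itself from outside the tool. A sketch — the 8G ceiling is an arbitrary example, not a pixi setting:

```
# Run pixi in a transient cgroup scope with a hard memory ceiling;
# exceeding it gets only this scope OOM-killed, not the whole session.
systemd-run --user --scope -p MemoryMax=8G pixi install
```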
OK, this seems good now!
Checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pixi, using `pixi --version`.
Reproducible example
it gets further than that, but it’s always a race against time to be able to copy it:
Issue description
I have 16GB of memory, and pixi doesn’t seem to have a setting to reduce the number of threads/processes it uses.
I’m pretty sure the OOM killer kills it.
Expected behavior
not that.