@bloodearnest We labelled this as part of the Graphnet work a while ago, just in order to nudge ourselves into getting it done. But it's really orthogonal, isn't it, and since there's no other work for this team on the Graphnet stuff at the moment it seems a bit odd. Shall we remove the label?
Currently, job-runner is checked out as a git repo in /srv/jobrunner/code and run via a systemd unit.
We currently build a job-runner docker image but don't use it ourselves; partners like Graphnet do.
Running job-runner as a docker image would give us a better deployment story and parity with our partners.
There are two issues blocking this:
1. job-runner needs access to the host docker instance. This works in theory, but we cannot get it to pass tests at the moment: https://github.com/opensafely-core/backend-server/tree/madwort/deploy-job-runner-with-docker-2
2. We are now on the VM, and have recently limited total docker memory usage. However, if we run job-runner in docker, it will be inside that limit and subject to memory-exhaustion pressure from running jobs. It would be nice to ensure job-runner had its own memory pool within that limit, but it's not critical; this is probably better solved by per-job limits (see the sketch below).
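
For illustration, here's a minimal sketch (not job-runner's actual code) of how both points might look using the docker Python SDK, assuming the host's /var/run/docker.sock is bind-mounted into the job-runner container; the image name and command below are placeholders:

```python
# Minimal sketch, assuming the docker Python SDK ("pip install docker") and
# that the host's /var/run/docker.sock is bind-mounted into the job-runner
# container. Image name and command are placeholders, not job-runner code.
import docker

# Talk to the *host* daemon through the bind-mounted socket, so no
# docker-in-docker is needed inside the job-runner container.
client = docker.DockerClient(base_url="unix://var/run/docker.sock")

# Start a job with its own hard memory cap, so a runaway job cannot
# exhaust the memory pool it shares with job-runner.
container = client.containers.run(
    "example-study-image",                    # placeholder image
    command=["python", "analysis/model.py"],  # placeholder command
    mem_limit="4g",                           # per-job memory limit
    detach=True,
)
print(container.id)
```

Mounting the socket (rather than running a nested daemon) means job containers are siblings of the job-runner container on the host, which is also what makes per-job memory limits like the `mem_limit` above straightforward to apply.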