Some launchers create intermediate .sh files outside of parsl's log directories #3067

Open
svandenhaute opened this issue Feb 11, 2024 · 1 comment · May be fixed by #3068
@svandenhaute

Many launchers generate intermediate files in the current working directory when they execute their command. Wouldn't it be easier to store these intermediate files in the submit_scripts folder, where the provider scripts are also located? Right now they clutter the user's original working directory.

For example, see here for the part of the SrunLauncher script which writes the launch command to a bash script that then gets executed with srun. Other launchers do this in a similar manner.
For SLURM in particular, the stderr and stdout paths are known inside the job script, so it would make sense to use something like path_aux_scripts=$(dirname $SLURM_JOB_STDOUT) and ensure any temporary files are created within $path_aux_scripts.
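A minimal bash sketch of what this could look like inside the generated job script, assuming SLURM_JOB_STDOUT is set there as described above; WRAPPER_NAME is a hypothetical stand-in for whatever unique filename the launcher already generates, and the $PWD fallback just preserves the current behaviour when the variable is missing:

```bash
# Place the launcher's temporary wrapper script next to the job's stdout file
# instead of in the user's current working directory.
if [ -n "$SLURM_JOB_STDOUT" ]; then
    path_aux_scripts=$(dirname "$SLURM_JOB_STDOUT")
else
    path_aux_scripts=$PWD  # fall back to the current behaviour
fi

wrapper="$path_aux_scripts/$WRAPPER_NAME.sh"
cat << 'EOF' > "$wrapper"
#!/bin/bash
# ... launch command inserted here by the launcher template ...
EOF
chmod u+x "$wrapper"
srun bash "$wrapper"
```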

@benclifford
Collaborator

There's some history here from the early days of Parsl, when running without a shared file system was a serious use case. That tangles up the question of where to place files in a way that no one has ever written down a proper model for, which makes reasoning about this a bit frustrating to me: in that model, a launcher in the abstract can't really rely on any submit-side paths existing where it's running.

In the specific case of this issue #3067, I think it's probably OK to assume that you're always running inside a SLURM job (the SrunLauncher code already assumes that, with statements like export CORES=$SLURM_CPUS_ON_NODE) - and something like what you suggest would probably be OK.

svandenhaute linked a pull request (#3068) on Feb 11, 2024 that will close this issue