
Meeting CernVM-FS, Aug 17th 2020


Slides

Attendees

  • Jakob Blomer (CERN)
  • Maxime Boissonneault (Compute Canada)
  • Bob Droge (Univ. of Groningen)
  • Carlos Fenoy (Roche)
  • John Hearns (Dell Technologies)
  • Kenneth Hoste (HPC-UGent)
  • Simone Mosciatti (CERN)
  • Alan O'Cais (Jülich Supercomputing Centre)
  • Ward Poelmans (Vrije Universiteit Brussel)
  • Thomas Röblitz (Univ. of Bergen)
  • Ryan Taylor (Compute Canada)
  • Oscar ter Weeme (Dell Technologies)
  • Bas van der Vlies (SURF.nl)
  • Caspar van Leeuwen (SURF.nl)
  • Davide Vanzo (Microsoft)

Notes

(by Kenneth Hoste)

  • reluctance of HPC sites to install the FUSE client has been a major obstacle to adoption of CernVM-FS

    • in particular with large DOE sites
    • recent efforts using containers to mitigate this
    • several workarounds available for this already (see Jakob's slides)
    • good collaboration with Marconi system in Italy + CSCS
  • some care should be taken w.r.t. gateway/publisher setup

    • probably not suited yet for production use, because it has not really been battle-tested
    • alternative approaches are known to work, incl. using cloud storage like S3 (see the sketch below)
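
As an illustration of the S3 route: a Stratum 0 backed by an S3 bucket could be set up along these lines. This is only a sketch; the endpoint, bucket, keys and repository name are made up, and the exact settings should be checked against the CernVM-FS documentation.

```
# /etc/cvmfs/s3.conf -- hypothetical S3 backend settings
CVMFS_S3_HOST=s3.example.org
CVMFS_S3_BUCKET=eessi-stratum0
CVMFS_S3_ACCESS_KEY=<access key>
CVMFS_S3_SECRET_KEY=<secret key>
```

The repository is then created with the S3 settings file (`-s`) and the URL clients will use to reach the bucket (`-w`):

```
sudo cvmfs_server mkfs -s /etc/cvmfs/s3.conf \
    -w http://s3.example.org/eessi-stratum0 example.eessi.org
```
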
  • interaction with CernVM-FS developers: mailing list, bug tracker, GitHub repo, monthly calls (5-10 people), user meeting

  • discussion about best approach to publish software to /cvmfs repo

    • gateway/publisher setup may not be fully mature yet, but it seems like the best way forward for EESSI in the long run
      • some concerns w.r.t. requiring root privileges; can this be avoided?
      • one significant advantage of the gateway/publisher setup is support for ACLs (see the first sketch after this list)
        • build nodes can be allowed to only write in a specific subtree of the filesystem (e.g. a Haswell build node in /cvmfs/.../software/x86_64/intel/haswell/)
        • restrict who can change core scripts/configuration, make changes in compatibility layer, tweak toolchains, install apps, etc.
    • another approach is to mount the read-only filesystem in a different location, and use overlayfs to get the illusion of a read-write filesystem (see the second sketch after this list)
      • there are some concerns here with a long-living setup like this, especially when the read-only /cvmfs repo gets updated
      • overlayfs assumes the lower (read-only) layer is immutable; weird things can happen when it changes (dixit an overlayfs developer)
    • Compute Canada uses a different approach to work around problems with a non-gateway/publisher setup (see the third sketch after this list)
      • build node is not a CernVM-FS client
      • installations are rsync'ed to the Stratum 0 once they are fully done, via a transaction that is started remotely
      • the software build/installation process and publishing to /cvmfs are deliberately kept separate, due to concerns about long-running transactions
        • installations can sometimes take weeks to finish up (debugging, testing, etc.); keeping a transaction open for that long doesn't make sense
        • even "normal" installations may take hours
        • also need to account for network problems
    • only one transaction can be "live" at the same time on a given host
      • unless containers are used to run cvmfs create sessions in...
        • note: cvmfs create is currently a prototype
      • read-only /cvmfs mount is frozen by cvmfs create to avoid trouble with overlayfs
      • this basically means every EasyBuild installation should be a separate transaction, to avoid problems due to common dependencies
    • shortcoming reported by Compute Canada w.r.t. mount point clash in gateway/publisher setup
    • probably worth setting up a follow-up conf call on this topic specifically...
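
First sketch: how subtree ACLs could look in the gateway configuration. This is an assumption based on the gateway's repo.json format at the time; the key IDs, repository name and paths are made up, and the format should be checked against the CernVM-FS gateway documentation.

```
{
  "version": 1,
  "repos": [
    {
      "domain": "example.eessi.org",
      "keys": [
        {"id": "admin_key",   "path": "/"},
        {"id": "haswell_key", "path": "/software/x86_64/intel/haswell"}
      ]
    }
  ]
}
```

With such a setup, a publisher holding only haswell_key could open transactions solely under that subtree.
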
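Second sketch: a minimal version of the overlayfs approach, assuming the read-only repository is already mounted under /cvmfs; all other paths and the repository name are hypothetical.

```
# local scratch space provides the writable upper layer
mkdir -p /scratch/overlay/upper /scratch/overlay/work /software

# /software now appears writable; all changes land in the upper layer only
mount -t overlay overlay \
    -o lowerdir=/cvmfs/example.eessi.org,upperdir=/scratch/overlay/upper,workdir=/scratch/overlay/work \
    /software
```

As noted above, behaviour is undefined if the lower layer changes (e.g. the /cvmfs repo gets updated) while the overlay is mounted.
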
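Third sketch: a rough version of the Compute Canada-style flow, run on the Stratum 0 only after a build has fully finished on the (non-CernVM-FS) build node; the host, paths and repository name are made up.

```
# keep the transaction short: open it only once the build is done
cvmfs_server transaction example.eessi.org

# pull the finished installation over from the build node,
# then publish, or roll back if the transfer failed
rsync -a buildnode:/apps/software/ /cvmfs/example.eessi.org/software/ \
    && cvmfs_server publish example.eessi.org \
    || cvmfs_server abort -f example.eessi.org
```
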
  • Jakob: advises against switching to https (see https://github.com/EESSI/filesystem-layer/issues/6)

    • integrity is already ensured by CernVM-FS itself: all content is addressed by cryptographic hash, and catalogs are signed
    • so a man-in-the-middle attack isn't possible anyway
  • a combination of cvmfsdirtab and self-managed catalogs probably makes most sense for EESSI (see the sketch below)

    • there are some performance issues with software publishing when lots of catalogs are used (several thousand)
      • hasn't been an issue for Compute Canada?
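
As an illustration, a .cvmfsdirtab in the repository root along these lines (the repository name and paths are hypothetical) would make CernVM-FS create a nested catalog per matching directory on publish, while a .cvmfscatalog marker file can still be placed manually in additional directories:

```
# /cvmfs/example.eessi.org/.cvmfsdirtab (sketch)
# one nested catalog per software installation directory
/software/*/*/*
# entries prefixed with '!' exclude matching directories
! /software/*/*/*/.git
```
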
  • service containers can avoid the need to have CernVM-FS installed on clients

    • a container that mounts the /cvmfs filesystem so it can be used in other containers (see the sketch below)
    • only intended for CernVM-FS clients
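
A sketch of running such a service container, based on the cvmfs/service image; the image tag and repository name are placeholders, to be checked against the CernVM-FS documentation:

```
# mount /cvmfs on the host via the service container;
# the 'shared' bind mount propagates the mount to other containers
sudo docker run -d --rm \
    -e CVMFS_CLIENT_PROFILE=single \
    -e CVMFS_REPOSITORIES=example.eessi.org \
    --cap-add SYS_ADMIN --device /dev/fuse \
    --volume /cvmfs:/cvmfs:shared \
    cvmfs/service:latest
```
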