@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Alternative ways to access CernVM-FS repositories

While a [native installation of CernVM-FS on the client system](client.md),
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# CernVM-FS client system

The recommended way to gain access to CernVM-FS repositories is to set up
@@ -0,0 +1,6 @@
# Accessing CernVM-FS repositories

- [Setting up a CernVM-FS client system](client.md)
- [Setting up a proxy server](proxy.md)
- [Setting up a Stratum 1 replica server](stratum1.md)
- [Alternative ways to access CernVM-FS repositories](alternatives.md)
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Squid proxy server

As a first step towards a production-ready CernVM-FS setup
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Private Stratum 1 replica server

In this section of the tutorial, we will set up a [Stratum 1 replica server](
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# CernVM-FS Terminology

An overview of terms used in the context of CernVM-FS, in alphabetical order.
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Configuring CernVM-FS on HPC infrastructure

In the [previous section](access/index.md) we have outlined how to set up a robust CernVM-FS infrastructure, by having a private Stratum 1 replica server and/or dedicated Squid proxy servers. While this approach will work for many HPC systems, some may have slightly more esoteric setups that require specific solutions, which we will discuss in this section.
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Containers and CernVM-FS

CernVM-FS can also be used to distribute container images, providing many of the same benefits that come with any CernVM-FS installation. In particular, the on-demand download of accessed files means that containers start nearly instantly, and that large images are handled more efficiently when only a fraction of the files are read, which is typically the case.
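
As a minimal sketch of what this looks like in practice: an *unpacked* container image distributed via a CernVM-FS repository can be started directly from the `/cvmfs` mount point, so only the files that are actually accessed get downloaded. The repository and image path below are illustrative assumptions (not taken from this tutorial), and Apptainer is just one possible container runtime:

```bash
# Start an interactive shell in an unpacked container image served via CernVM-FS;
# files are fetched on demand, so startup is nearly instant.
# (illustrative path; unpacked.cern.ch distributes unpacked images of public containers)
apptainer shell /cvmfs/unpacked.cern.ch/registry.hub.docker.com/library/ubuntu:latest
```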
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Creating a CernVM-FS repository

Although creating a new CernVM-FS repository and making it available to the world is not in scope for this
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Flagship CernVM-FS repositories

Here we list a couple of flagship CernVM-FS repositories, all of which are **publicly available**.
@@ -0,0 +1,9 @@
# Introduction to CernVM-FS

<div align="center">
<img src="../img/logos/CernVM-FS_logo_with_name.png" alt="CernVM-FS logo" width="50%"/></br>
</div>

- [What is CernVM-FS?](what-is-cvmfs.md)
- [Technical details](technical-details.md)
- [Flagship repositories](flagship-repositories.md)
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Technical details of CernVM-FS

CernVM-FS is implemented as a *POSIX read-only [filesystem in user space (FUSE)](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)*
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# What is CernVM-FS?

<div align="center">
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# High-level design of EESSI

The design of EESSI is very similar to that of the [Compute Canada software stack](inspiration.md) it is inspired by,
@@ -0,0 +1,14 @@
# EESSI

## European Environment for Scientific Software Installations

<div align="center">
<img src="../img/logos/EESSI_logo_horizontal.png" alt="EESSI logo" width="50%"/></br>
</div>

* [What is EESSI?](what-is-eessi.md)
* [Motivation & Goals](motivation-goals.md)
* [Inspiration](inspiration.md)
* [High-level design](high-level-design.md)
* [Using EESSI](using-eessi.md)
* [Getting support](support.md)
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Inspiration for EESSI

The EESSI concept is heavily inspired by the software stack provided by the
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Motivation & Goals of EESSI

## Motivation
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Getting support for EESSI

<div align="center">
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Using EESSI

Using the software installations provided by the EESSI CernVM-FS repository `software.eessi.io`
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# What is EESSI?

The [European Environment for Scientific Software Installations](https://www.eessi.io) (EESSI, pronounced as "easy")
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Best Practices for CernVM-FS in HPC

<p align="center">
@@ -28,6 +17,26 @@ on HPC infrastructure.
</div>


## Recording

A first, long-form (~3h15min) virtual edition of this tutorial was held on 4 December 2023;
see [here](https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/) (recording available).

A slightly shorter (~2h), updated version of this tutorial was presented as part of the
[EESSI webinar series in May 2025](../webinar-series-2025Q2.md).

The recording of this session is embedded below:

<div align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/5-IYnxCz_aQ?si=zqgYBiZCdY5islK8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>


## Slides

[Available for download here](../EESSI-webinars-MayJune-2025-002-Introduction-to-CernVM-FS-20250512.pdf)


## Contents

- [Home](index.md)
@@ -55,23 +64,6 @@
- [Creating a CernVM-FS repository](creating-repo.md)
- [Appendix: Terminology](appendix/terminology.md)


## Recording

A first virtual edition of this tutorial was held on 4 December 2023,
the recording is available here:

<p align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/L0Mmy7NBXDU?si=Ob0DtYN2FH3K169V" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

<br/><a href="https://raw.githubusercontent.com/multixscale/cvmfs-tutorial-hpc-best-practices/main/files/Best-Practices-for-CernVM-FS-in-HPC-20231204.pdf">
<em>slides (PDF) available for download here</em></a>
</p>

### Slides

[Available for download here](https://raw.githubusercontent.com/multixscale/cvmfs-tutorial-hpc-best-practices/main/files/Best-Practices-for-CernVM-FS-in-HPC-20231204.pdf)

## Intended audience

This tutorial is intended for people with a background in HPC (system administrators, support team members,
@@ -92,12 +84,6 @@ CernVM-FS repositories on HPC infrastructure.

## Practical information

### Registration

Attendance is free, but **registration is required**: <https://event.ugent.be/registration/cvmfshpc202312>.

Registration for online tutorial on Mon 4 Dec 2023 is **closed** *(since Sun 3 Dec 2023)*

### Slack channel

Dedicated channel in EESSI Slack: [`#cvmfs-best-practices-hpc`](https://eessi-hpc.slack.com/archives/C068DV7GY3V)
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Monitoring CernVM-FS

There are multiple options available to automate the monitoring of CernVM-FS clients (see the [CernVM-FS documentation](https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#monitoring)):
@@ -1,14 +1,3 @@
!!! danger "Work in progress"

*(30 April 2025)*

The contents of this tutorial are currently being reworked to be up-to-date with recent developments in CernVM-FS,
and to be well integrated in the EESSI documentation.

It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
4 Dec 2023, see also https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices.


# Performance aspects of CernVM-FS

One aspect we cannot ignore in the context of software on HPC infrastructure is *performance* (the P in HPC).
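
As a first, rough look at client-side performance, the statistics that the CernVM-FS client keeps per mounted repository can be inspected; a minimal sketch, assuming the repository named below (just an example) is mounted on the client:

```bash
# Print client statistics for a mounted repository, including cache usage,
# the number of open() calls, download speed, and the cache hit rate (HITRATE).
# The repository name is an example; use any repository mounted on the client.
cvmfs_config stat -v software.eessi.io
```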