
3. Methods

Bruno Manuel Santos Saraiva edited this page Oct 19, 2023 · 10 revisions

Methods implemented in NanoPyx

Currently NanoPyx implements the methods previously available as part of the NanoJ plugin family for ImageJ.

Image registration

NanoPyx makes available 2D drift correction and channel registration. Both use phase correlation to find the best match between frames/channels.
You can read more about the implementation here: Romain F. Laine et al., J. Phys. D: Appl. Phys. 52, 163001 (2019).
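
To illustrate the underlying idea, phase correlation recovers a translation as the peak of the inverse FFT of the normalized cross-power spectrum of the two frames. Below is a minimal NumPy sketch of that principle, not NanoPyx's actual implementation (which adds details such as subpixel refinement):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) translation of `mov` relative to `ref`
    via phase correlation: the inverse FFT of the normalized cross-power
    spectrum peaks at the relative shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12  # normalize; epsilon avoids /0
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint correspond to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

For example, a frame cyclically shifted by (3, -5) pixels is recovered as a (3, -5) shift estimate.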

Estimate Drift Correction:

Requires an image stack with shape: (time, rows, columns)

Parameters:

  • Reference Frame: Which frame to use as the reference. Either always use the first frame (better for fixed cells) or the previous frame (better for live cells).
  • Time Averaging: How many frames to average before estimating drift (1 registers each frame individually; averaging is better for single-molecule data). The output keeps the original number of frames; per-frame drift is obtained by interpolating the drift calculated on the averaged stack.
  • Max Expected Drift: Maximum amount of expected drift, in pixels.
  • Use ROI: Select to estimate drift from a region of interest defined as a shapes layer in napari.
  • roi: Shapes layer to be used as the ROI.
  • Shift Calculation Method: Max uses the maximum value of the cross-correlation; Subpixel Fitting interpolates the cross-correlation map to achieve subpixel accuracy in the drift estimate.
  • Save drift table as npy: If selected, the drift table is saved as a NumPy (.npy) file; otherwise it is saved as a .csv file.
  • Apply: If selected, applies the calculated drift table and generates an aligned version of the input image.
  • Save Drift Table to: Path where the drift table will be saved.
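
The Subpixel Fitting option can be illustrated with the common trick of fitting a parabola through the correlation peak and its neighbours along each axis. This is a sketch of that general technique under the assumption that the peak is not on the map border; NanoPyx's exact interpolation scheme may differ:

```python
import numpy as np

def parabolic_subpixel_peak(corr):
    """Refine the integer peak of a correlation map to subpixel precision by
    fitting a parabola through the peak and its two neighbours on each axis.
    Assumes the peak is not on the border of `corr`."""
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(m1, c, p1):
        # Vertex of the parabola through (-1, m1), (0, c), (1, p1)
        denom = m1 - 2 * c + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dy = refine(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = refine(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return py + dy, px + dx
```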

Apply Drift Correction:

Requires an image stack with shape: (time, rows, columns) and a previously calculated drift table.

Parameters:

  • Path to Drift Table: Path to the drift table to be used for the correction.
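
Conceptually, applying a drift table means shifting each frame by the negated per-frame drift. A simplified sketch using SciPy's interpolation-based shift (the on-disk .csv/.npy drift-table layout used by NanoPyx may contain additional columns):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def apply_drift_table(stack, drift_table):
    """Align a (time, rows, cols) stack given an (n_frames, 2) array of
    per-frame (row, col) drifts: each frame is shifted back by its drift."""
    aligned = np.empty_like(stack, dtype=float)
    for t, (dy, dx) in enumerate(drift_table):
        # Subtract the estimated drift to bring the frame back into register
        aligned[t] = nd_shift(stack[t].astype(float), (-dy, -dx),
                              order=1, mode="constant")
    return aligned
```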

Estimate Channel Registration Parameters:

Requires an image stack with shape: (channel, rows, columns).

Parameters:

  • Reference Channel: Which channel to use as the reference.
  • Max Expected Shift: Maximum amount of expected shift between channels, in pixels.
  • Blocks per Axis: Channel misalignment is not always homogeneous across the field of view, so the shift can be calculated for individual blocks of the field of view. This parameter sets how many blocks are created along each axis.
  • Minimum Similarity: Smaller blocks may fall on areas of the image without any cells; this sets the minimum Pearson correlation coefficient, between corresponding blocks of two channels, required for a block's calculated shift to be used in the registration.
  • Shift Calculation Method: Max uses the maximum value of the cross-correlation; Subpixel Fitting interpolates the cross-correlation map to achieve subpixel accuracy in the shift estimate.
  • Save Translation Masks: Whether to save the calculated translation masks.
  • Save Cross Correlation Maps: Whether to save the cross-correlation maps calculated for each channel.
  • Apply: If selected, applies the correction after estimating the channel misalignment and creates a new aligned image.
  • Save Translation Masks to: Path where the translation masks will be saved.
  • Save Cross Correlation Maps to: Path where the cross-correlation maps will be saved.
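
The interplay of Blocks per Axis and Minimum Similarity can be sketched as follows: estimate one shift per block by phase correlation, but discard blocks whose Pearson correlation with the reference channel falls below the threshold. This is a simplified illustration; NanoPyx additionally interpolates the kept per-block shifts into a dense translation mask:

```python
import numpy as np

def blockwise_shifts(ref_ch, mov_ch, blocks_per_axis=2, min_similarity=0.5):
    """Per-block (dy, dx) shift estimates between two channels, keeping only
    blocks whose Pearson correlation exceeds `min_similarity`."""
    h, w = ref_ch.shape
    bh, bw = h // blocks_per_axis, w // blocks_per_axis
    shifts = {}
    for by in range(blocks_per_axis):
        for bx in range(blocks_per_axis):
            r = ref_ch[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            m = mov_ch[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            if np.corrcoef(r.ravel(), m.ravel())[0, 1] < min_similarity:
                continue  # block too dissimilar, e.g. empty background
            # Phase correlation restricted to this block
            cp = np.conj(np.fft.fft2(r)) * np.fft.fft2(m)
            corr = np.fft.ifft2(cp / (np.abs(cp) + 1e-12)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            shifts[(by, bx)] = tuple(p if p <= s // 2 else p - s
                                     for p, s in zip(peak, corr.shape))
    return shifts
```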

Apply Channel Alignment:

Requires an image stack with shape: (channel, rows, columns) and a previously calculated translation mask.

Parameters:

  • Path to Translation Mask: Path to the translation mask to be used for the channel alignment.

SRRF

Currently NanoPyx allows users to generate super-resolved images using SRRF. You can read more about it here: Culley S, Tosheva KL, Matos Pereira P, Henriques R. SRRF: Universal live-cell super-resolution microscopy. Int J Biochem Cell Biol. 2018 Aug;101:74-79. doi: 10.1016/j.biocel.2018.05.014. Epub 2018 May 28. PMID: 29852248; PMCID: PMC6025290.

SRRF Parameters:

  • Frames-per-timepoint (0=auto): How many frames of the original image stack are used to calculate a single SRRF frame. For example, given an input image with 500 frames, using 100 frames per timepoint yields an image stack with 5 super-resolved frames.
  • Magnification: Desired magnification for the generated radiality image.
  • Ring Radius: Radius of the ring used to calculate the radiality, in pixels.
  • SRRF order: Type of SRRF temporal correlation. SUM: pairwise product sum; Maximum: maximum intensity projection; Mean: mean intensity projection; Autocorrelation order 2, 3 or 4: autocorrelation function of the given order.
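
The temporal-correlation options can be pictured as different ways of collapsing the per-frame radiality stack into a single super-resolved frame. The sketch below is an illustrative re-implementation of those projections only (not NanoPyx's internal code, which first computes the radiality transform at the chosen magnification); the autocorrelation variant here is one simple formulation using mean-subtracted fluctuations:

```python
import numpy as np

def temporal_projection(radiality_stack, order="Mean"):
    """Collapse a (frames, rows, cols) radiality stack into one frame,
    mirroring the SRRF temporal-correlation options."""
    r = radiality_stack.astype(float)
    if order == "Mean":
        return r.mean(axis=0)
    if order == "Maximum":
        return r.max(axis=0)
    if order == "SUM":  # pairwise product sum of consecutive frames
        return (r[:-1] * r[1:]).sum(axis=0)
    if order.startswith("Autocorrelation"):
        lag_order = int(order.split()[-1])  # 2, 3 or 4
        d = r - r.mean(axis=0)  # fluctuations around the temporal mean
        n = len(d)
        prod = np.ones_like(d[: n - lag_order + 1])
        for k in range(lag_order):
            # Product of fluctuations at consecutive time lags
            prod = prod * d[k : n - lag_order + 1 + k]
        return np.abs(prod.mean(axis=0))
    raise ValueError(f"unknown order: {order}")
```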

eSRRF

NanoPyx also implements eSRRF (enhanced Super-Resolution Radial Fluctuations) which is an extension of the SRRF method. You can read more about it here: Romain F. Laine et al., ‘High-fidelity 3D live-cell nanoscopy through data-driven enhanced super-resolution radial fluctuation’, bioRxiv, p. 2022.04.07.487490, Jan. 2022, doi: 10.1101/2022.04.07.487490.

eSRRF Parameters:

Requires an image with shape [frames, height, width].

  • Magnification: Desired magnification for the generated radiality image.
  • Sensitivity: Sensitivity of the radial gradient convergence (RGC) calculation.
  • Ring Radius: Radius used for the radial gradient convergence (RGC), in pixels.
  • Frames-per-timepoint: How many frames of the original image stack are used to calculate a single eSRRF frame. For example, given an input image with 500 frames, using 100 frames per timepoint yields an image stack with 5 super-resolved frames.
  • Apply Intensity Weighting: If selected, computes the final image using the intensity weights of the input image.
  • eSRRF order: Type of eSRRF temporal correlation. Average: mean intensity projection; Variance: variance intensity projection; Autocorrelation: uses an autocorrelation function to create the projection.

Quality Control

Requires an image with shape [frames, height, width].

NanoPyx implements the methods available in NanoJ-SQUIRREL (Error Map and FRC), as well as Image Decorrelation Analysis.
References:
Error Map: Culley, S., Albrecht, D., Jacobs, C. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat Methods 15, 263–266 (2018). https://doi.org/10.1038/nmeth.4605
FRC: Nieuwenhuizen RP, Lidke KA, Bates M, Puig DL, Grünwald D, Stallinga S, Rieger B. Measuring image resolution in optical nanoscopy. Nat Methods. 2013 Jun;10(6):557-62. doi: 10.1038/nmeth.2448. Epub 2013 Apr 28. PMID: 23624665; PMCID: PMC4149789.
DecorrAnalysis: Descloux A, Grußmayer KS, Radenovic A. Parameter-free image resolution estimation based on decorrelation analysis. Nat Methods. 2019 Sep;16(9):918-924. doi: 10.1038/s41592-019-0515-7. Epub 2019 Aug 26. PMID: 31451766.

Error Map Parameters:

Requires a diffraction-limited image with shape [frames, height, width] and a super-resolved image with the same shape.

  • img ref: Image to be used as the diffraction-limited reference.
  • img sr: Image to be used as the super-resolved image.
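
As a rough intuition for what an error map measures, the super-resolved image's intensities must first be brought onto the scale of the reference before differences are meaningful. The toy sketch below does only a least-squares linear intensity rescale followed by an absolute difference; the real SQUIRREL method additionally estimates a resolution scaling function so the two images are compared at matched resolution:

```python
import numpy as np

def simple_error_map(img_ref, img_sr):
    """Toy error map: least-squares fit alpha * img_sr + beta ≈ img_ref,
    then return the absolute difference. Assumes both images share a shape."""
    x, y = img_sr.ravel(), img_ref.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.abs(alpha * img_sr + beta - img_ref)
```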

FRC Parameters:

Requires an image with shape [frames, height, width].

  • Frame 1/Frame 2: FRC is calculated between two frames of the same image stack; these parameters determine which two frames are used for the calculation.
  • Pixel Size: Pixel size of the image. Used to calculate resolution values.
  • Units: Pixel size units.
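
Conceptually, FRC correlates the Fourier transforms of the two frames within rings of increasing spatial frequency; the resolution is then read off where the curve drops below a threshold (such as the 1/7 criterion). A simplified NumPy sketch of the curve itself, not NanoPyx's implementation:

```python
import numpy as np

def frc_curve(frame1, frame2, n_bins=None):
    """Fourier Ring Correlation between two equally shaped 2D frames:
    normalized correlation of their Fourier transforms per frequency ring."""
    f1 = np.fft.fftshift(np.fft.fft2(frame1))
    f2 = np.fft.fftshift(np.fft.fft2(frame2))
    h, w = frame1.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    n_bins = n_bins or min(h, w) // 2
    frc = np.zeros(n_bins)
    for r in range(n_bins):
        ring = radius == r
        num = np.abs((f1[ring] * np.conj(f2[ring])).sum())
        den = np.sqrt((np.abs(f1[ring]) ** 2).sum() *
                      (np.abs(f2[ring]) ** 2).sum())
        frc[r] = num / den if den > 0 else 0.0
    return frc
```

Identical frames give a curve pinned at 1 for every ring, while uncorrelated frames give values near 0 beyond the lowest frequencies.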

Image Decorrelation Analysis Parameters:

Requires an image with shape [frames, height, width].

  • Frame: Frame of the image stack to be used in the calculation.
  • Pixel Size: Pixel size of the image. Used to calculate resolution values.
  • Units: Pixel size units.