
A Spatial Biology Startup Guide – Part 2

Rapidly advancing technologies for multiplex tissue assessment confront researchers with ever more complex image datasets to analyze and interpret. Here, we discuss common questions a researcher may have prior to setting up a multiplex immunofluorescence/immunohistochemistry (mIF/IHC) or other multi-channel image analysis workflow in the lab. We provide general guidance that will help enable efficient, productive, and reproducible mIF/IHC research results. These considerations apply largely to mIF/IHC data acquired from relatively low-plex (up to eight markers) staining of 3–5 μm formalin-fixed tissue sections.

The questions and answers that follow provide an overview of topics related to mIF/IHC image analysis, ranging from analytical workflow and software questions to publishing technicalities. A successful mIF/IHC analysis workflow strategy must meet each research group’s unique needs – and every implementation will benefit from input given by a collaborative group of experts in immuno-oncology, pathology, microscopy, and image analysis. A carefully designed image analysis workflow will reward users with a wealth of tissue-based biological information.

1. What are the basic steps for multiplex image analysis?

Analysis of multiplex images is generally divided into four key steps:

  1. Tissue identification and segmentation
  2. Cell segmentation
  3. Cell classification and validation of any classifier
  4. Measurement exportation and data interpretation

Sometimes additional steps, such as tissue type segmentation, may be needed; in other cases, certain steps (such as cell segmentation in pixel-based approaches) may be unnecessary.
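The four steps above can be sketched in a few lines of code. The following is a minimal illustration in Python (an assumed choice; the array shapes, intensity thresholds, and the toy two-channel image are all invented for demonstration), using simple connected components as a stand-in for real cell segmentation:

```python
import numpy as np
from scipy import ndimage

# Toy 2-channel image: channel 0 = nuclear stain, channel 1 = one marker.
img = np.zeros((2, 64, 64))
yy, xx = np.ogrid[:64, :64]
for cy, cx in [(16, 16), (16, 48), (48, 16), (48, 48)]:
    img[0][(yy - cy) ** 2 + (xx - cx) ** 2 < 25] = 1.0   # four round "nuclei"
img[1][:, :32] = img[0][:, :32]           # the two left-hand cells are marker+

# Step 1: tissue identification (here simply: any nuclear signal)
tissue_mask = img[0] > 0.5

# Step 2: cell segmentation via connected components of the nuclear mask
labels, n_cells = ndimage.label(tissue_mask)

# Step 3: cell classification -- mean marker intensity per cell vs. a threshold
means = ndimage.mean(img[1], labels=labels, index=range(1, n_cells + 1))
is_positive = means > 0.5

# Step 4: export per-cell measurements for downstream interpretation
table = [(cell_id, float(m), bool(p))
         for cell_id, (m, p) in enumerate(zip(means, is_positive), start=1)]
```

In practice, each step is far more involved (watershed or deep-learning segmentation, validated classifiers), but the data flow — mask, label, measure, export — is the same.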

2. How long does image analysis take? 

The answer varies with the complexity and number of images – above all, the number and size of the slides under investigation, in terms of both file size and pixel count. A field of view might take a few seconds for cell detection and classification, whereas a whole-slide image with 30 channels might take hours to process the tissue, segment tumor from stroma, and eventually segment cells. The image hosting location can also have a significant impact: processing images across a network or in the cloud can alleviate data storage issues, but comes at the cost of transferring potentially terabytes of image data.

3. Should we analyze every image of a tissue slide? 

The number of images to analyze depends on the project and the makeup of the tissues being imaged – for instance, not every image needs analysis in a simple survey of cell densities. Also consider the heterogeneity of the tissue: “hotspot” analysis, which focuses on areas with high expression of cellular markers, has been used to stratify patient risk (1,2). However, whenever images are selected, it is vital to avoid operator-dependent sampling bias (3). More complex spatial characteristics and cell-cell interaction measurements may benefit from whole-slide analysis.
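One simple guard against operator-dependent sampling bias is to draw fields of view at random with a documented, fixed seed. A minimal sketch (the slide dimensions, field size, and helper name are illustrative assumptions):

```python
import numpy as np

def sample_fields(slide_w, slide_h, field, n, seed=0):
    """Reproducibly draw n random field-of-view origins within a slide.

    A fixed seed documents the sampling scheme and removes any
    operator-dependent choice of 'interesting' regions."""
    rng = np.random.default_rng(seed)
    xs = rng.integers(0, slide_w - field + 1, size=n)
    ys = rng.integers(0, slide_h - field + 1, size=n)
    return list(zip(xs.tolist(), ys.tolist()))

# Hypothetical 20,000 x 15,000 px slide; five 1,000 px fields
fields = sample_fields(slide_w=20000, slide_h=15000, field=1000, n=5)
```

Reporting the seed alongside the results lets reviewers (and your future self) regenerate exactly the same sampling.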

4. What image analysis steps can be automated? What steps can be batched?

Most automation succeeds because of manual work done up front – and most steps can eventually be automated. For example, creating validation and test sets for cell classifiers allows users to test a variety of variables and thresholds, but those sets must be built one annotated cell at a time.

Batching should always be done with care, because different instruments in different settings will not produce exactly the same data – even with exactly the same samples. Ideally, all steps are eventually batchable with sufficient training and tuning of parameters, but safeguards must be in place – bulbs die out, optics drift, and antibodies change. Using a separate tissue (standard block) or cell pellet control slide with each batch can help identify potential issues.

5. What are some common metrics produced by image analysis?

The most common metrics are cell density within segmented tissues (e.g., CD3+ cells per cancer nest or per mm²), percentages of cells (e.g., % CD3+ cells that are also CD8+), and nearest-neighbor measurements (e.g., how many microns on average from a CD3+ cell to the nearest cancer cell).
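All three metrics fall out of an exported per-cell table of coordinates and phenotypes. A minimal sketch in Python (the five-cell table, phenotype labels, and annotated tissue area are invented for illustration), using a k-d tree for the nearest-neighbor query:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical per-cell table: x/y in microns plus a phenotype label.
xy = np.array([[0, 0], [10, 0], [0, 10], [100, 100], [110, 100]], float)
phen = np.array(["CD3", "CD3_CD8", "CD3", "Tumor", "Tumor"])
area_mm2 = 0.5                      # assumed area of the segmented tissue

cd3 = np.char.startswith(phen, "CD3")

# Density: CD3+ cells per mm^2 of segmented tissue
density = cd3.sum() / area_mm2

# Percentage: % of CD3+ cells that are also CD8+
pct_cd8 = 100.0 * (phen == "CD3_CD8").sum() / cd3.sum()

# Nearest neighbor: distance from each CD3+ cell to the nearest tumor cell
tree = cKDTree(xy[phen == "Tumor"])
nn_dist, _ = tree.query(xy[cd3])
mean_nn = float(nn_dist.mean())
```

The same few lines scale to millions of cells, which is why a clean per-cell export is worth more than any pre-summarized report.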

Multiplex IF/IHC’s ability to identify and localize complex cell phenotypes within tissues enables spatial tools to better understand and characterize cell biology within tissue microenvironments. Individual cell objects can be characterized according to both a user-defined cell phenotype classification and an individual cell’s unique X-Y coordinate location within an image. This robust dataset of all cells characterized within images can then be interrogated for spatial characteristics and cell-cell interaction measurements. These measurements may include hotspots/clusters, distances to other cell types, distances to tissue annotations (tumor edge), and micro-neighborhood analyses of cell composition (4,5,6).
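A micro-neighborhood analysis of the kind cited above can be sketched as a radius query around each cell. The following is an illustration only — the coordinates, labels, and 20 μm radius are assumptions, and published methods (4,5,6) use far richer statistics:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical cells: coordinates in microns with phenotype labels.
xy = np.array([[0, 0], [5, 0], [0, 5], [50, 50]], float)
phen = np.array(["T", "Tumor", "T", "Tumor"])

tree = cKDTree(xy)
radius = 20.0                       # assumed micro-neighborhood radius (um)

# For each cell, count the phenotypes within its neighborhood
neighborhoods = []
for i in range(len(xy)):
    idx = tree.query_ball_point(xy[i], radius)
    idx = [j for j in idx if j != i]          # exclude the cell itself
    counts = {p: int((phen[idx] == p).sum()) for p in np.unique(phen)}
    neighborhoods.append(counts)
```

Clustering these per-cell composition vectors is one common route to the "cellular neighborhoods" described in the colorectal cancer study cited above (4).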

6. My lab does not have a bioinformatician. Will this limit our ability to analyze mIF/IHC data?

Not at all! Many software choices are user-friendly and yield datasets that can be handled in simple spreadsheets. Although scripting and complex data processing may benefit from a bioinformatician’s input in the long run, one is not needed to establish mIF/IHC capabilities. Online forums are also an excellent place to ask questions about starting a pipeline – but make sure to provide enough information about the problem to ensure an adequate response.

7. What should I look for when hiring new staff for image analysis?

Look for people who enjoy working with computers and networks, have good problem-solving and troubleshooting skills, and find Internet searches second nature. The ability to script and to visually interpret raw data is important. A collaborative spirit is essential because most image analysts are not experts in biology or statistics, and a willingness to seek support from experts helps fill knowledge gaps.

8. What software do I need to perform image analysis?

Multiplex IF/IHC image analysis can be performed using either commercial (e.g., InForm, HALO, Visiopharm) or open-source (e.g., QuPath, CellProfiler, ImageJ) software. Commercial software offers programming-free, built-in functionality and technical support, but at a high cost (often US$20,000–100,000). Open-source software is typically free and capable of a wide range of image analysis functions, including spectral unmixing, cell segmentation, and image stitching.

In general, commercial software offers straightforward analysis pipelines with training and support that may benefit users new to image analysis (or help with employee turnover). Open-source software offers greater flexibility and customization capabilities but may require a steeper learning curve to develop and refine a user workflow. Achieving custom functionality with open-source software may require additional programming by users, but community applications and extensions may already be available for a given analysis need.

9. What hardware do I need to perform image analysis?

Hardware needs depend on the number and size of images you routinely analyze and on your preferred software; mIF/IHC analysis software can often run on standard desktop or laptop setups. For simple whole-slide analysis, 16 GB of RAM is usually sufficient, whereas complex whole-slide analysis might require 32 GB or more, especially when running multiple threads or programs simultaneously. RAM can be a bottleneck for certain software processes because pixel classifiers and more complex analyses can require significant overhead.

Other processes, like cell detection, are less taxing. Processes that can be multi-threaded, such as cell detection, benefit from CPUs with increased core counts (and multiple CPUs). With sufficiently technical staff, much of the processing can be offloaded to cloud-based pipelines. Remember, however, that multi-gigabyte images still take time to transfer, both for processing and for visualization.
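The benefit of extra cores comes from the fact that many steps are independent per tile. A minimal sketch of tile-wise parallelism (the slide, tile size, and `count_cells` stand-in are invented; threads are shown for simplicity — a process pool or cluster is the next step for CPU-bound pure-Python work):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def count_cells(tile):
    # Stand-in for a real per-tile cell-detection routine
    return int((tile > 0.5).sum())

slide = np.zeros((4000, 4000))
slide[::100, ::100] = 1.0                 # 40 x 40 = 1600 mock "cells"

# Split the mock whole-slide image into 16 independent 1000 x 1000 tiles
tiles = [slide[y:y + 1000, x:x + 1000]
         for y in range(0, 4000, 1000)
         for x in range(0, 4000, 1000)]

# Each tile is processed independently, so the work parallelizes cleanly
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(count_cells, tiles))
```

The same tiling scheme is what makes cloud offloading attractive — and what makes the transfer cost of multi-gigabyte images the limiting factor.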

10. What are the concerns if a collaborator or contractor handles the image analysis? 

Ideally, the same laboratory will perform staining and image analysis to enable efficient troubleshooting. However, expert collaborators and shared resource cores are a viable avenue for acquiring and analyzing mIF/IHC data. In discussions with collaborators, take time to understand their image analysis workflow in detail over the course of several meetings.

In particular, pay attention to how cell subsets are classified as staining positively for a given marker. Are they thresholding to what you think is real staining? Review positive staining, negative staining, and background, and be sure to discuss what areas of the tissue are included or excluded in the analysis. Inspect the original segmentation results, not just summary documents and selected snapshots. Much can go wrong in an image analysis pipeline and contractors might not be as motivated as researchers to look for discrepancies. Ultimately, the researcher is responsible for the data once published.

11. Should I process my images before analyzing them or start with raw images?

The answer will depend upon the method used to generate the images. Certain data collection methods have significant crosstalk between channels that almost requires linear unmixing. Other types of data collection, such as imaging mass cytometry, can show crosstalk between closely associated isotopes but need no processing for well-separated channels. Some manual quality control is recommended; not all samples survive the staining and cover-slipping processes sufficiently intact. In some cases, stitching together many fields of view into a single file can allow for more complex spatial analysis and better visualization.
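Linear unmixing itself is a least-squares problem: the observed channel intensities are modeled as a known signature matrix times the unknown fluorophore abundances. A toy sketch (the three-channel, two-fluorophore signature matrix is an invented example — in practice it is measured from single-stain controls):

```python
import numpy as np

# Columns of S are the (assumed) emission signatures of two fluorophores
# across three detection channels, as measured from single-stain controls.
S = np.array([[0.9, 0.2],
              [0.1, 0.7],
              [0.0, 0.1]])

true_abund = np.array([[2.0], [3.0]])     # per-pixel fluorophore amounts
observed = S @ true_abund                 # mixed signal with crosstalk

# Linear unmixing: least-squares solve for the abundances at each pixel
unmixed, *_ = np.linalg.lstsq(S, observed, rcond=None)
```

Real pipelines solve this per pixel across the whole image (often with non-negativity constraints), but the algebra is exactly this.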

12. What red flags should we be wary of in our images?

A clear understanding of how the staining was performed and images generated is critical to quality image analysis. Spectral unmixing issues can cause significant spillover from one channel into another, causing false positive signals. Batch or slide variations in fluorescent intensities or background levels may require re-staining, re-imaging, or a slide-specific image analysis strategy. Staining issues to watch for include nonspecific staining and non-optimized staining sequences that lead to “umbrella effect” blocking of antigen availability. Finally, excluding tissue samples or areas of poor quality early in a pipeline will enable more accurate image analysis downstream.

13. How should I quality check my image analysis?

Quality control (QC) for mIF/IHC imaging is critical to ensure that reported data are accurate. Multiplex IF/IHC QC should address technical aspects of histology, preanalytical phase, staining, and imaging as well as pathology and immunology considerations. Some facets of QC can be conducted with image analysis software, whereas others require human observation. Briefly, folds, tears, dried areas of tissue, and gross fixation artifacts should be annotated and excluded from analysis (either manually or using a pixel classifier). Saturation (due to staining or imaging), out-of-focus areas, obvious blocking of one marker with a colocalized marker, spectral bleed, and clear staining gradients should be ameliorated by reimaging or excluded. Finally, unexpected staining patterns, marker localization, and multimarker phenotypes (markers that should not colocalize in the same cell) can disqualify individual slides or entire batches.
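Some of these checks are easy to automate. For example, saturation can be flagged by counting pixels at the detector ceiling — a minimal sketch, assuming 16-bit images and an arbitrary 0.5% tolerance:

```python
import numpy as np

def saturation_fraction(channel, max_value=65535):
    """Fraction of pixels at the detector ceiling (16-bit assumed)."""
    return float((channel >= max_value).mean())

# Mock channel with a 10 x 10 saturated patch in a 100 x 100 image
ch = np.full((100, 100), 1000, dtype=np.uint16)
ch[:10, :10] = 65535

frac = saturation_fraction(ch)            # 100 / 10000 pixels saturated
flagged = frac > 0.005                    # assumed QC tolerance of 0.5%
```

Running such checks on every image before analysis catches problems while re-imaging is still an option.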

Negative controls should contain only autofluorescence or nonspecific signal in each of the channels of interest. Positive controls, which should be run consistently with each batch, should be compared with the same positive control from preceding batches to verify that the peak, mean, and minimum intensities observed in each channel are comparable.
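That batch-to-batch comparison of positive controls can be as simple as a per-channel drift check. A sketch (the channel names, intensity values, and 20% tolerance are illustrative assumptions):

```python
# Hypothetical mean intensities per channel for the same positive-control
# slide, imaged at baseline and with the current batch.
baseline = {"DAPI": 1200.0, "CD3": 800.0}
current = {"DAPI": 1250.0, "CD3": 500.0}

tolerance = 0.20  # assumed: flag any channel drifting >20% from baseline

# True for any channel whose control intensity has drifted out of tolerance
drifted = {ch: abs(current[ch] - baseline[ch]) / baseline[ch] > tolerance
           for ch in baseline}
```

A drifted channel is the cue to check bulbs, optics, and antibody lots before trusting — or batching — the new data.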

14. How do I deal with batch variation?

Establish the image processing pipeline carefully so that it can be reproduced. Use tools that allow unbiased quantification (e.g., histograms and measurement maps). Use internal controls (e.g., tonsil staining or other tissue-specific control tissue) for guidance. Common technical variations in mIF/IHC studies arise from poor tissue quality, inconsistent staining processes, and instability in image acquisition. If these issues cannot be mitigated prior to image analysis, data normalization with z-scoring may be a solution. Otherwise, you may require batch-specific thresholding and cell classification.
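The z-scoring mentioned above simply standardizes intensities within each batch so that thresholds transfer across batches. A toy sketch with two batches separated by a level shift (the values are invented):

```python
import numpy as np

# Mock per-cell marker intensities from two batches with a level shift
batch = np.array([0, 0, 0, 1, 1, 1])
intensity = np.array([10.0, 12.0, 14.0, 110.0, 112.0, 114.0])

# Z-score within each batch: subtract the batch mean, divide by batch SD
z = np.empty_like(intensity)
for b in np.unique(batch):
    m = batch == b
    z[m] = (intensity[m] - intensity[m].mean()) / intensity[m].std()
```

After normalization, the two batches are directly comparable — but note that z-scoring assumes the underlying biology is similar across batches, so it cannot rescue data where the batches differ biologically.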

15. How do I deal with intra-patient variation?

Tissue microenvironments are often heterogeneous, so it is crucial to analyze multiple regions of interest (ROIs). Segmenting the tissue into areas such as cancer cell and stroma regions helps to spatially resolve the data. Good concordance across randomly selected ROIs supports the use of aggregate summary values (e.g., a mean cell density for each patient). Whether you analyze a whole-slide tissue section or a tissue microarray core, heterogeneity in microenvironment architecture is inevitable – and tolerable when the goal is to identify spatial immune biomarkers with robust clinical applicability.

16. My data results are incredibly complex, with dozens of markers. How do I make sense of the results?

The easiest way to reduce a project’s complexity is to perform an initial pilot study with dozens of markers and select the simplest, most efficient combination required for the study. Discuss the selection of the ideal set of markers with immunologists or pathologists and base decisions on unbiased dimensionality reduction approaches, such as principal component analysis of detected complex phenotypes.
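Principal component analysis can flag redundant markers during such a pilot. A minimal sketch via the SVD (the cell-by-marker matrix is simulated so that two markers are nearly collinear — an assumption made purely for illustration):

```python
import numpy as np

# Simulated cell-by-marker matrix: markers 0 and 1 are nearly redundant,
# marker 2 is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base,
               base * 0.95 + rng.normal(scale=0.1, size=(200, 1)),
               rng.normal(size=(200, 1))])

# PCA via SVD on the centered data; explained variance per component
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
```

A first component that dominates the variance (as here, thanks to the redundant pair) suggests the panel could be slimmed without losing information — a conclusion to confirm with immunologists or pathologists, not to act on blindly.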

Additionally, machine learning classifiers and/or unsupervised clustering (t-SNE/UMAP) can often identify complex cell phenotypes. Be aware and be flexible; some cells will not fit predetermined phenotypes! A user must carefully consider how to handle and validate these identified cell classes. To do so, carefully revisit the accuracy of the project’s cell segmentation and phenotyping and ensure that they are best optimized for your goals (e.g., approaches that might easily identify densely packed T cells in the lymph nodes might fail to identify large macrophages in tumor tissues).

17. The potential phenotypes and spatial metrics in mIF/IHC are numerous. How can I limit the report to only the most useful data?

The hypothesis should drive design of an mIF/IHC study, with phenotypes central to the research question defined beforehand. This approach prevents statistical power loss due to multiple comparison adjustments. Nonetheless, secondary exploratory analyses using machine-learned phenotypes that associate with certain patient or tissue characteristics can be important for discovering novel phenotypes (7). Variable selection (e.g., L1 regularization) or reduction (e.g., principal component analysis) methods can be used to subset phenotypes or distance metrics with potential clinical value (8, 9).

18. How much information about my analysis workflow should I share in a publication?

The entire script of any form of automation should be included in the supplementary data and, except in rare, problematic cases, representative images of the analysis should also be provided. Software increasingly facilitates such reporting, which is worth considering when selecting software. Some commercial software can make describing the analysis quite difficult, reducing the chances of anyone replicating or building on (and citing) your work.

19. Where can I find further information to help with image analysis and manuscripts?

Purchased software usually includes sufficient support and training for the initial phase. Software-specific websites, online classes, and lectures are fantastic ways to get started with a particular software package or to decide whether mIF/IHC is the right option for your project. Online forums are wonderful resources for open-source projects – but their usefulness is often a direct result of how much time the scientist puts into framing questions and providing sufficient information. Of course, several excellent reviews on multiplex imaging are also great places to start (10, 11, 12).

20. What online resources do you recommend?

With thanks to JEDI – the Council for Multiplex IHC/IF Global Standardization.


  1. F Pagès et al., “International validation of the consensus Immunoscore for the classification of colon cancer: a prognostic and accuracy study,” Lancet, 391, 2128 (2018). PMID: 29754777.
  2. E Gudlaugsson et al., “Comparison of the effect of different techniques for measurement of Ki67 proliferation on reproducibility and prognosis prediction accuracy in breast cancer,” Histopathology, 61, 1134 (2012). PMID: 22963617.
  3. S Berry et al., “Analysis of multispectral imaging with the AstroPath platform informs efficacy of PD-1 blockade,” Science, 372, eaba2609 (2021). PMID: 34112666. 
  4. CM Schürch et al., “Coordinated cellular neighborhoods orchestrate antitumoral immunity at the colorectal cancer invasive front,” Cell, 182, 1341 (2020). PMID: 32763154.
  5. A Rasmusson et al., “Immunogradient indicators for antitumor response assessment by automated tumor-stroma interface zone detection,” Am J Pathol, 190, 1309 (2020). PMID: 32194048. 
  6. C Gong et al., “Quantitative characterization of CD8+ T cell clustering and spatial heterogeneity in solid tumors,” Front Oncol, 8, 649 (2019). PMID: 30666298. 
  7. MC Lau et al., “Tumor-Immune Partitioning and Clustering (TIPC) algorithm reveals distinct signatures of tumor-immune cell interactions within the tumor microenvironment” (2020). Available at:
  8. S Barua et al., “Spatial interaction of tumor cells and regulatory T cells correlates with survival in non-small cell lung cancer,” Lung Cancer, 117, 73 (2018). PMID: 29409671.
  9. JL Carstens et al., “Spatial computation of intratumoral T cells correlates with survival of patients with pancreatic cancer,” Nat Commun, 8, 15095 (2017). PMID: 28447602.
  10. WCC Tan et al., “Overview of multiplex immunohistochemistry/immunofluorescence techniques in the era of cancer immunotherapy,” Cancer Commun (Lond), 40, 135 (2020). PMID: 32301585.
  11. TZ Tien et al., “Delineating the breast cancer immune microenvironment in the era of multiplex immunohistochemistry/immunofluorescence,” Histopathology, 79, 139 (2021). PMID: 33400265.
  12. A Viratham Pulsawatdi et al., “A robust multiplex immunofluorescence and digital pathology workflow for the characterisation of the tumour immune microenvironment,” Mol Oncol, 14, 2384 (2020). PMID: 32671911.
About the Authors
Michael S. Nelson

Biomedical Engineering PhD candidate in the Laboratory for Optical and Computational Instrumentation at the University of Wisconsin-Madison, Wisconsin, USA.

Shawn Jensen

Senior Scientist at Earle A. Chiles Research Institute, Portland Providence Medical Center, Portland, Oregon, USA.

Lau Mai Chan

Senior Research Fellow at the Institute of Molecular and Cell Biology, A*STAR, Singapore.

Michael Surace

Associate Director of Translational Medicine Oncology at AstraZeneca, Gaithersburg, Maryland, USA.

Trevor McKee

Director of Image Analysis at HistoWiz, Brooklyn, New York, USA.

Joe Yeong

Group Leader at the Institute of Molecular and Cell Biology, A*STAR, Singapore.

Colt Egelston

Assistant Research Professor in the Beckman Research Institute at City of Hope National Medical Center, Duarte, California, USA.
