This guide outlines the steps involved in setting up a new labeling project on the RedBrick platform.
- Team Roles and Responsibilities
- Labeling Taxonomy
- Creating a Project
- Importing Data
- Segmentation Workflow
- Exporting Data
- Appendix
The first step in starting a project in RedBrick is defining team roles and adding members.
The core team includes:
- Team Lead: Serves as an Internal Reviewer and also holds Organizational Admin status in order to set up new projects.
- Internal Reviewers: Serve as Project Admins, manage labeling stages, and conduct quality reviews.
- Labelers: Perform data annotation as directed by Internal Reviewers.
Additional roles may include:
- External Reviewers: Validate labels during the review stage.
- External Project Managers: Oversee tasks with full admin privileges.
In a typical project, these roles might be filled as follows:
- Labelers: Medical students and residents.
- Internal Reviewers: Residents.
- External Reviewers/Managers: Faculty or industry clients.
To add team members:
- Navigate to the Team tab.
- Select Invite Member.
- Assign appropriate roles:
- Project Members: Labelers and External Reviewers.
- Project Admins: Internal Reviewers and External Managers.
Taxonomies define the labeling schema in RedBrick AI, ensuring consistency across projects. It is necessary to define a taxonomy prior to creating a project.
- Label Types: Examples include Segmentation or Bounding Boxes.
- Label Names: The name of the object being labeled.
- Attributes: Add extra details using checkboxes (True/False), dropdown options, multiple choices, or text fields.
- Classifications: Attributes added to studies, series, or video frames to provide more information.
- Hints: Instructions visible when hovering over labels.
The following are examples of how attributes/classifications might be used in a project.
- Brain mass segmentation: Suspected intra-axial vs extra-axial location.
- Liver segmentation: Mass present vs absent.
- Lung nodule segmentation: Clear vs indeterminate margins.
- Study classification: Adequate vs poor quality.
To create a taxonomy:
- Navigate to the Taxonomies tab.
- Select New Taxonomy.
- Define labels, attributes, and hints.
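As a concrete illustration, the brain mass example above (suspected intra-axial vs extra-axial location, plus a study quality classification) might be organized as in the sketch below. This is a conceptual outline written as a Python dictionary; the field names are illustrative and do not reproduce RedBrick's exact taxonomy schema, so define the real taxonomy through the Taxonomies tab.

```python
# Conceptual sketch of a taxonomy for the brain mass example above.
# Field names are illustrative only -- they are not RedBrick's schema.
brain_mass_taxonomy = {
    "name": "Brain Mass Segmentation",
    "labels": [
        {
            "name": "Brain Mass",
            "label_type": "Segmentation",
            "attributes": [
                {
                    "name": "Suspected location",
                    "type": "dropdown",
                    "options": ["Intra-axial", "Extra-axial"],
                }
            ],
            "hint": "Segment the entire visible mass.",  # shown on hover
        }
    ],
    "classifications": [
        {
            "name": "Study quality",
            "type": "dropdown",
            "options": ["Adequate", "Poor"],
            "level": "study",  # could also apply at the series or frame level
        }
    ],
}
```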
A RedBrick project workflow can be customized according to team needs. The appendix of this document includes workflow recommendations for various project types.
- Pre-Review: Optional stage for evaluating data before labeling.
- The reviewer can remove studies of poor quality from the labeling pool.
- Pre-Label: Optional stage that allows studies to be given a preliminary label prior to the official labeling stage.
- Single Labeling: Each study is labeled once, by a single annotator.
- Multiple Labeling: Studies receive several labels, and Similarity Scores identify discrepancies for review.
- Single Output Labeling: Manual or automated data merge stage to select the best labels.
- Multiple Output Labeling: Includes all labels in the final dataset.
- Review Stage: Labels are validated post-labeling.
- Any number of review stages can be added in series, each reviewing all tasks or only a fraction of them.
- Internal reviews are recommended prior to finalizing labels.
To create a project:
- Press the + symbol next to the Home tab.
- Select New Project.
- Name the project and select the appropriate taxonomy.
- Add stages as needed. For basic projects, press + Add Review Stage to add a single review stage.
There are several options for importing data into RedBrick. For projects importing data from the UTSA AI consortium or uploading prelabeled data, an Items List (.json file) must be generated and uploaded using the RedBrick CLI (command line interface).
See Instructions for Various Cloud Storage Integrations
- Navigate to your Storage Account in the Azure portal and select Security + networking > Access keys.
- Copy a connection string.
- In RedBrick, select the Integrations tab and click + New storage method.
- Select the Azure Blob Storage type and enter the connection string and storage account name.
- Navigate to Storage Account > Settings > Resource sharing (CORS) and configure the CORS settings RedBrick requires (a programmatic sketch follows below).
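If you prefer to apply the CORS rule programmatically rather than through the portal, the sketch below uses the azure-storage-blob Python SDK. The allowed origin, methods, and headers shown are assumptions; confirm the exact values RedBrick requires in its storage integration documentation.

```python
# Sketch: apply a CORS rule to an Azure storage account with azure-storage-blob.
# The origin, methods, and headers below are assumptions -- check RedBrick's
# storage integration documentation for the required values.
from azure.storage.blob import BlobServiceClient, CorsRule

connection_string = "<your-connection-string>"  # from Access keys in the portal
client = BlobServiceClient.from_connection_string(connection_string)

rule = CorsRule(
    allowed_origins=["https://app.redbrickai.com"],  # assumed origin
    allowed_methods=["GET", "PUT"],                  # assumed methods
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=3600,
)
client.set_service_properties(cors=[rule])
```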
The Items List is a .json file that contains:
- A path to each item being imported.
- Metadata to organize items into tasks.
Example Path:
container/folder/item.dcm
To include annotations, specify:
- A path to each segmentation file.
- A segment map associating segments with taxonomy items.
Segmentation files must be in NIfTI format.
See Examples of Item Lists for Importing Segmentations
- In the Integrations tab, click … for your storage method and select Verify Storage Method.
- Create a script to generate the items.json file for your data.
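A minimal sketch of such a script is shown below. The study names, paths, and segment map are hypothetical, and the key names follow the general shape described above; compare against the Item List examples linked earlier before uploading.

```python
# Sketch: build an items list (items.json) for import into RedBrick.
# Paths, names, and key names here are illustrative -- verify against the
# Item List examples linked above.
import json

# Hypothetical studies already present in the storage container.
studies = [
    {
        "name": "study-001",
        "image": "container/study-001/image.dcm",
        "segmentation": "container/study-001/seg.nii.gz",
    },
    {
        "name": "study-002",
        "image": "container/study-002/image.dcm",
        "segmentation": "container/study-002/seg.nii.gz",
    },
]

items = []
for study in studies:
    items.append(
        {
            "name": study["name"],                     # groups files into one task
            "items": [study["image"]],                 # path(s) to the item(s)
            "segmentations": [study["segmentation"]],  # NIfTI segmentation file
            "segmentMap": {"1": "Liver"},              # segment index -> taxonomy label
        }
    )

with open("items.json", "w") as f:
    json.dump(items, f, indent=2)
```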
Use RedBrick’s command line interface (CLI) to import or export data from a project. Think of the CLI as a delivery service: after installing it, set up a "drop-off point" (a local directory) linked to your project. To upload or download data, navigate to this directory and run RedBrick commands—everything will be stored there.
- Create a virtual environment and install redbrick SDK/CLI.
python -m venv venv && source ./venv/bin/activate && pip install -U redbrick-sdk
- Generate an API key under the Integrations tab.
- Copy the Org ID from the RedBrick URL.
https://app.redbrickai.com/<org_id>/projects/<project_id>
- Add CLI credentials.
redbrick config
- Create a directory for your project.
mkdir new-project && cd new-project
- Type
redbrick clone
to pull up a list of existing projects. Select the project you would like to associate with your local directory.
- Type
redbrick info
to verify your current directory.
- Copy the Storage ID for your storage method from the Integrations tab.
- Upload the items list, substituting your own Storage ID in the command below.
redbrick upload items.json --storage STORAGEID
The team lead is responsible for assigning tasks. By default, tasks in the Labeling stage are auto-assigned to team members. Any member with organizational or project admin status can select any number of studies and reassign them or manually change their stage. Additionally, prior to beginning a labeling task, it is recommended that teams have well-defined labeling criteria. The Test Batch method described in the appendix is recommended for testing and refining labeling criteria to avoid labeler confusion.
- Ensure documented labeling criteria.
- Monitor labeling progress.
- As studies pass the Labeling stage, assign Review tasks to designated team members.
The following short video playlists are recommended viewing prior to beginning a labeling task.
See Overview of the Labeling Process (Video)
See Segmentation Toolkit Overview (Video)
- Review labeling criteria and segmentation toolkit videos.
- Complete assigned labeling tasks.
- For difficult or confusing labels, include comments.
Just like importing data, exporting segmentations in RedBrick uses the command line interface. RedBrick allows labels to be exported alone or together with their studies, DICOM studies to be converted to NIfTI on export, and exports to be limited to specific stages.
- Navigate to local project directory.
- Run
redbrick export
to export all tasks.
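For scripted pipelines, the export can also be driven from the Python SDK instead of the CLI. The sketch below is only an outline; the method names and parameters are assumptions, so verify them against the redbrick-sdk documentation.

```python
# Sketch: export tasks through the Python SDK rather than the CLI.
# Method names and parameters are assumptions -- verify against the
# redbrick-sdk documentation.
import redbrick

project = redbrick.get_project(
    org_id="<org_id>",          # from the RedBrick URL
    project_id="<project_id>",  # from the RedBrick URL
    api_key="<api_key>",        # generated under the Integrations tab
)

# Export annotations for all tasks in the project.
tasks = list(project.export.export_tasks())
print(f"Exported {len(tasks)} tasks")
```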
Workflows tailored to team sizes and project types:
- Rapid Labeling: Best for small teams or rapid prototyping.
- Test Batch: Ideal for large teams or gold-standard labeling.
- Tiered Labeling: Recommended for segmentation tasks, with multiple review stages.
- Consensus Labeling: Effective for less time-intensive tasks (e.g., bounding boxes).
Designed for solo labelers or small teams aiming to label datasets quickly. This method uses single labeling, with optional internal or external review for quality assurance.
- Add team members.
- Define the labeling taxonomy.
- Create a project.
- Set up a single-labeling workflow, adding internal/external reviews if needed.
Used to train large teams and establish gold-standard instructions. Labelers complete an initial set of 10 studies each, with consensus-based quality checks.
- Add team members and define the labeling taxonomy.
- Create a project and enable multiple labeling with a manual single output.
- Set the minimum number of labelers to 2.
- Assign a dataset of 5x the number of labelers (e.g., with 4 labelers and a minimum of 2 labels per study, 20 studies works out to 10 studies per labeler).
- After labeling, calculate similarity scores to identify inconsistencies and refine instructions.
Optimized for segmentation tasks, where each study undergoes multiple review stages to ensure accuracy.
- Use the predefined taxonomy to create a project.
- Configure 2 review stages: internal (by experienced team members) and external (by a faculty reviewer).
- Assign labelers to the labeling stage and reviewers to respective review stages.
- Labels pass through internal review for corrections or feedback, followed by external review for final approval.
- Export finalized labels as ground truth.
Suitable for tasks like bounding boxes or landmarks, this approach uses multiple annotators to improve accuracy.
- Use the predefined taxonomy to create a project.
- Select a consensus approach with multiple labels, single output, and manual review. Set a minimum of 2 labelers.
- Add a review stage titled “external review.”
- An experienced labeler conducts a manual review, selecting the best label and making necessary edits.
- Use similarity scores to identify challenging cases.
- Finalized labels undergo external review and are exported as ground truth.
For clinical validation, follow the gold-standard workflow, with model-generated labels included in the dataset. Labelers remain blinded to model labels. As labeling progresses, similarity scores compare model-generated and gold-standard labels to evaluate performance.