Python SDK#
The SDK is best for writing Python scripts to interact with your RedBrick AI organization & projects. The SDK offers granular functions for programmatically manipulating data, importing annotations, assigning tasks, and more.
RedBrick#
RedBrick AI Python SDK.
- class redbrick.StorageMethod#
Storage method integration for organizations.
PUBLIC
- Access files from a public cloud storage service using their absolute URLs (i.e. files available publicly).
REDBRICK
- Access files stored on RedBrick AI’s servers (i.e. files uploaded directly to RBAI from a local machine).
ALTA_DB
- Access files stored on AltaDB.
Storage methods:#
- redbrick.StorageMethod.Public (Public)
- redbrick.StorageMethod.RedBrick (RedBrick)
- redbrick.StorageMethod.AWSS3 (AWSS3)
- redbrick.StorageMethod.GoogleCloud (GoogleCloud)
- redbrick.StorageMethod.AzureBlob (AzureBlob)
- redbrick.StorageMethod.AltaDB (AltaDB)
- class Public#
Public storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – redbrick.StorageMethod.PUBLIC
name (str) – "Public"
details (redbrick.StorageMethod.Public.Details) – Public storage method details.
- class RedBrick#
RedBrick storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – redbrick.StorageMethod.REDBRICK
name (str) – "Direct Upload"
details (redbrick.StorageMethod.RedBrick.Details) – RedBrick storage method details.
- class AWSS3(storage_id, name, details)#
AWS S3 storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – AWS S3 storage id.
name (str) – AWS S3 storage name.
details (redbrick.StorageMethod.AWSS3.Details) – AWS S3 storage method details.
- class Details(bucket, region, transfer_acceleration=False, endpoint=None, access_key_id=None, secret_access_key=None, role_arn=None, role_external_id=None, session_duration=3600)#
AWS S3 storage provider details.
- Variables:
bucket (str) – AWS S3 bucket.
region (str) – AWS S3 region.
transfer_acceleration (bool) – AWS S3 transfer acceleration.
endpoint (str) – Custom endpoint (For S3 compatible storage, e.g. MinIO).
access_key_id (str) – AWS access key id.
secret_access_key (str) – AWS secret access key. (Will be None in output for security reasons)
role_arn (str) – AWS assume_role ARN. (For short-lived credentials instead of access keys)
role_external_id (str) – AWS assume_role external id. (Will be None in output for security reasons)
session_duration (int) – AWS S3 assume_role session duration.
- property key: str#
AWS S3 storage provider details key.
- abstract to_entity()#
Get entity from object.
- Return type:
Dict[str, Any]
- validate(check_secrets=False)#
Validate AWS S3 storage provider details.
- Return type:
None
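For illustration, a minimal sketch of constructing and validating these details; the bucket, region, and role ARN below are placeholders. The constructed provider can then be registered through org.storage.create_storage (see the Storage section below).
from redbrick import StorageMethod

# Describe the bucket; all values below are placeholders.
details = StorageMethod.AWSS3.Details(
    bucket="my-imaging-bucket",
    region="us-east-1",
    role_arn="arn:aws:iam::123456789012:role/redbrick-access",  # assume_role instead of access keys
)
details.validate()  # raises if the configuration is invalid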
- class GoogleCloud(storage_id, name, details)#
Google cloud storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – Google cloud storage id.
name (str) – Google cloud storage name.
details (redbrick.StorageMethod.GoogleCloud.Details) – Google cloud storage method details.
- class Details(bucket, service_account_json=None)#
Google cloud storage provider details.
- Variables:
bucket (str) – GCS bucket.
service_account_json (str) – GCS service account JSON. (Will be None in output for security reasons)
- property key: str#
Google cloud storage provider details key.
- abstract to_entity()#
Get entity from object.
- Return type:
Dict[str, Any]
- validate(check_secrets=False)#
Validate Google cloud storage provider details.
- Return type:
None
- class AzureBlob(storage_id, name, details)#
Azure blob storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – Azure blob storage id.
name (str) – Azure blob storage name.
details (redbrick.StorageMethod.AzureBlob.Details) – Azure blob storage method details.
- class Details(connection_string=None, sas_url=None)#
Azure blob storage provider details.
- Variables:
connection_string (str) – Azure connection string. (Will be None in output for security reasons)
sas_url (str) – Azure Shared Access Signature URL for granular blob access. (Will be None in output for security reasons)
- property key: str#
Azure blob storage provider details key.
- abstract to_entity()#
Get entity from object.
- Return type:
Dict[str, Any]
- validate(check_secrets=False)#
Validate Azure blob storage provider details.
- Return type:
None
- class AltaDB#
AltaDB storage provider (Sub class of StorageProvider).
- Variables:
storage_id (str) – redbrick.StorageMethod.ALTA_DB
name (str) – "Alta DB"
details (redbrick.StorageMethod.AltaDB.Details) – AltaDB storage method details.
- class redbrick.StorageProvider(storage_id, name, details)#
Base storage provider.
Sub-classes:#
- redbrick.StorageMethod.Public (Public)
- redbrick.StorageMethod.RedBrick (RedBrick)
- redbrick.StorageMethod.AWSS3 (AWSS3)
- redbrick.StorageMethod.GoogleCloud (GoogleCloud)
- redbrick.StorageMethod.AzureBlob (AzureBlob)
- redbrick.StorageMethod.AltaDB (AltaDB)
- class Details#
Storage details.
- abstract property key: str#
Storage provider details key.
- abstract to_entity()#
Get entity from object.
- Return type:
Dict[str, Any]
- abstract validate(check_secrets=False)#
Validate storage provider details.
- Return type:
None
- classmethod from_entity(entity)#
Get object from entity.
- Return type:
StorageProvider
- class redbrick.ImportTypes(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerates the supported data import types.
Please see the supported data types and file extensions in our documentation.
- class redbrick.TaskEventTypes(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerate the different types of task events.
TASK_CREATED
- A new task has been created.
TASK_SUBMITTED
- A task has been submitted for review.
TASK_ACCEPTED
- A submitted task has been accepted in review.
TASK_REJECTED
- A submitted task has been rejected in review.
TASK_CORRECTED
- A submitted task has been corrected in review.
TASK_ASSIGNED
- A task has been assigned to a worker.
TASK_REASSIGNED
- A task has been reassigned to another worker.
TASK_UNASSIGNED
- A task has been unassigned from a worker.
TASK_SKIPPED
- A task has been skipped by a worker.
TASK_SAVED
- A task has been saved but not yet submitted.
GROUNDTRUTH_TASK_EDITED
- A ground truth task has been edited.
CONSENSUS_COMPUTED
- The consensus for a task has been computed.
COMMENT_ADDED
- A comment has been added to a task.
CONSENSUS_TASK_EDITED
- A consensus task has been edited.
- class redbrick.TaskFilters(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerate the different task filters.
ALL
- All tasks.
GROUNDTRUTH
- Ground truth tasks only.
UNASSIGNED
- Tasks that have not yet been assigned to a worker.
QUEUED
- Tasks that are queued for labeling/review.
DRAFT
- Tasks that have been saved as draft.
SKIPPED
- Tasks that have been skipped by a worker.
COMPLETED
- Tasks that have been completed successfully.
FAILED
- Tasks that have been rejected in review.
ISSUES
- Tasks that have issues raised and cannot be completed.
- class redbrick.TaskStates(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Task Status.
UNASSIGNED
- The Task has not been assigned to a Project Admin or Member.
ASSIGNED
- The Task has been assigned to a Project Admin or Member, but work has not begun on it.
IN_PROGRESS
- The Task is currently being worked on by a Project Admin or Member.
COMPLETED
- The Task has been completed successfully.
PROBLEM
- A Project Admin or Member has raised an Issue regarding the Task, and work cannot continue until the Issue is resolved by a Project Admin.
SKIPPED
- The Task has been skipped.
STAGED
- The Task has been saved as a Draft.
- class redbrick.Stage(stage_name, config)#
Base stage.
Sub-classes:#
- redbrick.LabelStage (LabelStage)
- redbrick.ReviewStage (ReviewStage)
- redbrick.ModelStage (ModelStage)
- class Config#
Stage config.
- abstract classmethod from_entity(entity=None, taxonomy=None)#
Get object from entity.
- Return type:
Stage.Config
- abstract to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- abstract to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- class redbrick.LabelStage(stage_name, config=<factory>, on_submit=True)#
Label Stage (Sub class of Stage).
- Variables:
stage_name (str) – Label stage name.
on_submit (Union[bool, str]) – The next stage for the task when submitted in current stage. If True (default), the task will go to ground truth. If False, the task will be archived.
config (redbrick.LabelStage.Config) – Label stage config.
- class Config(auto_assignment=None, auto_assignment_queue_size=None, show_uploaded_annotations=None, read_only_labels_edit_access=None, is_pre_label=None, is_consensus_label=None)#
Label Stage Config.
- Parameters:
auto_assignment (Optional[bool]) – Enable task auto assignment. (Default: True)
auto_assignment_queue_size (Optional[int]) – Task auto-assignment queue size. (Default: 5)
show_uploaded_annotations (Optional[bool]) – Show uploaded annotations to users. (Default: True)
read_only_labels_edit_access (Optional[ProjectMember.Role]) – Access level to change the read only labels. (Default: None)
is_pre_label (Optional[bool]) – Is pre-labeling stage. (Default: False)
is_consensus_label (Optional[bool]) – Is consensus-labeling stage. (Default: False)
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- classmethod from_entity(entity, taxonomy=None)#
Get object from entity.
- Return type:
LabelStage
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
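For example, a minimal sketch of a label stage with auto-assignment disabled; the stage name is a placeholder.
from redbrick import LabelStage

stage = LabelStage(
    stage_name="Label",
    config=LabelStage.Config(auto_assignment=False),
)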
- class redbrick.ReviewStage(stage_name, config=<factory>, on_accept=True, on_reject=False)#
Review Stage (Sub class of Stage).
- Variables:
stage_name (str) – Review stage name.
on_accept (Union[bool, str]) – The next stage for the task when accepted in current stage. If True (default), the task will go to ground truth. If False, the task will be archived.
on_reject (Union[bool, str]) – The next stage for the task when rejected in current stage. If True, the task will go to ground truth. If False (default), the task will be archived.
config (redbrick.ReviewStage.Config) – Review stage config.
- class Config(review_percentage=None, auto_assignment=None, auto_assignment_queue_size=None, read_only_labels_edit_access=None, is_pre_review=None, is_consensus_merge=None)#
Review Stage Config.
- Parameters:
review_percentage (Optional[float]) – Percentage of tasks in [0, 1] that will be sampled for review. (Default: 1)
auto_assignment (Optional[bool]) – Enable task auto assignment. (Default: True)
auto_assignment_queue_size (Optional[int]) – Task auto-assignment queue size. (Default: 5)
read_only_labels_edit_access (Optional[ProjectMember.Role]) – Access level to change the read only labels. (Default: None)
is_pre_review (Optional[bool]) – Is pre-review stage. (Default: False)
is_consensus_merge (Optional[bool]) – Is consensus-merge (V2) stage. (Default: False)
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- classmethod from_entity(entity, taxonomy=None)#
Get object from entity.
- Return type:
ReviewStage
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- class redbrick.ModelStage(stage_name, config=<factory>, on_submit=True)#
Model Stage (Sub class of Stage).
- Variables:
stage_name (str) – Model stage name.
on_submit (Union[bool, str]) – The next stage for the task when submitted in current stage. If True (default), the task will go to ground truth. If False, the task will be archived.
config (redbrick.ModelStage.Config) – Model stage config.
- class ModelTaxonomyMap#
Model taxonomy map.
- Parameters:
modelCategory (str) – Model category name.
rbCategory (str) – Category name as it appears in the RedBrick project’s taxonomy.
- class MONAIConfig(batch_count=None, target_spacing=None, roi_size=None, max_epochs=None, early_stop_patience=None, training_threshold_count=None)#
MONAI config.
- Parameters:
batch_count (Optional[int]) – Number of in-progress tasks.
target_spacing (List[float]) – Target spacing of images.
roi_size (List[int]) – ROI size.
max_epochs (int) – Maximum number of epochs.
early_stop_patience (int) – Early stop patience.
training_threshold_count (int) – Training threshold count.
- classmethod from_entity(config=None)#
Get object from entity.
- Return type:
ModelStage.MONAIConfig
- to_entity()#
Get entity from object.
- Return type:
Dict
- class Config(name, version=None, app=None, url=None, taxonomy_objects=None, monai=None)#
Model Stage Config.
- Parameters:
name (str) – Model name.
version (Optional[str]) – Model version.
app (Optional[str]) – App name.
url (Optional[str]) – URL for self-hosted model.
taxonomy_objects (Optional[List[ModelStage.ModelTaxonomyMap]]) – Mapping of model classes to project’s taxonomy objects.
monai (Optional[ModelStage.MONAIConfig] = None) – MONAI config.
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- classmethod from_entity(entity, taxonomy=None)#
Get object from entity.
- Return type:
ModelStage
- to_entity(taxonomy=None)#
Get entity from object.
- Return type:
Dict
- class redbrick.OrgMember(user_id, email, given_name, family_name, role, tags, is_2fa_enabled, is_active, last_active=None, sso_provider=None)#
Organization Member.
- Parameters:
user_id (str) – User ID.
email (str) – User email.
given_name (str) – User given name.
family_name (str) – User family name.
role (OrgMember.Role) – User role in organization.
tags (List[str]) – Tags associated with the user.
is_2fa_enabled (bool) – Whether 2FA is enabled for the user.
is_active (bool) – Whether the user is active.
last_active (Optional[datetime] = None) – Last time the user was active.
sso_provider (Optional[str] = None) – User identity SSO provider.
- class Role(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerate access levels for Organization.
OWNER
- Organization Owner
ADMIN
- Organization Admin
MEMBER
- Organization Member
- class redbrick.OrgInvite(email, role, sso_provider=None, status=Status.PENDING)#
Organization Invite.
- Parameters:
email (str) – User email.
role (OrgMember.Role) – User role in organization.
sso_provider (Optional[str] = None) – User identity SSO provider.
status (OrgInvite.Status = OrgInvite.Status.PENDING) – Invite status.
- class Status(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerate invite status.
PENDING
- Pending invitation
ACCEPTED
- Accepted invitation
DECLINED
- Declined invitation
- to_entity()#
Get entity from object.
- Return type:
Dict
- class redbrick.ProjectMember(member_id, role, stages=None, org_membership=None)#
Project Member.
- Parameters:
member_id (str) – Unique user ID or email.
role (ProjectMember.Role) – User role in project.
stages (Optional[List[str]] = None) – Stages that the member has access to (Applicable for MEMBER role).
org_membership (Optional[OrgMember] = None) – Organization membership. This is not required when adding/updating a member.
- class Role(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)#
Enumerate access levels for Project.
ADMIN
- Project Admin
MANAGER
- Project Manager
MEMBER
- Project Member (Labeler/Reviewer)
- classmethod from_entity(member)#
Get object from entity.
- Return type:
ProjectMember
- redbrick.get_org(org_id, api_key, url='https://api.redbrickai.com')#
Get an existing RedBrick organization object.
The organization object allows you to interact with your organization and perform high-level actions like creating a project.
>>> org = redbrick.get_org(org_id, api_key)
- Parameters:
org_id (str) – Your organization's unique id https://app.redbrickai.com/[org_id]
api_key (str) – Your secret api_key, which can be created on the RedBrick AI platform.
url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com
- Return type:
RBOrganization
- redbrick.get_workspace(org_id, workspace_id, api_key, url='https://api.redbrickai.com')#
Get an existing RedBrick workspace object.
Workspace objects allow you to interact with your RedBrick AI workspaces, and perform actions like importing data, exporting data etc.
>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key)
- Parameters:
org_id (str) – Your organization's unique id https://app.redbrickai.com/[org_id]
workspace_id (str) – Your workspace's unique id https://app.redbrickai.com/[org_id]/workspaces/[workspace_id]
api_key (str) – Your secret api_key, which can be created on the RedBrick AI platform.
url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com
- Return type:
RBWorkspace
- redbrick.get_project(org_id, project_id, api_key, url='https://api.redbrickai.com')#
Get an existing RedBrick project object.
Project objects allow you to interact with your RedBrick AI projects, and perform actions like importing data, exporting data etc.
>>> project = redbrick.get_project(org_id, project_id, api_key)
- Parameters:
org_id (str) – Your organization's unique id https://app.redbrickai.com/[org_id]
project_id (str) – Your project's unique id https://app.redbrickai.com/[org_id]/projects/[project_id]
api_key (str) – Your secret api_key, which can be created on the RedBrick AI platform.
url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com
- Return type:
RBProject
- redbrick.get_org_from_profile(profile_name=None)#
Get the organization object from the profile name in the credentials file.
>>> org = get_org_from_profile()
- Parameters:
profile_name (str) – Name of the profile stored in the credentials file
- Return type:
RBOrganization
- redbrick.get_project_from_profile(project_id=None, profile_name=None)#
Get the RBProject object using the credentials file.
project = get_project_from_profile()
- Parameters:
project_id (Optional[str] = None) – Project ID to be fetched. None is valid only when called from within a project directory.
profile_name (str) – Name of the profile stored in the credentials file
- Return type:
RBProject
- redbrick.get_dataset(org_id, dataset_name, api_key, url='https://api.redbrickai.com')#
Get an existing RedBrick dataset object.
Dataset objects allow you to interact with your RedBrick AI datasets, and perform actions like importing data, exporting data etc.
>>> dataset = redbrick.get_dataset(org_id, dataset_name, api_key)
- Parameters:
org_id (str) – Your organization's unique id https://app.redbrickai.com/[org_id]
dataset_name (str) – Your dataset's unique name https://app.redbrickai.com/[org_id]/datasets/[dataset_name]
api_key (str) – Your secret api_key, which can be created on the RedBrick AI platform.
url (str = DEFAULT_URL) – Should default to https://api.redbrickai.com
- Return type:
RBDataset
- redbrick.get_dataset_from_profile(dataset_name, profile_name=None)#
Get the RBDataset object using the credentials file.
dataset = get_dataset_from_profile(dataset_name)
- Parameters:
dataset_name (str) – Dataset name to be fetched.
profile_name (str) – Name of the profile stored in the credentials file
- Return type:
RBDataset
- redbrick.get_workspace_from_profile(workspace_id, profile_name=None)#
Get the RBWorkspace object using the credentials file.
workspace = get_workspace_from_profile(workspace_id)
- Parameters:
workspace_id (str) – Workspace ID to be fetched.
profile_name (str) – Name of the profile stored in the credentials file
- Return type:
RBWorkspace
Organization#
- class redbrick.RBOrganization#
Bases:
ABC
Representation of a RedBrick organization.
The redbrick.RBOrganization object allows you to programmatically interact with your RedBrick organization. This class provides methods for querying your organization and performing other high-level actions. Retrieve the organization object in the following way:
>>> org = redbrick.get_org(org_id="", api_key="")
- Variables:
team (redbrick.common.member.Team) – Organization team management.
storage (redbrick.common.storage.Storage) – Organization storage methods integration.
- abstract property org_id: str#
Retrieve the unique org_id of this organization.
- abstract property name: str#
Retrieve the unique name of this organization.
- abstract taxonomies(only_name=True, concurrency=10)#
Get a list of taxonomy names/objects in the organization.
- Return type:
Union[List[str], List[Taxonomy]]
- abstract workspaces_raw()#
Get a list of active workspaces as raw objects in the organization.
- Return type:
List[Dict]
- abstract projects_raw()#
Get a list of active projects as raw objects in the organization.
- Return type:
List[Dict]
- abstract projects()#
Get a list of active projects in the organization.
- Return type:
List[RBProject]
- abstract create_dataset(dataset_name)#
Create a new dataset.
- Return type:
Dict
- abstract delete_dataset(dataset_name)#
Delete a dataset.
- Return type:
bool
- abstract create_workspace(name, exists_okay=False)#
Create a workspace within the organization.
This method creates a workspace in a similar fashion to the quickstart on the RedBrick AI create workspace page.
- Parameters:
name (str) – A unique name for your workspace
exists_okay (bool = False) – Allow workspaces with the same name to be returned instead of trying to create a new workspace. Useful for when running the same script repeatedly when you do not want to keep creating new workspaces.
- Returns:
A RedBrick Workspace object.
- Return type:
RBWorkspace
- abstract create_project_advanced(name, taxonomy_name, stages, exists_okay=False, workspace_id=None, sibling_tasks=None, consensus_settings=None)#
Create a project within the organization.
This method creates a project in a similar fashion to the quickstart on the RedBrick AI create project page.
- Parameters:
name (str) – A unique name for your project
taxonomy_name (str) – The name of the taxonomy you want to use for this project. Taxonomies can be found on the left side bar of the platform.
stages (List[Stage]) – List of stage configs.
exists_okay (bool = False) – Allow projects with the same name to be returned instead of trying to create a new project. Useful for when running the same script repeatedly when you do not want to keep creating new projects.
workspace_id (Optional[str] = None) – The id of the workspace that you want to add this project to.
sibling_tasks (Optional[int] = None) – Number of tasks created for each uploaded datapoint.
consensus_settings (Optional[Dict[str, Any]] = None) – Consensus settings for the project. It has keys:
- minAnnotations: int
- autoAcceptThreshold?: float (range [0, 1])
- Returns:
A RedBrick Project object.
- Return type:
RBProject
- Raises:
ValueError: – If a project with the same name exists but has a different type or taxonomy.
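For example, a minimal sketch of a two-stage pipeline where rejected reviews are sent back for relabeling; all names here are placeholders.
import redbrick
from redbrick import LabelStage, ReviewStage

org = redbrick.get_org(org_id="...", api_key="...")
project = org.create_project_advanced(
    name="CT Chest Annotation",
    taxonomy_name="Chest Taxonomy",
    stages=[
        LabelStage(stage_name="Label"),
        ReviewStage(stage_name="Review_1", on_reject="Label"),  # rejected tasks return to "Label"
    ],
    exists_okay=True,
)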
- abstract create_project(name, taxonomy_name, reviews=0, exists_okay=False, workspace_id=None, sibling_tasks=None, consensus_settings=None)#
Create a project within the organization.
This method creates a project in a similar fashion to the quickstart on the RedBrick AI create project page.
- Parameters:
name (str) – A unique name for your project
taxonomy_name (str) – The name of the taxonomy you want to use for this project. Taxonomies can be found on the left side bar of the platform.
reviews (int = 0) – The number of review stages that you want to add after the label stage.
exists_okay (bool = False) – Allow projects with the same name to be returned instead of trying to create a new project. Useful for when running the same script repeatedly when you do not want to keep creating new projects.
workspace_id (Optional[str] = None) – The id of the workspace that you want to add this project to.
sibling_tasks (Optional[int] = None) – Number of tasks created for each uploaded datapoint.
consensus_settings (Optional[Dict[str, Any]] = None) – Consensus settings for the project. It has keys:
- minAnnotations: int
- autoAcceptThreshold?: float (range [0, 1])
- Returns:
A RedBrick Project object.
- Return type:
RBProject
- Raises:
ValueError: – If a project with the same name exists but has a different type or taxonomy.
- abstract archive_project(project_id)#
Archive a project by ID.
- Return type:
bool
- abstract unarchive_project(project_id)#
Unarchive a project by ID.
- Return type:
bool
- abstract delete_project(project_id)#
Delete a project by ID.
- Return type:
bool
- abstract delete_projects(project_ids)#
Delete a list of projects by ID.
- Return type:
None
- abstract labeling_time(start_date, end_date, concurrency=50)#
Get information of tasks labeled between two dates (both inclusive).
- Return type:
List[Dict]
- abstract create_taxonomy(name, study_classify=None, series_classify=None, instance_classify=None, object_types=None)#
Create a Taxonomy V2.
- Parameters:
name (str) – Unique identifier for the taxonomy.
study_classify (Optional[List[Attribute]]) – Study level classification applies to the task.
series_classify (Optional[List[Attribute]]) – Series level classification applies to a single series within a task.
instance_classify (Optional[List[Attribute]]) – Instance classification applies to a single frame (video) or slice (3D volume).
object_types (Optional[List[ObjectType]]) – Object types are used to annotate features/objects in tasks, for example, segmentation or bounding boxes.
- Raises:
ValueError: – If there are validation errors.
- Return type:
None
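A hedged sketch of a create_taxonomy call; the ObjectType dict shape used here is an assumption and should be checked against the taxonomy format reference (https://sdk.redbrickai.com/formats/taxonomy.html), and the names are placeholders.
org = redbrick.get_org(org_id="...", api_key="...")
org.create_taxonomy(
    name="Chest Taxonomy",
    object_types=[
        # Assumed ObjectType shape; verify against the format reference.
        {"category": "Lung", "classId": 0, "labelType": "SEGMENTATION"},
    ],
)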
- abstract get_taxonomy(name=None, tax_id=None)#
Get a taxonomy created in your organization based on id or name.
Format reference for categories and attributes objects: https://sdk.redbrickai.com/formats/taxonomy.html
- Return type:
Taxonomy
- abstract update_taxonomy(tax_id, study_classify=None, series_classify=None, instance_classify=None, object_types=None)#
Update the categories/attributes of Taxonomy (V2) in the organization.
Format reference for categories and attributes objects: https://sdk.redbrickai.com/formats/taxonomy.html
- Raises:
ValueError: – If there are validation errors.
- Return type:
None
- abstract delete_taxonomy(name=None, tax_id=None)#
Delete a taxonomy by name or ID.
- Return type:
bool
- abstract delete_taxonomies(tax_ids)#
Delete a list of taxonomies by ID.
- Return type:
None
Team#
- class redbrick.common.member.Team#
Bases:
ABC
Abstract interface to Team module.
- abstract get_member(member_id)#
Get a team member.
org = redbrick.get_org(org_id, api_key)
member = org.team.get_member(member_id)
- Parameters:
member_id (str) – Unique member userId or email.
- Return type:
OrgMember
- abstract list_members(active=True)#
Get a list of all organization members.
org = redbrick.get_org(org_id, api_key)
members = org.team.list_members()
- Parameters:
active (bool) – Only return active members if True, else return all members.
- Return type:
List[OrgMember]
- abstract disable_members(member_ids)#
Disable organization members.
org = redbrick.get_org(org_id, api_key)
org.team.disable_members(member_ids)
- Parameters:
member_ids (List[str]) – Unique member ids (userId or email).
- Return type:
None
- abstract enable_members(member_ids)#
Enable organization members.
org = redbrick.get_org(org_id, api_key)
org.team.enable_members(member_ids)
- Parameters:
member_ids (List[str]) – Unique member ids (userId or email).
- Return type:
None
- abstract list_invites()#
Get a list of all pending or declined invites.
org = redbrick.get_org(org_id, api_key)
members = org.team.list_invites()
- Return type:
List[OrgInvite]
- abstract invite_user(invitation)#
Invite a user to the organization.
org = redbrick.get_org(org_id, api_key)
invitation = org.team.invite_user(OrgInvite(email="...", role=OrgMember.Role.MEMBER))
Storage#
- class redbrick.common.storage.Storage#
Bases:
ABC
Storage Method Controller.
- abstract get_storage(storage_id)#
Get a storage method by ID.
- Return type:
StorageProvider
- abstract list_storages()#
Get a list of storage methods in the organization.
- Return type:
List[StorageProvider]
- abstract create_storage(storage)#
Create a storage method.
- Return type:
StorageProvider
- abstract update_storage(storage_id, details)#
Update a storage method.
- Return type:
StorageProvider
- abstract delete_storage(storage_id)#
Delete a storage method.
- Return type:
bool
- abstract verify_storage(storage_id, path)#
Verify a storage method by ID.
- Return type:
bool
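For example, a minimal sketch of inspecting an organization's storage methods and verifying one against a sample file; the URL is a placeholder.
import redbrick

org = redbrick.get_org(org_id="...", api_key="...")
for storage in org.storage.list_storages():
    print(storage.storage_id, storage.name)

# Check that a sample file is reachable through public storage.
ok = org.storage.verify_storage(redbrick.StorageMethod.PUBLIC, "https://example.com/image.png")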
Dataset#
- class redbrick.RBDataset#
Bases:
ABC
Abstract interface to RBDataset.
- Variables:
upload (redbrick.common.upload.DatasetUpload) – Upload data to dataset.
export (redbrick.common.export.DatasetExport) – Dataset data export.
>>> dataset = redbrick.get_dataset(org_id="", dataset_name="", api_key="")
- abstract property org_id: str#
Read only property.
Retrieves the unique Organization UUID that this dataset belongs to.
- abstract property dataset_name: str#
Read only name property.
Retrieves the dataset name.
DatasetUpload#
DatasetExport#
- class redbrick.common.export.DatasetExport#
Bases:
ABC
Primary interface for various export methods.
The export module has many functions for exporting annotations and meta-data from datasets. The export module is available from the redbrick.RBDataset module.
>>> dataset = redbrick.get_dataset(api_key="", org_id="", dataset_name="")
>>> dataset.export # Export
- abstract get_data_store_series(*, search=None, page_size=30)#
Get data store series.
- Return type:
Iterator[Dict[str, str]]
- abstract export_to_files(path, page_size=30, number=None, search=None)#
Export dataset to folder.
- Parameters:
path (str) – Path to the folder where the dataset will be saved.
page_size (int) – Number of series to export in parallel.
number (int) – Number of series to export in total.
search (str) – Search string to filter the series to export.
- Return type:
None
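For example, a minimal sketch that exports the first few series of a dataset to a local folder; the names and destination are placeholders.
import redbrick

dataset = redbrick.get_dataset(org_id="...", dataset_name="...", api_key="...")
dataset.export.export_to_files("./exported-series", number=10)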
Workspace#
- class redbrick.RBWorkspace#
Bases:
ABC
Interface for interacting with your RedBrick AI Workspaces.
- abstract property org_id: str#
Read only property.
Retrieves the unique Organization UUID that this workspace belongs to.
- abstract property workspace_id: str#
Read only property.
Retrieves the unique Workspace ID UUID.
- abstract property name: str#
Read only name property.
Retrieves the workspace name.
- abstract property metadata_schema: List[Dict]#
Retrieves the workspace metadata schema.
- abstract property classification_schema: List[Dict]#
Retrieves the workspace classification schema.
- abstract property cohorts: List[Dict]#
Retrieves the workspace cohorts.
- abstract update_schema(metadata_schema=None, classification_schema=None)#
Update workspace metadata and classification schema.
- Return type:
None
- abstract update_cohorts(cohorts)#
Update workspace cohorts.
- Return type:
None
- abstract get_datapoints(*, concurrency=10)#
Get datapoints in a workspace.
- Return type:
Iterator[Dict]
- abstract archive_datapoints(dp_ids)#
Archive datapoints.
- Return type:
None
- abstract unarchive_datapoints(dp_ids)#
Unarchive datapoints.
- Return type:
None
- abstract add_datapoints_to_cohort(cohort_name, dp_ids)#
Add datapoints to a cohort.
- Return type:
None
- abstract remove_datapoints_from_cohort(cohort_name, dp_ids)#
Remove datapoints from a cohort.
- Return type:
None
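For example, a minimal sketch of cohort management; the cohort name and datapoint ids are placeholders.
workspace = redbrick.get_workspace(org_id, workspace_id, api_key)
workspace.add_datapoints_to_cohort("train-set", ["dp-id-1", "dp-id-2"])
workspace.remove_datapoints_from_cohort("train-set", ["dp-id-2"])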
- abstract update_datapoint_attributes(dp_id, attributes)#
Update datapoint attributes.
- Return type:
None
- abstract add_datapoints_to_projects(project_ids, dp_ids, is_ground_truth=False)#
Add datapoints to project.
- Parameters:
project_ids (List[str]) – The projects in which you’d like to add the given datapoints.
dp_ids (List[str]) – List of datapoints that need to be added to projects.
is_ground_truth (bool = False) – Whether to create tasks directly in ground truth stage.
- Return type:
None
- abstract create_datapoints(storage_id, points, *, concurrency=50)#
Create datapoints in workspace.
Upload data to your workspace (without annotations). Please visit our documentation to understand the format for points.
workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
points = [
    {
        "name": "...",
        "series": [
            {
                "items": "...",
            }
        ]
    }
]
workspace.create_datapoints(storage_id, points)
- Parameters:
storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to rbai, use redbrick.StorageMethod.REDBRICK.
points (List[InputTask]) – Please see the RedBrick AI reference documentation for an overview of the format: https://sdk.redbrickai.com/formats/index.html#import. Fields with annotation information are not supported in workspace.
concurrency (int = 50) –
- Returns:
List of datapoint objects with key response if successful, else error
- Return type:
List[Dict]
Note
1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.
2. When doing direct upload, i.e. redbrick.StorageMethod.REDBRICK, if you didn’t specify a “name” field in your datapoints object, we will assign the “items” path to it.
- abstract update_datapoints_metadata(storage_id, points)#
Update datapoints metadata.
Update metadata for datapoints in workspace.
workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
points = [
    {
        "dpId": "...",
        "metaData": {
            "property": "value",
        }
    }
]
workspace.update_datapoints_metadata(storage_id, points)
- Parameters:
storage_id (str) – Storage method where the datapoints are stored.
points (List[InputTask]) – List of datapoints with dpId and metaData values.
- Return type:
None
- abstract delete_datapoints(dp_ids, concurrency=50)#
Delete workspace datapoints based on ids.
>>> workspace = redbrick.get_workspace(org_id, workspace_id, api_key, url)
>>> workspace.delete_datapoints([...])
- Parameters:
dp_ids (List[str]) – List of datapoint ids to delete.
concurrency (int = 50) – The number of datapoints to delete at a time. We recommend keeping this less than or equal to 50.
- Returns:
True if successful, else False.
- Return type:
bool
- abstract import_from_dataset(dataset_name, *, import_id=None, series_ids=None, group_by_study=False)#
Import tasks from a dataset for a given import_id or list of series_ids.
- Parameters:
dataset_name (str) – The name of the dataset to import from.
import_id (Optional[str] = None) – The import id of the dataset to import from.
series_ids (Optional[List[str]] = None) – The series ids to import from the dataset.
group_by_study (bool = False) – Whether to group the tasks by study.
- Return type:
None
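For example, a minimal sketch of importing a dataset's series into the workspace grouped by study; the dataset name and import id are placeholders.
workspace = redbrick.get_workspace(org_id, workspace_id, api_key)
workspace.import_from_dataset("chest-ct", import_id="...", group_by_study=True)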
Project#
- class redbrick.RBProject#
Bases:
ABC
Abstract interface to RBProject.
- Variables:
upload (redbrick.common.upload.Upload) – Upload data to project.
labeling (redbrick.common.labeling.Labeling) – Labeling activities.
review (redbrick.common.labeling.Labeling) – Review activities.
export (redbrick.common.export.Export) – Project data export.
settings (redbrick.common.settings.Settings) – Project settings management.
workforce (redbrick.common.member.Workforce) – Project workforce management.
>>> project = redbrick.get_project(org_id="", project_id="", api_key="")
- abstract property org_id: str#
Read only property.
Retrieves the unique Organization UUID that this project belongs to.
- abstract property project_id: str#
Read only property.
Retrieves the unique Project ID UUID.
- abstract property name: str#
Read only name property.
Retrieves the project name.
- abstract property url: str#
Read only property.
Retrieves the project URL.
- abstract property taxonomy_name: str#
Read only taxonomy_name property.
Retrieves the taxonomy name.
- abstract property workspace_id: str | None#
Read only workspace_id property.
Retrieves the workspace id.
- abstract property label_storage: Tuple[str, str]#
Read only label_storage property.
Retrieves the label storage id and path.
- abstract property created_at: datetime#
Get creation time of project.
- abstract property updated_at: datetime#
Get last updated time of project.
- abstract set_label_storage(storage_id, path)#
Set label storage method for a project.
By default, all annotations get stored in RedBrick AI’s storage, i.e. redbrick.StorageMethod.REDBRICK. Set a custom external storage, within which RedBrick AI will write all annotations.
>>> project = redbrick.get_project(org_id, project_id, api_key)
>>> project.set_label_storage(storage_id, path)
- Parameters:
storage_id (str) – The unique ID of your RedBrick AI storage method integration. Found on the storage method tab on the left sidebar.
path (str) – A prefix path within which the annotations will be written.
- Returns:
Returns [storage_id, path]
- Return type:
Tuple[str, str]
Important
You only need to run this command once per project.
- Raises:
ValueError: – If there are validation errors.
- abstract update_stage(stage)#
Update stage.
- Return type:
None
Export#
- class redbrick.common.export.Export#
Bases:
ABC
Primary interface for various export methods.
The export module has many functions for exporting annotations and meta-data from projects. The export module is available from the redbrick.RBProject module.
>>> project = redbrick.get_project(api_key="", org_id="", project_id="")
>>> project.export # Export
- abstract export_tasks(*, concurrency=10, only_ground_truth=False, stage_name=None, task_id=None, from_timestamp=None, old_format=False, without_masks=False, without_json=False, semantic_mask=False, binary_mask=None, no_consensus=None, with_files=False, dicom_to_nifti=False, png=False, rt_struct=False, dicom_seg=False, mhd=False, destination=None)#
Export annotation data.
Meta-data and category information are returned as an object. Segmentations are written to your disk in NIfTI-1 format. Please visit our documentation for more information on the format.
>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.export.export_tasks()
- Parameters:
concurrency (int = 10) –
only_ground_truth (bool = False) – If set to True, will only return data that has been completed in your workflow. If False, will export latest state.
stage_name (Optional[str] = None) – If set, will only export tasks that are currently in the given stage.
task_id (Optional[str] = None) – If the unique task_id is mentioned, only a single datapoint will be exported.
from_timestamp (Optional[float] = None) – If the timestamp is mentioned, will only export tasks that were labeled/updated since the given timestamp. Format - output from datetime.timestamp()
old_format (bool = False) – Whether to export tasks in old format.
without_masks (bool = False) – Exports only tasks JSON without downloading any segmentation masks. Note: This is not recommended for tasks with overlapping labels.
without_json (bool = False) – Doesn’t create the tasks JSON file.
semantic_mask (bool = False) – Whether to export all segmentations as semantic_mask. This will create one instance per class. If this is set to True and a task has multiple instances per class, then attributes belonging to each instance will not be exported.
binary_mask (Optional[bool] = None) – Whether to export all segmentations as binary masks. This will create one segmentation file per instance. If this is set to None and a task has overlapping labels, then binary_mask option will be True for that particular task.
no_consensus (Optional[bool] = None) – Whether to export tasks without consensus info. If None, will default to export with consensus info, if it is enabled for the given project. (Applicable only for new format export)
with_files (bool = False) – Export with files (e.g. images/video frames)
dicom_to_nifti (bool = False) – Convert DICOM images to NIfTI. Applicable when with_files is True.
png (bool = False) – Export labels as PNG masks.
rt_struct (bool = False) – Export labels as DICOM RT-Struct. (Only for DICOM images)
dicom_seg (bool = False) – Export labels as DICOM Segmentation. (Only for DICOM images)
mhd (bool = False) – Export segmentation masks in MHD format.
destination (Optional[str] = None) – Destination directory (Default: current directory)
- Returns:
Datapoint and labels in RedBrick AI format. See https://sdk.redbrickai.com/formats/index.html#export
- Return type:
Iterator[OutputTask]
Note
If both semantic_mask and binary_mask options are True, then one binary mask will be generated per class.
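For example, a minimal sketch that exports only completed work to a local folder; it assumes each yielded task dict carries a taskId key, per the export format reference.
project = redbrick.get_project(org_id, project_id, api_key)
for task in project.export.export_tasks(only_ground_truth=True, destination="./export"):
    print(task["taskId"])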
- abstract list_tasks(*, concurrency=10, limit=50, search=None, stage_name=None, user_id=None, task_id=None, task_name=None, exact_match=False, completed_at=None)#
Search tasks based on multiple queries for a project. This function returns minimal meta-data about the queried tasks.
>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> result = project.export.list_tasks()
- Parameters:
concurrency (int = 10) – The number of requests that will be made in parallel.
limit (Optional[int] = 50) – The number of tasks to return. Use None to return all tasks matching the search query.
search (Optional[TaskFilters] = None) – Task filter type. (Default: TaskFilters.ALL)
stage_name (Optional[str] = None) – If present, will return tasks that are:
- Available in stage_name: if search == TaskFilters.QUEUED
- Completed in stage_name: if search == TaskFilters.COMPLETED
user_id (Optional[str] = None) – User id/email. If present, will return tasks that are:
- Assigned to user_id: if search == TaskFilters.QUEUED
- Completed by user_id: if search == TaskFilters.COMPLETED
task_id (Optional[str] = None) – If present, will return data for the given task id.
task_name (Optional[str] = None) – If present, will return data for the given task name. This will do a prefix search with the given task name.
exact_match (bool = False) – Applicable when searching for tasks by task_name. If True, will do a full match instead of partial match.
completed_at (Optional[Tuple[Optional[float], Optional[float]]] = None) – If present, will return tasks that were completed in the given time range. The tuple contains the from and to timestamps respectively.
- Returns:
>>> [{
    "taskId": str,
    "name": str,
    "createdAt": str,
    "storageId": str,
    "updatedAt": str,
    "currentStageName": str,
    "createdBy"?: {"userId": str, "email": str},
    "priority"?: float([0, 1]),
    "metaData"?: dict,
    "series"?: [{"name"?: str, "metaData"?: dict}],
    "assignees"?: [{
        "user": str,
        "status": TaskStates,
        "assignedAt": datetime,
        "lastSavedAt"?: datetime,
        "completedAt"?: datetime,
        "timeSpentMs"?: float,
    }]
}]
- Return type:
Iterator[Dict]
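For example, a minimal sketch that lists the queued tasks assigned to one user; the email is a placeholder.
from redbrick import TaskFilters

project = redbrick.get_project(org_id, project_id, api_key)
for task in project.export.list_tasks(
    search=TaskFilters.QUEUED,
    user_id="annotator@example.com",
    limit=None,  # return all matches
):
    print(task["taskId"], task["currentStageName"])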
- abstract get_task_events(*, task_id=None, only_ground_truth=True, concurrency=10, from_timestamp=None, with_labels=False)#
Generate an audit log of all actions performed on tasks.
Use this method to get a detailed summary of all the actions performed on your tasks, including:
Who uploaded the data
Who annotated your tasks
Who reviewed your tasks
and more.
This can be particularly useful to present to auditors who are interested in your quality control workflows.
- Parameters:
task_id (Optional[str] = None) – If set, returns events only for the given task.
only_ground_truth (bool = True) – If set to True, will return events for tasks that have been completed in your workflow.
concurrency (int = 10) – The number of requests that will be made in parallel.
from_timestamp (Optional[float] = None) – If the timestamp is mentioned, will only export tasks that were labeled/updated since the given timestamp. Format - output from datetime.timestamp()
with_labels (bool = False) – Get metadata of labels submitted in each stage.
- Returns:
>>> [{
    "taskId": str,
    "currentStageName": str,
    "events": List[Dict]
}]
- Return type:
Iterator[Dict]
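For example, a minimal sketch of a simple audit summary, assuming the return shape shown above.
project = redbrick.get_project(org_id, project_id, api_key)
for log in project.export.get_task_events(only_ground_truth=True):
    print(log["taskId"], log["currentStageName"], len(log["events"]))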
- abstract get_active_time(*, stage_name, task_id=None, concurrency=100)#
Get active time spent on tasks for labeling/reviewing.
- Parameters:
stage_name (str) – Stage for which to return the time info.
task_id (Optional[str] = None) – If set, will return info for the given task in the given stage.
concurrency (int = 100) – Request batch size.
- Returns:
>>> [{
    "orgId": string,
    "projectId": string,
    "stageName": string,
    "taskId": string,
    "completedBy": string,
    "timeSpent": number,  # In milliseconds
    "completedAt": datetime,
    "cycle": number  # Task cycle
}]
- Return type:
Iterator[Dict]
Upload#
- class redbrick.common.upload.Upload#
Bases:
ABC
Primary interface for uploading to a project.
>>> project = redbrick.get_project(api_key="", org_id="", project_id="")
>>> project.upload
- abstract create_datapoints(storage_id, points, *, is_ground_truth=False, segmentation_mapping=None, rt_struct=False, dicom_seg=False, mhd=False, label_storage_id=None, label_validate=False, prune_segmentations=False, concurrency=50)#
Create datapoints in project.
Upload data, and optionally annotations, to your project. Please visit our documentation to understand the format for points.
project = redbrick.get_project(org_id, project_id, api_key, url)
points = [
    {
        "name": "...",
        "series": [
            {
                "items": "...",
                # These fields are needed for importing segmentations.
                "segmentations": "...",
                "segmentMap": {...}
            }
        ]
    }
]
project.upload.create_datapoints(storage_id, points)
- Parameters:
storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to rbai, use redbrick.StorageMethod.REDBRICK.
points (List[InputTask]) – Please see the RedBrick AI reference documentation for an overview of the format: https://sdk.redbrickai.com/formats/index.html#import. All the fields with annotation information are optional.
is_ground_truth (bool = False) – If labels are provided in points, and this parameter is set to True, the labels will be added to the Ground Truth stage.
segmentation_mapping (Optional[Dict] = None) – Optional mapping of semantic_mask segmentation class ids and RedBrick categories.
rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.
dicom_seg (bool = False) – Upload segmentations from DICOM Segmentation files.
mhd (bool = False) – Upload segmentations from MHD files.
label_storage_id (Optional[str] = None) – Optional label storage id to reference nifti segmentations. Defaults to items storage_id if not specified.
label_validate (bool = False) – Validate label nifti instances and segment map.
prune_segmentations (bool = False) – Prune segmentations that are not part of the series.
concurrency (int = 50) –
- Returns:
List of task objects with key response if successful, else error
- Return type:
List[Dict]
Note
1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.
2. When doing direct upload, i.e. redbrick.StorageMethod.REDBRICK, if you didn’t specify a “name” field in your datapoints object, we will assign the “items” path to it.
- abstract archive_tasks(task_ids, concurrency=50)#
Archive project tasks based on task ids.
>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.upload.archive_tasks([...])
- Parameters:
task_ids (List[str]) – List of task ids to archive.
concurrency (int = 50) – The number of tasks to archive at a time. We recommend keeping this less than or equal to 50.
- Returns:
True if successful, else False.
- Return type:
bool
- abstract delete_tasks(task_ids, concurrency=50)#
Delete project tasks based on task ids.
>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.upload.delete_tasks([...])
- Parameters:
task_ids (List[str]) – List of task ids to delete.
concurrency (int = 50) – The number of tasks to delete at a time. We recommend keeping this less than or equal to 50.
- Returns:
True if successful, else False.
- Return type:
bool
- abstract delete_tasks_by_name(task_names, concurrency=50)#
Delete project tasks based on task names.
>>> project = redbrick.get_project(org_id, project_id, api_key, url)
>>> project.upload.delete_tasks_by_name([...])
- Parameters:
task_names (List[str]) – List of task names to delete.
concurrency (int = 50) – The number of tasks to delete at a time. We recommend keeping this less than or equal to 50.
- Returns:
True if successful, else False.
- Return type:
bool
- abstract update_task_items(storage_id, points, concurrency=50, append=False)#
Update task items, meta data, heat maps, transforms, etc. for the mentioned task ids.
project = redbrick.get_project(org_id, project_id, api_key, url)
points = [
    {
        "taskId": "...",
        "series": [
            {
                "items": "...",
            }
        ]
    }
]
project.upload.update_task_items(storage_id, points)
- Parameters:
storage_id (str) – Your RedBrick AI external storage_id. This can be found under the Storage Tab on the RedBrick AI platform. To directly upload images to rbai, use redbrick.StorageMethod.REDBRICK.
points (List[InputTask]) – List of objects with taskId and series, where series contains a list of items paths to be updated for the task.
concurrency (int = 50) –
append (bool = False) – If True, the series will be appended to the existing series. If False, the series will replace the existing series.
- Returns:
List of task objects with key response if successful, else error
- Return type:
List[Dict]
Note
1. If doing direct upload, please use redbrick.StorageMethod.REDBRICK as the storage id. Your items path must be a valid path to a locally stored image.
- abstract import_tasks_from_workspace(source_project_id, task_ids, with_labels=False)#
Import tasks from another project in the same workspace.
project = redbrick.get_project(org_id, project_id, api_key, url)
project.upload.import_tasks_from_workspace(source_project_id, task_ids)
- Parameters:
source_project_id (str) – The source project id from which tasks are to be imported.
task_ids (List[str]) – List of task ids to be imported.
with_labels (bool = False) – If True, the labels will also be imported.
- Return type:
None
- abstract update_tasks_priority(tasks, concurrency=50)#
Update tasks’ priorities. Used to determine how the tasks get assigned to annotators/reviewers in auto-assignment.
- Parameters:
tasks (List[Dict]) – List of taskIds and their priorities. - [{"taskId": str, "priority": float([0, 1]), "user"?: str}]
concurrency (int = 50) – The number of tasks to update at a time. We recommend keeping this less than or equal to 50.
- Return type:
None
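For example, a minimal sketch; the task id is a placeholder.
project = redbrick.get_project(org_id, project_id, api_key)
project.upload.update_tasks_priority([{"taskId": "...", "priority": 0.9}])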
- abstract update_tasks_labels(tasks, *, rt_struct=False, dicom_seg=False, mhd=False, label_storage_id='22222222-2222-2222-2222-222222222222', label_validate=False, prune_segmentations=False, concurrency=50, finalize=False, time_spent_ms=None, extra_data=None)#
Update tasks labels at any point in project pipeline.
project = redbrick.get_project(...)
tasks = [
    {
        "taskId": "...",
        "series": [{...}]
    },
]
# Overwrite labels in tasks
project.upload.update_tasks_labels(tasks)
- Parameters:
tasks (List[OutputTask]) – Please see the RedBrick AI reference documentation for an overview of the format: https://sdk.redbrickai.com/formats/index.html#export. All the fields with annotation information are optional.
rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.
dicom_seg (bool = False) – Upload segmentations from DICOM Segmentation files.
mhd (bool = False) – Upload segmentations from MHD files.
label_storage_id (Optional[str] = None) – Optional label storage id to reference nifti segmentations. Defaults to project annotation storage_id if not specified.
label_validate (bool = False) – Validate label nifti instances and segment map.
prune_segmentations (bool = False) – Prune segmentations that are not part of the series.
concurrency (int = 50) –
finalize (bool = False) – Submit the task in current stage.
time_spent_ms (Optional[int] = None) – Time spent on the task in milliseconds.
extra_data (Optional[Dict] = None) – Extra data to be stored along with the task.
- Return type:
None
- abstract send_tasks_to_stage(task_ids, stage_name, concurrency=50)#
Send tasks to different stage.
- Parameters:
task_ids (List[str]) – List of tasks to move.
stage_name (str) – The stage to which you want to move the tasks. Use “END” to move tasks to ground truth.
concurrency (int = 50) – Batch size per request.
- Returns:
True if successful, else False.
- Return type:
bool
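For example, a minimal sketch that pushes a batch of tasks straight to ground truth; the task ids are placeholders.
project = redbrick.get_project(org_id, project_id, api_key)
project.upload.send_tasks_to_stage(["task-id-1", "task-id-2"], "END")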
- abstract import_from_dataset(dataset_name, *, import_id=None, series_ids=None, group_by_study=False, is_ground_truth=False)#
Import tasks from a dataset for a given import_id or list of series_ids.
- Parameters:
dataset_name (str) – The name of the dataset to import from.
import_id (Optional[str] = None) – The import id of the dataset to import from.
series_ids (Optional[List[str]] = None) – The series ids to import from the dataset.
group_by_study (bool = False) – Whether to group the tasks by study.
is_ground_truth (bool = False) – Whether to import the tasks as ground truth.
- Return type:
None
- abstract create_comment(task_id, text_comment, reply_to_comment_id=None, comment_pin=None, label_id=None)#
Create a task comment.
- Parameters:
task_id (str) – The task id.
text_comment (str) – The comment to create.
reply_to_comment_id (Optional[str] = None) – The comment id to reply to.
comment_pin (Optional[CommentPin] = None) – The pin to add to the comment.
label_id (Optional[str] = None) – Label ID for entity-level comments.
- Returns:
The comment object.
- Return type:
Dict
Labeling#
- class redbrick.common.labeling.Labeling#
Bases:
ABC
Perform programmatic labeling and review tasks.
The Labeling class allows you to programmatically submit tasks. This can be useful when you want to take bulk actions, e.g. accepting several tasks, or automate actions like using automated methods for review.
Information
The Labeling module provides several methods to query tasks and assign tasks to different users. Refer to this section for guidance on when to use each method:
- assign_tasks: Use this method when you already have the task_ids you want to assign to a particular user. If you don’t have the task_ids, you can query the tasks using list_tasks.
- abstract put_tasks(stage_name, tasks, *, finalize=True, existing_labels=False, rt_struct=False, dicom_seg=False, mhd=False, review_result=None, review_comment=None, label_storage_id='22222222-2222-2222-2222-222222222222', label_validate=False, prune_segmentations=False, concurrency=50)#
Put tasks with new labels or a review result.
Use this method to programmatically submit tasks with labels in a Label stage, or to programmatically accept/reject/correct tasks in a Review stage. If you don’t already have a list of task_id, you can use list_tasks to get a filtered list of tasks in your project that you want to work on.
project = redbrick.get_project(...)
tasks = [
    {
        "taskId": "...",
        "series": [{...}]
    },
]
# Submit tasks with new labels
project.labeling.put_tasks("Label", tasks)
# Save tasks with new labels, without submitting
project.labeling.put_tasks("Label", tasks, finalize=False)
# Submit tasks with existing labels
project.labeling.put_tasks("Label", [{"taskId": "..."}], existing_labels=True)

project = redbrick.get_project(...)
# Set review_result to True if you want to accept the tasks
project.review.put_tasks("Review_1", [{"taskId": "..."}], review_result=True)
# Set review_result to False if you want to reject the tasks
project.review.put_tasks("Review_1", [{"taskId": "..."}], review_result=False)
# Add labels if you want to accept the tasks with correction
project.review.put_tasks("Review_1", [{"taskId": "...", "series": [{...}]}])
- Parameters:
stage_name (str) – The stage to which you want to submit the tasks. This must be the same stage as which you called get_tasks on.
tasks (List[OutputTask]) – Tasks with new labels or review result.
finalize (bool = True) – Finalize the task. If you want to save the task without submitting, set this to False.
existing_labels (bool = False) – If True, the tasks will be submitted with their existing labels. Applies only to Label stage.
rt_struct (bool = False) – Upload segmentations from DICOM RT-Struct files.
dicom_seg (bool = False) – Upload segmentations from DICOM Segmentation files.
mhd (bool = False) – Upload segmentations from MHD files.
review_result (Optional[bool] = None) – Accepts or rejects the task based on the boolean value. Applies only to Review stage.
review_comment (Optional[Comment] = None) – Comment for the review result. Applies only to Review stage.
label_storage_id (Optional[str] = None) – Optional label storage id to reference external nifti segmentations. Defaults to project settings’ annotation storage_id if not specified.
label_validate (bool = False) – Validate label nifti instances and segment map.
prune_segmentations (bool = False) – Prune segmentations that are not part of the series.
concurrency (int = 50) –
- Returns:
A list of tasks that failed.
- Return type:
List[OutputTask]
- abstract assign_tasks(task_ids, *, email=None, emails=None, refresh=True)#
Assign tasks to specified email or current API key.
Unassigns all users from the task if neither email nor current_user is set.
>>> project = redbrick.get_project(org_id, project_id, api_key)
>>> project.labeling.assign_tasks([task_id], email=email)
- Parameters:
task_ids (List[str]) – List of unique task_id of the tasks you want to assign.
email (Optional[str] = None) – The email of the user you want to assign this task to. Make sure the user has adequate permissions to be assigned this task in the project.
emails (Optional[List[str]] = None) – Used for projects with Consensus activated. The emails of the users you want to assign this task to. Make sure the users have adequate permissions to be assigned this task in the project.
refresh (bool = True) – Used for projects with Consensus activated. If True, will overwrite the assignment to the current users.
- Returns:
List of affected tasks.
>>> [{"taskId", "name", "stageName"}]
- Return type:
List[Dict]
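Following the guidance above, a common pattern is to query queued tasks with list_tasks and assign them in bulk; a minimal sketch, where the email is a placeholder.
from redbrick import TaskFilters

project = redbrick.get_project(org_id, project_id, api_key)
queued = project.export.list_tasks(search=TaskFilters.QUEUED, limit=20)
task_ids = [task["taskId"] for task in queued]
project.labeling.assign_tasks(task_ids, email="annotator@example.com")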
- abstract move_tasks_to_start(task_ids)#
Move groundtruth tasks back to start.
- Return type:
None
Settings#
- class redbrick.common.settings.Settings#
Bases:
ABC
Abstract interface to Settings module.
- abstract property label_validation: LabelValidation#
Label Validation.
Use custom label validation to prevent annotation errors in real-time. Please visit label validation for more info.
Format: {"enabled": bool, "enforce": bool, "script": str}
project = redbrick.get_project(org_id, project_id, api_key, url)
label_validation = project.settings.label_validation

project = redbrick.get_project(org_id, project_id, api_key, url)
project.settings.label_validation = label_validation
- abstract property hanging_protocol: HangingProtocol#
Hanging Protocol.
Use hanging protocol to define the visual layout of the tool. Please visit hanging protocol for more info.
Format: {"enabled": bool, "script": str}
project = redbrick.get_project(org_id, project_id, api_key, url)
hanging_protocol = project.settings.hanging_protocol

project = redbrick.get_project(org_id, project_id, api_key, url)
project.settings.hanging_protocol = hanging_protocol
- abstract property webhook: Webhook#
Project webhook.
Use webhooks to receive custom events like tasks entering stages, and many more.
Format: {"enabled": bool, "url": str}
project = redbrick.get_project(org_id, project_id, api_key, url)
webhook = project.settings.webhook

project = redbrick.get_project(org_id, project_id, api_key, url)
project.settings.webhook = webhook
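For example, a minimal sketch that enables a webhook using the format above; the URL is a placeholder.
project = redbrick.get_project(org_id, project_id, api_key, url)
project.settings.webhook = {"enabled": True, "url": "https://example.com/redbrick-events"}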
- abstract toggle_reference_standard_task(task_id, enable)#
Toggle reference standard task.
- Return type:
None
- abstract property task_duplication: int | None#
Sibling task count.
Use task duplication to create multiple tasks for a single uploaded datapoint. Please visit task duplication for more info.
Format: Optional[int]
project = redbrick.get_project(org_id, project_id, api_key, url)
count = project.settings.task_duplication

project = redbrick.get_project(org_id, project_id, api_key, url)
project.settings.task_duplication = count
Workforce#
- class redbrick.common.member.Workforce#
Bases:
ABC
Abstract interface to Workforce module.
- abstract get_member(member_id)#
Get a project member.
project = redbrick.get_project(org_id, project_id, api_key)
member = project.workforce.get_member(member_id)
- Parameters:
member_id (str) – Unique member userId or email.
- Return type:
ProjectMember
- abstract list_members()#
Get a list of all project members.
project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.list_members()
- Return type:
List[ProjectMember]
- abstract add_members(members)#
Add project members.
project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.add_members([{"member_id": "...", "role": "...", "stages": ["..."]}, ...])
- Parameters:
members (List[ProjectMember]) – List of members to add.
- Returns:
List of added project members.
- Return type:
List[ProjectMember]
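Equivalently, a minimal sketch using the ProjectMember dataclass documented above; the email and stage name are placeholders.
from redbrick import ProjectMember

project.workforce.add_members([
    ProjectMember(
        member_id="annotator@example.com",
        role=ProjectMember.Role.MEMBER,
        stages=["Label"],
    ),
])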
- abstract update_members(members)#
Update project members.
project = redbrick.get_project(org_id, project_id, api_key)
members = project.workforce.update_members([{"member_id": "...", "role": "...", "stages": ["..."]}, ...])
- Parameters:
members (List[ProjectMember]) – List of members to update.
- Returns:
List of updated project members.
- Return type:
List[ProjectMember]
- abstract remove_members(member_ids)#
Remove project members.
project = redbrick.get_project(org_id, project_id, api_key)
project.workforce.remove_members([...])
- Parameters:
member_ids (List[str]) – List of member ids (user_id/email) to remove from the project.
- Return type:
None