

Clinical experience plays a key role in the performance of medical professionals. We therefore conjecture that a Clinical Experience Sharing (CES) platform, i.e. a searchable knowledge base of collective clinical experience accessible to a large community of medical professionals, would be of great practical value in clinical practice as well as in medical education. Such a CES platform would comprise a multi-modal medical case database, incorporate a Content-Based Case Retrieval (CBCR) engine, and be specialized for different domains.

Project CaReRa aims to develop such a CES platform for the domain of liver cases. During the course of the project, multi-modal case data will be collected, anonymized, and stored in a structured database; CBCR technologies will be developed; and experiments assessing the platform's impact on the clinical workflow as well as on medical education will be designed and conducted.

Two typical use cases are:
  • Given a difficult case to diagnose, a medical expert may retrieve similar past cases and review their data (image and non-image), diagnoses, and follow-up information for comparative decision making. This is a way of sharing medical experience within a community, which would improve individual performance.
  • A medical student or a resident doctor may retrieve seemingly similar yet actually different cases and thus highlight the differences between them, which is extremely critical in diagnostic decision making, where subtle differences may be of great importance.
The research components/problems are:
  • 3D Liver/Vessel/Lesion Segmentation in CT Data
  • Development of an Ontology of Liver for Radiology
  • 3D Image Based Similarity Analysis
  • Case Similarity Analysis (Image + Metadata)
  • Database Development and Management for Structured Reporting of Liver Cases

This project is a first step towards a multi-center consortium that we aim to build for further research and development over a broader range of medical domains. The outputs are expected to have broad impact and to raise interest from the healthcare industry.


Download CaReRa infosheet
"Better Health Care Through Data" report by K. Pretz (IEEE)

Case upload, browse, search and review server:
CaReRa Web Server

Log in to the "Demo" version, using "demo / demo" as the account name and password.
IMPORTANT: We suggest using Firefox. On your first attempt, your browser may warn you that "Your connection is not secure". In that case, please proceed to "Advanced" options and select "Add Exception".

Ontologies for Liver Case Representation:

ONLIRA ontology

Ontologies available for download:
  • ONLIRA: Ontology of Liver for Radiology
  • LiCO: Liver Case Ontology
Current Project Team:
Burak Acar, PhD (PI); Suzan Uskudarli, PhD; Ceyhun B. Akgul, PhD; Erdem Yoruk, PhD; Neda B. Marvasti, MS; Abdulkadir Yazici, MS; Rustu Turkay, MD; Baris Bakir, MD

The ImageCLEF liver CT annotation task is organized by the CaReRa project.
Past Members & Contributors:
Pinar Yolum, PhD; Nadin Kokciyan, MS; Serkan Cimen, MS; Remzi O. Kafali, PhD; Neslihan Tasdelen, MD; Bengi Gurses, MD; Ozcan Gokce, MD; T. Burak Gurel, PhD; Murat Saraclar, PhD;
Resources / Projects:
TUBITAK ARDEB 1001 (110E264)
Bogazici University BAP (5324)
ESF (Travel grant 5112)
COST Action 1302 "Semantic Keyword-based Search on Structured Data Sources (KEYSTONE)"

Computer-Aided Medical Image Annotation
Domain-aware Bayesian Network Model
A radiologist-in-the-loop semi-automatic CMIA system is proposed, based on a tree-structured Bayesian model linked to RadLex. Experiments with liver lesions in computed tomography (CT) images show that, on average, 7.50 (out of 29) manual annotations are sufficient for 95% accuracy in liver lesion annotation. The proposed system guides the radiologist to input the most critical information in each iteration and uses the network model to update the full annotation online. The results also suggest that domain-aware models perform better than domain-blind models learned from data. Figure: The domain-aware network model, constructed manually by exploiting domain knowledge.
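The radiologist-guidance step can be sketched as follows. This is a minimal illustrative sketch, not the actual CaReRa model: it assumes independent per-attribute probabilities (the real system uses a tree-structured Bayesian network linked to RadLex), and the attribute names and probabilities are hypothetical.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_attribute(predictions, answered):
    """Pick the not-yet-annotated attribute the model is least certain about,
    i.e. the one whose manual annotation is most informative."""
    candidates = {a: entropy(p) for a, p in predictions.items() if a not in answered}
    return max(candidates, key=candidates.get)

# Toy predicted probabilities for three hypothetical RadLex-style lesion attributes
preds = {"hypodense": 0.95, "rim_enhancement": 0.55, "calcified": 0.10}
print(next_attribute(preds, answered=set()))  # → rim_enhancement
```

In each iteration the system would query the radiologist for the selected attribute, clamp the answer, and propagate it through the network to update the remaining predictions; here only the selection criterion is shown.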
Semantic Annotations vs Low-level Image Features
CoG vs UsE performance in retrieval
Low-level image features (CoG) have been widely used in content-based image retrieval systems, whereas high-level semantic descriptors are accepted to have higher potential both in retrieval performance and in interpretability. The latter is especially important in medical applications, where MDs need to understand and reason about the output of computer systems. In this work, we compared low-level image features (CoG) and high-level semantic features (UsE) in radiological image retrieval (specifically, liver lesion CT images). The study was presented at the 1st ACM MM Workshop on Multimedia Indexing and Information Retrieval for Healthcare (ACM MM'13). Figure: NDCG (Normalized Discounted Cumulative Gain) vs. number of retrieved cases/images, using a linear combination of UsE and CoG (alpha=0 --> UsE only)
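The evaluation metric and the linear combination from the figure can be sketched as below. The NDCG definition is the standard log-discounted form; the sign convention for alpha (alpha=0 giving UsE only) follows the figure caption, and the example gain values are made up for illustration.

```python
import math

def dcg(gains):
    """Discounted cumulative gain with the standard log2(rank+1) discount."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(retrieved_gains, k):
    """NDCG@k: DCG of the retrieved ranking, normalized by the ideal ordering."""
    ideal = sorted(retrieved_gains, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(retrieved_gains[:k]) / denom if denom > 0 else 0.0

def combined_similarity(sim_use, sim_cog, alpha):
    """Linear combination of semantic (UsE) and low-level (CoG) similarities;
    alpha=0 reduces to UsE only, per the figure caption's convention."""
    return alpha * sim_cog + (1 - alpha) * sim_use

# A perfectly ordered result list scores 1.0; a reversed one scores less.
print(ndcg([3, 2, 1, 0], 4))  # → 1.0
```

Sweeping alpha over [0, 1] and plotting NDCG against the number of retrieved cases reproduces the kind of comparison shown in the figure.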
LIVERworks Application for segmentation, visualization and feature extraction
LIVERworks is a desktop application for building a CRR query from a given case. Its main functionalities include CT preprocessing (liver, vessel and lesion segmentation), image feature (CoG) extraction, semantic feature prediction/annotation (UsE), and querying the CRR-Db via CRR-Web. The current in-house application targets medical professionals and research groups.

ONLIRA: Ontology of Liver for Radiology 

Radiologists inspect CT scans and record their observations for purposes of communication and further use. A description language with clear semantics is essential for consistent interpretation by medical professionals as well as for automated tasks. RadLex is a large lexicon that extends SNOMED CT and DICOM towards this purpose. While the vocabulary is extensive, RadLex has not yet specified some of the semantic relations. ONLIRA (Ontology of the Liver for Radiology) focuses on a semantic specification of imaging observations of CT scans for the liver. ONLIRA extends RadLex with semantic relationships that describe and relate the concepts, thereby supporting automated processing tasks such as identifying similar patients. Download ONLIRA

3D CT Liver Segmentation

An in-house semi-automatic liver segmentation method has been developed through several improvements over existing algorithms. First, an initial conservative segmentation is obtained by adaptive thresholding using a Gaussian mixture model (GMM) of the voxel distribution in the user-delineated VOI. Next, a smooth, non-singular vector field flowing outwards is obtained by solving the Poisson equation; the 1D profiles sampled along this field are graded according to their probability of being a true liver boundary normal. Finally, the minCut-maxFlow graph-cut algorithm is applied without using any regional terms. Figure: 1D profile classification based edge maps and resultant segmentation masks. Left: SDF; Right: Poisson equation.
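The adaptive-thresholding step can be illustrated as follows. This is a simplified 1D sketch under stated assumptions: a two-component mixture (dark background vs. bright liver parenchyma), a naive EM fit, and a "conservative" rule keeping only voxels where the bright component clearly dominates. The actual method's component count, initialization, and decision rule may differ, and the intensity values below are synthetic.

```python
import math
import random

def fit_gmm_1d(xs, iters=50):
    """Naive EM for a two-component 1D GMM; returns [weight, mean, var] pairs."""
    xs = sorted(xs)
    half = len(xs) // 2
    comps = []
    for part in (xs[:half], xs[half:]):  # initialize from lower/upper halves
        m = sum(part) / len(part)
        v = sum((x - m) ** 2 for x in part) / len(part) or 1e-6
        comps.append([0.5, m, v])
    for _ in range(iters):
        resp = []
        for x in xs:  # E-step: per-voxel component responsibilities
            ps = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                  for w, m, v in comps]
            s = sum(ps) or 1e-300
            resp.append([p / s for p in ps])
        for k in range(2):  # M-step: re-estimate weights, means, variances
            nk = sum(r[k] for r in resp)
            comps[k][0] = nk / len(xs)
            comps[k][1] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            comps[k][2] = sum(r[k] * (x - comps[k][1]) ** 2
                              for r, x in zip(resp, xs)) / nk or 1e-6
    return comps

def conservative_mask(xs, comps):
    """Keep a voxel only when the bright component's likelihood clearly dominates."""
    bright = max(comps, key=lambda c: c[1])
    dark = min(comps, key=lambda c: c[1])
    def p(x, c):
        w, m, v = c
        return w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return [p(x, bright) > 2 * p(x, dark) for x in xs]

# Synthetic VOI: 200 dark background voxels and 300 brighter liver voxels
random.seed(0)
voi = [random.gauss(60, 5) for _ in range(200)] + [random.gauss(120, 8) for _ in range(300)]
comps = fit_gmm_1d(voi)
mask = conservative_mask(voi, comps)
```

The resulting conservative mask would then seed the profile-grading and graph-cut stages described above.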

3D Vessel & Lesion Segmentation

The well-known Frangi vesselness maps have been applied for vessel segmentation, with automatic global threshold selection for these Hessian-based vesselness maps. The threshold is selected by tracking a significant change in the histogram of the segmented voxels' CT values as the threshold is varied: since the method is applied to contrast-enhanced liver CT, the vessels are expected to be bright, and too low a vesselness threshold results in an increasing number of segmented voxels with relatively low CT values. The change in histograms is tracked by means of the chi-squared histogram difference, and a global vesselness threshold is selected. Lesions are then segmented in non-normal-tissue, non-vessel regions by means of graph cuts, where the boundary terms (the n-links) in the graph are set to be sensitive to an estimated difference between the probabilities of being background (normal tissue).
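The threshold-selection idea can be sketched as below: sweep the vesselness threshold from high to low, histogram the CT values of the voxels that survive each threshold, and stop just before the largest chi-squared jump between consecutive histograms (the point where dim non-vessel voxels flood in). The bin count, intensity range, sweep grid, and the "largest jump" criterion are assumptions of this sketch, not details from the paper.

```python
def histogram(values, bins, lo, hi):
    """Normalized histogram of `values` over [lo, hi]."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        if idx >= 0:
            counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def chi2(h1, h2, eps=1e-9):
    """Chi-squared distance between two normalized histograms."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def select_threshold(ct_values, vesselness, thresholds, bins=32, lo=0.0, hi=400.0):
    """Return the threshold just before the largest chi-squared jump in the
    histogram of segmented CT values; `thresholds` must be sorted high -> low."""
    prev_hist = None
    best_jump, best_t = -1.0, thresholds[0]
    for i, t in enumerate(thresholds):
        seg = [c for c, v in zip(ct_values, vesselness) if v >= t]
        if not seg:
            continue
        h = histogram(seg, bins, lo, hi)
        if prev_hist is not None:
            jump = chi2(prev_hist, h)
            if jump > best_jump:
                best_jump, best_t = jump, thresholds[i - 1]
        prev_hist = h
    return best_t

# Toy demo: 100 bright vessel voxels vs. 400 dim background voxels
cts = [200.0] * 100 + [60.0] * 400
vess = [0.9] * 100 + [0.15] * 400
sweep = [round(0.9 - 0.1 * i, 1) for i in range(9)]  # 0.9 down to 0.1
chosen = select_threshold(cts, vess, sweep)
```

In the toy data the histogram is stable until the threshold drops below the background vesselness, so the sweep stops at the last threshold that excludes the dim voxels.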