Jan 20, 2011 03:01 PM | Mark Scully
Questions regarding White Matter Lesion Module
Questions copied from the slicer-users mailing list, originally
posted by Sandya Venugopal.
-------- Original Message --------
Subject: [slicer-users] Questions regarding White Matter Lesion Module
on Slicer
Date: Sat, 15 Jan 2011 10:43:55 -0800
From: sandya venugopal
To: slicer-users@bwh.harvard.edu
I had originally addressed this email to Mr. Bockholt and Mr. Scully, but have had no response so far, hence posting it on the user forum. I am a new Slicer user. I want to use "Longitudinal Lesion Comparison" on my datasets, but I am not sure exactly how the initial images for it are generated. I have gone through the "White Matter Lesion Detection in Lupus" tutorial, but am still at a loss to understand how the initial images are generated. Using the tutorial datasets to obtain the expected results has been very easy, but translating that into analyzing my own data has been quite difficult.
I do not have a programming background (I am a physician), so if someone could kindly send me an "idiot-proof" version of how to perform pre-processing, including co-registration, brain extraction, bias correction, and intensity standardization, in order to obtain the reg+bias.nii and brain_mask.nii files from T1, T2 and FLAIR DICOM images, I would be very grateful.
1. Can I do the preprocessing in Slicer? If not, what program should I use, or is there a script I need to run on Linux?
2. T1, T2, FLAIR: do I have to convert the DICOM images to NRRD format first? (I know how to do that.)
3. How do I generate the brain mask dataset?
4. Where do I find the supporting model files?
5. In the Lupus tutorial, lupus003 seems to be the patient's dataset, and lupus002 seems to be the reference volume. How do I obtain reference volumes for TBI datasets? Do I create them myself? If so, how?
6. Kindly also clarify any other point I might not have thought about.
Once the above questions are clarified, generating the Predict Lesion volume appears quite straightforward as per the tutorials. My understanding is that the lesion volume map generated through this step is the input for the longitudinal lesion comparison module. Kindly correct me if I am wrong. Other questions I have are:
7. How are the Change Tracker and Longitudinal Lesion Comparison modules different? The former seems to be only for tumors from what I've read, or can it be used for every pathology?
8. Can you please explain how to co-register the Predict lesion map with DTI images to assess diffusion deficits? Do I use our FLIRT-corrected NIfTI images as the DTI input? Or something else?
9. Is it possible to do a group comparison of longitudinal lesion changes? Or is it only possible to do it for a single individual?
10. Any other pointers I need to successfully run the above modules, but have not thought of, would be immensely helpful.
Looking forward to your guidance with anticipation.
Thanks and kind regards,
Dr. Sandya Venugopal
Postdoc Research Fellow
Neuroradiology, UCSF, SF, CA
Jan 20, 2011 10:01 PM | Mark Scully
RE: Questions regarding White Matter Lesion Module
Making a user-friendly pipeline that performs the typical preprocessing steps is a goal of many projects and grants. Unfortunately, preprocessing is still a fairly involved process, which is part of why this module's tutorials don't directly address it. There are many, MANY things that can go wrong or simply be performed incorrectly. Preprocessing pipelines do exist (BRAINS AutoWorkup, FreeSurfer, SPM, etc.), but they rely on the user having a substantial amount of neuroimaging knowledge and can't be described as "idiot-proof". Alternatively, there are modules and programs that can be used to perform most of the steps, but again, they may not be ready for clinical / non-expert use.
1) Many typical preprocessing steps can be performed from Slicer; however, there is no official or unofficial preprocessing pipeline in Slicer3.
2) The lesion applications don't currently support DICOM, so yes, the images need to be converted from DICOM to something else such as NRRD.
3) Brain masks are a normal output of preprocessing pipelines. However, there are many applications that can create a brain mask.
a) The SkullStrip module, which should be available as a Slicer extension (from within Slicer: View -> Extension Manager -> Next -> select SkullStrip -> Download & Install -> Finish).
b) FSL has the Brain Extraction Tool (BET and BET2): http://www.fmrib.ox.ac.uk/fsl/bet2/index...
c) SPECTRE, which is being integrated into Slicer, though using it right now is an involved process.
d) The BRAINS tools out of Iowa include a command-line skull stripping tool called BRAINSMush. There are currently no binary releases, but a stand-alone version can be built (look for BRAINSMush on that page).
4) The model files consist of the lesion and non-lesion centroids, the distributions of the lesion and non-lesion data sets after distance thresholding, and finally the support vectors separating the two classes.
The model files (lesionSegmentation.model, svm.model), trained solely on lupus data, are available in the tutorial data set: http://www.nitrc.org/frs/download.php/86... It has never been tested on TBI data, or anything but lupus. It may work for you, but I have no data one way or the other.
If you want a new model file trained on TBI data, that is, unfortunately, an involved process. The source code used to do it has not been released, mainly because it is a combination of scripts and custom programs with minimal documentation. It requires at least 8 patients (preferably 10) with T1, T2, FLAIR, and hand-traced lesions (the tracings should be as good as humanly possible). It then requires a large amount of processing. Originally the plan was to release model files for multiple disorders, but funding was not approved. Support for more disorders may happen at some point, but likely not within 6 months.
A much more user-friendly option would be the white matter lesion (WML) segmentation module. It is available for Slicer, allows you to train your own classifier, and has a tutorial on its use. I don't know how its segmentations compare on the same data, as I am currently working on that comparison. Their published results are good.
5) The reference subject in the Intensity Standardization step is just the scan whose intensity profile everything else is being matched to. If you are going to use the lupus model file, then it's best to continue to use the lupus002 files.
6) I will point out that when dealing with longitudinal data, the longitudinal images should always be co-registered to the baseline T1.
The input to the Longitudinal Lesion Comparison is the pair of lesion masks from the two time points. The masks do not have to be the output of PredictLesions; they could be hand traced or produced by a different segmentation method. They DO have to be aligned.
7) Change Tracker is specifically for tumors. It also includes some segmentation functionality that is not present in the Longitudinal Lesion Comparison (LLC). All the LLC does is take two label maps and produce a new label map with three possible values: one for gained, one for lost, and one for unchanged. The Compare View functionality in Slicer can then be used to examine multiple images and slices with that label map on top.
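That gained/lost/unchanged logic can be sketched in a few lines of numpy (the label values 1/2/3 here are illustrative only; the module's actual encoding may differ):

```python
# Sketch: combine two aligned binary lesion masks into a change map.
# Label values (1=gained, 2=lost, 3=unchanged) are illustrative,
# not necessarily what the LLC module writes.
import numpy as np

def change_map(mask_t1: np.ndarray, mask_t2: np.ndarray) -> np.ndarray:
    a = mask_t1.astype(bool)
    b = mask_t2.astype(bool)
    out = np.zeros(a.shape, dtype=np.uint8)
    out[~a & b] = 1   # gained: lesion at time 2 only
    out[a & ~b] = 2   # lost: lesion at time 1 only
    out[a & b] = 3    # unchanged: lesion at both time points
    return out

# Tiny demo on 2x3 "masks":
m1 = np.array([[1, 1, 0], [0, 0, 0]])
m2 = np.array([[1, 0, 0], [1, 0, 0]])
print(change_map(m1, m2))
```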
8) This is also a very involved question. GTRACT has tools that will register a B0 image from a DWI sequence to a T1 and output a transform, which you can then apply to the lesion image (assuming the lesion image was generated from images co-registered to that T1); this puts the lesion image in the same space as the diffusion data. However, you need to apply motion correction and eddy current correction to your DWI data, and you need to throw out bad gradients (something DTIPrep can do). Working with diffusion data can be complicated. There are a LOT of issues that may come up, which is part of why there are no "fire and forget" tools.
9) If you mean within the tools I've written, then no group comparison is possible. If you load the longitudinal difference images and the images you want statistics from into something like MATLAB, Python, or Ruby, a group comparison is possible, but you have to write it yourself. Alternatively, you can write the data out in a form SPSS can read and analyze it that way.
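As an illustration of the kind of glue code involved (the subject names and the 1=gained / 2=lost / 3=unchanged label values are hypothetical), one could tally per-subject voxel counts from each change map and write them to a CSV that SPSS can import:

```python
# Sketch: per-subject summary of longitudinal change label maps,
# written as CSV for import into SPSS or similar. The label values
# (1=gained, 2=lost, 3=unchanged) and subject IDs are hypothetical.
import csv
import io
import numpy as np

def summarize(change_map: np.ndarray) -> dict:
    return {
        "gained_voxels": int(np.count_nonzero(change_map == 1)),
        "lost_voxels": int(np.count_nonzero(change_map == 2)),
        "unchanged_voxels": int(np.count_nonzero(change_map == 3)),
    }

def write_group_csv(subjects: dict, out_file) -> None:
    writer = csv.DictWriter(
        out_file,
        fieldnames=["subject", "gained_voxels", "lost_voxels", "unchanged_voxels"],
    )
    writer.writeheader()
    for subject_id, cm in subjects.items():
        writer.writerow({"subject": subject_id, **summarize(cm)})

# Tiny demo with synthetic 2x2 "change maps":
subjects = {
    "tbi001": np.array([[1, 3], [0, 2]]),
    "tbi002": np.array([[3, 3], [1, 0]]),
}
buf = io.StringIO()
write_group_csv(subjects, buf)
print(buf.getvalue())
```

In practice the arrays would come from loading each subject's difference image (e.g. with a NRRD/NIfTI reader) rather than being built by hand.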