DTI Quality Control - Part 3: Tools

Thursday, March 06, 2014 Do Tromp



Question:
"We wish to know if there is a quality control program we could run the initial DTI data for each subject through to give us some sort of objective metric output about its quality."  Deborah L. Kerr, Ph.D.
Diffusion imaging quality assurance is very important (as discussed in this and this post), and there are a few - but not nearly enough - tools that can help with that. A fairly new tool is DTIPrep:
DTIPrep is the first comprehensive and fully automatic pre-processing tool for DWI and DTI quality control, and it can provide a crucial piece for robust DTI analysis studies.

It is able to do:
  1. DICOM to NRRD conversion
  2. Image info checking
  3. Diffusion information checking
  4. Rician LMMSE noise filter
  5. Slice-wise intensity checking
  6. Interlace-wise intensity checking
  7. Averaging baseline images
  8. Eddy current and motion correction
  9. Gradient-wise checking of residual motion/deformations
  10. Joint Rician LMMSE noise filter
  11. Brain masking
  12. DTI computing
  13. Dominant direction artifact (vibration artifact) checking
  14. Optional visual checking
  15. Simulation-based bias analysis
Unfortunately, at this time it is not yet able to apply a fieldmap correction. Hopefully this will be added soon.
For more information check out their website: http://www.na-mic.org/Wiki/index.php/Projects:DTI_DWI_QualityControl
Download it here: http://www.nitrc.org/projects/dtiprep/
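
If you want to run DTIPrep over a set of subjects without opening the GUI, a batch call from Python might look roughly like the sketch below. The flag names (--DWINrrdFile, --xmlProtocol, --default, --check) and the file names are written from memory and are assumptions, so please verify them against DTIPrep --help for your installed version.

  import subprocess

  # Hypothetical batch QC run; flag names and file names are assumptions -
  # check them against `DTIPrep --help` for your installed version.
  subjects = ["subj01", "subj02"]
  for subj in subjects:
      subprocess.run(
          [
              "DTIPrep",
              "--DWINrrdFile", subj + "_dwi.nrrd",      # input DWI in NRRD format
              "--xmlProtocol", "default_protocol.xml",  # QC protocol file
              "--default",                              # fall back to default parameters
              "--check",                                # run the QC checks
          ],
          check=True,
      )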


A different toolbox, called Camino, helps you estimate the signal-to-noise ratio (SNR) and noise variance of your diffusion image. The tool is called estimatesnr. Their explanation is somewhat complicated, but what it comes down to is this:
If you have 2 b=0 images:
The traditional method for estimating the noise is to sample two ROIs, one in brain white matter and one in the background. Assuming that the background signal contains only noise, we can estimate the noise standard deviation as
  sigma = sqrt(2.0 / (4.0 - PI)) * stddev(signal in background region)
where the constant scaling corrects for the Rician distribution of the noise, giving us the standard deviation sigma of the underlying Gaussian noise. To synthesize data with the same noise conditions, we would take the true signal S_0 and calculate
  S = |[S_0 + N(0, sigma), N(0, sigma)]|
where N(0, sigma) is a random sample drawn from normal distribution with mean 0 and standard deviation sigma.
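To make those two formulas concrete, here is a minimal NumPy sketch of the background-ROI noise estimate and the matching Rician noise synthesis; the function and variable names are illustrative and not part of Camino:

  import numpy as np

  def sigma_from_background(background_voxels):
      """Estimate the Gaussian noise standard deviation sigma from a background ROI,
      correcting for the Rician distribution of the magnitude noise."""
      return np.sqrt(2.0 / (4.0 - np.pi)) * np.std(background_voxels)

  def add_rician_noise(S_0, sigma, rng=None):
      """Synthesize S = |S_0 + N(0, sigma) + i N(0, sigma)| for a noise-free signal S_0."""
      rng = np.random.default_rng() if rng is None else rng
      real = S_0 + rng.normal(0.0, sigma, size=np.shape(S_0))
      imag = rng.normal(0.0, sigma, size=np.shape(S_0))
      return np.sqrt(real ** 2 + imag ** 2)

  # Illustrative use with made-up numbers:
  background = np.array([3.1, 2.4, 4.0, 2.8, 3.5])   # intensities from a background ROI
  sigma = sigma_from_background(background)
  print(sigma, add_rician_noise(1000.0, sigma))      # "true" signal S_0 = 1000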
If you have more than 2 b=0 images:
The second method requires multiple b=0 images, and defines sigma_mult from the variation of the signal over the ROI, across all K b=0 images. Let i be a voxel index and N the number of voxels in the ROI; then
  sigma_i = stddev(S_{i1}, ..., S_{iK})
  sigma_mult = mean(sigma_1, ..., sigma_N)
And finally the SNR is
  mean(S_{11}, S_{12}, ..., S_{1K}, S_{21}, ..., S_{NK}) / sigma_mult
If there are two or more b=0 images, both snr_diff and snr_mult will be computed. The more b=0 images there are, the better the estimate via sigma_mult, but sigma_diff only ever uses the first two b=0 images.
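To make that computation concrete, here is a minimal NumPy sketch, assuming b0_stack is an array of shape (K, N) holding the K b=0 measurements at the N ROI voxels (the function and array names are illustrative, not Camino's):

  import numpy as np

  def estimate_snr_mult(b0_stack):
      """b0_stack: array of shape (K, N) - K b=0 images sampled at N ROI voxels.
      Returns (snr, sigma_mult) following the description above."""
      sigma_per_voxel = np.std(b0_stack, axis=0)   # sigma_i across the K images
      sigma_mult = np.mean(sigma_per_voxel)        # mean of the per-voxel sigmas
      snr = np.mean(b0_stack) / sigma_mult         # mean signal over all voxels and images
      return snr, sigma_mult

  # Illustrative use: K = 4 b=0 images, N = 3 ROI voxels, made-up intensities
  b0_stack = np.array([[ 980., 1010.,  995.],
                       [1005.,  990., 1002.],
                       [ 998., 1003.,  987.],
                       [1012.,  985., 1001.]])
  print(estimate_snr_mult(b0_stack))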
You can use a combination of the SNR and the maximum intensity of the DWI image - as extracted with the fslstats -r option - to get insight into the quality of the data.
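As a minimal sketch of that extraction step from Python (the file name dwi.nii.gz is a placeholder), fslstats with the -r option prints the robust minimum and maximum intensity, which you can then pair with the SNR estimate from Camino:

  import subprocess

  # fslstats <image> -r prints "<robust_min> <robust_max>" (roughly the 2nd and
  # 98th intensity percentiles); "dwi.nii.gz" is a placeholder file name.
  out = subprocess.run(
      ["fslstats", "dwi.nii.gz", "-r"],
      capture_output=True, text=True, check=True,
  ).stdout.split()
  robust_min, robust_max = float(out[0]), float(out[1])
  print("robust intensity range:", robust_min, "-", robust_max)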