[Neuroimaging] DICOM Orientation and World<-->Image coordinate transformation
pieper at isomics.com
Tue Sep 13 14:20:29 EDT 2016
Hi Athanasios -
It's very likely that your data is not sheared, which you can check by
seeing if the cross product of the in-plane vectors (row and column from
Image Orientation Patient) is parallel to the line you have plotted (as
noted before, it may point the same way or the opposite way).
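Using the ImageOrientationPatient and ImagePositionPatient values quoted later in this thread, that check can be sketched as a standalone snippet (not Slicer code; as noted, the sign of the cross product may come out either way):

```python
import numpy as np

# Values taken from the thread below
iop = np.array([0.999857, 0.00390641, 0.0164496,
                -0.00741602, 0.975738, 0.218818])
ipp0 = np.array([-127.773, -105.599, -94.5758])
ipp1 = np.array([-127.841, -106.584, -90.1855])

row, col = iop[:3], iop[3:]      # in-plane direction cosines
normal = np.cross(row, col)      # candidate slice direction (unit length)

step = ipp1 - ipp0               # actual slice-to-slice vector
spacing = np.linalg.norm(step)   # slice spacing in mm
step_dir = step / spacing

# |dot| near 1 means parallel (no shear); the sign tells you whether the
# scan runs with or against the right-handed normal.
alignment = np.dot(normal, step_dir)
print(spacing)    # about 4.5 mm for these slices
print(alignment)  # close to +1.0: parallel, same direction
```

For these particular numbers the two directions agree to within about 1e-4, so the data indeed looks unsheared, with a slice spacing of roughly 4.5 mm.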
It's not clear to me from your question where you obtained the ROI, but
it's also very likely that your ROI is in the same pixel space as the MR,
in which case it would share the same pixel-to-patient matrix.
If it turns out they have different sampling grids then you may need to
resample one or the other. If the data files have valid headers this is a
fairly trivial operation to do with the GUI in Slicer or you could write
some code to do it.
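If you do write code for the resampling, a minimal sketch could look like the following. This assumes both volumes have valid voxel-to-patient affines; `resample_to_reference` is an invented name, built on `scipy.ndimage.map_coordinates`, not an existing Slicer API:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_to_reference(moving, moving_affine, ref_shape, ref_affine, order=1):
    """Resample `moving` onto the reference grid using both voxel->patient affines."""
    # Voxel coordinates of every point on the reference grid (homogeneous)
    i, j, k = np.meshgrid(*[np.arange(n) for n in ref_shape], indexing="ij")
    idx = np.stack([i, j, k, np.ones_like(i)]).reshape(4, -1)
    # reference voxel -> patient space -> moving voxel
    vox = np.linalg.inv(moving_affine) @ ref_affine @ idx
    # Interpolate the moving volume at those voxel coordinates
    return map_coordinates(moving, vox[:3], order=order).reshape(ref_shape)
```

With `order=1` this is trilinear interpolation; for a binary ROI you would typically use `order=0` (nearest neighbour) so labels are not smeared.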
Maybe the easiest is for you to read through the DICOM documentation for
Hope that helps,
On Tue, Sep 13, 2016 at 8:58 AM, Athanasios Anastasiou <
athanastasiou at gmail.com> wrote:
> Hello Matthew & Steven
> Alright, I am not sure what to make of this because the
> ImagePositionPatient for the part of the scan I am interested in seems to
> be a vector that cuts across the space diagonally. I am trying to take
> Steve's viewpoint here, as it looks like the "easiest" too (just rotate the
> "track" to align to one of the axes).
> The values of ImagePositionPatient are:
> array([[-127.773 , -105.599 , -94.5758],
> [-127.841 , -106.584 , -90.1855],
> [-127.91 , -107.569 , -85.7951],
> [-127.978 , -108.554 , -81.4048],
> [-128.046 , -109.539 , -77.0145],
> [-128.115 , -110.524 , -72.6241],
> [-128.183 , -111.509 , -68.2338],
> [-128.251 , -112.494 , -63.8435],
> [-128.32 , -113.479 , -59.4531],
> [-128.388 , -114.464 , -55.0628],
> [-128.456 , -115.449 , -50.6725]])
> And the 3d plot of that is available further below
> [image: Inline image 1]
> They are all regularly spaced (good) and all slice-to-slice spacings are
> the same (double good).
> Now, the "trouble" with this is that this represents the direction of one
> of the axes. Let's call it the Z axis. The other two define a plane that is
> perpendicular to this direction. I can rotate the axes so that this "Z" is
> aligned with one of the Zs in space.
> The other thing that is a bit of a "problem" here, of course, is the third
> dimension of my ROI data, because so far, in my preliminary tests, I have
> been ignoring it. This means that, just by looking at X,Y, I may have been
> seeing something that is distorted. And looking at this track, I may be
> seeing something that is thinner in the vertical projection than it really
> is, although the increases in 2/3 of the axes are small.
> Can I please ask the following:
> 1) Am I right to assume that the ROI data are perpendicular to this direction?
> 2) All I have to do now, then, is work out a rotation around Z and Y (or
> 2/3 of the axes) to make this track parallel to one of the axes (?) and
> then apply that transformation to the ROIs so that, when I set them on the
> image (which is always properly aligned), they appear properly aligned.
> (By the looks of this, -45 deg around Z, -45 around Y and I am there.)
> 3) How does the DICOM rotation data relate to this track? (If at all).
> 4) Is there an API (or part of an API) for a [DICOM Data Type].getPixel or
> [DICOM Data Type].getVoxel kind of operation? Even if it is via a class
> ecosystem that makes sense within the context of Slicer or another piece
> of software. The problem here is that I don't have a volumetric DICOM
> (multiple images, single file). I have a ROI DICOM that references
> individual images. So, I am building my own volume-aware data type (based
> on pydicom) that will implement its getVoxel method and is aware of a few
> other things I am going to be doing with these volumes. But if this is
> already done somewhere, maybe I could re-use it (?).
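On question 4: the core of such a volume-aware type is just the inverse of the pixel-to-patient affine discussed in this thread. A toy sketch, where the class and method names are invented (not an existing pydicom or Slicer API) and slice sorting and spacing handling are deliberately simplified:

```python
import numpy as np

class SliceVolume:
    """Toy volume assembled from per-slice pixel arrays (hypothetical API)."""

    def __init__(self, slices, positions, iop, pixel_spacing):
        positions = [np.asarray(p, dtype=float) for p in positions]
        step = positions[1] - positions[0]
        spacing = np.linalg.norm(step)        # slice spacing from IPP deltas
        normal = step / spacing
        # Sort slices by their projection onto the slice normal
        order = np.argsort([np.dot(p, normal) for p in positions])
        self.data = np.stack([np.asarray(slices[i], float) for i in order],
                             axis=-1)         # indexed [row, col, slice]
        row, col = np.reshape(iop, (2, 3))    # in-plane direction cosines
        affine = np.eye(4)                    # voxel (r, c, s) -> patient (x, y, z)
        affine[:3, 0] = col * pixel_spacing[0]   # moving down a column of pixels
        affine[:3, 1] = row * pixel_spacing[1]   # moving along a row of pixels
        affine[:3, 2] = normal * spacing
        affine[:3, 3] = positions[order[0]]
        self.inv = np.linalg.inv(affine)

    def get_voxel(self, x, y, z):
        # Patient coordinates -> nearest voxel via the inverse affine
        r, c, s, _ = np.round(self.inv @ [x, y, z, 1.0]).astype(int)
        return self.data[r, c, s]
```

A real implementation would also validate that all slices share the same IOP and spacing, and guard against out-of-volume queries, but the inverse-affine lookup is the essential idea.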
> Looking forward to hearing from you
> On Thu, Sep 8, 2016 at 9:37 PM, Athanasios Anastasiou <
> athanastasiou at gmail.com> wrote:
>> Hello Matthew & Steven
>> Thank you for your email. Of course I am missing the third column :( I am
>> paying too much attention to the two numbers I am after right now, trying
>> to bring the contour right where it should be when plotting it over the
>> image.
>> Thank you for your help, I will have another go at establishing the
>> matrix with the helpful comments provided here.
>> All the best
>> On 6 Sep 2016 18:25, "Matthew Brett" <matthew.brett at gmail.com> wrote:
>>> On Tue, Sep 6, 2016 at 6:34 AM, Steve Pieper <pieper at isomics.com> wrote:
>>> > Hi Athanasios -
>>> > To get the scan direction you'll need to look at the relative
>>> > ImagePositionPatient points from slice to slice. Note that the scan
>>> > direction is not always the cross product of the row and column
>>> > vectors, since the scan may go in the opposite direction from a
>>> > right-handed cross product, or the slices can be sheared (or even at
>>> > arbitrary locations). There are lots of other things that can happen
>>> > too, like irregular or missing slices, etc., but usually just
>>> > normalizing the vector between the origin and any slice in the scan
>>> > will be what you want.
>>> > This code will give you an idea:
>>> > https://github.com/Slicer/Slicer/blob/master/Modules/Scripte
>>> From your code, you are missing a valid third column for your affine.
>>> I believe that column will be all zeros from your code. This is what
>>> the later part of the DICOM orientation page is talking about, and
>>> what Steve is referring to as the "slice direction".
>>> Steve is quite right that the slice direction need not be the
>>> cross-product of the first two, and the DICOM information can often
>>> tell you what that slice direction vector is, but assuming for a
>>> moment that it is the cross product, and that you are looking at the
>>> first slice of the volume, then you'd want something like:
>>> import numpy as np
>>> ImageOrientationPatient = [0.999857, 0.00390641, 0.0164496,
>>> -0.00741602, 0.975738, 0.218818]
>>> ImagePositionPatient = [-127.773, -105.599, -94.5758]
>>> PixelSpacing = [0.4688, 0.4688]
>>> slice_spacing = 3.0 # ?
>>> # Make F array from DICOM orientation page
>>> F = np.fliplr(np.reshape(ImageOrientationPatient, (2, 3)).T)
>>> rotations = np.eye(3)
>>> rotations[:, :2] = F
>>> # Third direction cosine from cross-product of first two
>>> rotations[:, 2] = np.cross(F[:, 0], F[:, 1])
>>> # Add the zooms
>>> zooms = np.diag(PixelSpacing + [slice_spacing])
>>> # Make the affine
>>> affine = np.diag([0., 0, 0, 1])
>>> affine[:3, :3] = rotations.dot(zooms)
>>> affine[:3, 3] = ImagePositionPatient
>>> np.set_printoptions(precision=4, suppress=True)
>>> But - Steve's suggestion is more general - this code is just to give
>>> you an idea.
>>> Neuroimaging mailing list
>>> Neuroimaging at python.org