Tischenko O., Hoeschen C. We show that the integral of the approximation function in OPED can be given explicitly and evaluated efficiently. As a consequence, the reconstructed image over a pixel can be represented by its average over the pixel, instead of by its value at a single point in the pixel, which helps to reduce the aliasing caused by undersampling. Numerical examples are presented to show that the averaging process indeed improves the quality of the reconstructed images. We consider the 2D case in this paper.
The main task is to find an approximation, say A f, to the function f that uses a finite number of Radon projections of f. Once A f is computed, the reconstructed image is displayed as the values of A f at the pixel points; in other words, the image shown on a computer screen is that of a step function. For a reconstructed image in CT, the value on a pixel is usually taken as the evaluation of A f at either a corner or the center of the pixel. Such a choice, however, is not satisfactory from the approximation point of view and, in practice, it is one of the main causes of aliasing artifacts.
The purpose of this paper is to introduce an additional procedure in the algorithm OPED that overcomes this problem. OPED is a new reconstruction algorithm based on the orthogonal polynomial expansion on the disk; it was developed recently in [9-11] and tested in [4, 12]. The algorithm is stable and can be implemented easily using the fast Fourier transform (FFT). Our main result in this paper shows that the approximation A f in OPED can be integrated exactly over any pixel, and the result of the integration can be evaluated efficiently with an implementation that is as fast as the original OPED.
As a result, we can use the average of A f on a pixel as the value of the image on that pixel, instead of the evaluation of A f at a point of the pixel. The result is an improved version of the algorithm, which we shall call OPED with averaging. There are several advantages to taking the average over the pixel. First of all, from a mathematical point of view, a good measure of the quality of the image is the error measured in the L1 norm.
Taking the average over the pixel is evidently much better than evaluation at one point of the pixel in this regard. Furthermore, aliasing artifacts appear because of undersampling; taking the average over the pixel should avoid this problem altogether, as confirmed by the numerical examples in the paper.
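The effect of pixel averaging versus point evaluation can be illustrated with a small numerical sketch. Note this is generic numerical averaging for illustration only, not the OPED formula itself (OPED evaluates the pixel integrals analytically); the test signal and pixel counts are arbitrary choices.

```python
import numpy as np

def render(f, n_pixels, samples_per_pixel):
    """Render f on [0, 1] with n_pixels, using either point evaluation
    (samples_per_pixel=1 gives the pixel center) or a pixel average."""
    edges = np.linspace(0.0, 1.0, n_pixels + 1)
    out = np.empty(n_pixels)
    for i in range(n_pixels):
        # sub-sample each pixel and average; 1 sample = center evaluation
        t = np.linspace(edges[i], edges[i + 1], samples_per_pixel + 2)[1:-1]
        out[i] = f(t).mean()
    return out

# A high-frequency test signal, deliberately above the pixel Nyquist rate.
f = lambda t: np.cos(2 * np.pi * 40 * t)

point_eval = render(f, 64, 1)    # point evaluation: aliased
averaged   = render(f, 64, 64)   # pixel average: oscillations damped

# Averaging attenuates the unresolvable frequency instead of aliasing it
# into a spurious low-frequency pattern at nearly full amplitude.
print(np.abs(point_eval).max(), np.abs(averaged).max())
```

The averaged rendering has visibly smaller amplitude at the unresolvable frequency, which is the mechanism by which averaging suppresses aliasing.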
We should emphasize that OPED with averaging works because of the exact analytic formula for integration over pixels: the integral can be carried out analytically, in contrast to an approximation by numerical integration. For other algorithms it is often not possible to find an analytic formula for the integral over pixels; for example, no such formula holds for FBP (filtered back-projection), the main algorithm currently used in medical imaging (see, for example, [1, 5]).
Numer Algor

The paper is organized as follows. In Section 3 we state the new algorithm, OPED with averaging, and show how the integrals of A f over the pixels can be implemented effectively. Results of numerical tests are given in Section 4. The derivation of the formula stated above was carried out in [ ] from scratch. It turns out, however, that there are several earlier results that can be used to derive the algorithm. More directions mean better resolution in the reconstruction.
The arrangement of the lines on which the Radon projections are taken is called the scanning geometry. Thus, the OPED algorithm is better suited to a scanning geometry with an odd number of directions. To derive the algorithm (2), in [ ] the infinite sum, that is, the sum (2), is used.
One more reference that needs to be mentioned is [ ], which also derived such an algorithm from scratch, but without realizing the connection to the orthogonal partial sums. The algorithm in [ ] did not specify a scanning geometry with an odd number of directions, and it uses the quadrature based on the zeros of Chebyshev polynomials of the second kind to discretize the integral with respect to t, which results in a scanning geometry different from that of (2).
For practical application, it is crucial that the algorithm can be implemented with a fast Fourier transform and an interpolation step [ ], which makes the algorithm fast. The additional interpolation step uses linear interpolation, which implies that the resulting approximation function, call it AI2m, no longer preserves polynomials. Such a choice is common practice in image reconstruction. The same is clearly not true for the step function defined by point evaluation.
From a practical point of view, point evaluation on the pixel can lead to aliasing artifacts due to undersampling; the average over the pixel avoids the problem of undersampling altogether.

First and foremost, the scientific question at hand should drive the research process. The first question to answer should be: does your institution have the capability to synthesize or obtain the ligand you need to answer your burning question about neuroscience? If the answer is yes, then the next step is an in-depth consultation with the research PET experts at the institution, so that the study design and data analysis pathway(s) are clearly defined from the outset.
The study design, data acquisition protocols, image processing stream, and analysis will differ from study to study, and will depend heavily on both the radioligand and the neurophysiological phenomenon of interest. Types of questions that need to be addressed include, but are not limited to, the following. In the clinic, non-quantitative (i.e., visual) assessment is often sufficient.
However, in research, there is a requirement for numerical characterization of the dependent variable. For most neuroligand tracers, extensive work has been done to determine the best and most appropriate approaches for generating the endpoint of interest.
These can range from relatively simple, semi-quantitative methods to conceptually complex and mathematically rigorous processes that may require additional invasive procedures (arterial cannulation), as well as computational expertise for implementation. Ultimately, the success of a neuroligand PET study will depend on understanding what the field accepts as reasonable outcome measures for a given tracer, and on ensuring that the proper infrastructure exists to provide this information.
What type of effect size is expected? This is relevant for determining the number of subjects needed for the study, which, given the great expense of PET, is a nontrivial concern. If possible, it is helpful to know the test-retest reliability of a particular ligand, and to have a general idea of whether your effect of interest is expected to rise above this inherent background noise in the data. In the absence of this, relative variance can be ascertained from previously published data. Study design is a key component of arriving at a sample size: are the tests to be single measurements between groups (for example, relative receptor availability between healthy controls and a disease condition), or multiple measurements within subjects?
Is the tracer known for having either poor or stellar signal-to-noise ratio? All these factors, and others, will affect the ability to detect significant differences. Group size is not the only consideration: knowledge of the expected spatial extent of the effect is also important. The newer-generation human PET scanners and most small-animal PET scanners have excellent spatial resolution (on the order of mm³), but excitement about this technological progress may be tempered if your hypothesis is restricted to the CA3 region of the hippocampus in humans, or even the whole hippocampus in a mouse.
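The sample-size reasoning above can be sketched with a back-of-the-envelope calculation. This uses the standard normal-approximation formula for a two-group comparison of means; the effect size and variability numbers below are hypothetical placeholders, not values from any particular tracer.

```python
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_a + z_b) * sigma / delta)^2."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance threshold
    z_b = z(power)           # desired statistical power
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# Hypothetical numbers: expect a 0.10 group difference in the outcome
# measure, with 0.12 between-subject variability (estimated, e.g., from
# test-retest or previously published data).
print(n_per_group(delta=0.10, sigma=0.12))  # ≈ 22.6, i.e. 23 per group
```

Halving the detectable effect quadruples the required group size, which is why effect-size estimates matter so much when each scan is expensive.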
Additionally, the spatial extent of the effect in question will affect the decision to use a region-of-interest-based approach versus a voxel-wise analysis (see below). At this point, the reader is hopefully familiar with the importance of understanding the type of data that will result from the study, even before the study begins. Although study design is critically important, a thorough discussion of this topic is beyond the scope of this chapter. The remainder of the text will focus on defining concepts and outlining processes for preparing and analyzing neuroligand PET data.
Within each subsection, the descriptions will be presented in a linear fashion. Dynamic data acquisition is the only way to obtain truly quantitative measurements of the system of interest. The behavior of the tracer in the system (the TACs) can be described by sets of differential equations; the solutions to these equations yield quantitative outcome parameters. In most cases, quantitative outcomes are preferable to semi-quantitative measures (see below).
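As a minimal illustration of the differential-equation description mentioned above, a one-tissue compartment model relates a plasma input Cp(t) to the tissue curve Ct(t) via dCt/dt = K1·Cp − k2·Ct. The sketch below integrates this with a simple Euler scheme; the input function and rate constants are hypothetical, and real analyses use more sophisticated models and fitting.

```python
import numpy as np

def one_tissue_tac(t, cp, K1, k2):
    """Euler integration of dCt/dt = K1*Cp(t) - k2*Ct(t):
    the simplest kinetic model linking plasma input to a tissue TAC."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

t = np.linspace(0, 60, 601)        # minutes
cp = 10 * t * np.exp(-t / 2.0)     # hypothetical plasma input function
tac = one_tissue_tac(t, cp, K1=0.1, k2=0.05)
# Fitted parameters such as K1, k2 (and ratios like K1/k2) are the
# quantitative outcomes referred to in the text.
```

Fitting K1 and k2 to a measured TAC, rather than simulating one, is the actual quantification step; the simulation just shows where the parameters live.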
Dynamic data can be acquired in two ways. In frame mode, the scanner records all the coincidence events that occur during each specified time frame, and the reconstructed image consists of the average amount of radioactivity detected at each voxel during each time frame. In listmode, the investigator specifies after acquisition how the data should be binned into time frames during reconstruction. Listmode acquisition offers more flexibility for the investigator, especially when the ideal time-frame sequence has not been identified.
The capability for listmode acquisition varies across scanner platforms. In a static acquisition, the result is a single frame that represents the average amount of radioactivity during the scan period. Only semi-quantitative information can be derived from static acquisitions, the most common measure being the Standardized Uptake Value (SUV).
SUV is the amount of radioactivity in the tissue (e.g., kBq/mL) normalized by the injected dose per unit body weight. Static acquisitions are often preceded by a tracer uptake period outside of the scanner environment. When deciding on a static versus dynamic protocol, keep in mind that capturing dynamic data leaves open the possibility of quantitative metrics (if the proper methods are available); static acquisition does not.
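The conventional body-weight SUV calculation can be sketched as follows; the unit conventions (kBq/mL tissue, MBq injected, kg body weight) and the assumption that 1 g of tissue occupies roughly 1 mL are stated in the code, and other SUV normalizations (lean body mass, body surface area) exist.

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: tissue concentration divided by injected dose
    per unit body weight (assuming tissue density ≈ 1 g/mL)."""
    dose_kbq = injected_dose_mbq * 1000.0      # MBq -> kBq
    weight_g = body_weight_kg * 1000.0         # kg -> g
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# e.g. 5 kBq/mL in tissue, 185 MBq injected, 70 kg subject:
print(suv(5.0, 185.0, 70.0))  # ≈ 1.89 (dimensionless)
```

An SUV of 1 corresponds to tracer distributed uniformly through the body, which is why it is read as a relative, semi-quantitative index rather than a model-based parameter.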
Static images can always be created from dynamic data by calculating the weighted average of radioactivity over a specified set of time frames. The advantage of PET imaging is that it provides unique information about the chemistry and physiology of the brain. However, even with high-resolution scanners, PET data often do not contain sufficient neuroanatomic information for identification of specific structures within the brain.
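The duration-weighted averaging of dynamic frames just described can be sketched in a few lines; the array shapes and the toy frames are illustrative assumptions.

```python
import numpy as np

def static_from_dynamic(frames, durations):
    """Duration-weighted mean of dynamic frames -> one static image.
    frames: array (n_frames, ...voxel dims), durations: seconds per frame."""
    w = np.asarray(durations, dtype=float)
    w /= w.sum()                          # normalize weights to sum to 1
    return np.tensordot(w, frames, axes=1)

# Two toy 2x2 "frames": a 60 s frame of ones, then a 120 s frame of fours.
frames = np.stack([np.ones((2, 2)), 4 * np.ones((2, 2))])
print(static_from_dynamic(frames, [60, 120]))  # every voxel = 3.0
```

Weighting by frame duration, rather than a plain mean, keeps short early frames from dominating the summed image.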
The PET images and MRI images, fresh off the scanner and reconstruction queues, will not automatically be matched up in image space. This mismatch has many causes, but the biggest is differences in final voxel dimensions and final image volume. A second main objective of post-processing is motion correction of the PET data.
PET acquisitions typically require the subject to lie still for as little as 15 minutes, or for up to 90 minutes at a time.
Basic PET Data Analysis Techniques
It is not uncommon for subjects to move their heads, whether from coughing, talking, or falling asleep (singing subjects have also been observed). In some protocols, subjects are allowed to get up for a break during the scan acquisition, which automatically means that the PET data will not be in the same exact place in the scanner. Some institutions have developed sophisticated motion-detection and correction systems that work at the level of the reconstruction; however, most investigators do not have access to this technology. Here, we describe a post-hoc method for motion correction after the image has been generated.
Because the brain is encased by the skull, there is little concern about movement of the brain within its external bony boundaries. Therefore, the concept of using temporal gating to correct for organ motion, which is a major concern for cardiac and pulmonary imaging, will not be addressed here. Finally, certain types of data analysis, specifically voxel-wise analyses, require that all subjects' brains be in the same coordinate space.
A wealth of literature and scholarly work has been published on the mathematical basis for algorithms that shift, realign, warp, and reslice three-dimensional images from different modalities so they align correctly. The purpose of this section is to provide a basic, qualitative description of some of these algorithms in context of why they are useful for PET data.
To avoid confusion, we will use these terms generically, without attaching any algorithmic meaning to either. We leave it to the reader to investigate the semantics and procedural implementations of a particular program.

Representative examples of spatially normalized, co-registered images from a healthy subject. Images are axial slices at the level of the striatum and thalamus.
Note that the FDG image contains a high degree of anatomic information that is shared with the MRI (cohesive brain outline and subcortical structure delineation).

Rigid body transformations. Algorithms that perform rigid-body transformations are based on the assumption that the rigid bodies (in our case, the PET and MRI image volumes of the same brain) are roughly the same size and geometry. Three translations are made along the x, y, and z axes (typically considered the right-left, superior-inferior, and anterior-posterior axes, respectively).
Rotations are also made around the three axes; these are called pitch, roll, and yaw. The algorithms typically rely on the PET and MRI sharing a sufficient amount of contrast and outline among anatomic structures for the alignment to work. In the case of dynamic data, however, the tracer distribution and resulting structural information change significantly over time. Additionally, different tracers will provide varying degrees of structural information (Figure 2). Because of the lack of similarity to the MRI, attempts to co-register individual early- or late-time images will likely fail.
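The six-parameter rigid-body transform described above (three translations plus pitch, roll, yaw) can be sketched as follows. The axis assignments and the rotation order used here are one common convention, not necessarily the one any particular registration package uses.

```python
import numpy as np

def rigid_transform(points, pitch, roll, yaw, translation):
    """Apply a 6-parameter rigid-body transform (3 rotations in radians
    + 3 translations) to an (N, 3) array of coordinates."""
    cx, sx = np.cos(pitch), np.sin(pitch)   # rotation about x
    cy, sy = np.cos(roll),  np.sin(roll)    # rotation about y
    cz, sz = np.cos(yaw),   np.sin(yaw)     # rotation about z
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                        # one common composition order
    return points @ R.T + np.asarray(translation)

pts = np.array([[1.0, 0.0, 0.0]])
moved = rigid_transform(pts, 0, 0, np.pi / 2, [0, 0, 5])
# A 90° yaw maps (1, 0, 0) to (0, 1, 0); the translation adds 5 along z.
```

Rigid-body registration searches over exactly these six parameters, which is why it preserves brain size and shape and only repositions the volume.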
Here, an intermediate strategy is often successful: create a PET image that shares sufficient features with both the MRI and all dynamic PET images, so that the co-registration algorithm succeeds. At this stage, performing an alignment (or co-registration) of the selected subset of PET frames to the first frame is helpful for eliminating spatial variance introduced by motion.
The final balance of images to include will be unique to each tracer and frame sequence. Empirical testing is the best way to determine what will be an acceptable combination. Thus, the early images will trace the general outline of the brain. In the case of tracers like [11C]raclopride and [18F]fallypride, mid- and late-time images will be dominated by binding in the striatum, and the brain outline becomes diffuse. Inclusion of too many of these striatal images will skew the registration process and should be avoided. For tracers that may not necessarily have a lot of tracer retention e.
Make sure the transformation parameters have been saved in the header files of the resliced PET. Because all the frames are registered to the same target, this step conveniently also provides a robust method for motion correction. Additional refinements may be needed in cases where the motion is too severe to be corrected by a rigid-body algorithm alone. Representative time-activity curves from before and after manual manipulation of two errant time frames are given in Figure 4.
Multi-panel figure of early, mid, and late time frames from [18F]fallypride (top panels) and [11C]raclopride (bottom panels). Underneath each panel: start time of each frame relative to tracer injection (min), and duration of each frame (s). The majority of healthy elderly subjects have no discernible [11C]PiB uptake. There is some degree of consistent [11C]PBR28 brain uptake in healthy subjects; the pathological patterns of [11C]PBR28 in neurological and psychiatric disease are not yet well understood.
Example of how a mean dynamic image can be used to facilitate successful co-registration with an anatomic MRI. This particular combination happens to work well for [11C]raclopride. Note the general similarity to the FDG scan in Figure 2. Right: corresponding spatially normalized MRI from the same subject. Time-activity curves (TACs) from a [11C]raclopride scan with and without manual motion correction. Left: TAC from the right putamen of a subject after initial automated motion correction was conducted.
Subject motion was severe enough that at least two frames could not be corrected by the algorithm (arrows).

Due to the retrospective nature of our study, we did not study the low-dose utility of MBIR.
However, extrapolating from the results of previous studies, MBIR can potentially be used to perform diagnostic examinations at further reduced dose. In the context of CT KUB, this can be of particular value, as many patients with urolithiasis are young and will require multiple CT examinations during their lives. Further research is required to ascertain the exact levels of dose reduction achievable with MBIR, and further studies using lower-dose scans with assessment of diagnostic accuracy should be performed to gain maximal benefit from the noise reduction achieved with MBIR.
The major limitation of MBIR is the time required to compute multiple iterations. This limits the use of the technique in emergency situations, which was not the case with ASIR [28]. It is, however, still possible to perform an initial reconstruction with FBP or ASIR (to detect large or easily detectable lesions, or life-threatening conditions, for example), followed by a second reconstruction with MBIR on which a detailed and thorough examination can be performed to formalise a final report.
Using this new iterative reconstruction algorithm, it may be possible to acquire images with diagnostic quality similar to FBP or ASIR at a reduced dose, but further studies are required to substantiate this claim. The authors declare no conflicts of interest. No funding was received for this work. Written patient consent was waived by the Institutional Review Board.
Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Comparison of image quality between filtered back-projection and the adaptive statistical and novel model-based iterative reconstruction techniques in abdominal CT for renal calculi. Open Access. First Online: 10 August

Introduction

The CT KUB is now regarded as the imaging investigation of choice for most patients with suspected renal stone disease because of its unrivalled stone-detection capacity, speed and non-dependence on intravenous contrast medium administration [1, 2].

Reconstruction algorithms

The differences among the three reconstruction techniques are related to the assumptions that each method makes in producing the final image from the raw data.
For each patient, ROIs were drawn over five contiguous images for each anatomical area. Image noise was taken as the standard deviation values derived over three areas of subcutaneous fat (anterior abdominal wall, left buttock and right buttock). Mean attenuation values were taken as an average of the mean Hounsfield numbers over these same areas of subcutaneous fat. ROIs were also drawn over the upper poles of both kidneys.
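One common way to compute a contrast-to-noise ratio from ROI statistics like those described above is sketched below; the exact CNR definition used in a given study may differ, and the HU samples here are hypothetical.

```python
import numpy as np

def cnr(roi_values, background_values):
    """Contrast-to-noise ratio: difference between ROI and background
    mean attenuation, divided by background noise (sample SD).
    This is one common definition; studies vary in the exact formula."""
    noise = np.std(background_values, ddof=1)
    return (np.mean(roi_values) - np.mean(background_values)) / noise

# Hypothetical HU samples: kidney upper-pole ROI vs subcutaneous fat.
kidney = np.array([32.0, 35.0, 30.0, 33.0, 31.0])
fat = np.array([-102.0, -98.0, -100.0, -101.0, -99.0])
print(round(cnr(kidney, fat), 1))
```

Because the same contrast is divided by a smaller noise term, a reconstruction that lowers the fat SD (as iterative methods aim to do) raises the CNR even when mean attenuation values are unchanged.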
Table 1 Average quantitative values for image noise, mean attenuation values and contrast-to-noise ratio of the three different reconstruction algorithms (FBP, ASIR, MBIR). This is graphically illustrated in Fig. The interobserver variation (weighted kappa and standard errors) between the two radiologists was fair to moderate, as follows: image noise [0.