IEEE Journal of Translational Engineering in Health and Medicine
2014 May 30;2:2500113. doi: 10.1109/JTEHM.2014.2327628

Near Real-Time Computer Assisted Surgery for Brain Shift Correction Using Biomechanical Models

Kay Sun 1, Thomas S Pheiffer 1, Amber L Simpson 1, Jared A Weis 1, Reid C Thompson 2, Michael I Miga 1,2,3
PMCID: PMC4405800  NIHMSID: NIHMS633581  PMID: 25914864

Abstract

Conventional image-guided neurosurgery relies on preoperative images to provide surgical navigational information and visualization. However, these images are no longer accurate once the skull has been opened and brain shift occurs. To account for changes in the shape of the brain caused by mechanical (e.g., gravity-induced deformations) and physiological effects (e.g., hyperosmotic drug-induced shrinking, or edema-induced swelling), updated images of the brain must be provided to the neuronavigation system in a timely manner for practical use in the operating room. In this paper, a novel preoperative and intraoperative computational processing pipeline for near real-time brain shift correction in the operating room was developed to automate and simplify the processing steps. Preoperatively, a computer model of the patient’s brain, with a subsequent atlas of potential deformations due to surgery, is generated from diagnostic image volumes. In the case of gross changes between diagnostic imaging and surgery, when reimaging is necessary, our preoperative pipeline can be completed within one day of surgery. Intraoperatively, sparse data measuring the cortical brain surface are collected using an optically tracked portable laser range scanner. These data are then used to guide an inverse modeling framework whereby full volumetric brain deformations are reconstructed from precomputed atlas solutions to rapidly match intraoperative cortical surface shift measurements. Once complete, the volumetric displacement field is used to update, i.e., deform, preoperative brain images to their intraoperative shifted state. In this paper, five surgical cases were analyzed with respect to the computational pipeline and workflow timing. Once cortical surface data were acquired, the approximate execution time was 4.5 min. The total update process, which included positioning the scanner, data acquisition, inverse model processing, and image deforming, was ∼11–13 min. In addition, easily implemented hardware, software, and workflow processes were identified for improved performance in the near future.

Keywords: Biomechanical modeling, brain shift, image-guided surgery, sparse data


Brain deformation during surgery compromises the fidelity of image-guided tumor resection procedures. This paper presents a comprehensive framework to intraoperatively account for volumetric brain deformations within image-guided surgery systems using only measurements of cortical surface shift. (Left) Intraoperative interface that measures the location of cortical features before and after deformation. (Right) Embedded within our correction framework is a finite element model that uses the data from our interface to constrain and estimate volumetric brain deformations.


I. Introduction

Image-guided neurosurgery relies on preoperative images to provide surgical visualization and navigation into the brain after registration of the images to the patient’s physical space. However, access to the brain subsequent to craniotomy often leads to deformation of the brain along with the movement of subsurface resection targets such as a tumor. The amount of brain shift depends on a number of factors including the extent of the craniotomy, retraction, tumor resection [1]–[3], drainage of cerebrospinal fluid (CSF) [2], [4], [5], and drugs administered during surgery [1], [6]. As a consequence, cortical shifts of up to 20 mm [1], [2] and subsurface shifts of up to 7 mm [1], [2], [4], [7], [8] have been reported and result in fundamental misalignment between actual brain target positions and their counterparts as determined from registered preoperative images. It is highly desirable to re-establish accurate alignment for successful image guidance. In addition, when one considers the abundance of preoperative image information (e.g., functional magnetic resonance (MR) data, positron emission tomography, MR diffusion tensor imaging, etc.) that can be brought to bear on the care of patients during surgery, the need to re-establish alignment between preoperative and intraoperative states becomes even more critical.

One direct approach to achieving updated deformed brain images is to re-image the brain during surgery using intraoperative magnetic resonance (iMR) imaging systems. To date, iMR systems have been the only clinical solution adopted to any extent. While these systems are similar to diagnostic ones, the quality of the acquired images is often not equivalent to that of their preoperative counterparts because of the surgical environment and workflow. In an effort to utilize the pristine preoperative anatomical images as well as other forms of imaging data, preoperative images are deformed to match the intraoperative images using nonrigid registration techniques that are image-based [7] or physics-based [9]–[13], with the data-rich but lesser quality intraoperative images driving the registration. While significant work has been produced in this direction, iMR systems are rather costly, occupy a significant portion of operating room (OR) space, and may not be available in every hospital. A more cost-effective solution is to make use of the exposed cortical surface to record brain shift and use the subsequently measured surface displacements to drive a comprehensive biomechanical model of the brain. Once the model has computed a deformation field, it can then be used to update/deform the preoperative images [14], [15] (and, consequently, other data). The difficulty with this approach is determining the extent of data necessary to produce a sufficiently accurate registration for intraoperative guidance while simultaneously minimizing the impact on operational workflow, i.e., the sparse data extrapolation problem [16].

While there have been many proposed sparse-data solutions with encouraging results in phantom, animal, and human studies, the work has largely reflected retrospective analysis [11], [12], [17]–[25]. For practical use in the OR, the updated preoperative images must be produced within a reasonable amount of time. This time constraint means that the cortical brain data collection and processing must be executed quickly and with minimal interruption to the surgical workflow. A brain shift compensation system, which includes a preoperative biomechanical model development pipeline, a preoperative surgical planning graphical user interface (GUI), and two intraoperative GUIs, was developed to perform near real-time brain shift correction in the OR. This study introduces this brain shift compensation system and presents a comprehensive evaluation of it in terms of the time taken for each processing step, along with an analysis of possible areas for improvement.

II. Methods

A semi-automated preoperative and intraoperative computational processing pipeline for brain shift correction was developed (Figure 1). Briefly, preoperative magnetic resonance (MR) images are acquired a day or more prior to surgery (a diagnostic series can be used provided significant surgical changes have not ensued). From the images, the patient’s brain [26], tumor, and intracranial support structures, the falx cerebri and tentorium cerebelli [27], [28], are segmented. A patient-specific volumetric finite element mesh is generated from the segmented brain and tumor images, with the falx and tentorium structures having predefined boundary conditions. A preoperative planning GUI was developed for use by neurosurgeons to establish the approximate head orientation as well as the size and location of the craniotomy. Based on the preoperative plan, the remaining boundary conditions are generated using an automatic boundary condition generation algorithm [17]. As the exact forcing conditions are difficult to know (e.g., level of CSF drainage, gravitational direction, effects of hyperosmotic drugs and edema), a distribution of possible conditions is determined, which generates an atlas of boundary conditions. Each boundary condition set is used to constrain a finite element deformation solution, thus producing a distribution of possible deformation solutions, or a ‘deformation atlas’, which is precomputed prior to surgery [17], [18]. The model used within this precomputation phase is a biphasic biomechanical model that takes into account many of the sources of brain shift, i.e., hydration effects from drugs like mannitol, gravity-induced brain sag due to CSF drainage, resection effects, and skull-tissue interactions [6], [29].

FIGURE 1. A workflow illustrating the semi-automated preoperative and intraoperative computational processing steps involved in producing an updated brain shift image in near real-time. The inputs are preoperative MR images, a face LRS scan for registration, and pre- and post-resection cortical brain surface LRS scans to drive the inverse modeling.

On the day of surgery, the deformation atlas is transferred to the intraoperative guidance system, which performs an inverse model calculation driven by sparse cortical brain deformation measurements obtained from intraoperative laser range scanner (LRS) data. The LRS records the cortical brain surface by sweeping a line of laser light across the surface while recording the laser line with a digital camera; triangulation then produces a three-dimensional (3D) point cloud of the surface geometry. Texture information is also recorded from the same digital camera by acquiring a two-dimensional bitmap of the field of view (FOV). Other examples of LRS use in image-guided procedures include orthodontics [30], neurosurgery [31]–[37], liver surgery [24], [38], [39], and cranio-maxillofacial surgery [40], [41]. In this work, a commercial LRS system (Pathfinder Therapeutics, Inc., Nashville, TN) was integrated with an optical tracking system (Polaris Spectra, Northern Digital Inc., Waterloo, Ontario, Canada) and used to collect cortical surface data. After cortical brain measurements are made using the LRS, the optimum brain shift solution is determined from an inverse problem approach using the deformation atlas [18], [27]. Once calculated, the patient’s brain image data are subsequently deformed using the optimum solution to reflect the current state of the brain’s shape.

A preoperative planning GUI (called Surgical Planner), an automatic processing pipeline, and two intraoperative GUIs (called Registration and Correction) were developed for use before and during surgery to plan and process the collected data. The custom software was written in C++ using the open source Insight Segmentation and Registration Toolkit (ITK), Visualization Toolkit (VTK), and Fast Light Toolkit (FLTK) libraries. The Parallel Computing and Optimization Toolboxes of MATLAB R2011b (MathWorks, Natick, MA) were also used. Figure 1 illustrates the overall layout of the system. In the following sections, the methodologies used are briefly discussed, followed by results concerning the full system performance.

A. Preoperative Processing

1). MR Image Acquisition

In this study, five patients were processed through the preoperative and intraoperative pipelines. All patients provided written consent prior to imaging for this Vanderbilt Institutional Review Board approved study. For each patient, two sets of T1-weighted MR image volumes, one gadolinium-enhanced and the other non-enhanced, were acquired from a conventional clinical MR scanner (Table 1).

TABLE I. Patient demographics and MR image details.

2). Segmentation

To streamline the preoperative pipeline and model generation, a semi-automatic segmentation of the brain was implemented [26]. Briefly, for each patient, a rigid alignment is performed between the patient’s enhanced and non-enhanced image volumes. Once complete, the patient’s non-enhanced image volume is registered to an expertly segmented non-enhanced brain image volume (i.e., atlas volume), first using a mutual information rigid registration, followed by a custom-built adaptive bases nonrigid registration algorithm [42]. Once complete, the atlas mask can be transformed such that the patient’s contrast-enhanced image volume can be automatically segmented. We should note that the falx and tentorium have been expertly segmented in the atlas, which serves as an automatic approach to deploying the dural septa within our model [27]. The atlas also provides an excellent reference surface set for finite element mesh generation once registered to the patient. A visual confirmation is performed when complete, and some manual editing of the automatic segmentation is sometimes performed using the open source software ITK-SNAP (www.itksnap.org) to correct small discrepancies. At this time, ITK-SNAP is also used to manually segment the contrast-enhancing tumor region. We should note that manual methods of tumor segmentation are the standard in commercial guidance systems.
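As a rough illustration of the mask-propagation idea (not the authors’ adaptive bases implementation), the MATLAB sketch below performs only rigid, mutual-information alignments with the Image Processing Toolbox and then carries the atlas brain mask onto the patient’s contrast-enhanced volume. The variable names (patientCE, patientNC, atlasNC, atlasBrainMask) are hypothetical, and the nonrigid refinement step described above is omitted.

```matlab
% Minimal sketch of atlas-mask propagation (rigid steps only; nonrigid step omitted).
% patientCE, patientNC   : patient's contrast-enhanced / non-enhanced volumes (3-D arrays)
% atlasNC, atlasBrainMask : expertly segmented atlas volume and its binary brain mask
[optimizer, metric] = imregconfig('multimodal');   % Mattes mutual information settings

% 1) Rigidly align the patient's non-enhanced volume to the contrast-enhanced one.
tformNC2CE = imregtform(patientNC, patientCE, 'rigid', optimizer, metric);

% 2) Rigidly align the atlas volume to the patient's non-enhanced volume.
tformAtlas2NC = imregtform(atlasNC, patientNC, 'rigid', optimizer, metric);

% 3) Propagate the atlas brain mask through both transforms and re-binarize.
maskOnNC = imwarp(single(atlasBrainMask), tformAtlas2NC, ...
                  'OutputView', imref3d(size(patientNC)));
maskOnCE = imwarp(maskOnNC, tformNC2CE, ...
                  'OutputView', imref3d(size(patientCE))) > 0.5;
```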

Finally, we should note that in [28], a sensitivity study was performed which compared our brain shift correction results based on models built from our semi-automatic segmentation approach versus those coming from an expert manual segmentation approach, and no statistically significant difference between the results was found.

3). Surgical Planner

The direction and degree of brain shift are in part dependent on how the head is oriented with respect to gravity, as well as on the location and size of the craniotomy. A priori knowledge of these three variables helps to limit the size of the atlas of possible deformation solutions and can be provided by the neurosurgeon during preoperative planning. A user-interactive GUI was developed to assist the neurosurgeon in quantifying these variables. Brain and tumor surface meshes were generated from the segmented brain and tumor images, respectively, using marching cubes and smoothing algorithms [43]. Both surface meshes were rendered in the GUI, and the neurosurgeon rotated the brain into the planned position and recorded the transformation. The center of the craniotomy was selected by picking a point on the brain surface, and the craniotomy size was determined using a slider tool to adjust a sphere (Figure 2). These three variables were used later in defining the boundary conditions of the computational model.
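The geometry handling behind the planner can be sketched as below. This is a minimal MATLAB stand-in, not the FLTK/VTK implementation: brainMask and tumorMask are hypothetical binary segmentations, the 30-degree rotation, picked point, and 25 mm radius are placeholder values, and the interactive slider is reduced to a single parameter.

```matlab
% Sketch of surgical-planner geometry (hypothetical inputs brainMask, tumorMask).
brainFV = isosurface(smooth3(double(brainMask)), 0.5);   % marching cubes + smoothing
tumorFV = isosurface(smooth3(double(tumorMask)), 0.5);

% Planned head orientation recorded as a rotation matrix (e.g., 30 deg about x).
theta = deg2rad(30);
R = [1 0 0; 0 cos(theta) -sin(theta); 0 sin(theta) cos(theta)];
brainVerts = (R * brainFV.vertices')';
tumorVerts = (R * tumorFV.vertices')';

% Craniotomy: center taken as the brain-surface vertex nearest a picked point,
% size set by a radius "slider" (fixed placeholder value here).
pickedPoint  = [120 90 160];                 % hypothetical pick, image coordinates
idx          = dsearchn(brainVerts, pickedPoint);
cranioCenter = brainVerts(idx, :);
cranioRadius = 25;                           % mm, from the slider

% Render for visual confirmation.
patch('Faces', brainFV.faces, 'Vertices', brainVerts, ...
      'FaceColor', [0.9 0.9 0.9], 'EdgeColor', 'none');
hold on;
patch('Faces', tumorFV.faces, 'Vertices', tumorVerts, ...
      'FaceColor', 'y', 'EdgeColor', 'none');
axis equal vis3d; camlight; lighting gouraud;
```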

FIGURE 2. Screenshot of the Surgical Planner GUI with the brain and tumor surfaces loaded and oriented to the same position as in the OR. The center and size of the craniotomy are represented by the green sphere selected by the neurosurgeon.

4). Continuum Model

Based on the observation that the brain acts similarly to a fluid-saturated poroelastic medium, Biot’s theory of biphasic consolidation was used to represent the deformation behavior of brain tissue [6], [29]. According to Biot’s theory, the mechanical behavior of a poroelastic medium such as the brain can be described using the equations of linear elasticity for the solid porous matrix and Darcy’s law for the flow of fluid through the porous matrix. Equation (1) represents equilibrium, whereby a gradient in interstitial fluid pressure can cause shape change in the solid matrix. In addition, changes in the buoyancy of the surrounding fluid can generate gravity-induced deformations. Equation (2) is a conservation of fluid mass relationship whereby changes in hydration can affect the time rate of change of the volumetric strain of the solid matrix. In addition, we also allow for dilatation effects, as exchange with capillary beds can occur in response to drugs like mannitol. The model can be described as

$$\nabla \cdot G\nabla \vec{u} + \nabla\!\left(\frac{G}{1-2\nu}\,\nabla \cdot \vec{u}\right) - \alpha \nabla p = (\rho_t - \rho_f)\,\vec{g} \tag{1}$$

$$\alpha\,\frac{\partial}{\partial t}\left(\nabla \cdot \vec{u}\right) + k_c\,(p - p_c) - \nabla \cdot k\nabla p = 0 \tag{2}$$

where $G$ is the shear modulus, defined by $G = E/[2(1+\nu)]$ with $E$ as Young’s modulus and $\nu$ as Poisson’s ratio, $\vec{u}$ is the displacement vector, $\alpha$ is the ratio of fluid volume extracted to volume change of the tissue under compression, $p$ is the interstitial pressure, $\rho_t$ is the tissue density, $\rho_f$ is the surrounding fluid density, $\vec{g}$ is the gravitational unit vector, $k_c$ is the capillary permeability, $p_c$ is the intracapillary pressure, and $k$ is the hydraulic conductivity. This constitutive model is a common model and has been used successfully to describe brain shift [17], [18], [27].

5). Computational Biomechanical Model

For each patient, a patient-specific finite element volumetric mesh was generated from the MR images. Briefly, once the patient’s images have been segmented, a marching cubes algorithm is used to generate a bounding surface. A custom-built mesh generator is then used to generate a volumetric tetrahedral mesh [44] with parenchyma and tumor designated. Parenchyma can be discretized further into white and gray matter elements using an image-to-grid methodology whereby the average image intensity of voxels within a tetrahedral element is determined and then used to threshold tissue type [52]. Typically, a brain mesh consists of approximately 100,000 tetrahedral elements (Figure 3).
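A hedged sketch of the image-to-grid tissue labeling is shown below. For brevity, each tetrahedron’s intensity is sampled at its centroid rather than averaged over all enclosed voxels, which is a simplification of the method in [52], and the grayWhiteThresh value is a hypothetical placeholder.

```matlab
% nodes : N x 3 node coordinates in voxel units, columns ordered (row, col, slice)
% elems : M x 4 tetrahedral connectivity; mrVol : preoperative MR volume
centroids = (nodes(elems(:,1),:) + nodes(elems(:,2),:) + ...
             nodes(elems(:,3),:) + nodes(elems(:,4),:)) / 4;

% Trilinear sample of image intensity at each element centroid.
% interp3 expects (col, row, slice) query order for a volume indexed (row, col, slice).
elemIntensity = interp3(double(mrVol), centroids(:,2), centroids(:,1), ...
                        centroids(:,3), 'linear');

grayWhiteThresh = 95;                         % hypothetical intensity threshold
tissueLabel = ones(size(elems,1), 1);         % default label: gray matter
tissueLabel(elemIntensity > grayWhiteThresh) = 2;   % brighter T1 elements -> white matter
```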

FIGURE 3. Brain (in white) and tumor (in yellow) mesh overlaid with MR images.

The boundary conditions applied were determined according to conditions commonly associated with brain shift in previous studies [17], [18], [27]. The displacement boundary conditions found to produce good estimates of brain shift were as follows: (1) the brainstem area was typically found to be very stable and as a result represents a fixed, i.e., no displacement, condition (Figure 4, left, red region); (2) in the region of the craniotomy and surrounding area, where the brain can often sag away or shift laterally, the surface was designated as stress free, allowing the brain to fall away from the cranial wall (Figure 4, left, green region); (3) the remaining brain surface is bound by the skull such that movement is limited to tangent-to-the-skull motion along the cranial wall only, i.e., a freedom-to-slip boundary condition (Figure 4, left, black region); (4) slip boundary conditions were also designated for the internal rigid dural septa structures (Figure 4, left, magenta region). As equations (1) and (2) state, gradients in interstitial pressure can induce deformations and embody the transient effects of the model. Pressure boundary conditions were designated either at an atmospheric reference pressure at elevations above the cerebrospinal fluid (CSF) drainage level (Figure 4, right, blue region) or as non-draining surfaces (i.e., no flux) below the drainage level (Figure 4, right, red region).
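A hedged sketch of how such displacement and pressure boundary types might be assigned to surface nodes is given below. The geometric tests (distance to the planned craniotomy sphere, a crude brainstem cutoff, elevation along gravity relative to a CSF level) are illustrative stand-ins for the automatic boundary-condition generator of [17]; all thresholds are hypothetical, and the dural septa assignment is not shown.

```matlab
% surfNodes : S x 3 brain-surface node coordinates (mm)
% cranioCenter, cranioRadius : from the surgical plan; gravDir : unit gravity vector
% csfLevel : scalar CSF drainage elevation (mm, measured against gravity)
dispBC = repmat("slip",    size(surfNodes,1), 1);   % default: slip along cranial wall
presBC = repmat("no_flux", size(surfNodes,1), 1);   % default: non-draining surface

% (2) Stress-free nodes inside the planned craniotomy region.
inCranio = vecnorm(surfNodes - cranioCenter, 2, 2) <= cranioRadius;
dispBC(inCranio) = "stress_free";

% (1) Fixed nodes near the brainstem (illustrative 15 mm cutoff, hypothetical).
inBrainstem = surfNodes(:,3) < min(surfNodes(:,3)) + 15;
dispBC(inBrainstem) = "fixed";

% Pressure: atmospheric reference above the CSF drainage level, no-flux below it.
elevation = -(surfNodes * gravDir(:));              % height measured against gravity
presBC(elevation > csfLevel) = "reference_pressure";
```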

FIGURE 4. (Left) Mesh of the brain with the fixed brain stem nodes in red, stress-free nodes in green, slippage nodes in black, dural septa nodes (also defined with slip boundary conditions) in magenta, and tumor nodes in blue. The black arrow indicates the direction of gravity. (Right) Mesh of the brain with Dirichlet boundary conditions for pressure set on the blue nodes at a baseline reference pressure and Neumann boundary conditions set on the red nodes, indicating non-draining surfaces.

6). Atlas Creation

While the above provides a good reference for a single boundary condition set, the surgical environment is quite dynamic. As a result, our strategy is to generate a distribution of possible boundary conditions based on reasonable surgical presentation, a so-called ‘atlas of deformations’. The boundary conditions in the previous section have therefore been parameterized such that, based on minimal preoperative planning, a complete deformation atlas can be constructed. The distribution of boundary conditions is based on three mechanisms of brain shift that we have observed to be important: gravity-induced brain shift, brain volume reduction due to administration of hyperosmotic drugs like mannitol, and brain swelling due to edema around the tumor [17], [18]. For gravity-induced deformation, we have varied the atlas to express three different levels of CSF drainage, which influences the deployment of pressure-related boundary conditions (Figure 4, right). With each drainage level, we also account for a distribution of possible head orientations around the estimate from the preoperative plan. While this accounts for inaccuracies in the preoperative plan, it also helps to account for surgical table adjustments during surgery (typically a distribution of +/−20 degrees from the preoperative estimate, leading to approximately 60 different head orientations). We should note that with each orientation, the boundary condition distribution in Figure 4, left, changes, i.e., our displacement boundary conditions are parameterized as a function of head orientation. With respect to the influence of hyperosmotic drugs, we have chosen to exploit the second term in equation (2). Our atlas allows for three different capillary permeabilities, i.e., varying $k_c$, with a fixed intracapillary pressure. Similarly, swelling variations were simulated with three different capillary permeability values and positive intracapillary pressures; however, we did allow for three different craniotomy sizes (75%, 100%, and 125% of the planned size) to account for any deviations from the plan. We should note that for swelling, the boundary conditions associated with Figure 4, left, green region, are modified to slip-based boundary conditions with stress free in the craniotomy region. For the work here, there were 729 total brain shift solutions contained within our deformation atlas. The different material properties and their varying levels are tabulated in Table 2 and are based on sensitivity studies performed with in vivo porcine brain experiments, which we have found to be quite satisfactory for predicting bulk brain shift [45]–[48].
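Conceptually, the parameterization amounts to building a flat list of boundary-condition specifications, one per atlas solution. The sketch below enumerates such a list with nested loops; the orientation set, drainage levels, permeabilities, and craniotomy scales are placeholder values (not those of Table 2), and the counts are not meant to reproduce the exact 729 cases used here.

```matlab
% Enumerate atlas cases; each cell specifies one forward solve.
planR        = eye(3);                 % planned head orientation (placeholder)
orientations = {planR};                % in practice ~60 rotations within +/-20 deg of the plan
drainLevels  = [0.0 0.5 1.0];          % fractional CSF drainage levels (placeholder)
kcValues     = [1e-10 5e-10 1e-9];     % capillary permeabilities (placeholder)
cranioScales = [0.75 1.00 1.25];       % craniotomy size scaling (swelling cases only)

cases = {};
for i = 1:numel(orientations)          % gravity-induced sag
    for d = drainLevels
        cases{end+1} = {"gravity", orientations{i}, d, NaN, 1.0}; %#ok<AGROW>
    end
end
for kc = kcValues                      % mannitol-induced shrinking
    cases{end+1} = {"mannitol", planR, NaN, kc, 1.0}; %#ok<AGROW>
end
for kc = kcValues                      % edema-induced swelling
    for s = cranioScales
        cases{end+1} = {"swelling", planR, NaN, kc, s}; %#ok<AGROW>
    end
end
fprintf('Atlas contains %d boundary-condition sets.\n', numel(cases));
```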

TABLE 2. List of material properties used.

The 729 finite element models were solved by spreading the computation across an 8-node computing cluster to ensure the atlas of solutions was built in time for the day of surgery. The biphasic brain model was solved for displacement and pressure using the open source Portable, Extensible Toolkit for Scientific Computation (PETSc). A highly automated process of computational model generation, boundary condition creation, and atlas solution was developed to streamline the workflow and minimize user error. Once complete, the deformation atlas is transferred to the intraoperative data collection and processing computer used in the OR on the day of surgery.
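Distributing the forward solves can be as simple as a parallel loop that dispatches one case per worker. The sketch below, continuing from the enumeration sketch above, uses MATLAB’s Parallel Computing Toolbox and a hypothetical command-line solver executable (biphasic_solver) and case-file naming convention; it is an illustration, not the cluster scripting actually used with PETSc.

```matlab
% Precompute the deformation atlas by farming out one forward solve per case.
% 'biphasic_solver' and the case_###.in naming convention are hypothetical.
if isempty(gcp('nocreate')), parpool(8); end     % one worker per cluster node in this sketch
parfor c = 1:numel(cases)
    infile  = sprintf('case_%03d.in',  c);
    outfile = sprintf('case_%03d.sol', c);
    status  = system(sprintf('./biphasic_solver -i %s -o %s', infile, outfile));
    if status ~= 0
        warning('Atlas case %d failed with exit code %d.', c, status);
    end
end
```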

B. Intraoperative Real-Time Image Update

To facilitate near real-time brain shift correction in the OR, an intraoperative pipeline was developed with two simple, user-friendly GUIs to process the collected LRS data along with the precomputed atlas (Figure 1, right side).

1). Physical to Image Space Registration

The Registration GUI is a registration and visualization utility that registers the patient’s physical space to image space using an LRS scan of the face and the corresponding surface from the MR image volume. The face LRS scan was acquired by positioning the LRS directly over the face of the patient, making sure to include, if possible, the nose, eyes, and ears, as these structures serve as good landmarks for registration. The manual segmentation tool in the LRS acquisition software (Pathfinder Therapeutics, Inc., Nashville, TN) was used to remove extraneous points in the face scan, such as hair, intubation tubes, and drapes, that would unnecessarily slow down the registration computation (Figure 5). Once the segmented face LRS scan is complete, a smoothing process using a commercially available radial basis function (RBF) surface-fitting procedure is performed (FarField Technology, Ltd., Christchurch, New Zealand). To initialize, three surface fiducials are selected on the LRS data of the patient’s face, the corresponding points are designated on the MR surface counterpart, and a rigid registration using Horn’s method [49] is executed. Once complete, the registration is refined using an iterative closest point surface registration [50]. The registration is verified by visual inspection of the overlay of both 3D objects (Figure 6). If the alignment of the two objects is not satisfactory, the user may select new points and execute another registration.
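The three-point initialization can be written compactly with an SVD-based closed-form rigid fit, which gives the same least-squares result as Horn’s quaternion method for this problem. In the sketch below, srcPts and dstPts stand for the three homologous fiducials picked on the LRS face scan and the MR head surface (hypothetical names), and the subsequent ICP refinement is not shown.

```matlab
function [R, t] = rigidFit(srcPts, dstPts)
% Closed-form least-squares rigid transform mapping srcPts onto dstPts (both K x 3).
% SVD (Kabsch) solution; equivalent in result to Horn's quaternion method.
srcC = mean(srcPts, 1);  dstC = mean(dstPts, 1);
H = (srcPts - srcC)' * (dstPts - dstC);            % 3 x 3 cross-covariance matrix
[U, ~, V] = svd(H);
R = V * diag([1 1 sign(det(V * U'))]) * U';        % guard against reflections
t = dstC' - R * srcC';
end

% Usage (hypothetical variables): initialize with the three picked fiducials,
% then refine with the iterative closest point algorithm of [50].
% [R0, t0]    = rigidFit(lrsFiducials, mrFiducials);
% alignedFace = (R0 * lrsFacePts')' + t0';
```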

FIGURE 5. The face RBF before (left) and after (right) manual segmentation to remove extraneous points.

FIGURE 6. A screenshot of the Registration GUI with the face RBF on the right overlaid onto the MR image-based head surface mesh on the left. The 3 homologous points on both surfaces are shown in green and are used for the initial alignment.

2). Pre-Resection LRS Scan

After the craniotomy was performed and the dura opened, the LRS was moved into place to acquire the exposed cortical brain surface. Care was taken to ensure a direct line of sight between the brain surface and the laser. Once acquired, a simple manual segmentation tool is used to remove extraneous points, isolating just the brain surface (Figure 7). Similar to the face scan, an RBF surface is fit, and rigid transformations are applied to transform the surface to image space. To confirm the positioning accuracy of the optically tracked LRS, the transformed pre-resection RBF scan was automatically overlaid onto the head surface mesh for visual inspection (Figure 8).

FIGURE 7. The pre-resection RBF before (top) and after (bottom) manual segmentation to remove extraneous points.

FIGURE 8. A screenshot of the Registration GUI with the transformed pre-resection RBF overlaid onto the MR image-based head surface mesh, which is set to be semi-transparent.

3). Post-Resection LRS Scan

Multiple LRS scans may be taken during the course of the surgery, at the neurosurgeon’s request, to track the updated position of the tumor, since the amount of brain shift is a function of time. The procedure for all sequential scans is the same. In this study, however, only one final cortical brain surface was acquired, after tumor resection was thought to be complete, when an image update would be useful to confirm complete removal of the tumor in the presence of brain shift. Since time is critical at this juncture, as the entire surgical team is waiting for updated images, processing steps were specifically developed to minimize user intervention and computation time. Instead of a full manual segmentation as done previously with the pre-resection LRS scan, a mask of the pre-resection LRS scan was applied to the post-resection LRS scan to remove points outside of the craniotomy region. The segmented post-resection scan was then automatically fitted with an RBF surface, transformed to image space, and displayed along with the pre-resection RBF for visualization (Figure 9).
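One minimal way to realize the masking step is to keep only post-resection points that fall near the previously segmented pre-resection cloud; the sketch below is such a distance-based stand-in, with prePts, postPts, and the 5 mm tolerance all hypothetical.

```matlab
% prePts  : P x 3 segmented pre-resection LRS points (image space, mm)
% postPts : Q x 3 raw post-resection LRS points (image space, mm)
tol = 5.0;                                  % mm, hypothetical inclusion tolerance
nearestIdx = dsearchn(prePts, postPts);     % closest pre-resection point per post point
d = vecnorm(postPts - prePts(nearestIdx, :), 2, 2);
postPtsMasked = postPts(d <= tol, :);       % keep points within the craniotomy footprint
```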

FIGURE 9. A screenshot of the Registration GUI with the transformed pre- and post-resection RBFs overlaid onto the MR image-based head surface, which has been made less opaque. The post-resection RBF lies below the pre-resection RBF, illustrating brain shift.

4). Homologous Point Pick

Once the pre- and post-resection LRS cortical surfaces were spatially transformed to image space, the Correction GUI is used to determine the driving shift measurements for correcting the image volume for brain shift. To accomplish this, the 2D pre-resection and post-resection bitmaps, i.e., the texture information acquired by the LRS unit, were visualized side by side. Homologous points were then selected using blood vessel bifurcations as landmarks (Figure 10). These points produce the shift measurements that drive our compensation system.

FIGURE 10. A screenshot of the Correction GUI with homologous points (in green) selected at blood vessel bifurcations on the pre- and post-resection bitmaps.

5). 2D to 3D Correspondence

Once homologous points are selected from the texture information provided by the LRS, they can be related directly to their 3D coordinate positions. We should note that brain shift is possible from the very instant the dura is opened. To account for this initial shift, a correspondence between the brain mesh and the intraoperative pre-resection LRS-defined features is determined using a closest point operator. Once determined, the shift from the pre-resection to the post-resection LRS is appended, yielding the complete measured deformation. In the event that very few homologous points can be determined, the platform allows the calculation to be driven solely by closest point operators, with the possibility of weighting from any homologous points that can be determined. For one patient in this study, this feature was used due to a lack of homologous points.
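In code, this correspondence step reduces to a nearest-neighbor lookup followed by a vector subtraction: each pre-resection feature is attached to its closest brain-surface mesh node, and the total measured shift at that node is the vector from the node (its undeformed position) to the matching post-resection feature. A minimal sketch under those assumptions, with hypothetical variable names, is:

```matlab
% surfNodes       : S x 3 undeformed brain-surface node coordinates (mm)
% prePts, postPts : K x 3 homologous 3D feature points on the pre-/post-resection surfaces
nodeIdx = dsearchn(surfNodes, prePts);      % closest mesh node per pre-resection feature

% Total measured shift per matched node: dura-opening shift (node -> pre point)
% plus resection-induced shift (pre point -> post point).
u_meas = postPts - surfNodes(nodeIdx, :);   % K x 3 driving displacement vectors
u = reshape(u_meas', [], 1);                % stacked (3K x 1) vector for equation (3)
```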

6). Inverse Modeling

With a field of displacements describing the cortical surface deformation defined, a correction algorithm is employed that uses a constrained least squares inverse modeling approach based on the atlas, constrained by the measured displacement vectors of the cortical surface as well as by added constraints on the reconstruction coefficients. Details of the inverse modeling can be found in previous studies [17], [18], [27]. Briefly, the least-squared error between the measured shift vectors and the predictions from the deformation atlas is minimized by solving the following problem for the weighting coefficients, $w$.

$$\min_{w}\;\|Mw - u\|^2 \quad \text{subject to} \quad w_i \geq 0 \;\;\forall i, \quad \sum_i w_i \leq 1 \tag{3}$$

where $u$ contains the measured shift vectors on the brain’s surface as determined by the above methods, and $M$ is the atlas matrix containing the precomputed deformation solutions at the selected measurement points on the computer model boundary mesh. The first constraint ensures only positive regression coefficients, and the second constraint prevents extrapolation of the solution. The constraints imposed have been shown to successfully predict brain shift [27], [28]. The Lagrange multiplier implementation in the MATLAB Optimization Toolbox was used to solve this linear optimization problem, along with the Parallel Computing Toolbox to improve input/output speeds. We should note that while other optimization approaches with fewer constraints can lead to better objective function values, we have found that constraints such as the above are necessary to maintain physically realistic deformations, i.e., a real safety constraint considering the dramatically underdetermined nature of this problem.
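Because the reconstruction in equation (3) is a linearly constrained least-squares problem, it maps naturally onto lsqlin from the Optimization Toolbox cited above. The sketch below is not the authors’ implementation: it assumes the atlas matrix M has already been subsampled to the 3K measurement rows (ordered consistently with the stacked vector u built earlier), that the anti-extrapolation constraint takes the form of the weights summing to at most one, and that Mfull (hypothetical) holds the full-resolution atlas with x, y, z rows interleaved per node.

```matlab
% M : (3K) x 729 atlas matrix evaluated at the K measurement points
% u : (3K) x 1 stacked measured shift vectors
nSol = size(M, 2);
A  = ones(1, nSol);   b  = 1;        % sum(w) <= 1  (prevents extrapolation)
lb = zeros(nSol, 1);  ub = [];       % w >= 0       (non-negative regression coefficients)
w  = lsqlin(M, u, A, b, [], [], lb, ub);

% Reconstruct the full volumetric field from the same weights applied to the
% full-resolution atlas (Mfull: 3*N_nodes x 729, rows interleaved x1,y1,z1,x2,...).
u_volume = reshape(Mfull * w, 3, []).';     % N_nodes x 3 nodal displacements
```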

7). Deformed Image Update

Once the inverse solution is achieved, a quantitative report is automatically generated based on the optimum solution for assessment, specifically the amount of shift measured and the error remaining after correction. As equation (3) is solved within the context of matching the sparse measurements at the surface, the coefficients determined are then used to construct a full volumetric deformation field, which is subsequently used to deform the preoperative MR images, thus providing an updated image of the deformed brain for use within the neuronavigation system. With respect to image deformation, nodal displacements from the undeformed finite element mesh were trilinearly interpolated onto a regular grid at the same resolution as the preoperative MR images. To ensure there was no extrapolation of displacements outside the brain, the grid of interpolated displacements was multiplied by the binary brain mask. Each undeformed pixel was then transformed to its respective deformed position and filled with a trilinearly interpolated pixel intensity value from the undeformed MR images. Since not every deformed brain pixel will be filled, the missing pixel intensity values were interpolated from their surrounding neighbors. The Parallel Computing Toolbox in MATLAB was used to parallelize and speed up the interpolation of the three Cartesian components of displacement.
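A simplified sketch of this image-update step follows: it interpolates the nodal displacements onto the image grid, masks them to the brain, and resamples the preoperative volume. For brevity it uses backward mapping with the small-deformation approximation u_inv(x) ≈ −u(x) instead of the forward mapping with hole filling described above, so it should be read as an illustration rather than the authors’ code.

```matlab
% nodes    : N x 3 undeformed node coordinates in voxel units, columns = (row, col, slice)
% u_volume : N x 3 nodal displacements in the same ordering (from the sketch above)
% brainMask, mrVol : binary brain mask and preoperative MR volume (same array size)
[nr, nc, ns] = size(mrVol);
[cq, rq, sq] = meshgrid(1:nc, 1:nr, 1:ns);               % image-grid query points

% Trilinear ('linear') interpolation of each displacement component onto the grid.
Fr = scatteredInterpolant(nodes, u_volume(:,1), 'linear', 'none');
Fc = scatteredInterpolant(nodes, u_volume(:,2), 'linear', 'none');
Fs = scatteredInterpolant(nodes, u_volume(:,3), 'linear', 'none');
Ur = Fr(rq, cq, sq);  Uc = Fc(rq, cq, sq);  Us = Fs(rq, cq, sq);
Ur(isnan(Ur)) = 0;  Uc(isnan(Uc)) = 0;  Us(isnan(Us)) = 0;

% Restrict displacements to the brain so nothing is extrapolated outside it.
mask = double(brainMask);
Ur = Ur .* mask;  Uc = Uc .* mask;  Us = Us .* mask;

% Backward mapping: sample the undeformed image at x - u(x) (small-deformation
% approximation of the inverse map, replacing forward mapping plus hole filling).
deformedVol = interp3(double(mrVol), cq - Uc, rq - Ur, sq - Us, 'linear', 0);
```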

A computer with an Intel quad-core i5 processor and 16 GB of RAM running 64-bit Windows 7 was used to compute the intraoperative steps. The same computer was also used to acquire the LRS scans.

III. RESULTS

The computational timing of each of the steps in the preoperative part of the pipeline for all 5 patients is tabulated in Table 3. The model and atlas creation steps were significantly longer for Patients #2, 3, and 4 because more extensive mesh refinement was needed to resolve the tumors. The total preoperative processing time ranged from 7 to 17 hours, with the majority of the time spent on creating the atlas.

TABLE 3. Time taken to run the preoperative steps in the pipeline.

The computational costs to register the patient space to MR image space using the Registration GUI for all 5 patients are listed in Table 4. The LRS acquisition time was up to 4 minutes, including positioning of the apparatus, for all 5 patients.

TABLE 4. Time taken for registration using the Registration GUI from the intraoperative steps in the pipeline.

The computational costs to produce the updated deformed brain image from the time the post-resection LRS scans were taken are listed in Table 5 for all 5 patients. The maximum time for an updated deformed brain image to be computed post-resection, including the post-resection LRS acquisition time, was approximately 13 minutes, and the fastest time was about 11 minutes (Table 5). From the perspective of surgical workflow, the most prominent ‘waiting’ period would likely be experienced during the computation of the updated image after homologous point picking. The average wait time during this period was approximately 4.5 minutes. In a realistic workflow setting, it is likely that the surgeon would be engaged during homologous point picking. Once that task was complete, the surgeon would be effectively waiting for an image update. Summing across columns 5, 6, and 7 in Table 5 and taking the average, the surgeon wait time would be approximately 5.5 minutes.

TABLE 5. Time taken to run the post-resection LRS scan segmentation part of the Registration GUI and the Correction GUI from the intraoperative steps in the pipeline.

The performance of the predictive computational model for all 5 patients is summarized in Table 6. It includes the number of homologous points used in calculating the measured brain shifts between the pre- and post-resection LRS scans, the percentage of shift corrected, and the magnitude of the corrected position errors. The percent shift corrected follows the formula (1 − corrected error magnitude/measured shift magnitude) × 100%, where the corrected error magnitude is the error between measured and model-predicted points [17], [18]. Despite the variability in the magnitude of brain shift between the 5 patients, the corrected error magnitudes had a narrow range of 2.48 mm to 3.29 mm. The updated images of the shift-compensated brains for all 5 patients are illustrated in Figure 11.

TABLE 6. The measured and predicted brain shift correction results (mean ± standard deviation, with maximum in parentheses) for all 5 patients.

FIGURE 11. Original (left) and model-updated shifted (right) brain images for all 5 patients.

IV. DISCUSSION

The objective of this study was to evaluate a preoperative and intraoperative processing pipeline developed for near real-time brain shift correction using cortical brain surface deformation data only. The complete process, beginning with the positioning of equipment in the surgical field and including data acquisition, inverse model processing, and image deformation, took approximately 11–13 minutes across the five patients. Within that time, the actual wait to compute an updated image volume, during which the neurosurgeon is not actively engaged in the workflow, is approximately 5.5 minutes. The current workflow has been developed to be minimally cumbersome, but better OR design is very achievable and will serve to reduce the total process time. This study comprehensively covers the pipeline and its performance on typical computing hardware. As hardware and software techniques continue to evolve, computation times are likely to improve.

Compared with intraoperative imaging systems, including movable magnet systems like the VISIUS (IMRIS, Inc., Chanhassen, MN), fixed magnet systems like the BrainSuite iMRI Miyabi (BrainLab, Inc., Westchester, IL), and portable systems like the PoleStar N20 (Medtronic, Inc., Minneapolis, MN), these systems all require at least the same amount of time, if not more, to position the magnet or patient, taking extra care to ensure anesthesia and ventilation are not interrupted. In addition, after the deformed images are acquired, more computation time is required to nonrigidly register the new images to the preoperative images [10], [11], [13], [20], [25], [51]. This latter point is quite important: even with the employment of iMR techniques, one should expect algorithmic times similar to our approach to still be needed to align other forms of data. Time is also spent moving the magnet or patient back to the original surgical position.

Breaking down the total 11–13 minutes of intraoperative setup and correction time yielded some interesting observations. The tasks that took the longest were the manual selection of the homologous points (up to 2.25 minutes), acquiring the LRS scan (up to 4 minutes), and computing the deformed image (up to 4 minutes). The time needed to deform the images was proportional to the number of slices and the in-slice resolution of the patient’s brain volume. To improve image deformation times, the computation was divided to run in parallel on four central processing unit (CPU) cores using MATLAB’s Parallel Computing Toolbox. Although CPU/GPU parallelization does improve computation time, there are alternative strategies for how deformation correction could be implemented within guidance systems, and some advances have already been made. In [53], an alternative strategy was proposed in which non-rigid deformations are compensated for in the localization of digitizers, which would eliminate the need for deformed image volumes. That alone would reduce the wait time for surgeons by approximately 3–4 minutes.

The positioning and acquisition of the LRS scans took up most of the time during the registration, pre-resection, and post-resection stages. The workflow could be improved by mounting the LRS scanner on the overhead articulating arms in the OR, allowing the scanner to be positioned and withdrawn from the field quickly and resulting in less disruption to the surgical workflow. As an alternative to LRS methodologies for point cloud generation, the use of stereo-pair reconstructed surfaces from the optics of the surgical microscope would be an excellent way to reduce workflow problems [23], [54]. Another opportunity to reduce interruption and save time would be to eliminate homologous point picking. Since video is available throughout the resection process, blood vessels on the cortical surface could be continuously tracked. Ding et al. [52] developed such a tracking feature, although in [48] it was used in a retrospective analysis. Combining the vessel-tracking feature with stereo-pair reconstruction from microscope data [53] could in effect generate the same type of data as the LRS while remaining integrally contained within the microscope environment.

The preoperative processing time to create the atlas from the MR images ranged from 7 to 17 hours in this study. Approximately 2 hours were spent generating the patient-specific brain models, with the majority of the preoperative processing time (6 to 14 hours) spent creating the 729 solutions in the atlas. Since preoperative MR images are typically acquired days before the surgery, there is clearly enough time to compute the atlas. However, a recent sensitivity study on the size of the atlas found that instead of the 729 solutions used in this study, only a fraction, approximately 123 solutions, could produce results with the same accuracy (effectively a sparser sampling of the atlas). The smaller atlas means that construction time could be reduced to 2 hours [28], [54]. This suggests that ‘same day as surgery’ preoperative computing is achievable.

The biphasic biomechanical model-based brain shift correction accounted for 60%–88% of the shift, with a mean correction error of about 3 mm. Sources of error may include image segmentation, finite element meshing, material properties, boundary conditions, and registration. Additionally, the LRS scanner has a geometric error of 0.25 ± 0.40 mm and a tracking error of 2.2 ± 1.0 mm [55]. Despite all of these possible sources of error, the mean error of 3 mm is remarkably small. Although the majority of brain shift was accounted for by the biphasic biomechanical model, even higher accuracy could likely be achieved if the collapse of the tumor resection cavity could be included in the model. Effort is underway to address this complex tissue-modeling event.

The homologous points selected for use in the error analysis were from the cortical brain surface; there is a lack of subcortical validation of the biphasic biomechanical model used. In a previous study by Dumpuri et al. [18], postoperative MR images were used with preoperative images to provide both surface and subsurface homologous points to drive the same biomechanical model. About 85% of the brain shift was recaptured in that eight-patient study, with the remaining shift error less than 1 mm. While this suggests submillimetric correction accuracy, it must be noted that significant brain deformation recovery had taken place prior to postoperative imaging in that study (up to 40% recovery in some instances). Nevertheless, the results from that study were promising and demonstrated the applicability of the biphasic biomechanical modeling approach.

A. Opportunities and Challenges

The above system represents a cohesive approach to collecting, segmenting, and processing data, with the result producing a ‘computationally’ altered image for improved navigation in image-guided procedures. There are clearly limitations to the approach and room for improvement. In no area of imaging and image processing has there been more development than the neurosurgical domain. The opportunity to develop sophisticated computer models with not only general anatomical information but also complex structural information (e.g., diffusion tensor imaging and elastography) is attainable. In addition, it is important to recognize that more sophisticated modeling platforms are being developed that incorporate a variety of constitutive laws as well as interactive simulation conditions that include nonlinear effects (e.g., SOFA [56]). While we have chosen a linear platform here based on acceptable performance levels within the localization limitations of today’s IGS systems, this will undoubtedly change in the future with the evolution of more precise surgical systems (e.g., robotic platforms [57]). The work presented in this paper, however, represents a baseline ‘systems’ level realization from which enhanced innovation can be realized. For example, challenges with space-occupying lesions and the removal of tissue still persist, and solutions are needed. While new data streams (e.g., LRS and the surgical microscope) and interventional diagnostics (e.g., optical spectroscopy and fluorescence) are on the horizon, new minimally invasive neurosurgical techniques will continue to present challenges. Hardware and software developments bring enormous processing speed and enhanced computational architecture to the OR, but workflow requirements and the ever-increasing wealth of preoperative information continue to expand and require improvements. This contribution undoubtedly represents a ‘snapshot’ of technology in time, but it is an important one, emphasizing the characteristics that serve as constraints on data acquisition and guidance procedure execution while also highlighting the potential for computation within the OR. It embodies the problem of extrapolating cost-effective, relevant information from distinctly finite or sparse data while balancing the competing goals of workflow and engineering design, and of application and accuracy, a problem we have called the ‘sparse data extrapolation problem’ [16].

V. CONCLUSION

This paper demonstrates that deformation-compensated images can be computed intraoperatively in near real-time using sparse data and biomechanical model approaches, without the need for whole intraoperative imaging systems. It also suggests that intraoperative computing time is less significant than the workflow of equipment positioning and data acquisition. The work demonstrates a logical and systematic preoperative and intraoperative pipeline that is robust, simple, and minimally disruptive, and in some cases perhaps significantly less unwieldy than the setup for intraoperative imaging systems. Lastly, while a great deal of work toward these computational approaches has been achieved, more validation using ‘gold standard’ iMR measurement methods is needed, as well as long-term patient outcome studies.

Acknowledgment

The authors would like to thank the surgical residents, OR staff and the Radiology Department at Vanderbilt University for their help in data collection. The authors would also like to thank Dr. Benoit Dawant for providing the atlas-based brain segmentation code.

Biographies


Kay Sun received the Ph.D. degree in bioengineering from Rice University, Houston, TX, USA, in 2006. She was a Scientist at Intio Inc., Broomfield, CO, USA, from 2008 to 2010. She was a Staff Engineer at Vanderbilt University, Nashville, TN, USA, from 2010 to 2013, and is currently a Biomedical Computation and Modeling Scientist with CFD Research Corporation, Huntsville, AL, USA.


Thomas S. Pheiffer received the B.S. degree in biosystems engineering from Clemson University, Clemson, SC, USA, in 2007, and the M.S. degree in biomedical engineering from Vanderbilt University, Nashville, TN, USA, in 2010. He is currently pursuing the Ph.D. degree in biomedical engineering at Vanderbilt University. His research interests include image-guided surgery and ultrasound imaging.


Amber L. Simpson received the B.Sc. degree in computer science from Trent University, Peterborough, ON, USA, in 2000, and the M.Sc. and Ph.D. degrees in computer science from Queen’s University, Kingston, ON, Canada, in 2002 and 2010, respectively. She joined the faculty at Vanderbilt University, Nashville, TN, USA, in 2009, and is currently a Research Assistant Professor of Biomedical Engineering. She is a member of the Vanderbilt Initiative in Surgery and Engineering. Her research interests include the evaluation and validation methodologies for surgical navigation and the computation and visualization of measurement uncertainty in surgery.


Jared A. Weis received the B.S. degree in biomedical engineering from Washington University in St. Louis, St. Louis, MO, USA, in 2005, and the M.S. and Ph.D. degrees in biomedical engineering from Vanderbilt University, Nashville, TN, USA, in 2009 and 2011, respectively. He is currently a Post-Doctoral Research Fellow with the Institute of Imaging Science and the Department of Radiology, Vanderbilt University.


Reid C. Thompson received the M.D. degree from the Johns Hopkins University School of Medicine, Baltimore, MD, USA, in 1989, and completed his residency in neurological surgery there in 1996. While at Hopkins, his research began to focus on brain tumors, and he completed a two-year NIH fellowship in neurooncology. He also completed a fellowship in cerebrovascular surgery at Stanford University, Stanford, CA, USA, specializing in the surgical treatment of aneurysms and other vascular disorders of the brain and spine. In 2002, he was recruited to Vanderbilt’s Department of Neurological Surgery. He is currently the William F. Meacham Professor and Chairman of Neurosurgery, the Director of Neurosurgical Oncology, and the Director of the Vanderbilt Brain Tumor Center.


Michael I. Miga received the B.S. and M.S. degrees in mechanical engineering with applied mechanics from the University of Rhode Island, Kingston, RI, USA, in 1992 and 1994, respectively, and the Ph.D. degree in biomedical engineering from Dartmouth College, Hanover, NH, USA, in 1998. He joined the faculty at the Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA, in 2000. He is currently a Professor of Biomedical Engineering, Radiology and Radiological Sciences, and Neurological Surgery. He is the Director of the Biomedical Modeling Laboratory and is the Co-Founder of the Vanderbilt Initiative in Surgery and Engineering. The focus of his work is on the development of new paradigms in detection, diagnosis, and treatment of disease through the integration of computational models into research and clinical practice. He teaches courses in biomechanics, biotransport, and computational modeling.

Funding Statement

This work was supported by the National Institutes of Health/National Institute of Neurological Disorders and Stroke under Grant R01NS049251.

References

  • [1].Roberts D. W., Hartov A., Kennedy F. E., Miga M. I., and Paulsen K. D., “Intraoperative brain shift and deformation: A quantitative analysis of cortical displacement in 28 cases,” Neurosurgery, vol. 43, no. , pp. 749–758, Oct. 1998. [DOI] [PubMed] [Google Scholar]
  • [2].Nabavi A., et al. , “Serial intraoperative magnetic resonance imaging of brain shift,” Neurosurgery, vol. 48, no. 4, pp. 787–797, Apr. 2001. [DOI] [PubMed] [Google Scholar]
  • [3].Miga M. I., et al. , “Modeling of retraction and resection for intraoperative updating of images,” Neurosurgery, vol. 49, no. 1, pp. 75–84, Jul. 2001. [DOI] [PubMed] [Google Scholar]
  • [4].Hill D. L., Maurer C. R. Jr., Maciunas R. J., Barwise J. A., Fitzpatrick J. M., and Wang M. Y., “Measurement of intraoperative brain surface deformation under a craniotomy,” Neurosurgery, vol. 43, no. 3, pp. 514–526, September 1998. [DOI] [PubMed] [Google Scholar]
  • [5].Maurer C. R., Jr., et al. , “Investigation of intraoperative brain deformation using a 1.5-T interventional MR system: Preliminary results,” IEEE Trans. Med. Imag., vol. 17, no. 5, pp. 817–825, Oct. 1998. [DOI] [PubMed] [Google Scholar]
  • [6].Paulsen K. D., Miga M. I., Kennedy F. E., Hoopes P. J., Hartov A., and Roberts D. W., “A computational model for tracking subsurface tissue deformation during stereotactic neurosurgery,” IEEE Trans. Biomed. Eng., vol. 46, no. 2, pp. 213–225, Feb. 1999. [DOI] [PubMed] [Google Scholar]
  • [7].Hata N., et al. , “Three-dimensional optical flow method for measurement of volumetric brain deformation from intraoperative MR images,” J. Comput. Assist. Tomogr., vol. 24, no. 4, pp. 531–538, Jul-Aug 2000. [DOI] [PubMed] [Google Scholar]
  • [8].Nimsky C., Ganslandt O., Cerny S., Hastreiter P., Greiner G., and Fahlbusch R., “Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging,” Neurosurgery, vol. 47, no. 5, pp. 1070–1079, Nov. 2000. [DOI] [PubMed] [Google Scholar]
  • [9].Warfield S. K., et al. , “Capturing intraoperative deformations: Research experience at Brigham and women’s hospital,” Med. Image Anal., vol. 9, no. 2, pp. 145–162, Apr. 2005. [DOI] [PubMed] [Google Scholar]
  • [10].Clatz O., et al. , “Robust nonrigid registration to capture brain shift from intraoperative MRI,” IEEE Trans. Med. Imag., vol. 24, no. 11, pp. 1417–1427, Nov. 2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Ferrant M., et al. , “Serial registration of intraoperative MR images of the brain,” Med. Image Anal., vol. 6, no. 4, pp. 337–359, 2002. [DOI] [PubMed] [Google Scholar]
  • [12].Skrinjar O., Nabavi A., and Duncan J., “Model-driven brain shift compensation,” Med. Image Anal., vol. 6, no. 4, pp. 361–373, 2002. [DOI] [PubMed] [Google Scholar]
  • [13].Wittek A., Miller K., Kikinis R., and Warfield S. K., “Patient-specific model of brain deformation: Application to medical image registration,” J. Biomech., vol. 40, no. 4, pp. 919–929, 2007. [DOI] [PubMed] [Google Scholar]
  • [14].Roberts D. W., et al. , “Intraoperatively updated neuroimaging using brain modeling and sparse data,” Neurosurgery, vol. 45, no. 5, pp. 1199–1206, Nov. 1999. [PubMed] [Google Scholar]
  • [15].Miga M. I., et al. , “Updated neuroimaging using intraoperative brain modeling and sparse data,” Stereotactic Funct. Neurosurgery, vol. 72, nos. 2–4, pp. 103–106, 1999. [DOI] [PubMed] [Google Scholar]
  • [16].Miga M., Dumpuri P., Simpson A. L., Weis J. A., and Jarnagin W. R., “The sparse data extrapolation problem: Strategies for soft-tissue correction for image-guided liver surgery,” Proc. SPIE, vol. 7964, P. 79640C, Mar. 2011. [Google Scholar]
  • [17].Dumpuri P., Thompson R. C., Dawant B. M., Cao A., and Miga M. I., “An atlas-based method to compensate for brain shift: Preliminary results,” Med. Image Anal., vol. 11, no. 2, pp. 128–145, Apr. 2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].Dumpuri P., et al. , “A fast and efficient method to compensate for brain shift for tumor resection therapies measured between preoperative and postoperative tomograms,” IEEE Trans. Biomed. Eng., vol. 57, no. 6, pp. 1285–1296, Jun. 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Hagemann A., Rohr K., and Stiehl H. S., “Coupling of fluid and elastic models for biomechanical simulations of brain deformations using FEM,” Med. Image Anal., vol. 6, no. 4, pp. 375–388, 2002. [DOI] [PubMed] [Google Scholar]
  • [20].Joldes G. R., Wittek A., Couton M., Warfield S. K., and Miller K., “Real-time prediction of brain shift using nonlinear finite element algorithms,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. MICCAI, vol. 12 2009, pp. 300–307. [DOI] [PubMed] [Google Scholar]
  • [21].Wittek A., Kikinis R., Warfield S. K., and Miller K., “Brain shift computation using a fully nonlinear biomechanical model,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. MICCAI, vol. 8 2005, pp. 583–590. [DOI] [PubMed] [Google Scholar]
  • [22].Zhuang D. X., et al. , “A sparse intraoperative data-driven biomechanical model to compensate for brain shift during neuronavigation,” Amer. J. Neuroradiol., vol. 32, no. 2, pp. 395–402, Feb. 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Sun H., et al. , “Stereopsis-guided brain shift compensation,” IEEE Trans. Med. Imag., vol. 24, no. 8, pp. 1039–1052, Aug. 2005. [DOI] [PubMed] [Google Scholar]
  • [24].Cash D. M., Miga M. I., Sinha T. K., Galloway R. L., and Chapman W. C., “Compensating for intraoperative soft-tissue deformations using incomplete surface data and finite elements,” IEEE Trans. Med. Imag., vol. 24, no. 11, pp. 1479–1491, Nov. 2005. [DOI] [PubMed] [Google Scholar]
  • [25].Ferrant M., Nabavi A., Macq B., Jolesz F. A., Kikinis R., and Warfield S. K., “Registration of 3-d intraoperative MR images of the brain using a finite-element biomechanical model,” IEEE Trans. Med. Imag., vol. 20, no. 12, pp. 1384–1397, Dec. 2001. [DOI] [PubMed] [Google Scholar]
  • [26].D’Haese P. F., Duay V., Merchant T. E., Macq B., and Dawant B. M., “Atlas-based segmentation of the brain for 3-dimensional treatment planning in children with infratentorial ependymoma,” in Proc. Med. Image Comput. Comput.-Assist. Intervent. MICCAI, vol. 2879 2003, pp. 627–634. [Google Scholar]
  • [27].Chen I., et al. , “Intraoperative brain shift compensation: Accounting for dural septa,” IEEE Trans. Biomed. Eng., vol. 58, no. 3, pp. 499–508, Mar. 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Chen I., “Evaluation of atlas-based brain shift model for improved adaptation to intraoperative neurosurgical conditions,”Ph.D. dissertation, Dept. Biomed. Eng., Vanderbilt University, Nashville, TN, USA, 2012. [Google Scholar]
  • [29].Nagashima T., Shirakuni T., and Rapoport S. I., “A two-dimensional, finite element analysis of vasogenic brain edema,” Neurol. Med. Chirurgica, vol. 30, no. 1, pp. 1–9, Jan. 1990. [DOI] [PubMed] [Google Scholar]
  • [30].Commer P., Bourauel C., Maier K., and Jager A., “Construction and testing of a computer-based intraoral laser scanner for determining tooth positions,” Med. Eng. Phys., vol. 22, no. 9, pp. 625–635, Nov. 2000. [DOI] [PubMed] [Google Scholar]
  • [31].Audette M. A., Siddiqi K., Ferrie F. P., and Peters T. M., “An integrated range-sensing, segmentation and registration framework for the characterization of intra-surgical brain deformations in image-guided surgery,” Comput. Vis. Image Understand., vol. 89, nos. 2–3, pp. 226–251, 2003. [Google Scholar]
  • [32].Miga M. I., Sinha T. K., Cash D. M., Galloway R. L., and Weil R. J., “Cortical surface registration for image-guided neurosurgery using laser-range scanning,” IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 973–985, Aug. 2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [33].Sinha T. K., et al., “A method to track cortical surface deformations using a laser range scanner,” IEEE Trans. Med. Imag., vol. 24, no. 6, pp. 767–781, Jun. 2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [34].Sinha T. K., Miga M. I., Cash D. M., and Weil R. J., “Intraoperative cortical surface characterization using laser range scanning: Preliminary results,” Neurosurgery, vol. 59, no. 4, pp. 368–377, Oct. 2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Cao A., et al., “Laser range scanning for image-guided neurosurgery: Investigation of image-to-physical space registrations,” Med. Phys., vol. 35, no. 4, pp. 1593–1605, Apr. 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Ding S., et al., “Semiautomatic registration of pre- and postbrain tumor resection laser range data: Method and validation,” IEEE Trans. Biomed. Eng., vol. 56, no. 3, pp. 770–780, Mar. 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [37].Shamir R. R., Freiman M., Joskowicz L., Spektor S., and Shoshan Y., “Surface-based facial scan registration in neuronavigation procedures: A clinical study,” J. Neurosurgery, vol. 111, no. 6, pp. 1201–1206, Dec. 2009. [DOI] [PubMed] [Google Scholar]
  • [38].Cash D. M., et al., “Incorporation of a laser range scanner into image-guided liver surgery: Surface acquisition, registration, and tracking,” Med. Phys., vol. 30, no. 7, pp. 1671–1682, Jul. 2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Dumpuri P., Clements L. W., Dawant B. M., and Miga M. I., “Model-updated image-guided liver surgery: Preliminary results using surface characterization,” Progr. Biophys. Molecular Biol., vol. 103, nos. 2–3, pp. 197–207, Dec. 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Meehan M., Teschner M., and Girod S., “Three-dimensional simulation and prediction of craniofacial surgery,” Orthodontics Craniofacial Res., vol. 6, no. 1, pp. 102–107, 2003. [DOI] [PubMed] [Google Scholar]
  • [41].Marmulla R., Hassfeld S., Luth T., Mende U., and Muhling J., “Soft tissue scanning for patient registration in image-guided surgery,” Comput. Aided Surgery, vol. 8, no. 2, pp. 70–81, 2003. [DOI] [PubMed] [Google Scholar]
  • [42].Rohde G. K., Aldroubi A., and Dawant B. M., “The adaptive bases algorithm for intensity-based nonrigid image registration,” IEEE Trans. Med. Imag., vol. 22, no. 11, pp. 1470–1479, Nov. 2003. [DOI] [PubMed] [Google Scholar]
  • [43].Lorensen W. E. and Cline H. E., “Marching cubes: A high resolution 3D surface construction algorithm,” in Proc. SIGGRAPH, Anaheim, CA, USA, 1987, pp. 163–169. [Google Scholar]
  • [44].Sullivan J. M., Charron G., and Paulsen K. D., “A three-dimensional mesh generator for arbitrary multiple material domains,” Finite Elements Anal. Des., vol. 25, nos. 3–4, pp. 219–241, 1997. [Google Scholar]
  • [45].Miga M. I., “Development and quantification of a 3D brain deformation model for model-updated image-guided stereotactic neurosurgery,” Ph.D. dissertation, Dept. Eng., Dartmouth College, Hanover, NH, USA, 1998. [Google Scholar]
  • [46].Miga M. I., Paulsen K. D., Hoopes P. J., Kennedy F. E., Hartov A., and Roberts D. W., “In vivo modeling of interstitial pressure in the brain under surgical load using finite elements,” J. Biomech. Eng., vol. 122, no. 4, pp. 354–363, Aug. 2000. [DOI] [PubMed] [Google Scholar]
  • [47].Miga M. I., Paulsen K. D., Kennedy F. E., Hoopes P. J., Hartov A., and Roberts D. W., “Modeling surgical loads to account for subsurface tissue deformation during stereotactic neurosurgery,” Proc. SPIE, vol. 3254, pp. 501–511, May 1998. [Google Scholar]
  • [48].Miga M. I., Paulsen K. D., Kennedy F. E., Hoopes P. J., Hartov A., and Roberts D. W., “In vivo analysis of heterogeneous brain deformation computations for model-updated image guidance,” Comput. Methods Biomech. Biomed. Eng., vol. 3, no. 2, pp. 129–146, 2000. [DOI] [PubMed] [Google Scholar]
  • [49].Horn B. K. P., “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Amer. A, vol. 4, no. 4, pp. 629–642, Apr. 1987. [Google Scholar]
  • [50].Ma B. and Ellis R. E., “Robust registration for computer-integrated orthopedic surgery: Laboratory validation and clinical experience,” Med. Image Anal., vol. 7, no. 3, pp. 237–250, Sep. 2003. [DOI] [PubMed] [Google Scholar]
  • [51].Vigneron L. M., Noels L., Warfield S. K., Verly J. G., and Robe P. A., “Serial FEM/XFEM-based update of preoperative brain images using intraoperative MRI,” Int. J. Biomed. Imag., vol. 2012, p. 872783, Jan. 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [52].Ding S., Miga M. I., Pheiffer T. S., Simpson A. L., Thompson R. C., and Dawant B. M., “Tracking of vessels in intra-operative microscope video sequences for cortical displacement estimation,” IEEE Trans. Biomed. Eng., vol. 58, no. 7, pp. 1985–1993, Jul. 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [53].Sun H., Roberts D. W., Farid H., Wu Z., Hartov A., and Paulsen K. D., “Cortical surface tracking using a stereoscopic operating microscope,” Neurosurgery, vol. 56, no. 1, pp. 86–97, Jan. 2005. [DOI] [PubMed] [Google Scholar]
  • [54].Chen I., Simpson A. L., Sun K., Thompson R. C., and Miga M., “Sensitivity analysis and automation for intraoperative implementation of the atlas-based method for brain shift correction,” Proc. SPIE, vol. 8671, p. 86710T, Mar. 2013. [Google Scholar]
  • [55].Pheiffer T. S., Simpson A. L., Lennon B., Thompson R. C., and Miga M. I., “Design and evaluation of an optically-tracked single-CCD laser range scanner,” Med. Phys., vol. 39, no. 2, p. 636, Feb. 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [56].Comas O., Taylor Z. A., Allard J., Ourselin S., Cotin S., and Passenger P., “Efficient nonlinear FEM for soft tissue modelling and its GPU implementation within open source framework SOFA,” in Biomedical Simulation (Lecture Notes in Computer Science), vol. 5104, Bello F. and Edwards P. J. E., Eds. Berlin, Germany: Springer-Verlag, 2008, pp. 28–39. [Google Scholar]
  • [57].Sutherland G. R., Wolfsberger S., Lama S., and Zarei-nia K., “The evolution of neuroArm,” Neurosurgery, vol. 72, pp. A27–A32, Jan. 2013. [DOI] [PubMed] [Google Scholar]
