Procedures of robot-assisted percutaneous renal intervention
The robot-assisted percutaneous renal intervention workflow consists of preoperative surgical planning, intraoperative surgical navigation, and semi-autonomous telemanipulated needle operation. First, the patient is scanned by magnetic resonance (MR) imaging; the kidney, vessels, and tumor are then segmented from the MR volume as a 3D model, so that the surgeon can make an optimal surgical plan preoperatively. During surgery, a semi-automatic rigid registration aligns the US slices with the MR volume, so the preoperative plan can be transferred onto the patient in situ. With an image-guidance interface, the surgeon guides the robot to the insertion point; needle alignment and the interventional puncture are then performed autonomously in accordance with the surgical plan. The return of urine through the trocar needle verifies that the needle has successfully gained access to the collecting system. The needle is then detached from the robot, and the subsequent surgical procedure continues.
Experiment setup
The prototype system for needle insertion was set up in our integrated operating room (see Figure 1). The master was a PHANToM OMNI haptic device (SensAble Technologies Inc., USA), which provided position measurements at its end point and force feedback in three DOFs. A 5R1P industrial robot (REBoV6R650, Shenzhen Reinovo Technology Co. Ltd., China) was employed for needle operation: five rotational joints uniquely determine the needle orientation and position, and the last joint, a linear DC servomotor (Quickshaft LM1247, Faulhaber Group, Germany), drives needle insertion with 180 μm positional accuracy and a 3 N maximum force. The last two joint axes of the wrist and the translational axis intersect in a single point. Needle orientation and puncture are activated independently by the corresponding joints and the safety button of the haptic device. An 18-gauge trocar needle (090020ET, Cook Urological Inc., USA) with a triangular diamond tip was attached to the end-effector by a detachable unit equipped with a force sensor (AL311BL, Honeywell Inc., USA); the force values were collected by a data acquisition card (DAQ 6229USB, NI Inc., USA). A 3D ultrasound system (DC-7, Mindray Medical Ltd., China) was used as the intraoperative navigator. To track the 6D poses of the needle and the ultrasound frame, passive optical markers were mounted on the ultrasonic probe and the needle holder; the receiver was an optical tracking system with an RMS positioning accuracy of 0.35 mm (Polaris Spectra, Northern Digital Inc. (NDI), Canada). All experiments were conducted on a silicone phantom from Computerized Imaging Reference Systems (CIRS), so no ethical concern is involved.
Registration of image-robot-tracker
Registration of the robot to the image space provides the essential relationship between the needle location and the targets in image coordinates. Indeed, inaccurate robot-image calibration has a direct impact on the accuracy of the needle steering.
(i). Image-tracker registration
At the preoperative surgical planning stage, we applied the semi-autonomous algorithm from [15] to segment the kidney parenchyma and vascular structures from magnetic resonance images. A 3D plan can then be defined by the surgeon in the MR volume coordinate frame F_{MR}. During surgery, the tracker reads the positions of the markers fixed on the robot end-effector and the US probe, while the preoperative data and the surgical plan are registered to the calibrated intraoperative US images. First, two pairs of orthogonal ultrasound images were collected near the 11th intercostal space at the maximum exhalation position; all images should contain clearly visible kidney contours. Next, the iterative closest point (ICP) algorithm was performed to align the US slices with the MR volume, using the kidney surface and large vessel surfaces as registration features [16]. Based on the ultrasound-MR volume transformation {T}_{\mathit{US}}^{\mathit{MR}} and the tracker-ultrasound transformation {T}_{\mathit{US}}^{T}, the planned puncture path can be transferred from the preoperative MR volume frame F_{MR} into the intraoperative tracker frame F_{Tracker}. Once the robot-tracker registration {\mathbf{T}}_{R}^{T} is done, the surgical plan can be transferred from the preoperative MR volume frame F_{MR} to the robot frame F_{Robot}. At the maximum exhalation, the needle is rapidly inserted into the intrarenal target under navigated guidance. The image-robot-tracker registration is shown in Figure 2.
{T}_{R}^{\mathit{MR}}={T}_{\mathit{US}}^{\mathit{MR}}{\left({T}_{\mathit{US}}^{T}\right)}^{-1}{T}_{R}^{T}
(1)
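As an illustrative sketch (not the authors' code), the registration chain of Eq. (1) can be composed from 4×4 homogeneous matrices; the helper names `compose_registration` and `transfer_point` are our own:

```python
import numpy as np

def compose_registration(T_US_MR, T_US_T, T_R_T):
    """Chain Eq. (1): T_R^MR = T_US^MR (T_US^T)^-1 T_R^T.

    Each argument is a 4x4 homogeneous transform. The result maps points
    expressed in the robot frame into the MR volume frame; its inverse
    carries the preoperative MR plan into robot coordinates.
    """
    return T_US_MR @ np.linalg.inv(T_US_T) @ T_R_T

def transfer_point(T_R_MR, p_mr):
    """Map a planned target point from MR coordinates into robot coordinates."""
    p_h = np.append(p_mr, 1.0)                 # homogeneous coordinates
    return (np.linalg.inv(T_R_MR) @ p_h)[:3]
```

In practice each factor would come from the ICP alignment and the tracker readings described above; here the composition order is the only point being illustrated.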
(ii). Robot-tracker correspondence
In this section, we propose a simplified registration method for both robot-tracker correspondence and robot calibration.
The coordinate systems of the 5R1P needle-operation robot are depicted in Figure 3. The frame F_{Tracker} = (x_T, y_T, z_T) is attached to the optical tracker base. In total, 8 coordinate systems are attached to the manipulator: the robot base frame F_{Robot} = (x_R, y_R, z_R) is used as the reference; the last frame F_{Needle} = (x_N, y_N, z_N) is attached to the needle; the passive optical marker carries the frame F_{Marker} = (x_M, y_M, z_M) and is mounted on the end-effector; the remaining coordinate systems (x_i, y_i, z_i), i = 1⋯5, are attached to the links. The transformations from the marker frame to the tracker base and to the robot frame can be expressed, respectively, with matrices of the form
{\mathbf{T}}_{M}^{T}=\left[\begin{array}{cc}{\mathbf{R}}_{M}^{T} & {\mathbf{d}}_{M}^{T}\\ \mathbf{0} & 1\end{array}\right],\quad {\mathbf{T}}_{M}^{R}=\left[\begin{array}{cc}{\mathbf{R}}_{M}^{R} & {\mathbf{p}}_{M}^{R}\\ \mathbf{0} & 1\end{array}\right]
(2)
where {\mathbf{p}}_{M}^{R} and {\mathbf{d}}_{M}^{T} are the marker positions observed in F_{Robot} and F_{Tracker}, the rotation matrix {\mathbf{R}}_{M}^{T}=\left[{\mathbf{m}}_{1},{\mathbf{m}}_{2},{\mathbf{m}}_{3}\right] collects the axis vectors x_M, y_M, z_M expressed in the tracker frame, and {\mathbf{R}}_{M}^{R}=\left[{\mathbf{n}}_{1},{\mathbf{n}}_{2},{\mathbf{n}}_{3}\right] collects the same axes expressed in the robot frame. They are ideally related by the robot-tracker transformation matrix {\mathbf{T}}_{R}^{T}, shown as
{\mathbf{T}}_{R}^{T}=\left[\begin{array}{cc}{\mathbf{R}}_{R}^{T} & {\mathbf{p}}_{R}^{T}\\ \mathbf{0} & 1\end{array}\right]
(3)
{\mathbf{T}}_{M}^{T}={\mathbf{T}}_{R}^{T}{\mathbf{T}}_{M}^{R}
(4)
Affected by measurement noise \mathbf{U} and {\mathbf{V}}_{i}, i = 1, 2, 3, the expansions of (4) are
{\mathbf{d}}_{M}^{T}={\mathbf{R}}_{R}^{T}{\mathbf{p}}_{M}^{R}+{\mathbf{p}}_{R}^{T}+\mathbf{U}
(5)
{\mathbf{m}}_{i}={\mathbf{R}}_{R}^{T}{\mathbf{n}}_{i}+{\mathbf{V}}_{i}
(6)
K corresponded pose pairs {\left\{{\mathbf{T}}_{M}^{T},{\mathbf{T}}_{M}^{R}\right\}}_{k}, k = 1 ⋯ K, were recorded at different robot joint configurations. Solving for the optimal transformation {\mathbf{T}}_{R}^{T} typically requires minimizing the least-squares error criterion given by
\Sigma =\sum _{k=1}^{K}\sum _{i=1}^{3}{\alpha}_{\mathit{ki}}{\left\Vert {\mathbf{m}}_{\mathit{ki}}-{\mathbf{R}}_{R}^{T}{\mathbf{n}}_{\mathit{ki}}\right\Vert}^{2}+\sum _{k=1}^{K}{\beta}_{k}{\left\Vert {\mathbf{d}}_{\mathit{Mk}}^{T}-{\mathbf{R}}_{R}^{T}{\mathbf{p}}_{\mathit{Mk}}^{R}-{\mathbf{p}}_{R}^{T}\right\Vert}^{2}
(7)
A dual-number-quaternion-based algorithm was employed to estimate the transformation matrix [17]; it incorporates both orientation and translation information. However, inaccuracy in the robot forward kinematics seriously affects the validity of the registration result, so robot calibration is required to reduce both the registration error and the inaccuracies in the link parameters and joint angles.
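The paper uses a dual-quaternion solver [17]; as a hedged illustration of the same least-squares criterion (7), the sketch below solves it with the SVD-based Kabsch method instead, assuming unit weights α, β:

```python
import numpy as np

def register_robot_to_tracker(m, n, d, p):
    """Estimate T_R^T from K corresponded marker poses.

    m: (3K, 3) marker axis vectors m_ki observed in the tracker frame
    n: (3K, 3) the same axes n_ki from robot forward kinematics
    d: (K, 3)  marker positions d_Mk^T in the tracker frame
    p: (K, 3)  marker positions p_Mk^R in the robot frame

    Illustrative SVD (Kabsch) solution of Eq. (7) with unit weights;
    the text itself employs a dual-quaternion algorithm [17].
    """
    # Rotation from paired unit axes (no centering needed for directions)
    H = n.T @ m                        # 3x3 correlation matrix sum n_i m_i^T
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T   # minimizes sum ||m - R n||^2
    # Translation from the positional residual in Eq. (7)
    t = d.mean(axis=0) - R @ p.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The determinant check guards against a reflection when the axis data are noisy or degenerate, a standard precaution in Kabsch-type solvers.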
(iii). Calibration of robot parameters
The forward kinematics of the 5R1P needle-manipulating robot is constructed as a cascade of transformations between consecutive joint frames based on the modified DH parameters [18]. The needle was axially attached to the linear motor shaft, and the optical marker was mounted on the outer shell of the motor. The transformation matrix {\mathbf{T}}_{M}^{R} of the marker can be obtained via the robot forward kinematics,
{\mathbf{T}}_{M}^{R}={\mathbf{T}}_{1}^{R}{\mathbf{T}}_{2}^{1}\dots {\mathbf{T}}_{5}^{4}{\mathbf{T}}_{M}^{5}=F\left(\mathbf{X},\mathbf{\Theta}\right)
(8)
{\mathbf{T}}_{i+1}^{i}={\mathbf{R}}_{x}\left({\alpha}_{i}\right){\mathbf{T}}_{x}\left({a}_{i-1}\right){\mathbf{R}}_{z}\left({\theta}_{i}\right){\mathbf{T}}_{z}\left({d}_{i}\right){\mathbf{R}}_{y}\left({\beta}_{i}\right)
(9)
in which \mathbf{X} = (\mathbf{a}, \mathbf{d}, \mathbf{\alpha}, \mathbf{\beta}, \mathbf{p})^{T} are the link structural parameters, with a = (a_1, a_2 ⋯ a_6)^T, d = (d_1, d_2 ⋯ d_6)^T, α = (α_1, α_2, ⋯, α_6)^T, β = (β_1, β_2, ⋯, β_6)^T, and p = (p_x, p_y, p_z)^T the position of the optical marker relative to the robot end-effector; Θ = (θ_1, θ_2, ⋯ θ_6)^T are the joint variables. Variations in the robot geometric parameters due to manufacturing tolerances or component limitations account for the inaccuracy of the robot kinematics. Since only the 2nd and 3rd joint axes are nearly parallel, only β_2 is necessary. The marker pose computed with the nominal link parameters is denoted {\widehat{\mathbf{T}}}_{M}^{R}; the correct pose of the marker, including the kinematic errors, is given by {\mathbf{T}}_{M}^{R} and can be expressed as
{\mathbf{T}}_{M}^{R}={\widehat{\mathbf{T}}}_{M}^{R}+d{\mathbf{T}}_{M}^{R}=F\left(\mathbf{X}+\mathrm{\Delta}\mathbf{X},\mathbf{\Theta}+\mathrm{\Delta}\mathbf{\Theta}\right)
(10)
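The cascade of Eqs. (8)-(9) can be sketched in a few lines; the function names and the (α, a, d, β) parameter packing below are our own illustration under the modified DH convention, not the authors' implementation:

```python
import numpy as np

def rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.]])

def ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.]])

def rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.]])

def tx(a):
    T = np.eye(4); T[0, 3] = a; return T

def tz(d):
    T = np.eye(4); T[2, 3] = d; return T

def link_transform(alpha, a, theta, d, beta):
    """Eq. (9): T_{i+1}^i = Rx(alpha) Tx(a) Rz(theta) Tz(d) Ry(beta)."""
    return rx(alpha) @ tx(a) @ rz(theta) @ tz(d) @ ry(beta)

def forward_kinematics(params, joints):
    """Chain the link transforms of Eq. (8); each row of `params` holds
    (alpha_i, a_{i-1}, d_i, beta_i) and `joints` the theta_i values."""
    T = np.eye(4)
    for (alpha, a, d, beta), theta in zip(params, joints):
        T = T @ link_transform(alpha, a, theta, d, beta)
    return T
```

The extra Ry(β) factor is the standard remedy for the near-parallel 2nd and 3rd joint axes noted in the text; for all other links β_i would simply be zero.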
The differential translation and rotation transformation \delta {\mathbf{T}}_{M}^{R} can be written as
\delta {\mathbf{T}}_{M}^{R}=d{\mathbf{T}}_{M}^{R}{\left({\widehat{\mathbf{T}}}_{M}^{R}\right)}^{-1}
(11)
Using the first-order approximation of the differential error matrix, the translation deviation \mathbf{d} = [dx, dy, dz]^{T} in the robot frame due to the parameter errors can be written in the following linear form [19]
\mathbf{d}={\mathbf{W}}_{\theta}\mathrm{\Delta}\mathbf{\theta}+{\mathbf{W}}_{\alpha}\mathrm{\Delta}\mathbf{\alpha}+{\mathbf{W}}_{a}\mathrm{\Delta}\mathbf{a}+{\mathbf{W}}_{d}\mathrm{\Delta}\mathbf{d}+{\mathbf{W}}_{\beta}\mathrm{\Delta}\mathbf{\beta}+{\mathbf{W}}_{p}\mathrm{\Delta}\mathbf{p}
(12)
where Δ θ, Δ a, Δ d, Δ α, Δ β, Δ p refer to the disturbances in robot parameters. After N measurements of the corresponded marker positions, the identification equation is constructed as
\mathbf{D}=\mathbf{J}\mathrm{\Delta}\mathbf{X}
(13)
in which \mathbf{D}={\left[{\mathbf{d}}_{1}^{T},{\mathbf{d}}_{2}^{T}\cdots {\mathbf{d}}_{N}^{T}\right]}^{T}, where {\mathbf{d}}_{i} is the i-th measured marker position error in the robot frame,
{\mathbf{d}}_{i}={\mathbf{p}}_{\mathit{Mi}}^{R}-{\mathbf{R}}_{T}^{R}\left({\mathbf{d}}_{\mathit{Mi}}^{T}-{\mathbf{p}}_{R}^{T}\right)
(14)
\mathbf{J}={\left[{\mathbf{W}}_{1}^{T}\phantom{\rule{0.25em}{0ex}}{\mathbf{W}}_{2}^{T}\cdots {\mathbf{W}}_{N}^{T}\right]}^{T}
is the identification Jacobian matrix; each row block {\mathbf{W}}_{i} is the i-th coefficient matrix of {\mathbf{d}}_{i}, {\mathbf{W}}_{i} = [W_{θi}, W_{αi}, W_{ai}, W_{di}, W_{βi}, W_{pi}]. The least-squares estimate of the robot parameter deviation Δ\mathbf{X} is calculated by the pseudoinverse matrix \mathbf{J}^{\dagger} of \mathbf{J},
\mathrm{\Delta}\mathbf{X}={\mathbf{J}}^{\dagger}\mathbf{D}
(15)
then the robot parameters can be compensated by \mathbf{X}=\widehat{\mathbf{X}}+\mathrm{\Delta}\mathbf{X}, \mathbf{\Theta}=\widehat{\mathbf{\Theta}}+\mathrm{\Delta}\mathbf{\Theta}. When the estimated parameters deviate greatly from the actual ones, plain least squares tends to alter the modeled mechanical structure of the robot wholesale. For consistency and simplicity in solving the inverse kinematics, only the 5 rotational joint zero-positions, 4 link lengths and 3 marker position components are chosen for calibration.
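The identification step of Eq. (15) is a one-line pseudoinverse; the sketch below adds an `rcond` truncation of small singular values, a common (our own, illustrative) way to temper the instability the text addresses by restricting the calibrated parameter set:

```python
import numpy as np

def identify_parameter_errors(J, D, rcond=1e-6):
    """Solve Eq. (15), Delta_X = J^dagger D, for the stacked position
    residuals D (3N,) and the identification Jacobian J (3N x P).

    rcond truncates near-zero singular values in the pseudoinverse, so a
    poorly conditioned Jacobian does not assign large, unphysical
    corrections to weakly observable parameters.
    """
    return np.linalg.pinv(J, rcond=rcond) @ D
```

With a well-conditioned Jacobian the truncation is inactive and the result coincides with the ordinary least-squares estimate.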
(iv). Simplified two-step scheme for robot-tracker registration
In this section, we introduce a simplified two-step registration scheme that handles both the robot-tracker correspondence and the robot calibration. The entire registration is summarized as follows.
Input: corresponded frame pairs {\left\{{\mathbf{T}}_{M}^{T},{\mathbf{T}}_{M}^{R}\right\}}_{k}, k = 1 ⋯ K, of the optical markers, measured via the optical tracker and the robot forward kinematics respectively;
Output: transformation {\mathbf{T}}_{R}^{T} and robot parameters (X, Θ);
Initialization: robot parameters (X_{0}, Θ_{0}) are initialized by the nominal settings;
Iteration: for n = 1 to n_{max}, or until the registration error Σ converges, do

1.
Compute the transformation matrix {\mathbf{T}}_{R}^{T} by minimizing the object function (7);

2.
Update the marker positions and the deviation matrix using the updated robot kinematics F(\mathbf{X}_{n}, \mathbf{\Theta}_{n});

3.
Calibrate and compensate robot parameters
\begin{array}{c}{\mathbf{X}}_{n}={\mathbf{X}}_{n-1}+\mathrm{\Delta}\mathbf{X}\\ {\mathbf{\Theta}}_{n}={\mathbf{\Theta}}_{n-1}+\mathrm{\Delta}\mathbf{\Theta}\end{array}
(16)

4.
End the iteration when n = n_{max} or the decrease of the MSE falls below a threshold h.
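The alternation in steps 1-4 can be sketched as a short loop. This is a skeleton only: `solve_T`, `calibrate`, and `fk` are placeholders for the dual-quaternion registration, the Eq. (13)-(15) calibration, and the forward kinematics described in the text.

```python
import numpy as np

def two_step_registration(solve_T, calibrate, fk, tracker_poses,
                          X0, Theta0, n_max=20, h=1e-9):
    """Alternate robot-tracker registration and parameter calibration.

    solve_T(pairs)         -> 4x4 T_R^T minimizing criterion (7);
    calibrate(T, X, Theta) -> (dX, dTheta, mse) via Eqs. (13)-(15);
    fk(X, Theta)           -> list of marker poses in the robot frame.
    """
    X, Theta = np.asarray(X0, float), np.asarray(Theta0, float)
    prev_mse = np.inf
    T = np.eye(4)
    for n in range(n_max):
        robot_poses = fk(X, Theta)                          # step 2
        T = solve_T(list(zip(tracker_poses, robot_poses)))  # step 1
        dX, dTheta, mse = calibrate(T, X, Theta)            # step 3, Eq. (16)
        X, Theta = X + dX, Theta + dTheta
        if prev_mse - mse < h:                              # step 4
            break
        prev_mse = mse
    return T, X, Theta
```

The loop itself carries no numerics; it only enforces the interleaving and the MSE-decrease stopping rule of step 4.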
Control scheme
With the image-guidance interface, the surgeon telemanipulates the robot to approach the insertion point manually in free space; needle alignment and the interventional puncture are performed autonomously in accordance with the surgical plan. In this study, the haptic device acted as the master controller, while the 5R1P robot served as the slave needle operator. The master and the slave were connected through a communication network.
(i). Master–slave control for manual needle approach
Since the workspace of the master is not proportional to that of the slave, joint-to-joint velocity scaling control was applied to the master–slave system. Operations on the master side were scaled down and mapped directly to the slave side; the master joint velocities {\dot{\Theta}}_{\mathit{\text{master}}} were mapped to the corresponding slave joint velocities {\dot{\Theta}}_{\mathit{\text{slave}}} by
\begin{array}{ll}{\dot{\Theta}}_{\mathit{\text{slave}}}=\mathbf{\Lambda}{\dot{\Theta}}_{\mathit{\text{master}}}& \phantom{\rule{5em}{0ex}}\mathbf{\Lambda}=diag\left({\lambda}_{1},{\lambda}_{2},\cdots {\lambda}_{6}\right)\end{array}
(17)
where Λ is a diagonal scaling matrix; a different scaling ratio was assigned to each joint pair according to its contribution to the translation and rotation of the end-effector, and a small ratio helps reduce disturbances in the manual input. The calculated joint velocities were then sent to the Mitsubishi alternating-current servo unit; all five joints were controlled simultaneously to approach the puncture point, while the linear motor was controlled separately by the safety button on the master joystick for needle insertion.
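Eq. (17) reduces to a diagonal matrix-vector product; the ratio values below are purely illustrative (the paper does not report its λ values), chosen smaller for joints that move the end-effector most:

```python
import numpy as np

# Hypothetical per-joint scaling ratios lambda_1..lambda_6 (Eq. 17);
# smaller values attenuate hand tremor on the most influential joints.
LAMBDA = np.diag([0.2, 0.2, 0.2, 0.5, 0.5, 0.1])

def scale_velocities(master_dtheta):
    """Map master joint velocities to slave joint velocities per Eq. (17)."""
    return LAMBDA @ master_dtheta
```

In the running system this mapping would execute every control cycle, with the scaled velocities forwarded to the servo unit.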
(ii). Optical tracker feedback control for needle alignment
Inaccuracy in the robot-tracker correspondence and the robot parameters severely impacts the absolute precision when only the robot's internal control system is used. However, since the relative accuracy is better than the allowed tolerances, the robot pose can be adjusted iteratively until the absolute accuracy is good enough [20, 21]. This section presents an optical tracker feedback control method to improve the accuracy of needle alignment for manual or robotic needle steering in soft tissue.
Once the needle tip has approached the puncture point, autonomous needle alignment is conducted in accordance with the surgical plan {\mathbf{T}}_{\mathit{EdR}}^{T} in the tracker frame. During needle alignment, the needle shaft remains straight without touching the tissue; the needle tip pose measured by the optical tracker is denoted {\mathbf{T}}_{\mathit{EdC}}^{T} in the tracker frame and {\mathbf{T}}_{\mathit{EdC}}^{R} in the robot frame. The robot reports a nominal needle pose {\mathbf{T}}_{\mathit{EdN}}^{R}, whose accuracy is not ensured due to the disturbances Δ\mathbf{X} and Δ\mathbf{Θ} in the robot parameters. Inaccuracy also appears in the measured pose {\mathbf{T}}_{\mathit{EdC}}^{R} due to the robot-tracker transformation error \mathrm{\Delta}{\mathbf{T}}_{R}^{T}. The deviation between the measured {\mathbf{T}}_{\mathit{EdC}}^{T} and the reference {\mathbf{T}}_{\mathit{EdR}}^{T} is denoted {\mathbf{T}}_{E}^{T} in the tracker frame and {\mathbf{T}}_{E}^{R} in the robot frame.
{\mathbf{T}}_{\mathit{EdN}}^{R}=\mathbf{F}\left(\widehat{\mathbf{X}},\widehat{\mathbf{\Theta}}\right)
{\mathbf{T}}_{\mathit{EdC}}^{R}=\mathbf{F}\left(\widehat{\mathbf{X}}+\mathrm{\Delta}\mathbf{X},\widehat{\mathbf{\Theta}}+\mathrm{\Delta}\mathbf{\Theta}\right)={\left({\mathbf{T}}_{R}^{T}\right)}^{-1}{\mathbf{T}}_{\mathit{EdC}}^{T}
(18)
{\mathbf{T}}_{E}^{T}={\mathbf{T}}_{\mathit{EdR}}^{T}{\left({\mathbf{T}}_{\mathit{EdC}}^{T}\right)}^{-1}
(19)
{\mathbf{T}}_{E}^{R}={\left({\mathbf{T}}_{R}^{T}\right)}^{-1}{\mathbf{T}}_{E}^{T}{\mathbf{T}}_{R}^{T}
(20)
{\mathbf{T}}_{E}^{R}={\mathbf{T}}_{\mathit{EdN}}^{R}{\left({\mathbf{T}}_{\mathit{EdC}}^{R}\right)}^{-1}={\left({\mathbf{T}}_{R}^{T}\right)}^{-1}{\mathbf{T}}_{E}^{T}{\mathbf{T}}_{R}^{T}
(21)
{\mathbf{T}}_{R}^{T}={\widehat{\mathbf{T}}}_{R}^{T}+\mathrm{\Delta}{\mathbf{T}}_{R}^{T}
(22)
The goal is to bring {\mathbf{T}}_{\mathit{EdC}}^{T} as close as possible to {\mathbf{T}}_{\mathit{EdR}}^{T} while remaining robust to inaccuracy in the robot-tracker calibration. Here \widehat{\mathbf{X}}, \widehat{\mathbf{\Theta}} are the estimated robot parameters, {\widehat{T}}_{R}^{T} is the estimated robot-tracker transformation matrix, and Δ\mathbf{X}, Δ\mathbf{Θ}, \Delta {\mathbf{T}}_{R}^{T} are their deviations. The goal is achieved by iteratively commanding the robot to a new pose through error compensation. The control scheme is shown in Figure 4; we outline it as follows.
For k = 0 to k_{max}, do

1.
Initialize the pose of the end-effector {\mathbf{T}}_{C0}^{R}={\left({\widehat{T}}_{R}^{T}\right)}^{-1}{\mathbf{T}}_{\mathit{EdR}}^{T} using the estimated {\widehat{T}}_{R}^{T};

2.
Solve the inverse kinematics of the robot {\mathbf{\Theta}}_{k}={\mathbf{F}}^{-1}\left(\widehat{\mathbf{X}},\widehat{\mathbf{\Theta}},{\mathbf{T}}_{\mathit{Ck}}^{R}\right), and command the joints to move to {\mathbf{\Theta}}_{k};

3.
Measure the actual pose of the end-effector {\mathbf{T}}_{\mathit{EdC}}^{R}={\left({\widehat{T}}_{R}^{T}\right)}^{-1}{\mathbf{T}}_{\mathit{EdC}}^{T} using the optical tracker, and compute the error,
{\mathbf{T}}_{E}^{R}={\left({\mathbf{T}}_{\mathit{EdC}}^{R}\right)}^{-1}{\mathbf{T}}_{\mathit{EdR}}^{R}={\left({\mathbf{T}}_{\mathit{EdC}}^{T}\right)}^{-1}{\mathbf{T}}_{\mathit{EdR}}^{T}

4.
Compute a new command by error compensation {\mathbf{T}}_{\mathit{Ck}+1}^{R}={\mathbf{T}}_{\mathit{Ck}}^{R}{\mathbf{T}}_{E}^{R}, and go to step 2;

5.
Stop when the iteration count k = k_{max} or the error falls below the threshold h.
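The iteration in steps 1-5 can be sketched in simplified form. This is our own illustration, posed in a single common frame (i.e. taking T_R^T ≈ I for clarity); the callable `measure` stands in for steps 2-3, i.e. commanding the joints and reading the tracked pose back, and we assume the pose error behaves as a repeatable bias between iterations:

```python
import numpy as np

def align_with_tracker_feedback(T_ref, measure, k_max=50, h=1e-9):
    """Iteratively drive the measured needle pose toward the reference.

    T_ref:   4x4 reference pose (the surgical plan).
    measure: callable T_cmd -> tracked 4x4 pose after executing T_cmd.
    """
    T_cmd = T_ref.copy()                        # step 1 (with T_R^T ~ I)
    for k in range(k_max):
        T_meas = measure(T_cmd)                 # steps 2-3: move, then track
        T_err = np.linalg.inv(T_meas) @ T_ref   # step 3: residual error
        if np.linalg.norm(T_err - np.eye(4)) < h:
            break                               # step 5: converged
        T_cmd = T_cmd @ T_err                   # step 4: compensate, retry
    return T_cmd
```

For a constant multiplicative bias the compensation cancels the error after a single correction, which mirrors why the scheme tolerates a moderately inaccurate robot-tracker calibration.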