
Reinforced Proofreading of Image Segmentation for Connectomics


Academic year: 2023


Full text

The brain is the organ that serves as the center of the nervous system in all vertebrates and most invertebrates [1]. One of the goals of connectomics is therefore to fully reconstruct the map of neural connections in order to understand the meaning hidden in brain structures [8–12]. Yet even with state-of-the-art methods, the segmentation result still contains errors, especially merge errors and split errors (as shown in Figure 3), caused by various artifacts in EM images, such as noise, folding, and tearing arising from tissue preparation during the acquisition process; human proofreading is therefore still required [12].

Figure 1: The typical connectomics workflow [12,13].

Background

At time step t, given a current state St and an observation Ot, the agent chooses an action At. At the next time step, t+1, the agent receives a reward Rt+1 as a consequence of its previous action [36].
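The agent-environment interaction described above can be sketched as a plain loop. This is a generic MDP sketch, not the thesis's training code; `env_step` and `policy` are hypothetical interfaces, and the toy environment is only illustrative:

```python
def run_episode(env_step, policy, initial_state, max_steps=100):
    """Generic MDP loop: at step t the agent sees observation O_t,
    picks action A_t, and receives reward R_{t+1} at step t+1."""
    state, total_reward = initial_state, 0.0
    for t in range(max_steps):
        observation = state                 # fully observable case: O_t == S_t
        action = policy(observation)
        state, reward, done = env_step(state, action)  # environment transition
        total_reward += reward
        if done:
            break
    return total_reward

# Toy environment: walk right on a number line until reaching position 3.
def env_step(state, action):
    next_state = state + action
    done = next_state >= 3
    return next_state, (1.0 if done else 0.0), done

policy = lambda obs: 1                      # always step right
print(run_episode(env_step, policy, initial_state=0))  # → 1.0
```

In the partially observable case from the next section, `observation` would be a function of the state rather than the state itself.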

Problem and Motivation

The observation Ot is used when the agent can only see a partial view of the environment; otherwise, the agent observes the state St completely [37]. Merge errors are correct segments that have been merged together, shown as green regions; split errors are fragments of one or more correct segments, marked as red regions; yellow marks the regions where the two error types overlap.

Contribution

In summary, even with a good segmentation method there is no guarantee that the result will be error-free, so the proofreading task cannot be avoided. However, proofreading by humans slows down the reconstruction pipeline, so I aim to make it fully automatic by applying reinforcement learning, which can perform such simple tasks at a human level. Starting from an input consisting of an initial label map and an EM image, the Locator agent selects the coordinate of a patch containing error regions based on an action grid.

The selected patches, cropped from the image and the label map, are passed to the Merger agent to correct split errors, and then the Splitter agent corrects merge errors on the result received from the Merger agent. The initial label map is then updated with the corrected label map for the next iteration. I use reinforcement learning as the approach for this application by modeling it as a Markov decision process.
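The Locator → Merger → Splitter loop can be summarized in a few lines. This is a minimal sketch under assumed interfaces (`select`, `correct` are hypothetical method names; the text fixes only the patch size and the order of the agents):

```python
import numpy as np

class _NoOpAgent:
    """Hypothetical stand-in exposing the assumed agent interfaces."""
    def select(self, label_map, em_image):
        return (0, 0)                       # top-left corner of the patch
    def correct(self, label_patch, image_patch):
        return label_patch                  # identity correction

def proofread(label_map, em_image, locator, merger, splitter,
              n_iters=10, patch=128):
    """One pass of the proposed loop: locate an erroneous patch,
    fix split errors, then merge errors, then update the label map."""
    for _ in range(n_iters):
        y, x = locator.select(label_map, em_image)
        img_patch = em_image[y:y + patch, x:x + patch]
        lbl_patch = label_map[y:y + patch, x:x + patch]
        lbl_patch = merger.correct(lbl_patch, img_patch)    # fix split errors
        lbl_patch = splitter.correct(lbl_patch, img_patch)  # fix merge errors
        label_map[y:y + patch, x:x + patch] = lbl_patch     # update for next iter
    return label_map

agent = _NoOpAgent()
result = proofread(np.zeros((256, 256), int), np.zeros((256, 256)),
                   agent, agent, agent, n_iters=2)
```

The no-op agents exist only to make the control flow runnable; the real agents are the trained policies described below.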

Figure 4: The proposed proofreading system diagram. From the beginning, an input consists of an initial label map and an EM image; the Locator agent chooses the coordinate of a patch containing error regions based on an action grid.

Automatic Segmentation

Human Proofreading

Additionally, their framework works on 3D volumes, while my system works on 2D slices. The authors demonstrated a human-guided proofreading approach, called guided proofreading, that corrects merge and split errors based on the segmentation boundary within an error patch and then presents the proposed corrections to users as yes/no decisions. To detect split errors, they proposed a CNN-based boundary classifier for split-error detection, which outputs a probability for a given boundary mask of two adjacent segments in the error patch.

Likewise, to detect merge errors, they reused the trained split-error detector and inverted its probability output given a set of boundaries generated from a segment. In other words, given a segment, they randomly placed pairs of symmetric seed points on the boundary of the dilated segment to generate a set of potential boundaries via the watershed transformation; they then used the inverted probability output to score each candidate boundary. To obtain the error corrections, they inspect all segments (to find merge errors) and all boundaries of neighboring segments (to find split errors) over the segmentation in a brute-force manner; next, all probability results from the two detectors are sorted into a ranked list, which users work through from the top.
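The seed-pair boundary generation can be sketched with off-the-shelf tools. Assuming scikit-image is available, the elevation map and seed coordinates below are illustrative, not taken from the original work:

```python
import numpy as np
from skimage.segmentation import watershed, find_boundaries

def candidate_boundary(em_patch, seed_a, seed_b):
    """Generate one candidate split boundary from a pair of seed points
    via the watershed transform: flood the elevation map from the two
    seeds and return the mask where the two regions meet."""
    markers = np.zeros(em_patch.shape, dtype=int)
    markers[seed_a] = 1
    markers[seed_b] = 2
    labels = watershed(em_patch, markers)          # flood from the two seeds
    return find_boundaries(labels, mode='inner')   # boolean boundary mask

# Usage: a bright ridge down the middle of a synthetic patch acts as a
# membrane; the two regions meet along it.
patch = (16 - np.abs(np.arange(32) - 16))[None, :] * np.ones((32, 1))
mask = candidate_boundary(patch, (16, 4), (16, 28))
```

Repeating this over many random seed pairs yields the set of candidate boundaries that the inverted classifier then scores.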

Ultimately, their approach is wrapped around human corrections and still requires human labor. For merge-error correction, dynamic decision making can play a part: instead of randomly placing a few points to create a boundary intersecting two merged segments, my Splitter agent can determine the optimal places to put seeds; therefore, the Splitter agent not only saves a lot of effort but can also separate more than two segments.

Reinforcement Learning

Therefore, the action space of the agent is the set of intersections of a designed grid, plus an additional termination action.

Merger agent

State: the Merger agent observes L̂ or L̈, the EM image I ∈ [0,1], and the point map P, to detect and correct split errors. Action: given a state, to select a fragment on L, a non-terminate action is defined as the coordinates of an intersection on a defined 2D grid, excluding the intersections on the boundary; that is, from an (n+2)×(n+2) grid, the action space consists of the inner n×n actions plus one terminate action. To be precise, I choose 15×15 actions, so the action grid is dense and the total number of actions is 226 (see Fig. 6a).

The terminate action has index 225, and the size of the Gaussian kernel is 16×16 pixels (see Fig. 6b). The orange dots indicate where the agent can take an action, and the number is the index of the non-terminate action in the action space (zero-based numbering). So, I can simply design the Merger agent to receive a positive reward if an erroneous fragment is selected and a negative reward if a correct one is selected.
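The index arithmetic above is easy to make concrete. In this sketch the pixel spacing of the grid over a 128×128 patch is my assumption (even spacing of patch/(n+1)); the text fixes only the grid size and the action count:

```python
def action_to_coord(a, n=15, patch=128):
    """Map a non-terminate action index (zero-based) to pixel coordinates.
    The (n+2)x(n+2) grid has spacing patch/(n+1); only the inner n x n
    intersections are selectable, so valid indices are 0 .. n*n - 1."""
    assert 0 <= a < n * n, "index n*n is the terminate action"
    row, col = divmod(a, n)
    step = patch / (n + 1)                  # 128 / 16 = 8 px between grid lines
    return ((row + 1) * step, (col + 1) * step)

N_ACTIONS = 15 * 15 + 1                     # 225 grid actions + 1 terminate = 226
print(N_ACTIONS, action_to_coord(0), action_to_coord(224))
```

With this layout, index 0 sits at (8, 8) and index 224 at (120, 120), and index 225 is reserved for termination.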

In fact, at an odd time step, the selected label becomes the base label; then, at the next (even) time step, the newly selected fragment has its label changed to the base label, which mirrors the way humans select and merge segments (see Fig. 7). Given that a correct segment is split into n fragments, the number of time steps needed to fix it is 2n−2.
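The 2n−2 count follows because each merge consumes one odd step (choose the base) and one even step (relabel a fragment), and n fragments need n−1 merges. A minimal sketch of the scheme, with a greedy stand-in for the agent's selections:

```python
def merge_fragments(fragment_labels):
    """Pairwise select-and-merge mirroring the odd/even time-step scheme:
    an odd step picks a base label, the following even step relabels the
    next selected fragment to that base. Returns the steps used."""
    labels = list(fragment_labels)
    steps = 0
    while len(set(labels)) > 1:
        base = labels[0]                    # odd step: choose the base label
        steps += 1
        other = next(l for l in labels if l != base)
        labels = [base if l == other else l for l in labels]  # even step: merge
        steps += 1
    return steps

n = 4
print(merge_fragments(range(n)))            # n fragments need 2n - 2 = 6 steps
```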

Figure 5: Illustration of the encoding and normalizing result on a 3×3 label map. L̂ is the normalized map of L and L̈ is the encoded map of L.

Splitter agent

At time step t, the black arrow is the flow of the state St; the red arrow is the input observation Ot to the agent; the yellow arrow indicates the agent's action for the corresponding Ot; hence, a = 20. In this case, a gateway module checks whether the time step is odd or even. If it is even, the environment looks up the label selected at the previous odd time step and changes the currently selected label to that previously selected label.

The green arrow shows the update flow for the next time step and also the calculation of the reward for the action; the reward in both types of time step is the same, determined only by whether the selected label is correct or not. State: the state in the Splitter agent is the same as in the Merger agent, meaning that the Splitter agent observes L̂ or L̈, the EM image I ∈ [0,1], and the point map P, to detect and correct merge errors.

At time step t, the black arrow is the flow of the state St; the red arrow is the input observation Ot to the agent; the yellow arrow indicates the agent's action for the corresponding Ot, so a = 159. The green arrow indicates the update flow for the next time step and also the calculation of the reward.

Figure 7: The interaction in the Merger agent. At time step t, the black arrow is the flow of the state St; the red arrow is the input observation Ot to the agent; the yellow arrow indicates the agent's action for the corresponding Ot; hence, a =

Locator

The Locator's action space is the same grid action map as the Merger and Splitter agents, but with a different size: because the task is now viewed from the overall context, a dense grid is no longer needed, so I choose 7×7, making the total number of actions in the action space 50. Reward: again, the Locator's job is to locate the error areas, and the chosen areas should improve the metric of the label map, just as for the Splitter agent. I want the Locator not only to be able to find areas of error, but also to be fully aware of the metric improvement.
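The action counts for both grid sizes follow the same formula, which makes the two configurations easy to check side by side:

```python
def grid_action_space(n):
    """Number of actions for an n x n inner grid plus one terminate action."""
    return n * n + 1

# Locator uses a coarse 7x7 grid; Merger/Splitter use a dense 15x15 grid.
print(grid_action_space(7), grid_action_space(15))  # → 50 226
```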

Because of the large size of the label map, after an error spot is corrected the improvement score is small, so Rlog gives a low immediate reward. At time step t, the black arrow is the flow of the state St; the red arrow is the input observation Ot, which is the downscaled St, to the agent. The grid action map receives the state St and the action a; then, on the point map Pt+1, a Gaussian kernel is stamped at the coordinate of the action index, and the location of the chosen patch is also computed.

Then the image and label map are cropped based on the location of the chosen patch. The size of the chosen patch matches the input of the Merger and Splitter agents, i.e., 128×128 as defined previously.
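The point-map update and patch crop can be sketched as follows. The kernel's standard deviation is my assumption (the text fixes only the 16×16 kernel size), and the sketch assumes the kernel fits inside the map:

```python
import numpy as np

def stamp_gaussian(point_map, center, size=16, sigma=4.0):
    """Place a size x size Gaussian kernel on the point map centered at
    the chosen action coordinate (sigma is an assumed parameter)."""
    ax = np.arange(size) - (size - 1) / 2.0
    kernel = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    cy, cx = center
    y0, x0 = cy - size // 2, cx - size // 2
    region = point_map[y0:y0 + size, x0:x0 + size]
    point_map[y0:y0 + size, x0:x0 + size] = np.maximum(region, kernel)
    return point_map

def crop_patch(image, label_map, top_left, patch=128):
    """Crop the 128 x 128 patch fed to the Merger and Splitter agents."""
    y, x = top_left
    return image[y:y + patch, x:x + patch], label_map[y:y + patch, x:x + patch]

P = stamp_gaussian(np.zeros((256, 256)), (100, 100))
img_patch, lbl_patch = crop_patch(np.zeros((512, 512)),
                                  np.zeros((512, 512), int), (36, 36))
```

Using `np.maximum` when stamping keeps earlier Gaussians visible if patches overlap; an alternative would be to reset the point map each step.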

Figure 9: The interaction in the Locator agent. At time step t, the black arrow is the flow of the state St; the red arrow is the input observation Ot, which is the downscaled St, to the agent.

CREMI data set

Training

Testing

Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention.

Plaza, "Focused proofreading for neural connectivity reconstruction from large-scale EM images," in Deep Learning and Data Labeling for Medical Applications.

Pfister, "Guided proofreading of automatic segmentations for connectomics," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

Seung, "An error detection and correction framework for connectomics," in Advances in Neural Information Processing Systems, 2017.

Pratt et al., "Towards fully autonomous driving: Systems and algorithms," in 2011 IEEE Intelligent Vehicles Symposium (IV).

Xiao, "DeepDriving: Learning affordances for direct perception in autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision, 2015.

Abraham et al., "Boundary learning by optimization with topological constraints," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

Liang, "UNet++: A nested U-Net architecture for medical image segmentation," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support.

Mu Lee, "SeedNet: Automatic seed generation with deep reinforcement learning for robust interactive segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

Xing, "Augmented auto-magnification network: Towards accurate and fast segmentation of breast cancer in whole slide images," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support.

Figure 11: The result on the split-error test set. The lower the score, the better the result.

Figures

Figure 2: The agent-environment interaction in a Markov decision process (MDP). At time step t, given a current state St or an observation Ot, the agent selects an action At.
Figure 3: Illustration of merge and split errors. Merge errors are correct segments that have been merged together, shown as green regions; split errors are fragments of one or many correct segments, highlighted as red regions; yellow marks the overlapped regions.
