This document describes how to use the example MIS/deployments/visual-system.
1. Prerequisites
1.1. Prepare dataset
Place your dataset folder here (MIS/deployments/visual-system) and name it data.
Alternatively, you can create a symbolic link to the folder:
ln -s YOUR_DATASET data
1.2. Update the MIS & MIDP packages
- Update MIS:
  cd PATH_OF_MIS
  git pull
- Update MIDP:
  pip install -U git+https://github.com/YuanYuYuan/MIDP
1.3. Set the data list
1.3.1. For training/validation, please modify training/data_list.yaml.
amount:
  test: 11
  total: 48
  train: 28
  valid: 9
list: (1)
  train:
    - 0522c0002
    ⋮
  valid:
    - 0522c0001
    ⋮
loader:
  data_dir: ../data
  name: NRRDLoader
  roi_map:
    OpticNerve_L: 1
    OpticNerve_R: 2
    Chiasm: 3
  spacing: 1
  resample: false (2)
| 1 | Fill in the training and validation lists with the name of each case. |
| 2 | The default spacing of this model is 1 mm. If your dataset hasn't been resampled to this spacing, you may enable this option. However, for training efficiency, the recommended approach is to resample the dataset to this spacing in advance (see the sketch below). |
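For reference, below is a minimal preprocessing sketch that resamples a volume to 1 mm isotropic spacing. The use of SimpleITK and the file paths are assumptions for illustration, not part of the MIS workflow:

```python
import SimpleITK as sitk

def resample_to_1mm(path_in, path_out, is_label=False):
    """Resample a NRRD volume to 1 mm isotropic spacing."""
    img = sitk.ReadImage(path_in)
    new_spacing = (1.0, 1.0, 1.0)
    # Preserve the physical extent when computing the output size.
    new_size = [
        int(round(size * spacing / target))
        for size, spacing, target in zip(img.GetSize(), img.GetSpacing(), new_spacing)
    ]
    # Nearest neighbour keeps label values intact; linear suits intensity images.
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkLinear
    out = sitk.Resample(
        img, new_size, sitk.Transform(), interp,
        img.GetOrigin(), new_spacing, img.GetDirection(),
        0, img.GetPixelID(),
    )
    sitk.WriteImage(out, path_out)

# Hypothetical file layout; adapt to your dataset.
resample_to_1mm('data/0522c0002/image.nrrd', 'data/0522c0002/image_1mm.nrrd')
```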
1.3.2. For evaluation, please modify evaluation/data_list.yaml.
list:
  - 0522c0001 (1)
  ⋮
loader:
  data_dir: ../data
  name: NRRDLoader
  roi_map:
    OpticNerve_L: 1
    OpticNerve_R: 2
    Chiasm: 3
  spacing: 1
  resample: false (2)
| 1 | Fill in the data list to be evaluated with the name of each case. |
| 2 | The default spacing of this model is 1 mm. If your dataset hasn't been resampled to this spacing, you may enable this option. However, for training efficiency, the recommended approach is to resample the dataset to this spacing in advance. |
| Note that you can split the dataset into three parts: training, validation, and testing. Train the model on the training data, choose the best model among the checkpoints according to its performance on the validation data, and finally evaluate the performance on the testing data. |
| You can use the tools from MIDP to generate the data list. Please see here for the details. |
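If you want to roll your own split instead, here is a minimal sketch that shuffles the case folders under data and writes a training/data_list.yaml with the same structure as above. It assumes one sub-folder per case and uses PyYAML; both are assumptions:

```python
import os
import random
import yaml  # pip install pyyaml

data_dir = '../data'                 # assumed: one sub-folder per case
cases = sorted(os.listdir(data_dir))
random.seed(0)                       # fixed seed for a reproducible split
random.shuffle(cases)

n_train, n_valid = 28, 9             # remaining cases form the test set
split = {
    'train': cases[:n_train],
    'valid': cases[n_train:n_train + n_valid],
    'test': cases[n_train + n_valid:],
}

amount = {name: len(ids) for name, ids in split.items()}
amount['total'] = len(cases)

data_list = {
    'amount': amount,
    'list': {'train': split['train'], 'valid': split['valid']},
    'loader': {
        'data_dir': data_dir,
        'name': 'NRRDLoader',
        'roi_map': {'OpticNerve_L': 1, 'OpticNerve_R': 2, 'Chiasm': 3},
        'spacing': 1,
        'resample': False,
    },
}

with open('training/data_list.yaml', 'w') as f:
    yaml.safe_dump(data_list, f, default_flow_style=False)
```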
2. Usage
2.1. Download the trained model
make download_model
This downloads a model checkpoint that achieved the performance below.
| Left Optic Nerve | Right Optic Nerve | Chiasm | Average |
|---|---|---|---|
| 0.5777 | 0.5987 | 0.3251 | 0.5005 |
2.2. Training
Continue training with the trained model.
make train
| There will be a gap between the validation score and the evaluation score since the condition is harder during validation (the model makes predictions without considering a threshold). |
| If you want to resume training from a model checkpoint, specify the checkpoint path in the training config. |
⋮
models:
  seg:
    model_config: '../models/seg_vae_reg.json5'
    checkpoint: '../model_checkpoint.pt' (1)
  dis:
    model_config: '../models/dis.json5'
    # checkpoint: 'dis.pt'
⋮
| 1 | Specify the checkpoint path here. |
2.3. Evaluation
Directly evaluate the performance with the trained model checkpoint.
make evaluate
Evaluate the performance with a newly trained checkpoint.
make evaluate CKPT=training/_ckpts/SOME_BETTER_CHECKPOINT
| One can observe the gap between the score of each batch (before reconstruction) and the evaluation score (after reconstruction; the true Dice score is enclosed in ===== Restored =====). Since the reconstruction involves additional processing, such as applying a threshold and averaging the overlapping predictions, the performance will be better. |
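For intuition, here is a rough sketch of what such a reconstruction step looks like for sliding-window patch inference. The function name and shapes are illustrative only, not the actual MIS implementation:

```python
import numpy as np

def reconstruct(patch_probs, positions, volume_shape, threshold=0.5):
    """Average overlapping patch probabilities, then binarize with a threshold."""
    prob = np.zeros(volume_shape, dtype=np.float32)
    count = np.zeros(volume_shape, dtype=np.float32)
    for patch, (z, y, x) in zip(patch_probs, positions):
        dz, dy, dx = patch.shape
        prob[z:z + dz, y:y + dy, x:x + dx] += patch
        count[z:z + dz, y:y + dy, x:x + dx] += 1
    prob /= np.maximum(count, 1)  # average where patches overlap
    return (prob >= threshold).astype(np.uint8)
```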
The results will be exported to evaluation/score.json.
[Result]
{
  "0522c0001": {
    "OpticNerve_L": 0.60801393728223,
    "OpticNerve_R": 0.6426484907497566,
    "Chiasm": 0.671850699844479
  },
  "0522c0014": {
    "OpticNerve_L": 0.5075862068965518,
    "OpticNerve_R": 0.6363636363636364,
    "Chiasm": 0.26305220883534136
  },
  "0522c0195": {
    "OpticNerve_L": 0.505338078291815,
    "OpticNerve_R": 0.6531468531468532,
    "Chiasm": 0.5344563552833078
  },
  "0522c0248": {
    "OpticNerve_L": 0.5934256055363322,
    "OpticNerve_R": 0.6904549509366636,
    "Chiasm": 0.27712609970674484
  },
  "0522c0330": {
    "OpticNerve_L": 0.426197458455523,
    "OpticNerve_R": 0.3314527503526093,
    "Chiasm": 0.21585482330468003
  },
  "0522c0555": {
    "OpticNerve_L": 0.7164444444444444,
    "OpticNerve_R": 0.6149870801033591,
    "Chiasm": 0.09828571428571428
  },
  "0522c0576": {
    "OpticNerve_L": 0.6567796610169492,
    "OpticNerve_R": 0.6638078902229846,
    "Chiasm": 0.4628099173553719
  },
  "0522c0667": {
    "OpticNerve_L": 0.6633165829145728,
    "OpticNerve_R": 0.576271186440678,
    "Chiasm": 0.11376146788990826
  },
  "0522c0727": {
    "OpticNerve_L": 0.521805661820964,
    "OpticNerve_R": 0.5794530672579453,
    "Chiasm": 0.2884473877851361
  }
}
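Given the structure above, a small sketch to aggregate the per-organ averages from score.json could look like this:

```python
import json
from statistics import mean

with open('evaluation/score.json') as f:
    scores = json.load(f)

# Collect every case's Dice score per organ, then average.
per_organ = {}
for case in scores.values():
    for organ, dice in case.items():
        per_organ.setdefault(organ, []).append(dice)

for organ, values in per_organ.items():
    print(f'{organ}: {mean(values):.4f}')
print(f'Average: {mean(mean(v) for v in per_organ.values()):.4f}')
```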
Besides computing scores, you can also run inference and store the predictions in NRRD format.
make predict
The outputs will be stored in the folder evaluation/outputs.
| The process may be slow since it resamples twice, before and after inference. Also, the current workflow stores the predictions of all cases before doing the reconstruction, so the memory usage might be large. |
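To inspect an exported prediction, here is a minimal sketch using the pynrrd package (the file name below is hypothetical; check evaluation/outputs for the actual names):

```python
import nrrd  # pip install pynrrd

# Hypothetical file name; list evaluation/outputs for actual ones.
data, header = nrrd.read('evaluation/outputs/0522c0001.nrrd')
print(data.shape, header.get('space directions'))
```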