Code for the paper: "Deep-learning-aided forward optical coherence tomography endoscope for percutaneous nephrostomy guidance" [1]. This repository contains the Python code and Jupyter notebooks used for the paper. The following architectures were used:
- ResNet34
- ResNet50 and MobileNetV2, with and without pretrained initial weights from the ImageNet dataset
The dataset can be found in [2].
The language used is Python. We used TensorFlow 2.3.
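The pretrained vs. from-scratch comparison amounts to choosing the initial weights when the network is built. A minimal sketch in TensorFlow 2.3 (the input size, pooling head, and function name are illustrative assumptions, not the training scripts' actual code):

```python
import tensorflow as tf

def build_model(pretrained=True, num_classes=3):
    # MobileNetV2 backbone, either with ImageNet weights (transfer learning)
    # or randomly initialized; tf.keras.applications.ResNet50 works the same way.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        include_top=False,
        weights="imagenet" if pretrained else None,
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, outputs)
```

The PT_ prefix and tl infix in the directories and file names below appear to mark the pretrained (transfer-learning) variants.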
- 0-Read_images.ipynb
  It processes the images from JPEG to numpy ndarray binaries.
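A minimal sketch of that JPEG-to-binary step (the glob pattern and output file name are assumptions for illustration; the notebook defines its own):

```python
import glob
import numpy as np
from PIL import Image

# Load every JPEG (assumed to share one size) into a single ndarray.
images = [np.asarray(Image.open(p)) for p in sorted(glob.glob("data/*.jpg"))]
stack = np.stack(images)       # shape: (n_images, height, width, channels)
np.save("images.npy", stack)   # binary ndarray file, read back with np.load
```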
- ResNet34/
  - Cross-validation
    - archResNet_p1.py
    - archResNet_p2.py
    - archResNet_p3.py
    - archResNet_p4.py
  It uses the ResNet34 architecture to predict the type of tissue (3 categories). It is split into 4 files so that they can be run independently.
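ResNet34 is not bundled with tf.keras.applications, so the architecture has to be assembled from basic residual blocks. A sketch of one such block (the layer parameters are illustrative, not taken from the archResNet_p*.py scripts):

```python
import tensorflow as tf
from tensorflow.keras import layers

def basic_block(x, filters, stride=1):
    # Two 3x3 convolutions plus a shortcut connection (the ResNet-34 block).
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    if stride != 1 or shortcut.shape[-1] != filters:
        # Projection shortcut when spatial size or channel count changes.
        shortcut = layers.Conv2D(filters, 1, strides=stride)(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))
```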
- PT_MobileNetv2/
  - Cross-validation
    - PT_MobileNetv2_batch/
      - mobilenetv2_tl_arg_simult_vC.batch
    - PT_MobileNetv2_python/
      - mobilenetv2_tl_arg_vC.py
- ResNet50/
  - Cross-validation
    - Resnet50_batch/
      - resnet50_arg_simult.batch
    - Resnet50_python/
      - archResNet50_arg.py
  - Cross-testing
    - Resnet50_batch/
      - resnet50_arg_outer_simult.batch
    - Resnet50_python/
      - archResNet50_arg_outer.py
- PT_ResNet50/
  - Cross-validation
    - PT_Resnet50_batch/
      - resnet50_tl_arg_simult.batch
    - PT_Resnet50_python/
      - archResNet50_tl_arg.py
  - Cross-testing
    - PT_Resnet50_batch/
      - resnet50_tl_arg_outer_simult.batch
    - PT_Resnet50_python/
      - archResNet50_tl_arg_outer.py
- Processing_results.ipynb
  Processing of the results to obtain the accuracies and epochs of all the combinations. Time is calculated for a few combinations.
- Processing_predictions.ipynb
  Processing of the predictions to obtain the ROC curves.
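For reference, a one-vs-rest ROC/AUC computation over the 3 tissue classes could look like this sketch (the random arrays are stand-ins for the saved labels and softmax scores):

```python
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

# Stand-ins for the notebook's inputs: true labels and (n, 3) softmax scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=100)
y_score = rng.dirichlet(np.ones(3), size=100)

y_true_bin = label_binarize(y_true, classes=[0, 1, 2])
for c in range(3):  # one ROC curve per tissue class, one-vs-rest
    fpr, tpr, _ = roc_curve(y_true_bin[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```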
- Processing_time.ipynb
  Complete processing of time for cross-validation.
- Grad-CAM.ipynb
  Implementation of visual explanation using Grad-CAM [3] for the models obtained.
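A compact TensorFlow 2 Grad-CAM computation looks roughly like the sketch below (not the notebook's exact code; conv_layer_name would typically be the model's last convolutional layer):

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    # Map the input to both the chosen conv feature map and the prediction.
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)
    # Channel weights = global average of the gradients (Grad-CAM [3]).
    weights = tf.reduce_mean(grads, axis=(1, 2))
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    cam = tf.nn.relu(cam)[0]  # keep only positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized heatmap
```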
For ResNet34, run the Python code directly; for the other architectures, the scripts take arguments. A Python file is used as:

archResNet50_arg.py testing_kidney validation_kidney

e.g.

archResNet50_arg.py 1 2
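The two positional arguments select which kidney is held out for testing and which for validation; a hypothetical reading of them (the actual scripts may parse them differently):

```python
import sys

# `python archResNet50_arg.py 1 2` would hold out kidney 1 for testing
# and kidney 2 for validation (hypothetical parsing, for illustration).
testing_kidney, validation_kidney = sys.argv[1], sys.argv[2]
```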
The batch files were used on the Summit supercomputer.
[1] Chen Wang, Paul Calle, Nu Bao Tran Ton, Zuyuan Zhang, Feng Yan, Anthony M. Donaldson, Nathan A. Bradley, Zhongxin Yu, Kar-ming Fung, Chongle Pan, and Qinggong Tang, "Deep-learning-aided forward optical coherence tomography endoscope for percutaneous nephrostomy guidance," Biomed. Opt. Express 12, 2404-2418 (2021)
[2] Chen Wang, Paul Calle, Qinggong Tang, & Chongle Pan. (2022). OCT porcine kidney dataset for percutaneous nephrostomy guidance [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7113948
[3] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 618-626).
Paul Calle - pcallec@ou.edu
Project link: https://github.com/thepanlab/FOCT_kidney