<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge"/>
<title>Computer Vision Class Project | CS, Georgia Tech | Fall 2020: CS 4476</title>
<link rel="stylesheet" href="styles.css">
<meta name="description" content="">
<meta name="author" content="">
</head>
<body>
<div class="container">
<div class="page-header">
<!-- Title and Name -->
<h1>Medical Imaging</h1> <!--maybe change to something like Chest X-ray Imaging?-->
<span style="font-size: 20px; line-height: 1.5em;"><strong>Yash Kothari, Nima Jadali, Aditya Tapshalkar, Brayden Richardson, and Raymond Bryant</strong></span><br>
<span style="font-size: 18px; line-height: 1.5em;">Fall 2020 CS 4476 Intro to Computer Vision: Class Project</span><br>
<span style="font-size: 18px; line-height: 1.5em;">Georgia Institute of Technology</span>
<hr>
<!-- Proposal -->
<h2>Proposal</h2>
<div class="proposal_items">
<h3>Problem Statement</h3>
<div class="text_indent">
                    Chest X-rays are highly valued in diagnosing injuries and ailments within the
                    thoracic region. However, medical diagnosis of these ailments can prove very
                    challenging. Our team wants to make it easier to diagnose nodules, masses,
                    and cardiomegaly, all of which could be identified more readily with the aid of a
                    neural-network-backed computer vision system. <br><br>
                    Masses and nodules are similar in that both growths are tumors; the main
                    difference is that masses are larger, usually greater than 3 cm in diameter,
                    whereas nodules are smaller, with a diameter of less than 3 cm. These growths can
                    be categorized as benign or malignant, but malignant growths have a high probability
                    of metastasizing and can be life-threatening if not treated promptly. (“Lung Masses and Growth”) <br><br>
                    Cardiomegaly is defined as the enlargement of the heart. Causes of cardiomegaly
                    range from short-term bodily stress to myocardial weakness and arrhythmia. An enlarged
                    heart can be treated through surgery and certain medications and is generally easier
                    to treat when diagnosed early. (“Enlarged Heart”) <br><br>
                    These three ailments were chosen because diagnosing them is feasible within the
                    time constraints of the project. All three are solid ailments in the chest and are comparatively
                    easy to identify and classify, whereas the other ailments in the dataset all involve some form of
                    fluid in the lungs, making them harder to identify and distinguish from one another. <br><br>
                    The goal of our project is to create a model, using supervised classification, that is
                    able to identify and diagnose nodules, masses, and cardiomegaly from an input chest X-ray.
                    A Hough transform and a deformable contour will then be used to locate the ailment
                    within the image. A user of our system will be able to input a patient's chest X-ray,
                    and our model will determine the presence (with a corresponding confidence level) of a
                    nodule, mass, or cardiomegaly. The model will then output the image with a
                    boundary enclosing the area of interest (the enlarged heart, nodule, or mass)
                    and the estimated size of the nodule/mass.
</div>
<br><br>
<div class="image_container">
<img src="Images\Cardiomegaly_sample.png" alt="Cardiomegaly_sample">
<img src="Images\Mass_sample.png" alt="Mass_sample">
<img src="Images\Nodule_sample.png" alt="Nodule_sample">
</div>
<!-- Approach
Comments: potentially mention some preprocessing we might do on the X-Rays/what pre-processing we might do to an inputted patient image before our model makes a prediction
Maybe talk about the steps of the CNN and how it relates to material from class? -->
<h3>Approach</h3>
<div class="text_indent">
                    We will use supervised learning to detect features indicative of a mass, nodule, or cardiomegaly.
                    This will be done by training a convolutional neural network to extract features and classify the input
                    X-ray images, with an associated confidence, into classes representing each condition.
                    There will be some overlap between classes due to the nature of the dataset used, but we will
                    choose the class with the highest confidence for a given image with overlapping ground truth. <br><br>
                    The general architecture of our model will consist of the following (a rough sketch of the classification stage appears after the list):
<ul>
<li>Initial pre-processing within the input layer</li>
<li>CheXNet architecture used as a basis for development of hidden layers
<ul>
                            <li>The original network has 121 layers; we will narrow this down proportionally to reflect the scope of this project</li>
                            <li>The original network was trained on over 100,000 frontal chest X-ray images;
                                we aim to narrow this down as well, depending on image choice
                                (uncertain, as explained below in the Experiments and Results section)</li>
</ul>
</li>
                        <li>A Hough transform after classification to locate the ailment (except for cardiomegaly detection)</li>
<li>Deformable contours to define the boundaries of the ailment</li>
</ul>
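                    As a rough illustration of the classification stage, the sketch below uses a DenseNet-121 backbone
                    (the architecture CheXNet is built on) with a small multi-label head covering our three conditions
                    plus a "no finding" class. The input size, class list, and optimizer settings are placeholder
                    assumptions, not final design choices.
                    <pre><code>
# Sketch only: DenseNet-121 backbone with a small multi-label head (TensorFlow/Keras).
import tensorflow as tf

NUM_CLASSES = 4  # nodule, mass, cardiomegaly, no finding (assumed class list)

def build_model(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    # Sigmoid outputs give a per-class confidence, which allows overlapping labels.
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
</code></pre>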
                    Our approach for locating nodules and masses is to preprocess the image to remove noise,
                    use a generalized Hough transform (using our PS2 implementation as a basis) to
                    locate roughly where the nodule or mass might be, and then apply a deformable contour to
                    obtain more exact boundaries (since nodules and masses are slightly non-rigid),
                    potentially calculating the size (area) of the nodule or mass from that boundary. We will use
                    various filters (Sobel, Prewitt, etc.), smoothing techniques, and quantization/clustering to
                    address potential image noise and get better results from our detection.
                    We will also use varying ranges of target radii for detecting nodules and masses, since they
                    vary in size, and we can use the results of our contouring to further refine those ranges
                    (by having a better estimate of the average size of each ailment).
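                    <br><br>
                    Below is a minimal sketch of this localization step, assuming scikit-image is available; the radius
                    range, snake parameters, and the <code>xray</code> input (a preprocessed grayscale image) are
                    illustrative assumptions rather than final choices.
                    <pre><code>
# Sketch only: circular Hough transform to find a candidate nodule/mass,
# then an active (deformable) contour to refine its boundary.
import numpy as np
from skimage.feature import canny
from skimage.filters import gaussian
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.segmentation import active_contour

def contour_area(snake):
    # Shoelace formula over the (row, col) contour points, as a rough size estimate.
    r, c = snake[:, 0], snake[:, 1]
    return 0.5 * np.abs(np.dot(c, np.roll(r, 1)) - np.dot(r, np.roll(c, 1)))

def locate_and_contour(xray, radii=np.arange(5, 40, 2)):
    edges = canny(xray, sigma=2)                      # edge map for the Hough vote
    accums = hough_circle(edges, radii)
    _, cx, cy, rad = hough_circle_peaks(accums, radii, total_num_peaks=1)
    cx, cy, rad = cx[0], cy[0], rad[0]

    # Initialize the snake as a circle slightly larger than the Hough estimate.
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + 1.2 * rad * np.sin(theta),
                            cx + 1.2 * rad * np.cos(theta)])

    snake = active_contour(gaussian(xray, 3), init,
                           alpha=0.015, beta=10, gamma=0.001)
    return snake, contour_area(snake)
</code></pre>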
<br><br>
                    Our approach for locating cardiomegaly is to evolve a contour to best fit the boundary of the heart in the chest X-ray.
                    Although the heart will be located in roughly the same region across X-ray images (center-right of the image,
                    with the majority on the patient’s left side), we may need to perform 2D image transformations to make some outlier X-rays uniform
                    (e.g. the X-ray of a small child). The contour would be initialized roughly where an enlarged heart would be expected
                    in the chest X-ray (middle right of the image). The contour will then iteratively deform to better fit
                    the boundary of the heart. Once the contour has converged, it will be extracted from the image and compared
                    against control contours of normal and enlarged hearts to double-check whether the heart is normal-sized or indeed enlarged.
                    We may also try to find a threshold for how far off a deformable contour can be before marking a predicted cardiomegaly as a
                    false positive. In addition, the contour can be used to calculate the size of the enlarged heart and make it more visible.
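                    <br><br>
                    One possible (hypothetical) decision rule, reusing the <code>contour_area</code> helper from the sketch above;
                    the 1.25 margin is an illustrative placeholder, not a validated clinical threshold.
                    <pre><code>
# Sketch only: flag cardiomegaly when the fitted heart contour is markedly
# larger than a control (normal-heart) contour for a comparable X-ray.
def looks_enlarged(heart_snake, control_snake, margin=1.25):
    return contour_area(heart_snake) > margin * contour_area(control_snake)
</code></pre>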
<br><br>
Some sample images with boundaries (in red):
</div>
<br>
<!-- Experiment and Results
Comments: Mention which software we will be using (TensorFlow and Keras?) and how our work environment will be setup? Not sure if we should just mention Tensor flow for
experimental setup or also include stuff like Github and Trello. Mention what we will implement ourselves
What makes our project successful-->
<h3>Experiments and Results</h3>
<div class="text_indent">
                    We will be using the NIH Clinical Center's publicly available Chest X-ray dataset. The dataset is prelabeled with the associated thoracic pathology (if any)
                    and identified as training data or testing data. The dataset contains 112,120 frontal-view X-ray images covering 14 different illnesses.
                    Since we are focusing on nodules, masses, and cardiomegaly, we will be using a subset of the 20,584 images that correspond to those illnesses.
                    We will be using TensorFlow and Keras to create a convolutional neural network, train it with the training data, and test the accuracy of
                    our model’s predictions with the testing data. To carry out this approach, we require certain resources to train and test our model.
                    For this, we have secured access to a lambdabook (a dual-GPU deep learning laptop) as well as a certain amount of free Azure credits that
                    can be used to train models on their cloud GPU architecture. This level of computational power should be enough to train the model we have in mind
                    (keeping in mind realistic limitations on the complexity of the model and the number of images used). As for the Hough transform and
                    deformable contours, we will implement them based on pseudocode from class and outside research. <br><br>
                    Our project will be successful if we can build a model that diagnoses nodules, masses, and cardiomegaly from any input chest X-ray
                    with at least 75% accuracy and denotes specifically where the ailments were detected within the X-rays. The measured accuracy will be
                    based on the F1 score, a statistical accuracy measure typically used for binary classification in machine learning. This metric was used
                    by the CheXNet architecture cited earlier in this proposal and will be adapted here to be comparable for multi-class problems.
                    Some potential uncertainties would be differentiating between nodules and masses, because the two ailments are similar in shape and
                    size is the main differentiating factor, as well as the specific structure of our neural network; we may find that one architecture works
                    better than another at detecting the relevant illnesses. In addition to different architectures, we might need to try different datasets
                    depending on the architecture they are used with (for example, one type of architecture might require many more images for training).
                    We will experiment with different neural network architectures on a smaller subset of our dataset images to assess the accuracy of the
                    predictions. A sketch of how we might compute per-class and overall F1 scores appears at the end of this section. <br><br>
                    We will also experiment with different configurations for our deformable contouring implementation, such as changing the number of vertices,
                    the initial shape of the contour, the tension parameter (alpha), and the stiffness parameter (beta). The results of these experiments will help
                    us find the optimal configuration for nodule, mass, and cardiomegaly deformable contours. We also need to test whether using Hough transforms
                    to label nodule and mass locations would help create a more accurate model from our architecture.
                    Our hypothesis is that using a Hough transform to pre-process and label the training images as containing an ailment
                    in a certain region (e.g. a nodule in the upper left) before they are used in the CNN would lead to more accurate classification.
                    We will also experiment with using different Hough transforms and contouring to assist in visualization and labeling after images
                    are classified by our model, as well as to extract further information from the image, such as the size (area) of the ailment, to
                    determine severity. The contour area and the corresponding energy of the area of interest will also be used to
                    confirm our model's prediction and help prevent false positives.
<br><br>(The NIHCC's dataset can be found at https://nihcc.app.box.com/v/ChestXray-NIHCC)
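                    <br><br>
                    A minimal sketch of the F1 evaluation we have in mind, assuming scikit-learn; the class list, the 0.5
                    threshold, and the <code>model.predict</code> interface are illustrative assumptions.
                    <pre><code>
# Sketch only: per-class and macro-averaged F1 from thresholded sigmoid outputs.
import numpy as np
from sklearn.metrics import f1_score

CLASS_NAMES = ["Nodule", "Mass", "Cardiomegaly", "No Finding"]  # assumed label order

def evaluate(model, x_test, y_test, threshold=0.5):
    y_prob = model.predict(x_test)
    y_pred = (y_prob >= threshold).astype(int)
    per_class = f1_score(y_test, y_pred, average=None)   # one F1 per condition
    macro = f1_score(y_test, y_pred, average="macro")    # overall multi-label F1
    return dict(zip(CLASS_NAMES, per_class)), macro
</code></pre>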
</div>
<br>
<h3>Citations</h3>
<div class="text_indent">
*add citations here*
</div>
<br><br>
</div>
<hr>
<footer>
            <p>© Yash Kothari, Nima Jadali, Aditya Tapshalkar, Brayden Richardson, and Raymond Bryant</p>
</footer>
</div>
</div>
<br><br>
</body></html>