<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<title>collinsem.github.io</title>
</head>
<body>
<h1>collinsem.github.io</h1>
<h2> HTM demos </h2>
<ul>
<li>
<h3><a id="gataca" class="anchor" href="#gataca" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="gataca">HTM sequence learning over a 1D domain.</a></h3>
<p>
This demo shows the HTM temporal memory algorithm operating
on a simple 1D domain containing only four possible input
states. The input states are the four letters A, C, G, and
T. Three sensor patches move over the domain. Each sensor
patch consists of five sensors that directly encode the
input state in one of four active neurons (bits) associated
with each sensor. The last three neurons associated with
each patch are an encoding of the next movement of the
patch in one of three bits: left, stay, right. These inputs
are then incorporated into three temporal memory modules
with eight neurons per column.
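<p>
As a rough illustration of the encoding just described,
here is a minimal JavaScript sketch. The function and
variable names are hypothetical, not taken from the demo
source:
<pre><code>
// One-hot encoding for a single sensor: four bits, one per letter.
const LETTERS = ['A', 'C', 'G', 'T'];
const MOVES = ['left', 'stay', 'right'];

function encodeSensor(letter) {
  return LETTERS.map(l => (l === letter ? 1 : 0));
}

// A whole patch: five sensors (4 bits each) plus a 3-bit
// one-hot encoding of the patch's next movement.
function encodePatch(letters, move) {
  const sensorBits = letters.flatMap(encodeSensor);
  const moveBits = MOVES.map(m => (m === move ? 1 : 0));
  return sensorBits.concat(moveBits); // 5 * 4 + 3 = 23 bits
}

// encodePatch(['G', 'A', 'T', 'A', 'C'], 'right')
// yields 23 bits, of which 6 are set.
</code></pre>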
</li>
<li>
<h3><a id="gclusteron-1" class="anchor" href="#gclusteron-1" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="GradientClusteronDemo1D">Gradient Clusteron</a></h3>
<p>
This demo is a variation on the GATACA example above. We
take the same 1D domain with four possible input features at
each location. The top line is the input domain. The three
lines below show the target pattern and current input to
each of the sensor patches. For each sensor patch, there are
20 pre-synaptic neurons (not shown), and of these, 5 will be
active during each cycle (corresponding to the currently
active detector in each sensor).
The graphs shown below are visualizations of dendrites for
three post-synaptic neurons. Each dendrite has 20 synapses,
one for each of the detectors on each sensor. Each synapse
has an associated weight and position on the dendrite. The
synaptic weights are indicated by the vertical bars along
the dendrite. An activated synapse generates a localized
effect on the dendrite that falls off with distance from
the synapse location (indicated by the Gaussian bump
centered on each active synapse). The
learning rule follows the one described
in <a href="https://doi.org/10.1371/journal.pcbi.1009015">
this paper</a> by Toviah Moldwin et al.
The dendrite activation is depicted by the thicker plot
line, and is then integrated into the bar on the far
right. The white horizontal line on this bar is the
post-synaptic neuron firing threshold. These plots take on
different colors depending on the current state: green for
successful detection of the target pattern (true positive),
cyan for failure to detect the target pattern (false
negative), red for detection of the pattern when not
present (false positive), and gray for successful
non-detection of the pattern (true negative).
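<p>
A minimal sketch of this dendritic activation, assuming a
Gaussian kernel and hypothetical names (the demo and the
paper may differ in details):
<pre><code>
// Each synapse has a { weight, position } with position in [0, 1].
// Active synapses each contribute a Gaussian bump; the dendrite
// activation is the bump sum, integrated along the dendrite.
function dendriteActivation(synapses, activeIds, sigma, nSamples) {
  let total = 0;
  for (let k = 0; k !== nSamples; k++) {
    const x = k / (nSamples - 1);       // sample point on the dendrite
    let local = 0;
    for (const id of activeIds) {
      const s = synapses[id];
      const d = x - s.position;
      local += s.weight * Math.exp(-(d * d) / (2 * sigma * sigma));
    }
    total += local / nSamples;          // approximate the integral
  }
  return total;
}

// The neuron fires when the integrated activation exceeds its
// threshold (the white line in the demo's bar display):
// const fired = dendriteActivation(syns, active, 0.05, 200) > threshold;
</code></pre>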
</li>
<li>
<h3><a id="stereo-test" class="anchor" href="#stereo-test" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="stereo-test">Early stage visualization of an agent with binocular/stereo vision.</a></h3>
<p>
Prototype visualization of a simple agent in a simple
environment. The agent possesses two cameras for visual
input. The RGB channels are then digitized very coarsely and
projected onto a pair of virtual retinas. This is of course
not how the actual processing occurs in the retina. That is
currently on the TODO list. The purpose of this
visualization was to prototype a potential interactive
application that would be able to show the initial stages of
encoding and processing stereo vision.
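<p>
A hypothetical sketch of the coarse digitization step, just
to make the idea concrete (the demo's actual channel depth
and mapping may differ):
<pre><code>
// Coarse digitization of one RGB pixel: each channel is reduced
// to a small number of discrete levels before being projected
// onto the virtual retina.
function quantizeChannel(value, levels) {
  // value in [0, 255] mapped to one of `levels` steps
  return Math.round((value / 255) * (levels - 1));
}

function quantizePixel(rgb, levels) {
  return rgb.map(v => quantizeChannel(v, levels));
}

// quantizePixel([200, 30, 90], 4) yields [2, 0, 1]
</code></pre>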
</li>
<li>
<h3><a id="maze-runner" class="anchor" href="#maze-runner" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="MazeRunner">2D Maze environment with rudimentary physics. Potential environment for agent.</h3></a>
<p>
Another candidate for a simple embodied agent: a spherical
rat in a maze. This demo only got as far as implementing
basic collision physics before getting bogged down in non-AI
details. Going forward, I will probably use an existing
physics engine and focus on how the agent generates movement
and receives sensor feedback from its environment.
</li>
<li>
<h3><a id="nnvis" class="anchor" href="#nnvis" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="nnvis">3D visualization of a simulated cortical column.</a></h3>
<p>
This demo is mostly pretty flashing lights, demonstrating
one potential way to visualize the inner workings of a
single cortical column. There was a half-hearted attempt to
implement a temporal memory algorithm, and you can kind of
see it working in the shifting of the neurons from red
(active-bursting) to blue (predicted) and green
(active-predicted). However, the proximal inputs at the
lowest level are essentially random, so no meaningful
learning is taking place.
</li>
<li>
<h3><a id="mnist-sparse-rep" class="anchor" href="#mnist-sparse-rep" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="mnist-sparse-rep">Sparse encoding of MNIST digits using a simple dictionary lookup algorithm.</a></h3>
<p>
Atoms in the dictionary are initialized by sub-sampling from
a set of random images in the training set. Thereafter
these atoms are used as an overcomplete basis set to encode
portions of subsequent images. The encoding selects the best
atom by direct projection (dot product of image and basis
atom) to obtain a correlation coefficient. The product of
this coefficient and the basis atom is subtracted from the
image, leaving a residual. This residual is then subjected to
the same procedure to select the next atom that best
captures the image features that were not present in the
first atom. This continues until the atom limit is reached
or the magnitude of the residual falls below a minimum
threshold. The reconstructed image is then displayed along
with the residual.
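<p>
This greedy selection is essentially matching pursuit. A
minimal sketch, assuming unit-norm atoms and hypothetical
function names:
<pre><code>
// Greedy sparse coding (matching pursuit) over a dictionary of
// unit-norm atoms. `image` and each atom are flat arrays of the
// same length.
function dot(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function encode(image, atoms, maxAtoms, minResidual) {
  let residual = image.slice();
  const code = [];                 // chosen { atom, coefficient } pairs
  for (let k = 0; k !== maxAtoms; k++) {
    // Select the atom with the largest projection onto the residual.
    let best = 0;
    let bestCoef = dot(residual, atoms[0]);
    atoms.forEach((atom, i) => {
      const c = dot(residual, atom);
      if (Math.abs(c) > Math.abs(bestCoef)) { best = i; bestCoef = c; }
    });
    code.push({ atom: best, coefficient: bestCoef });
    // Subtract the atom's contribution, leaving a new residual.
    residual = residual.map((v, i) => v - bestCoef * atoms[best][i]);
    // Stop when the residual magnitude falls below the threshold.
    if (Math.sqrt(dot(residual, residual)) &lt; minResidual) break;
  }
  return { code, residual };
}
</code></pre>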
<p>
NOTE: This demo is not currently learning or adapting the
atoms after the initial sampling stage. This simple choice
for the basis set yields some fairly impressive results,
which are best appreciated by comparing them to the
reconstructions that result if you enable the "random
atoms" checkbox in the menu.
</li>
<li>
<h3><a id="vision-proto1" class="anchor" href="#vision-proto1" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<!-- <a href="https://vision-proto1.herokuapp.com/">Retina to V1 mapping</a></h3> -->
<a href="vision-proto1">Retina to V1 mapping</a></h3>
<p>
Prototype of a visualization of a low-level visual encoding
strategy. At the top of the window, a sequence of MNIST
digits is displayed with an overlay of a stencil showing
the proximal receptive fields for a set of cortical
columns. The size of the stencil can be controlled through
colRadius.
<p>
The primary visualization is a 3D depiction of the encoding
of features associated with each column's receptive
fields. Each cortical column is composed of 19 mini-columns
rendered as three concentric rings (1+6+12). The intensity
of each mini-column corresponds to the strength with which
the input field matches one of the nineteen Gabor filters
(shown in the lower left corner).**
<p>
** If the Gabor field is unchecked, then a simpler set of 6
filters is used: centerOn, centerOff, xSobel, ySobel,
xScharr, and yScharr.
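<p>
Either way, the per-minicolumn intensity amounts to a
normalized projection of the receptive field onto each
filter in the bank. A rough sketch with hypothetical names:
<pre><code>
// Response of one mini-column: dot product of its column's
// receptive-field pixels with one filter from the bank.
// `field` and `filter` are flat arrays of equal length.
function filterResponse(field, filter) {
  let resp = 0;
  for (let i = 0; i !== field.length; i++) {
    resp += field[i] * filter[i];
  }
  return resp;
}

// Intensity of each of the 19 mini-columns in a cortical column,
// normalized so the strongest match has intensity 1.
function columnIntensities(field, filterBank) {
  const responses = filterBank.map(f => Math.abs(filterResponse(field, f)));
  const peak = Math.max(...responses, 1e-9); // avoid divide-by-zero
  return responses.map(r => r / peak);
}
</code></pre>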
</li>
<li>
<h3><a id="stereo-vision-test1" class="anchor" href="#stereo-vision-test1" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="https://stereo-vision-test1.herokuapp.com/">Stereo vision prototype with retinal encoding visualization</a></h3>
<p>
A more detailed stereo vision prototype. This one is
designed to explore how to create a sensor-motor feedback
loop for aligning two independent retinal sensor patches
on the same location in the input field.
<p>
In this demo, two retinal patches are overlaid on an input
field consisting of a sequence of colored MNIST digits. Each
patch consists of multiple retinal sensors covering a fixed
spatial extent. The individual circles rendered on the
sequence display the receptive field of each retinal sensor.
<p>
The main portion of the display window shows the cortical
regions associated with the two retinal patches. Each region
consists of a cortical column for each retinal
sensor. Within each column are multiple minicolumns. Each
minicolumn is proximally connected to the column's sensor
via a log-Gabor convolutional filter. This filter is
currently standing in for what will eventually become an
adaptive (Hebbian) filter. The minicolumn with the
greatest filter response (dot product of the receptive
field with the log-Gabor filter) fires first and then
decays over time. While it is fading, the next-best filter
match has the opportunity to fire. This process continues
until either no more filters exceed the activation
threshold, or one of the previously activated filters
completes its refractory period and is ready to fire again.
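<p>
A minimal sketch of this sequential firing scheme, with
hypothetical names and deliberately simplified dynamics:
<pre><code>
// One simulation tick for a column's mini-columns. Each cell
// tracks its filter response, a fading activation level, and a
// refractory countdown.
function tick(cells, threshold, decay, refractorySteps) {
  for (const c of cells) {
    c.activation *= decay;                 // previous firings fade
    c.refractory = Math.max(0, c.refractory - 1);
  }
  // Among cells that are past their refractory period and whose
  // filter response exceeds the threshold, the best match fires.
  const ready = cells.filter(c => c.refractory === 0 &amp;&amp;
                                  c.response > threshold);
  if (ready.length) {
    const winner = ready.reduce((a, b) => (b.response > a.response ? b : a));
    winner.activation = winner.response;   // fire at full strength
    winner.refractory = refractorySteps;   // then wait to refire
  }
}
</code></pre>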
<p>
NOTE: Some of the controls are currently inactive as the
backend functionality has not yet been completed.
</li>
<li>
<h3><a id="htm-conway" class="anchor" href="#htm-conway" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<a href="htm-conway">Can HTM learn Conway's Game of Life?</a></h3>
<p>
Inspired by a question asked on the HTM Forum, this example
seeks to answer it: "Can HTM learn Conway's Game of Life?"
<p>
This is a work in progress. Check back soon for future
updates.
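<p>
For reference, the update rule the temporal memory would
have to learn is small and fully deterministic:
<pre><code>
// Conway's Game of Life update for one cell: the rule the
// temporal memory would need to learn to predict.
function nextState(alive, liveNeighbors) {
  if (alive) {
    // Survival with 2 or 3 live neighbors; otherwise death.
    return liveNeighbors === 2 || liveNeighbors === 3;
  }
  // Birth with exactly 3 live neighbors.
  return liveNeighbors === 3;
}
</code></pre>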
</li>
</ul>
<h2> Notes </h2>
<ul>
<li>For demos showing simulated neurons and/or synapses, colors
indicate the current state of each neuron.
<ul>
<li>Blue: predictive (high-probability of becoming active
soon)</li>
<li>Green: normal activation (activated by proximal input
after being in the predictive state)</li>
<li>Red: bursting (activated by proximal input without first
being predicted)</li>
</ul>
</li>
</ul>
</body>
</html>