What is the need for the linear layer x = cl(x) after the Chebyshev multiplications?
What if we don't use the linear layer and keep only the result of the Chebyshev multiplication, as follows?
```python
if K > 1:
    x1 = my_sparse_mm()(L, x0)              # V x Fin*B
    x = torch.cat((x, x1.unsqueeze(0)), 0)  # 2 x V x Fin*B
for k in range(2, K):
    x2 = 2 * my_sparse_mm()(L, x1) - x0     # Chebyshev recurrence: T_k = 2*L*T_{k-1} - T_{k-2}
    x = torch.cat((x, x2.unsqueeze(0)), 0)  # k+1 x V x Fin*B
    x0, x1 = x1, x2

x = x.view([K, V, Fin, B])                  # K x V x Fin x B
x = x.permute(3, 1, 2, 0).contiguous()      # B x V x Fin x K
x = x.view([B * V, Fin * K])                # B*V x Fin*K
```
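For context, here is a minimal, self-contained sketch of what stopping there leaves us with, using hypothetical toy sizes and a plain dense matmul as a stand-in for my_sparse_mm (the setup lines for x0 and x are paraphrased from the same function):

```python
import torch

B, V, Fin, K = 4, 10, 6, 3                  # hypothetical toy sizes
L = torch.randn(V, V)                       # stand-in for the rescaled (sparse) Laplacian
x0 = torch.randn(V, Fin * B)                # V x Fin*B, as in the function's setup
x = x0.unsqueeze(0)                         # 1 x V x Fin*B

if K > 1:
    x1 = L @ x0                             # dense stand-in for my_sparse_mm()(L, x0)
    x = torch.cat((x, x1.unsqueeze(0)), 0)
for k in range(2, K):
    x2 = 2 * (L @ x1) - x0                  # Chebyshev recurrence
    x = torch.cat((x, x2.unsqueeze(0)), 0)
    x0, x1 = x1, x2

x = x.view([K, V, Fin, B]).permute(3, 1, 2, 0).contiguous()
x = x.view([B * V, Fin * K])
print(x.shape)                              # torch.Size([40, 18]) -- B*V x Fin*K
```

Note that the result has Fin*K features per vertex rather than a chosen Fout, and this part of the layer contains no trainable parameters.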
Thank you a lot for your answer.
@xtsdylyx For what reason do we need to change the number of filters?
Spectral graph convolution is an operation that transforms the input data using the Laplacian matrix (or another kernel). It cannot increase or decrease the number of feature maps at each vertex; it only updates the values inside each vertex's existing feature maps. The reason is that the graph convolution kernel is fixed and shared, unlike in a CNN, where the convolution kernels can differ across output channels.
To increase the expressive power of the network, every vertex needs many feature maps. Since the graph convolution operation cannot change the number of feature maps, a linear layer is the most straightforward way to do it.
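As a minimal sketch of this point (hypothetical toy sizes, and assuming cl is a torch.nn.Linear(Fin*K, Fout), which matches the shape comments in the snippet above): the linear layer is the learnable part that mixes the Fin*K stacked Chebyshev features of each vertex into Fout output feature maps.

```python
import torch
import torch.nn as nn

B, V, Fin, K, Fout = 4, 10, 6, 3, 32   # hypothetical toy sizes

x = torch.randn(B * V, Fin * K)        # stacked Chebyshev features, one row per (batch, vertex)

# The learnable filter: one weight per (input feature, Chebyshev order,
# output feature) triple. This is what lets the layer choose Fout freely.
cl = nn.Linear(Fin * K, Fout)

x = cl(x)                              # B*V x Fout
x = x.view(B, V, Fout)                 # B x V x Fout
print(x.shape)                         # torch.Size([4, 10, 32])
```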
Maybe I am wrong; hope for discussion.
Hello @xbresson,
Thank you for your work and for making the PyTorch version of GCN available.
I have a question related to the graph_conv_cheby(self, x, cl, L, lmax, Fout, K) function (https://github.com/xbresson/spectral_graph_convnets/blob/master/02_graph_convnet_lenet5_mnist_pytorch.ipynb): what is the need for the linear layer x = cl(x) after the Chebyshev multiplications?
Thank you a lot for your answer.