
What is the need for the linear layer after the Chebyshev multiplications? #5

Open
pinkfloyd06 opened this issue Oct 3, 2018 · 3 comments

Comments

@pinkfloyd06

Hello @xbresson ,

Thank you for your work and for making the PyTorch version of GCN available.

I have a question related to the graph_conv_cheby(self, x, cl, L, lmax, Fout, K) function (https://github.com/xbresson/spectral_graph_convnets/blob/master/02_graph_convnet_lenet5_mnist_pytorch.ipynb)

  1. What is the need for the linear layer x = cl(x) after the Chebyshev multiplications?
  2. What if we don't use the linear layer and keep only the result of the Chebyshev multiplications, as follows:

    if K > 1:
        x1 = my_sparse_mm()(L, x0)              # V x Fin*B
        x = torch.cat((x, x1.unsqueeze(0)), 0)  # 2 x V x Fin*B
    for k in range(2, K):
        x2 = 2 * my_sparse_mm()(L, x1) - x0     # Chebyshev recurrence T_k = 2*L*T_{k-1} - T_{k-2}
        x = torch.cat((x, x2.unsqueeze(0)), 0)  # K x V x Fin*B
        x0, x1 = x1, x2

    x = x.view([K, V, Fin, B])              # K x V x Fin x B
    x = x.permute(3, 1, 2, 0).contiguous()  # B x V x Fin x K
    x = x.view([B*V, Fin*K])                # B*V x Fin*K

?
Thank you in advance for your answer.
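For context, here is a minimal sketch of the shapes involved, assuming cl is an ordinary nn.Linear(Fin*K, Fout) as in the notebook (the dimension values below are made up for illustration). It shows that without cl the output would stay at Fin*K channels per vertex, while cl maps them to Fout:

```python
import torch
import torch.nn as nn

B, V, Fin, K, Fout = 2, 5, 3, 4, 8  # batch, vertices, in channels, Chebyshev order, out channels

# After the Chebyshev recurrence and reshaping, features have shape (B*V, Fin*K):
x = torch.randn(B * V, Fin * K)

# The linear layer mixes the K Chebyshev orders and Fin input channels
# into Fout output channels; this is where the learned filter weights live.
cl = nn.Linear(Fin * K, Fout)
y = cl(x)               # B*V x Fout
y = y.view(B, V, Fout)  # B x V x Fout
print(y.shape)          # torch.Size([2, 5, 8])
```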

@YuanxunLu

YuanxunLu commented Dec 23, 2018

I am also interested in this. I think the linear layer is used to change the number of filters.
Hoping for other answers.

@Benjiou

Benjiou commented Jan 3, 2019

@xtsdylyx For what reason do we need to change the number of filters?

@YuanxunLu

> @xtsdylyx For what reason do we need to change the number of filters?

Spectral graph convolution is an operation that transforms the input data using the Laplacian matrix or other kernels. However, it can't increase or decrease the number of feature maps of each vertex; it only updates the data in every vertex's existing feature maps. The reason is that the graph convolution kernel is fixed, unlike in a CNN, where the convolution kernels can differ and the number of output channels can be chosen freely.
To increase the expressive power of the network, we need every vertex to have many feature maps. Since the graph convolution operation can't change the number of feature maps, a linear layer is the most straightforward way to do it.
Maybe I am wrong; hoping for discussion.
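To illustrate the point above: multiplying the feature matrix by the Laplacian mixes information across vertices but leaves the per-vertex channel count unchanged. A minimal sketch (using an identity matrix as a stand-in Laplacian, with made-up dimensions):

```python
import torch

V, Fin = 5, 3                   # vertices, feature maps per vertex
L = torch.eye(V)                # stand-in for the (rescaled) graph Laplacian
x = torch.randn(V, Fin)         # V x Fin feature matrix

# The Laplacian multiply aggregates over vertices (rows),
# so the number of feature maps per vertex is still Fin:
y = L @ x
print(y.shape)                  # torch.Size([5, 3]) -- still V x Fin
```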
