GAT

class dhg.models.GAT(*args, **kwargs)[source]

Bases: torch.nn.Module

The GAT model proposed in the Graph Attention Networks paper (ICLR 2018).

Parameters
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • hid_channels (int) – \(C_{hid}\) is the number of hidden channels.

  • num_classes (int) – The number of classes for the classification task.

  • num_heads (int) – The number of attention heads in each layer.

  • use_bn (bool) – If set to True, batch normalization is used. Defaults to False.

  • drop_rate (float) – The dropout probability. Defaults to 0.5.

  • atten_neg_slope (float) – Negative slope of the LeakyReLU activation applied to edge attention scores. Defaults to 0.2.

forward(X, g)[source]

The forward function.

Parameters
  • X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).

  • g (dhg.Graph) – The graph structure that contains \(N\) vertices.
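To illustrate what a single attention head computes inside each layer, the sketch below implements graph attention in plain Python: a linear projection, LeakyReLU-activated edge scores (controlled by a negative slope like atten_neg_slope), a softmax over each vertex's in-neighborhood, and a weighted aggregation. This is a minimal, dependency-free sketch of the mechanism from the GAT paper, not the dhg implementation; the function and argument names are illustrative.

```python
import math

def leaky_relu(x, neg_slope=0.2):
    # LeakyReLU applied to raw edge attention scores
    return x if x >= 0.0 else neg_slope * x

def gat_layer(X, edges, W, a_src, a_dst, neg_slope=0.2):
    """One single-head graph attention layer on dense Python lists.

    X            : list of N input feature vectors (length C_in each)
    edges        : list of (src, dst) pairs; self-loops should be included
    W            : C_in x C_out weight matrix (list of rows)
    a_src, a_dst : attention vectors of length C_out
    Returns a list of N output feature vectors (length C_out each).
    """
    N = len(X)
    C_out = len(W[0])
    # 1. Linear projection: H = X W
    H = [[sum(x[i] * W[i][j] for i in range(len(x))) for j in range(C_out)]
         for x in X]
    # 2. Unnormalized attention score per edge:
    #    e_ij = LeakyReLU(a_src . h_i + a_dst . h_j)
    scores = {}
    for (i, j) in edges:
        s = sum(a_src[k] * H[i][k] for k in range(C_out)) \
          + sum(a_dst[k] * H[j][k] for k in range(C_out))
        scores[(i, j)] = leaky_relu(s, neg_slope)
    # 3. Softmax over each destination vertex's in-neighborhood
    out = [[0.0] * C_out for _ in range(N)]
    for j in range(N):
        in_edges = [e for e in edges if e[1] == j]
        if not in_edges:
            continue
        m = max(scores[e] for e in in_edges)  # subtract max for stability
        exps = {e: math.exp(scores[e] - m) for e in in_edges}
        Z = sum(exps.values())
        # 4. Aggregate neighbor messages weighted by attention
        for (i, _) in in_edges:
            alpha = exps[(i, j)] / Z
            for k in range(C_out):
                out[j][k] += alpha * H[i][k]
    return out

# Tiny example: 3 vertices, identity projection, attention on the
# first feature only. Self-loops included, as is standard for GAT.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edges = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 2)]
W = [[1.0, 0.0], [0.0, 1.0]]
out = gat_layer(X, edges, W, a_src=[1.0, 0.0], a_dst=[1.0, 0.0])
```

In the full model, num_heads such layers run in parallel and their outputs are concatenated (hidden layers) or averaged (output layer); the dhg forward pass maps the (N, C_in) input X over the graph g to class scores of size (N, num_classes).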