Learn Before
  • Variational Graph Autoencoders (VGAE) Based On Graph-Level Latent Representations

Decoder For Graph-Level Latents

The goal of the graph-level variational decoder is to define a distribution over graphs given a graph-level latent embedding zG\mathbf{z}_G. The original graph VAE proposed to combine a multi-layer perceptron (MLP) with a Bernoulli distribution assumption on each edge to obtain this likelihood:

p_{\theta}(G \mid \mathbf{z}_G) = \prod_{(u,v)\in \mathcal{V}^2} \tilde{A}[u,v]\,A[u,v] + \big(1-\tilde{A}[u,v]\big)\big(1-A[u,v]\big)

where AA is the adjacency matrix and A~=σ(MLP(zG))\tilde{A}=\sigma(\mathrm{MLP}(\mathbf{z}_G)) is the predicted matrix of edge probabilities. The overall log-likelihood objective decomposes into a sum of independent binary cross-entropy losses, one per node pair.
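As a minimal sketch of this decoder, the snippet below maps a graph-level latent to an n×n matrix of Bernoulli edge probabilities with a one-hidden-layer MLP and evaluates the resulting log-likelihood against a given adjacency matrix. All names (`decode_edge_probs`, `log_likelihood`), the hidden-layer size, and the random toy weights are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_edge_probs(z_G, W1, b1, W2, b2, n):
    """Hypothetical one-hidden-layer MLP: graph latent z_G -> n*n matrix
    A_tilde of edge probabilities (the Bernoulli parameters)."""
    h = np.tanh(z_G @ W1 + b1)      # hidden layer
    logits = h @ W2 + b2            # one logit per node pair (u, v)
    return sigmoid(logits).reshape(n, n)

def log_likelihood(A, A_tilde, eps=1e-9):
    """log p(G | z_G): sum of independent Bernoulli log-likelihoods over
    all node pairs, i.e. the negative of a summed binary cross-entropy."""
    return np.sum(A * np.log(A_tilde + eps)
                  + (1 - A) * np.log(1 - A_tilde + eps))

# Toy example with random weights and a random 4-node graph.
n, d, h_dim = 4, 8, 16
z_G = rng.normal(size=d)
W1 = rng.normal(size=(d, h_dim)); b1 = np.zeros(h_dim)
W2 = rng.normal(size=(h_dim, n * n)); b2 = np.zeros(n * n)
A = (rng.random((n, n)) < 0.3).astype(float)

A_tilde = decode_edge_probs(z_G, W1, b1, W2, b2, n)
ll = log_likelihood(A, A_tilde)  # scalar; maximized during training
```

In a trained model the MLP weights would be learned by maximizing this log-likelihood (together with the KL term of the VAE objective) rather than drawn at random.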

Related
  • Encoder For Graph-Level Latents

Learn After
  • Drawback Of Original Graph Level Variational Decoder