Robust Stochastic Graph Generator for Counterfactual Explanations

Figure: Generator vs. Discriminator in RSGG-CE for graph counterfactual generation.


Counterfactual Explanation (CE) techniques have garnered attention as a means to provide insights to users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCE methods generate a new graph similar to the original one but with a different outcome according to the underlying predictive model. Among these GCE techniques, those rooted in generative mechanisms have received relatively limited investigation, despite their impressive accomplishments in other domains, such as artistic style transfer and natural language modelling. The preference for generative explainers stems from their capacity to produce counterfactual instances at inference time, leveraging autonomously learned perturbations of the input graph. Motivated by the rationales above, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space considering a partially ordered generation sequence. Furthermore, we undertake both quantitative and qualitative analyses to compare RSGG-CE's performance against state-of-the-art generative explainers, highlighting its superior ability to generate plausible counterfactual candidates.

In Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence