Graph Neural Networks (GNNs) have proven highly effective in graph-related tasks, including traffic modeling, physical simulation, protein modeling, and large-scale recommender systems. The focus has since shifted towards models that not only deliver accurate results but also provide understandable and actionable insights into their predictions. Counterfactual explanations have emerged as crucial tools to meet these challenges and empower users. In this tutorial, we therefore present the essential theoretical underpinnings for generating counterfactual explanations for GNNs, equipping the audience to understand the outcomes produced by their systems more deeply and enabling users to interpret predictions effectively. First, we give an overview of GNNs, their underlying message-passing architecture, and the challenges of providing post-hoc explanations for their predictions across different domains. Then, we formally define Graph Counterfactual Explainability (GCE) and discuss its potential to provide recourse to users. Furthermore, we propose a taxonomy of existing GCE methods and discuss each approach's main idea, advantages, and limitations. Finally, we introduce the most frequently used benchmarking datasets, evaluation metrics, and protocols, and analyze the future challenges in the field.
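As a concrete point of reference for the message-passing architecture mentioned above, most GNN layers can be written in the following generic form (a standard formulation sketched in our own notation, not the tutorial's specific one; $\mathrm{AGG}^{(\ell)}$ and $\mathrm{UPD}^{(\ell)}$ are placeholders for architecture-specific aggregation and update functions, and $\mathcal{N}(v)$ denotes the neighbors of node $v$):
\[
h_v^{(\ell)} \;=\; \mathrm{UPD}^{(\ell)}\!\Big( h_v^{(\ell-1)},\; \mathrm{AGG}^{(\ell)}\big( \{\, h_u^{(\ell-1)} : u \in \mathcal{N}(v) \,\} \big) \Big)
\]
Instantiating $\mathrm{AGG}$ and $\mathrm{UPD}$ differently recovers common architectures such as GCN, GraphSAGE, and GAT.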
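To preview the formal definition of GCE covered in the tutorial, a counterfactual explanation for a graph classifier is commonly cast as a minimally perturbed graph that flips the model's prediction (one common formulation from the literature, again in our own notation: $f$ is the classifier, $d$ a graph distance, and $\mathcal{G}$ the space of admissible perturbations of the input graph $G$):
\[
G^{\star} \;\in\; \operatorname*{arg\,min}_{G' \in \mathcal{G}} \; d(G, G') \quad \text{subject to} \quad f(G') \neq f(G).
\]
Existing GCE methods differ mainly in how they search the space $\mathcal{G}$ and in which distance $d$ they optimize, which is one axis of the taxonomy we propose.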