We consider the problem of characterizing Bayesian networks up to unconditional equivalence, i.e., when directed acyclic graphs (DAGs) have the same set of unconditional d-separation statements. Each unconditional equivalence class (UEC) is uniquely represented by an undirected graph whose clique structure encodes the members of the class. Via this structure, we provide a transformational characterization of unconditional equivalence; i.e., we show that two DAGs are in the same UEC if and only if one can be transformed into the other via a finite sequence of specified moves. We also extend this characterization to the essential graphs representing the Markov equivalence classes (MECs) in the UEC. UECs form a partition coarsening of the space of MECs and are easily estimable from marginal independence tests. Thus, a characterization of unconditional equivalence has applications in methods that search the space of MECs of Bayesian networks.
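To make the representative graph concrete: with an empty conditioning set every collider blocks, so two nodes are unconditionally d-connected exactly when they share a common ancestor (counting each node as its own ancestor). The following minimal sketch, assuming a DAG encoded as a child-adjacency dict (illustrative code, not taken from the paper), builds the undirected representative of a DAG's UEC from that characterization.

```python
from itertools import combinations

def ancestors_incl_self(dag):
    """For each node of a DAG (dict: node -> set of children),
    compute its ancestor set, counting the node itself."""
    nodes = set(dag) | {c for cs in dag.values() for c in cs}
    anc = {v: {v} for v in nodes}
    changed = True
    while changed:  # propagate ancestor sets to a fixpoint
        changed = False
        for parent, children in dag.items():
            for child in children:
                new = anc[parent] - anc[child]
                if new:
                    anc[child] |= new
                    changed = True
    return anc

def uec_representative(dag):
    """Undirected graph of the DAG's UEC: u -- v iff u and v are
    unconditionally d-connected, i.e., share a common ancestor."""
    anc = ancestors_incl_self(dag)
    return {frozenset((u, v))
            for u, v in combinations(anc, 2)
            if anc[u] & anc[v]}

# The chain 1 -> 2 -> 3 and the fork 1 <- 2 -> 3 both yield the
# complete graph on {1, 2, 3}, so they lie in the same UEC, while
# the collider 1 -> 2 <- 3 yields only the edges {1,2} and {2,3}.
print(uec_representative({1: {2}, 2: {3}}))   # complete graph
print(uec_representative({2: {1, 3}}))        # complete graph
print(uec_representative({1: {2}, 3: {2}}))   # {1,2}, {2,3}
```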
This tutorial will cover the basics of causal discovery, including how the do-calculus extends the language of probability theory to give a formal framework for expressing causal relationships, how these causal relationships are represented by graphical models, and how such causal graphical models can be learned from data. We will also discuss some recent work at the intersection of causal discovery and kernel methods in machine learning.
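As a small worked example of the first topic (standard material, not specific to this tutorial): with treatment $X$, outcome $Y$, and an observed confounder $Z$ in the DAG $Z \to X$, $Z \to Y$, $X \to Y$, the do-calculus distinguishes intervening from conditioning, and backdoor adjustment identifies the interventional distribution from observational quantities:

```latex
P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
\;\neq\; \sum_{z} P(y \mid x, z)\, P(z \mid x) \;=\; P(y \mid x).
```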
Though undirected graphs offer less detailed representations of causal structure than DAGs or ancestral graphs, they are easier to work with. In this talk, we discuss three ways of using undirected graphs for causal structure learning: (1) representing the structure of measurement dependence inducing latent (MeDIL) causal models, giving a causal semantics to edge clique covers of undirected graphs; (2) facilitating a causal graph kernel embedding using the distance covariance, connecting causal structure learning to kernel methods in machine learning; and (3) partitioning the space of DAGs into unconditional equivalence classes, drastically reducing the search space of greedy learning algorithms.
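A self-contained sketch of the basic ingredient of item (2): the (biased) sample distance covariance of Székely et al., computed from double-centered pairwise-distance matrices, together with a thresholded pairwise dependence graph. The fixed threshold is an assumption for illustration; in practice a permutation test would calibrate it.

```python
import numpy as np

def distance_covariance(x, y):
    """Biased sample distance covariance of two 1-d samples:
    square root of the mean of the product of the double-centered
    pairwise-distance matrices."""
    def centered_dists(v):
        d = np.abs(v[:, None] - v[None, :])  # pairwise distances
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    return np.sqrt(np.mean(centered_dists(x) * centered_dists(y)))

def dependence_graph(data, threshold=0.1):
    """Undirected graph over the columns of `data` (n samples x p
    variables): connect i -- j when their sample distance covariance
    exceeds `threshold` (a permutation test would replace this)."""
    p = data.shape[1]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if distance_covariance(data[:, i], data[:, j]) > threshold}

rng = np.random.default_rng(0)
z = rng.normal(size=500)  # latent common cause of columns 0 and 1
data = np.column_stack([z + rng.normal(size=500),
                        z + rng.normal(size=500),
                        rng.normal(size=500)])
print(dependence_graph(data))  # the confounded pair (0, 1) should stand out
```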
Alex Markham is completing their postdoc in the Math of Data and AI group at KTH Royal Institute of Technology in Sweden. Their research focuses on developing new algorithms for learning causal models from data. Causal discovery is especially appealing to more applied researchers because it offers an intuitive framework for reasoning about why things happen and how we can influence them to happen differently. Alex finds causal discovery particularly interesting because of the many different fields it draws from, including philosophy, cognitive science, and methodology, as well as computational and mathematical fields such as machine learning, statistics, graph theory, algebraic geometry, and combinatorics.
We consider the task of causal structure learning over measurement dependence inducing latent (MeDIL) causal models. We show that this task can be framed in terms of the graph-theoretic problem of finding edge clique covers, resulting in a simple algorithm for returning minimal MeDIL causal models (minMCMs). This algorithm is non-parametric, requiring no assumptions about linearity or Gaussianity. Furthermore, despite rather weak assumptions about the class of MeDIL causal models, we show that minimality in minMCMs implies three specific and interesting properties: first, minMCMs provide lower bounds on (i) the number of latent causal variables and (ii) the number of functional causal relations required to model a complex system at any level of granularity; second, a minMCM contains no causal links between the latent variables; and third, in contrast to factor analysis, a minMCM may require more latent than measurement variables.
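A sketch of the edge-clique-cover framing, under stated assumptions: the dependence graph over the measurement variables is given as a networkx Graph, and a greedy heuristic stands in for an exact minimum edge clique cover (minimum covers are NP-hard in general, so this only approximates the minimality defining a minMCM and is not the paper's algorithm). Each clique in the cover becomes one latent cause of its measurement variables, so the resulting model has no latent-latent edges, illustrating the second property above.

```python
import networkx as nx

def greedy_edge_clique_cover(ug):
    """Greedy heuristic for an edge clique cover of an undirected
    graph: repeatedly pick the maximal clique that covers the most
    still-uncovered edges."""
    uncovered = {frozenset(e) for e in ug.edges}
    cliques = [frozenset(c) for c in nx.find_cliques(ug)]
    cover = []
    while uncovered:
        best = max(cliques,
                   key=lambda c: sum(1 for e in uncovered if e <= c))
        cover.append(best)
        uncovered -= {e for e in uncovered if e <= best}
    return cover

def mcm_from_cover(ug):
    """Bipartite DAG of a MeDIL causal model: one latent variable per
    clique in the cover, with edges into its measurement nodes.
    (Latent names "L0", "L1", ... are illustrative.)"""
    mcm = nx.DiGraph()
    for k, clique in enumerate(greedy_edge_clique_cover(ug)):
        mcm.add_edges_from((f"L{k}", m) for m in clique)
    return mcm

# Example: a path a -- b -- c needs two cliques, hence two latents.
print(sorted(mcm_from_cover(nx.path_graph(["a", "b", "c"])).edges))
```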