(see the publication list for the papers)
Review papers on causal discovery and causality-related learning: causal discovery in biology (Glymour, Zhang, and Spirtes, 2019); causal discovery in earth system sciences (Runge et al., 2019); cyclic causal model discovery in neuroscience (Sanchez-Romero et al., 2019); evaluation of causal discovery methods (Ramsey, Zhang, and Glymour, 2019); general reviews of causal discovery methods (Zhang et al., NSR18; Spirtes & Zhang, Applied Informatics 2016 & Book Chapter 2018).
Principles for causal discovery: independent noise in the (post-)nonlinear causal model (Zhang and Hyvärinen, UAI’09 & ECML’09; Zhang and Chan, ICONIP’06); independent transformation in deterministic systems (Janzing et al., AI12 & Daniusis et al., UAI’10); exogeneity (Zhang et al., TARK’15); independent changes (a generalized notion of invariance) in nonstationary/heterogeneous data (Zhang et al., IJCAI’17; Huang et al., ICDM’17; Zhang et al., arXiv’15...); constraints on and estimation of functional causal models (Zhang et al., TIST'16); generalized independent noise conditions (including Triad conditions) for estimating linear, non-Gaussian hidden causal representations (Cai et al., NeurIPS'19; Xie et al., NeurIPS'20).
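As a brief illustration of the independent-noise principle, the post-nonlinear (PNL) causal model cited above models the effect as a nonlinear distortion of a noisy nonlinear function of the cause:

```latex
% Post-nonlinear (PNL) causal model, cause x_1 and effect x_2:
x_2 = f_2\bigl(f_1(x_1) + e_2\bigr), \qquad e_2 \perp\!\!\!\perp x_1,
% where f_2 is invertible; f_1 captures the nonlinear influence of the
% cause and f_2 a possible distortion (e.g., in measurement), and the
% noise e_2 is required to be independent of the cause x_1.
```

The independence between the recovered noise term and the hypothetical cause is what distinguishes the causal direction from the anti-causal one in this model class.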
Causal discovery from various types of nonstationary and heterogeneous data: a general framework for causal discovery from independent but non-identically distributed time series or multiple-domain data and beyond (Huang & Zhang et al., JMLR’20; Zhang et al., IJCAI’17; Huang et al., ICDM’17; Zhang et al., arXiv’15); causal discovery and forecasting in nonstationary environments with state-space modeling (Huang et al., ICML’19); causal discovery with fixed general nonlinear causal mechanisms and nonstationary noise (Monti et al., UAI’19); modeling and estimation of time-varying causal relations with Gaussian processes (Huang et al., IJCAI’15); multi-domain causal structure learning in linear systems with regression invariance (Ghassami et al., NIPS'17) or with independent changes (Ghassami et al., NeurIPS’18).
Functional causal model-based causal discovery: theory and methods for causal discovery based on the post-nonlinear (PNL) causal model (Zhang and Hyvärinen, UAI’09 & JMLR WCP’10 (NIPS'08 Workshop); Zhang and Chan, ICONIP’06); nonlinear causal models with additive noise and application to time series (Zhang and Hyvärinen, ECML’09); generalized score-based search of general nonlinear causal relations (Huang et al., KDD’18); cascade nonlinear additive noise model (Cai et al., IJCAI’19); causal discovery with fixed general nonlinear causal mechanisms and nonstationary noise (Monti et al., UAI’19); Triad conditions for estimating linear, non-Gaussian hidden causal representations (Cai et al., NeurIPS'19).
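A minimal sketch of the additive-noise idea underlying functional causal model-based discovery: fit a regression in each candidate direction and keep the direction whose residuals are (nearly) independent of the input. This toy version is illustrative only; it uses polynomial regression and a crude variance-based dependence proxy, whereas the cited methods use flexible regressors and proper kernel independence tests.

```python
import numpy as np

def residual_dependence(x, y, degree=5):
    # Regress y on x with a polynomial fit and measure how strongly the
    # residuals still depend on x.  Correlation between squared residuals
    # and x**2 is used as a crude, variance-based dependence proxy; real
    # methods would use a kernel independence test (e.g., HSIC) here.
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    return abs(np.corrcoef(resid**2, x**2)[0, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 2000)
y = np.tanh(2 * x) + 0.1 * rng.normal(size=2000)  # ground truth: x causes y

score_xy = residual_dependence(x, y)  # hypothesis x -> y
score_yx = residual_dependence(y, x)  # hypothesis y -> x
# Under an additive noise model, only the true causal direction admits a
# regression whose residuals are independent of the input, so score_xy
# should come out smaller than score_yx.
```

The asymmetry exploited here is exactly the identifiability result of additive-noise / PNL models: for generic nonlinear mechanisms, no backward model can make the residual independent of the (wrong) input.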
Hidden causal representation learning: estimating linear, non-Gaussian latent variable causal models with Generalized Independent Noise (GIN) conditions (Xie et al., NeurIPS'20; Cai et al., NeurIPS'19).
Causal discovery from low-resolution or partially observable time series: causal discovery in time series via constraint-based or functional causal model-based methods with temporal constraints, covering both time-delayed and instantaneous relations (Zhang et al., ECML'09; Hyvärinen et al., JMLR'10); causal discovery from subsampled or temporally aggregated data (Gong et al., ICML'15 & UAI'17); causal discovery from partially observable time series (Geiger et al., ICML'15; Salehkaleybar et al., AAAI’18).
Causal discovery in the presence of measurement error or confounders: causal discovery with linear, non-Gaussian models under measurement error (Zhang et al., UAI’18); both the linear, Gaussian and linear, non-Gaussian cases (Zhang et al., UAI WS’17); learning linear, non-Gaussian causal models in the presence of latent variables (Salehkaleybar et al., JMLR’20).
Causal discovery under missing values: Constraint-based causal discovery in the presence of missing values (Tu et al., AIStats’19).
Causal discovery in discrete or mixed continuous and discrete cases: causal search based on generalized score functions that apply to general nonlinear relations and mixed cases (Huang et al., KDD’18); causal discovery from discrete variables with hidden compact representations (Cai et al., NeurIPS’18).
Causal discovery under selection bias (Zhang et al., UAI’16).
Conditional independence test: kernel-based conditional independence test (KCI-test) with application to causal discovery (Zhang et al., UAI’11); permutation-based kernel conditional independence test (Doran et al., UAI’14); approximate kernel-based conditional independence tests for causal discovery (Strobl et al., 2019).
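Kernel-based tests of this kind build on statistics such as HSIC. As a hedged illustration (an unconditional test, not the KCI-test itself, which handles conditional independence given a third variable set), here is a minimal HSIC permutation test; the kernel bandwidth, sample sizes, and permutation count are illustrative choices.

```python
import numpy as np

def rbf_gram(z, sigma=1.0):
    # RBF (Gaussian) kernel Gram matrix for a one-dimensional sample
    d = z[:, None] - z[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def hsic(x, y, sigma=1.0):
    # Biased HSIC estimate: trace(K H L H) / n**2, with H the centering matrix
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(x, sigma) @ H @ rbf_gram(y, sigma) @ H) / n**2

def hsic_perm_test(x, y, n_perm=100, seed=0):
    # Permutation p-value: shuffling y destroys any dependence on x, so the
    # permuted statistics approximate the null distribution under independence
    rng = np.random.default_rng(seed)
    observed = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(size=150)
p_dep = hsic_perm_test(x, x**2 + 0.1 * rng.normal(size=150))  # dependent pair
p_indep = hsic_perm_test(x, rng.normal(size=150))             # independent pair
```

The permutation scheme is in the spirit of the permutation-based kernel tests cited above; the KCI-test additionally regresses out the conditioning set in the kernel space before testing.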
Domain adaptation / transfer learning as well as other learning problems from a causal perspective: domain adaptation as a problem of Bayesian inference on the learned graphical representation, yielding a principled, end-to-end framework of domain adaptation (Zhang & Gong et al., NeurIPS’20); causal and anti-causal learning (Schölkopf et al., ICML’12); domain adaptation under target and conditional shift (Zhang et al., ICML’13); a general causal view of domain adaptation (Zhang et al., AAAI’15); domain adaptation with conditionally transferable components or invariant mechanisms (Gong et al., ICML’16); data-driven approach to multiple-source domain adaptation (Stojanov et al., AIStats'19a); low-dimensional density ratio estimation for covariate shift correction (Stojanov et al., AIStats'19b); properties of invariant component-based domain adaptation (Zhao et al., ICML’19); geometry-consistent GANs for one-sided unsupervised domain mapping (Fu et al., CVPR’19); deep domain generalization via conditional invariant adversarial networks (Li et al., ECCV’18); domain generalization via multi-domain discriminant analysis (Hu et al., UAI’19); multi-label learning by exploiting label dependency (Zhang & Zhang, KDD’10); learning disentangled semantic representation for domain adaptation (Cai et al., IJCAI’19); causal discovery and forecasting in nonstationary environments with state-space models (Huang et al., ICML’19).