(see the publication list for the papers)
Methodological developments of causal representation learning: learning latent temporal causal processes from time series (Yao et al., ICLR'22; Yao et al., NeurIPS'22); estimating latent variables and their relations with the Generalized Independent Noise (GIN) condition in the linear, non-Gaussian case (Xie et al., NeurIPS'20; Cai et al., NeurIPS'19); learning latent hierarchical structure in the linear, non-Gaussian case (Xie et al., ICML'22) and in the linear-Gaussian case (Huang, Low et al., NeurIPS'22); establishing the identifiability of nonlinear ICA under suitable sparsity constraints (Zheng et al., NeurIPS'22); learning action-sufficient state representations in reinforcement learning (Huang et al., ICML'22); learning hidden changing sources with partial disentanglement (Kong et al., ICML'22); learning general linear structure with latent variables in the linear, non-Gaussian or heterogeneous case: theoretical identifiability results (Adams et al., NeurIPS'21).
Review papers on causal discovery and causality-related learning: causal discovery in biology (Glymour, Zhang, and Spirtes, 2019); causal discovery in Earth system sciences (Runge et al., 2019); cyclic causal model discovery in neuroscience (Sanchez-Romero et al., 2019); evaluation of causal discovery methods (Ramsey, Zhang, and Glymour, 2019); general reviews of causal discovery methods (Zhang et al., NSR'18; Spirtes & Zhang, Applied Informatics 2016 & book chapter 2018).
Principles for causal discovery: independent noise in (post-)nonlinear causal models (Zhang and Hyvärinen, UAI’09 & ECML’09; Zhang and Chan, ICONIP’06); independent transformations in deterministic systems (Janzing et al., AI’12 & Daniusis et al., UAI’10); exogeneity (Zhang et al., TARK’15); independent changes (a generalized notion of invariance) in nonstationary/heterogeneous data (Zhang et al., IJCAI’17; Huang et al., ICDM’17; Zhang et al., arXiv’15); constraints on and estimation of functional causal models (Zhang et al., TIST'16); generalized independent noise (GIN) conditions (including Triad conditions) for estimating linear, non-Gaussian hidden causal representations (Xie et al., NeurIPS'20; Cai et al., NeurIPS'19).
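The independent-noise principle above can be illustrated with an additive noise model: in the causal direction the regression residual is independent of the hypothesized cause, while in the anti-causal direction it is not. A minimal numpy sketch under illustrative assumptions (the polynomial regression, median-heuristic bandwidth, and HSIC-style dependence measure are our own simplifications, not the cited methods):

```python
import numpy as np

def rbf_gram(x):
    # RBF (Gaussian) Gram matrix with a median-heuristic bandwidth.
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * np.median(d[d > 0])))

def hsic(x, y):
    # Biased HSIC estimate: larger value means stronger dependence.
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(H @ rbf_gram(x) @ H @ rbf_gram(y)) / n ** 2

def anm_direction(x, y, deg=3):
    # Fit a polynomial regression in each direction and prefer the
    # direction whose residual is less dependent on the regressor.
    r_fwd = y - np.polyval(np.polyfit(x, y, deg), x)   # model X -> Y
    r_bwd = x - np.polyval(np.polyfit(y, x, deg), y)   # model Y -> X
    return "X->Y" if hsic(x, r_fwd) < hsic(y, r_bwd) else "Y->X"

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = x ** 3 + 0.1 * rng.normal(size=300)   # ground truth: X -> Y
print(anm_direction(x, y))
```

In the backward direction no additive-noise model with independent noise fits this data, which is the asymmetry the principle exploits.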
Causal discovery from various types of nonstationary and heterogeneous data: general framework for causal discovery from independent but non-identically distributed time series or multiple-domain data and beyond (Huang & Zhang et al., JMLR’20; Zhang et al., IJCAI’17; Huang et al., ICDM’17; Zhang et al., arXiv’15); learning hidden causal variables with changing distributions (Kong et al., ICML'22); causal discovery and forecasting in nonstationary environments with state-space modeling (Huang et al., ICML’19); causal discovery with fixed general nonlinear causal mechanisms and nonstationary noise (Monti et al., UAI’19); modeling and estimation of time-varying causal relations with Gaussian processes (Huang et al., IJCAI’15); multi-domain causal structure learning in linear systems with regression invariance (Ghassami et al., NIPS'17) or with independent changes (Ghassami et al., NeurIPS’18).
Functional causal model-based causal discovery: theory and methods for causal discovery based on the post-nonlinear (PNL) causal model (Zhang and Hyvärinen, UAI’09 & JMLR WCP’10 (NIPS'08 Workshop); Zhang and Chan, ICONIP’06); nonlinear causal models with additive noise and application to time series (Zhang and Hyvärinen, ECML’09); generalized score-based search for general nonlinear causal relations (Huang et al., KDD’18); cascade nonlinear additive noise models (Cai et al., IJCAI’19); causal discovery with fixed general nonlinear causal mechanisms and nonstationary noise (Monti et al., UAI’19); Triad conditions for estimating linear, non-Gaussian hidden causal representations (Cai et al., NeurIPS'19).
Causal discovery from low-resolution or partially observable time series: causal discovery in time series as constraint-based or functional causal model-based causal discovery with temporal constraints & time-delayed and instantaneous relations (Zhang et al., ECML'09; Hyvärinen et al., JMLR'10); causal discovery from subsampled or temporally aggregated data (Gong et al., ICML'15 & UAI'17); causal discovery from partially observable time series (Geiger et al., ICML'15; Salehkaleybar et al., AAAI’18).
Causal discovery in the presence of measurement error or confounders: causal discovery with linear, non-Gaussian models under measurement error (Zhang et al., UAI’18), and in both the linear-Gaussian and linear, non-Gaussian cases (Zhang et al., UAI WS’17); independence-testing-based causal discovery under measurement error with linear, non-Gaussian models (Tang et al., NeurIPS'22); learning linear, non-Gaussian causal models in the presence of latent variables (Salehkaleybar et al., JMLR’20).
Causal discovery under missing values: constraint-based causal discovery in the presence of missing values (Tu et al., AIStats’19).
Causal discovery in discrete or mixed continuous and discrete cases: causal search based on generalized score functions that apply to general nonlinear relations and mixed cases (Huang et al., KDD’18); causal discovery from discrete variables with hidden compact representations (Cai et al., NeurIPS’18).
Causal discovery under selection bias (Zhang et al., UAI’16).
Causal treatment of recommender systems (Wang et al., AAAI'18; Wang et al., NeurIPS'18).
Conditional independence tests: kernel-based conditional independence test (KCI-test) with application to causal discovery (Zhang et al., UAI’11); permutation-based kernel conditional independence test (Doran et al., UAI’14); approximate kernel-based conditional independence tests for causal discovery (Strobl et al., 2019).
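The flavor of these kernel tests can be conveyed by the simpler unconditional case: an HSIC-style statistic calibrated by permutation (the conditional versions in the cited papers require substantially more machinery, e.g. kernel regression and refined null approximations). A minimal numpy sketch with illustrative helper names and toy data:

```python
import numpy as np

def rbf_gram(x):
    # RBF Gram matrix with a median-heuristic bandwidth.
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * np.median(d[d > 0])))

def hsic(K, L):
    # Biased HSIC estimate computed from precomputed Gram matrices.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(H @ K @ H @ L) / n ** 2

def perm_independence_test(x, y, n_perm=100, seed=0):
    # Permuting y breaks any dependence, giving a null distribution
    # for the test statistic; the p-value is the fraction of permuted
    # statistics at least as large as the observed one.
    rng = np.random.default_rng(seed)
    K, L = rbf_gram(x), rbf_gram(y)
    stat = hsic(K, L)
    null = [hsic(K, L[np.ix_(p, p)])
            for p in (rng.permutation(len(x)) for _ in range(n_perm))]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y_dep = x + 0.3 * rng.normal(size=200)   # dependent pair
y_ind = rng.normal(size=200)             # independent pair
print(perm_independence_test(x, y_dep))  # should be small
print(perm_independence_test(x, y_ind))  # should not be small
```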
Domain adaptation / transfer learning, reinforcement learning, and other learning problems from a causal perspective: domain adaptation as a problem of Bayesian inference on the learned graphical representation: a principled, end-to-end framework for domain adaptation (Zhang & Gong et al., NeurIPS’20); causal and anticausal learning (Schölkopf et al., ICML’12); domain adaptation under target and conditional shift (Zhang et al., ICML’13); a general causal view of domain adaptation (Zhang et al., AAAI’15); domain adaptation with conditionally transferable components or invariant mechanisms (Gong et al., ICML’16); domain adaptation with invariant representation learning: what transformations to learn? (Stojanov et al., NeurIPS'21); data-driven approach to multiple-source domain adaptation (Stojanov et al., AIStats'19a); partial disentanglement: learning changing hidden sources for domain adaptation (Kong et al., ICML'22); adaptive reinforcement learning (Huang et al., ICLR'22; Feng et al., NeurIPS'22); unaligned image-to-image translation by learning to reweight with changing distributions of the content (Xie et al., ICCV'21); unsupervised image-to-image translation with density-changing regularization (Xie et al., NeurIPS'22); low-dimensional density ratio estimation for covariate shift correction (Stojanov et al., AIStats'19b); properties of invariant component-based domain adaptation (Zhao et al., ICML’19); geometry-consistent GANs for one-sided unsupervised domain mapping (Fu et al., CVPR’19); deep domain generalization via conditional invariant adversarial networks (Li et al., ECCV’18); domain generalization via multi-domain discriminant analysis (Hu et al., UAI’19); multi-label learning by exploiting label dependency (Zhang & Zhang, KDD’10); learning disentangled semantic representations for domain adaptation (Cai et al., IJCAI’19); causal discovery and forecasting in nonstationary environments with state-space models (Huang et al., ICML’19); attainability and optimality of certain fairness constraints (Tang & Zhang, CLeaR'22); counterfactual fairness with partially known causal graphs (Zuo et al., NeurIPS'22).
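Density-ratio-based covariate shift correction, mentioned above, rests on a simple idea: when the input distribution changes but the conditional P(Y|X) is invariant, reweighting the source loss by w(x) = p_target(x)/p_source(x) targets the risk under the target input distribution. A toy numpy sketch under illustrative assumptions (the ratio is taken in closed form because both input densities are chosen Gaussian; in practice the ratio itself must be estimated, which is the subject of the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, 500)          # source inputs
x_te = rng.normal(1.0, 1.0, 500)          # target inputs (shifted)

def true_f(x):
    # Invariant mechanism P(Y|X): y = sin(x) + noise.
    return np.sin(x)

y_tr = true_f(x_tr) + 0.1 * rng.normal(size=500)

def ratio(x, mu_tr=0.0, mu_te=1.0):
    # Density ratio p_te(x)/p_tr(x) for two unit-variance Gaussians.
    return np.exp(-(x - mu_te) ** 2 / 2 + (x - mu_tr) ** 2 / 2)

# Importance-weighted least squares with cubic polynomial features:
# each source point is weighted by sqrt(w) in the design and target.
w = ratio(x_tr)
Phi = np.vander(x_tr, 4)
beta_w = np.linalg.lstsq(Phi * w[:, None] ** 0.5,
                         y_tr * w ** 0.5, rcond=None)[0]
beta_0 = np.linalg.lstsq(Phi, y_tr, rcond=None)[0]

# Compare target-domain error with and without reweighting.
Phi_te = np.vander(x_te, 4)
err_w = np.mean((Phi_te @ beta_w - true_f(x_te)) ** 2)
err_0 = np.mean((Phi_te @ beta_0 - true_f(x_te)) ** 2)
print(err_w, err_0)
```

With a misspecified (cubic) model, the unweighted fit is tuned to the source input region and degrades where the target density concentrates, which is exactly what the reweighting corrects.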