I am a final-year Ph.D. student in the StarAI Lab at the University of California, Los Angeles (UCLA), advised by Prof. Guy Van den Broeck. I am currently visiting Prof. Mathias Niepert’s lab at the University of Stuttgart.

🎯 Research highlights

My primary research focus is deep generative models (diffusion models [1,2,3], probabilistic circuits [5,6,7], and variational autoencoders [4]). Beyond understanding and mitigating the fundamental challenges to good modeling performance [1,6,8], I am especially interested in efficient exact and approximate inference with guarantees for various deep generative models, from both theoretical [9] and empirical [7,10] perspectives.

📊 Research directions

  • What requirements must we impose on the structure of deep generative models so that they can accurately and efficiently answer various probabilistic queries, such as computing arbitrary marginal probabilities or determining the MAP state? A useful theoretical framework for studying these questions is Probabilistic Circuits (PCs), which lets us establish necessary and sufficient conditions on their structures for answering specific probabilistic queries [9] (see the first sketch after this list). I am among the first to significantly improve the empirical performance of PCs, taking them from struggling on MNIST to being competitive with variational autoencoders and even diffusion models on ImageNet32 [8]. To facilitate large-scale training and inference of PCs, I developed the Python package PyJuice, which is orders of magnitude faster than all previous implementations.

  • Do PCs matter in the era of large language models? Much of my research demonstrates that, with the ability to efficiently perform exact probabilistic inference, PCs can achieve better empirical performance on various downstream tasks, whether used alone [7] or combined with other deep generative models [2,11].

  • How does the idea of tractable modeling generalize to other types of deep generative models? In our recent work, we generalize the idea of combining a copula model with a set of target univariate marginals to solve a fundamental problem that prevents discrete diffusion models from achieving strong performance with fewer steps: they fail to capture dependencies between output variables at each denoising step [1] (see the second sketch after this list).
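
To make the first bullet concrete, here is a toy sketch of single-pass marginal inference in a smooth and decomposable PC. It is written in plain NumPy with made-up numbers, purely to illustrate the principle; it is not PyJuice’s API.

```python
import numpy as np

# Toy PC over two binary variables: a sum (mixture) node over two
# product components, p(x1, x2) = sum_k w[k] * f[k](x1) * g[k](x2).
w = np.array([0.4, 0.6])     # sum-node weights
f = np.array([[0.9, 0.1],    # f[k](x1): row k is component k's leaf over x1
              [0.2, 0.8]])
g = np.array([[0.3, 0.7],    # g[k](x2)
              [0.6, 0.4]])

def pc_eval(x1=None, x2=None):
    # Marginalizing a variable = replacing its leaf outputs with 1,
    # so any marginal costs one bottom-up pass, same as a full query.
    lf = f[:, x1] if x1 is not None else np.ones(2)
    lg = g[:, x2] if x2 is not None else np.ones(2)
    return float(w @ (lf * lg))

print(pc_eval(x1=1, x2=0))  # joint probability p(X1=1, X2=0) = 0.3
print(pc_eval(x1=1))        # exact marginal p(X1=1) = 0.52, same single pass
```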

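And for the third bullet, a toy illustration of the failure mode and the fix: sampling each output variable independently from its per-step marginals destroys correlations, while re-coupling those marginals with a dependence model restores them. Here I use iterative proportional fitting as a simple stand-in for the copula construction of [1]; this is for illustration only, not the paper’s algorithm.

```python
import numpy as np

# Dependence model over two binary tokens: positively correlated,
# but its marginals do not match the denoiser's current predictions.
q = np.array([[0.40, 0.10],
              [0.10, 0.40]])

# Per-token marginals predicted by the denoiser at this step.
p1 = np.array([0.7, 0.3])
p2 = np.array([0.6, 0.4])

# Independent per-token sampling draws from the product of marginals,
# which loses the coupling entirely:
print(np.outer(p1, p2))  # [[0.42 0.28], [0.18 0.12]]

# Re-couple: impose the target marginals on q via iterative
# proportional fitting (a stand-in for the copula combination in [1]).
for _ in range(100):
    q *= (p1 / q.sum(axis=1))[:, None]  # match row marginals
    q *= (p2 / q.sum(axis=0))[None, :]  # match column marginals
print(q)  # joint with marginals p1, p2 that keeps the positive correlation
```
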
PyJuice

I am the main developer of PyJuice, which enables fast and scalable training and inference of Probabilistic Circuits. PyJuice has been used to train state-of-the-art PCs [8] and has supported many related projects. Feel free to give it a try!
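
If you would like a feel for the workflow before diving into the docs, here is a minimal quick-start sketch. The call names (`juice.structures.HCLT`, `juice.compile`) follow my recollection of the repository’s README and may differ between versions, so treat the repo itself as the authoritative reference.

```python
import torch
import pyjuice as juice  # pip install pyjuice

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in data: 10k samples of 784 categorical variables (e.g., MNIST pixels).
data = torch.randint(0, 256, (10000, 784), device=device)

# Build a hidden Chow-Liu tree (HCLT) PC structure and compile it into a
# GPU-friendly tensor circuit. (Call names assumed from the repo README.)
ns = juice.structures.HCLT(data.float(), num_latents=128)
pc = juice.compile(ns)
pc.to(device)

lls = pc(data[:512])   # per-sample log-likelihoods in one forward pass
lls.mean().backward()  # backward pass computes the flows used for EM/SGD updates
```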

🔥 News

  • 2024.10: 🎉🎉 Our recent work on improving the few-step generation performance of discrete diffusion models is now on arXiv. Check it out at https://arxiv.org/pdf/2410.01949.
  • 2024.09: 🎉🎉 Check out our recent work accepted to NeurIPS 2024, which demonstrates the importance of performing tractable inference in offline reinforcement learning.
  • 2024.08: 🎉🎉 I gave an invited talk on Probabilistic Circuits at Prof. Steffen Staab’s group at the University of Stuttgart.
  • 2024.06: 🎉🎉 I will co-organize the Workshop on Open-World Agents at NeurIPS 2024.
  • 2024.05: 🎉🎉 Our paper describing the technical details of PyJuice has been accepted to ICML 2024.

📖 Education

  • 2020.09 - present, Computer Science Ph.D. student at UCLA, United States
  • 2015.09 - 2019.06, Bachelor’s degree in Automation, Beihang University, China

💬 Invited Talks

  • 2024.08, Scaling Up Tractable Probabilistic Circuits for Inference-Demanding Applications, University of Stuttgart, Germany
  • 2023.07, Tractable Probabilistic Circuits, Dagstuhl, Germany
  • 2023.05, Scaling Up Probabilistic Circuits by Latent Variable Distillation, oral presentation at ICLR 2023
  • 2022.02, Tractable Probabilistic Circuits, Peking University, China

📖 Teaching

  • Teaching assistant, Reinforcement Learning, University of Stuttgart, Spring 2024
  • Lecturer (with Mathias Niepert), Introduction to AI, University of Stuttgart, Winter 2024

💗 Services

  • Program committee member (reviewer) for ICML, NeurIPS, ICLR, AISTATS, and AAAI
  • Co-organizer of the Workshop on Open-World Agents at NeurIPS 2024

📝 Publications

Below is a list of selected publications; please refer to my Google Scholar page for the full list.

arXiv

Discrete Copula Diffusion

Anji Liu, Oliver Broadrick, Mathias Niepert, Guy Van den Broeck

arXiv / Paper

NeurIPS 2024

A Tractable Inference Perspective of Offline RL

Xuejie Liu*, Anji Liu*, Guy Van den Broeck, Yitao Liang

NeurIPS 2024 / Paper

NeurIPS 2024

OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents

Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Xuejie Liu, Qing Li, Anji Liu, Xiaojian Ma, Yitao Liang

NeurIPS 2024 / Paper / Website / Code

ICML 2024

Scaling Tractable Probabilistic Circuits: A Systems Perspective

Anji Liu, Kareem Ahmed, Guy Van den Broeck

ICML 2024 / Paper / Code

ICLR 2024

Image Inpainting via Tractable Steering of Diffusion Models

Anji Liu, Mathias Niepert, Guy Van den Broeck

ICLR 2024 / Paper / Code

ICLR 2024

GROOT: Learning to Follow Instructions by Watching Gameplay Videos

Shaofei Cai, Bowei Zhang, Zihao Wang, Xiaojian Ma, Anji Liu, Yitao Liang

ICLR 2024 (Spotlight; top 6.2%) / Paper / Website / Code

NeurIPS 2023

Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents

Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, Yitao Liang

NeurIPS 2023 (Best paper award at the TEACH workshop at ICML 2023) / Paper / Code

ICML 2023

Understanding the Distillation Process from Deep Generative Models to Tractable Probabilistic Circuits

Xuejie Liu*, Anji Liu*, Guy Van den Broeck, Yitao Liang

ICML 2023 / Paper / Code

AAAI 2023

Out-of-Distribution Generalization by Neural-Symbolic Joint Training

Anji Liu, Hongming Xu, Guy Van den Broeck, Yitao Liang

AAAI 2023 / Paper / Code

ICLR 2023

Scaling Up Probabilistic Circuits by Latent Variable Distillation

Anji Liu*, Honghua Zhang*, Guy Van den Broeck

ICLR 2023 (Oral; top 1.8%) / Paper / Code

NeurIPS 2022

Sparse Probabilistic Circuits via Pruning and Growing

Meihua Dang, Anji Liu, Guy Van den Broeck

NeurIPS 2022 (Oral; top 1.9%) / Paper / Code

RECOMB 2022

Tractable and Expressive Generative Models of Genetic Variation Data

Meihua Dang, Anji Liu, Xinzhu Wei, Sriram Sankararaman, Guy Van den Broeck

RECOMB 2022 / Paper / Code

ICLR 2022

Lossless Compression with Probabilistic Circuits

Anji Liu, Stephan Mandt, Guy Van den Broeck

ICLR 2022 (Spotlight; top 5.2%) / Paper / Code

NeurIPS 2021

A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference

Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van den Broeck

NeurIPS 2021 (Oral; top 0.6%) / Paper / Code

NeurIPS 2021

Tractable Regularization of Probabilistic Circuits

Anji Liu, Guy Van den Broeck

NeurIPS 2021 (Spotlight; top 3.7%) / Paper / Code

AAMAS 2020

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration

Anji Liu, Yitao Liang, Guy Van den Broeck

AAMAS 2020 / Paper / Code

ICLR 2020

Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search

Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu

ICLR 2020 (Oral; top 1.9%) / Paper / Code