I am an incoming Assistant Professor (Presidential Young Professor, PYP) in the Department of Computer Science within the School of Computing (SoC) at the National University of Singapore (NUS). Before joining NUS, I completed my PhD at the StarAI lab at the University of California, Los Angeles, where I was advised by Prof. Guy Van den Broeck. I am currently visiting Prof. Mathias Niepert’s lab at the University of Stuttgart.

🎯 Research Interests

My primary research focus is deep generative models, including diffusion models [1,2,3], probabilistic circuits [5,6,7], and variational autoencoders [4]. Beyond understanding and mitigating the fundamental challenges to strong modeling performance [1,6,8], I am especially interested in efficient exact and approximate inference with guarantees for various deep generative models, from both theoretical [9] and empirical [7,10] perspectives.

🎓 Prospective Students

I am actively looking for motivated and curious individuals to join my research group. Multiple positions are available for PhD students, postdoctoral researchers, and research interns (both on-site and remote) in my lab at NUS. If you are interested in working on topics including generative modeling, reasoning, and tractable inference, I would love to hear from you.

To apply, please send an email to anjiliu219@gmail.com. Use the subject line to specify the position you are applying for (e.g., [PhD], [Postdoc], or [Intern]), and include the following materials:

  • Your CV and transcript;
  • (Optional) A short research statement describing your interests;
  • (Optional) One research paper you have authored.

I welcome applications from individuals with diverse backgrounds and levels of experience.

[Important] For prospective PhD students interested in the Spring 2026 intake, the application deadline is June 15th, 2025. In addition to emailing me, please also submit your application through the official website.

📊 Research Directions

Generative AI has become a transformative paradigm that enables machines to produce high-quality content such as images, language, and audio. However, beyond creating fluent and coherent outputs, these systems must reason – steering their generations to satisfy specific properties. While sound reasoning techniques from classical symbolic AI can rigorously guarantee such properties, they are often computationally prohibitive and difficult to scale. As a result, many recent approaches rely on scalable yet unsound methods, such as chain-of-thought prompting, which prioritize efficiency over rigorous correctness.

My research aims to design generative AI models that serve as drop-in replacements for existing models such as autoregressive Transformers and diffusion models, with the distinguishing capability of sound reasoning. This enables high-fidelity yet controllable generation that aligns with user requests or domain constraints. Toward this goal, I pursue research in the following directions:

  • Advancing and designing tractable deep generative models. Are there generative models that are expressive enough and at the same time support sound reasoning? Perhaps surprisingly, the answer is yes! I am among the first to enhance the expressiveness of Probabilistic Circuits, a class of models known for their ability to compute probabilistic queries exactly and efficiently. My work has pushed these models from underfitting simple tabular data to achieving competitive performance with autoregressive and diffusion models on image and text modeling tasks (e.g., [8,11]). Building on this progress, I aim to further explore the frontier of tractable and expressive generative models, making them more capable and broadly applicable in reasoning-intensive domains.

  • Demonstrating the benefit of tractable reasoning. While expressiveness is often viewed as the most important aspect of generative models, I argue that their ability to reason tractably is equally critical. Specifically, in many reasoning-demanding tasks such as lossless data compression [7], controlled image generation [12], and population genetic studies [13], the capacity to compute the desired inference query is essential. When a model lacks this capability, the resulting approximation error can outweigh any gains achieved through increased expressiveness. Looking ahead, I aim to make generative models more suitable for reasoning-demanding applications, by either improving their inherent reasoning capability, or designing better exact/approximate inference algorithms.

Beyond these directions, I am also interested in several related questions at the intersection of modeling and reasoning. One example is the interplay between learning to reason and having intrinsic reasoning capabilities.

🔥 News

  • 2025.03: 🎉🎉 Can discrete diffusion models generalize well to arbitrary conditional generation tasks? We demonstrate a significant gap and propose Tracformer, an improved architecture, in our paper accepted to ICML 2025.
  • 2025.01: 🎉🎉 Our recent work on improving the few-step generation performance of discrete diffusion models has been accepted to ICLR 2025. Check it out at https://arxiv.org/pdf/2410.01949.
  • 2024.09: 🎉🎉 Check out our recent work accepted to NeurIPS 2024, which demonstrates the importance of tractable inference in offline reinforcement learning.
  • 2024.08: 🎉🎉 I gave an invited talk on Probabilistic Circuits at Prof. Steffen Staab’s group at the University of Stuttgart.
  • 2024.06: 🎉🎉 I will co-organize the Workshop on Open-World Agents at NeurIPS 2024.
  • 2024.05: 🎉🎉 Our paper describing the technical details of PyJuice is accepted to ICML 2024.

📖 Education

  • 2020.09 - 2025.06, PhD in Computer Science, UCLA, United States
  • 2015.09 - 2019.06, Bachelor’s degree in Automation, Beihang University, China

💬 Invited Talks

  • 2025.01, Towards Controllable Generative AI with Intrinsic Reasoning Capabilities, ELPIS lab, TU Munich, Germany
  • 2024.08, Scaling Up Tractable Probabilistic Circuits for Inference-Demanding Application, University of Stuttgart, Germany
  • 2023.07, Tractable Probabilistic Circuits, Dagstuhl, Germany
  • 2023.05, Scaling Up Probabilistic Circuits by Latent Variable Distillation, ICLR oral presentation
  • 2022.02, Tractable Probabilistic Circuits, Peking University, China

PyJuice

I am the main developer of PyJuice, which enables fast and scalable training and inference of Probabilistic Circuits. PyJuice has been used to train state-of-the-art PCs [8] and has supported many related projects. Feel free to give it a try!

📖 Teaching

  • Lecturer (with Mathias Niepert), Reinforcement Learning, University of Stuttgart, Spring 2025
  • Lecturer (with Mathias Niepert), Introduction to AI, University of Stuttgart, Winter 2024
  • Teaching assistant, Reinforcement Learning, University of Stuttgart, Spring 2024

💗 Services

  • PC member of ICML, NeurIPS, ICLR, AISTATS, AAAI
  • Co-organizer of the Workshop on Open-World Agents at NeurIPS 2024

πŸ“ Publications

Below is a selection of my publications. Please refer to my Google Scholar page for the full list.

Discrete Copula Diffusion

Anji Liu, Oliver Broadrick, Mathias Niepert, Guy Van den Broeck

ICLR 2025 / Paper / Code

Learning to Discretize Denoising Diffusion ODEs

Vinh Tong, Trung-Dung Hoang, Anji Liu, Guy Van den Broeck, Mathias Niepert

ICLR 2025 (Oral; top 1.8%) / Paper / Code

GROOT-2: Weakly Supervised Multi-Modal Instruction Following Agents

Shaofei Cai, Bowei Zhang, Zihao Wang, Haowei Lin, Xiaojian Ma, Anji Liu, Yitao Liang

ICLR 2025 / Paper

A Tractable Inference Perspective of Offline RL

Xuejie Liu*, Anji Liu*, Guy Van den Broeck, Yitao Liang

NeurIPS 2024 / Paper / Code

OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents

Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Xuejie Liu, Qing Li, Anji Liu, Xiaojian Ma, Yitao Liang

NeurIPS 2024 / Paper / Website / Code

Scaling Tractable Probabilistic Circuits: A Systems Perspective

Anji Liu, Kareem Ahmed, Guy Van den Broeck

ICML 2024 / Paper / Code

Image Inpainting via Tractable Steering of Diffusion Models

Anji Liu, Mathias Niepert, Guy Van den Broeck

ICLR 2024 / Paper / Code

GROOT: Learning to Follow Instructions by Watching Gameplay Videos

Shaofei Cai, Bowei Zhang, Zihao Wang, Xiaojian Ma, Anji Liu, Yitao Liang

ICLR 2024 (Spotlight; top 6.2%) / Paper / Website / Code

Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents

Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, Yitao Liang

NeurIPS 2023 (Best paper award at the TEACH workshop at ICML 2023) / Paper / Code

Understanding the distillation process from deep generative models to tractable probabilistic circuits

Xuejie Liu*, Anji Liu*, Guy Van den Broeck, Yitao Liang

ICML 2023 / Paper / Code

Out-of-distribution generalization by neural-symbolic joint training

Anji Liu, Hongming Xu, Guy Van den Broeck, Yitao Liang

AAAI 2023 / Paper / Code

Scaling up probabilistic circuits by latent variable distillation

Anji Liu*, Honghua Zhang*, Guy Van den Broeck

ICLR 2023 (Oral; top 1.8%) / Paper / Code

Sparse probabilistic circuits via pruning and growing

Meihua Dang, Anji Liu, Guy Van den Broeck

NeurIPS 2022 (Oral; top 1.9%) / Paper / Code

Tractable and Expressive Generative Models of Genetic Variation Data

Meihua Dang, Anji Liu, Xinzhu Wei, Sriram Sankararaman, Guy Van den Broeck

RECOMB 2022 / Paper / Code

Lossless compression with probabilistic circuits

Anji Liu, Stephan Mandt, Guy Van den Broeck

ICLR 2022 (Spotlight; top 5.2%) / Paper / Code

A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference

Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van den Broeck

NeurIPS 2021 (Oral; top 0.6%) / Paper / Code

Tractable regularization of probabilistic circuits

Anji Liu, Guy Van den Broeck

NeurIPS 2021 (Spotlight; top 3.7%) / Paper / Code

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration

Anji Liu, Yitao Liang, Guy Van den Broeck

AAMAS 2020 / Paper / Code

Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search

Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu

ICLR 2020 (Oral; top 1.9%) / Paper / Code