I am an incoming Assistant Professor (Presidential Young Professor, PYP) in the Department of Computer Science within the School of Computing (SoC) at the National University of Singapore (NUS). Before joining NUS, I completed my PhD at the StarAI lab at the University of California, Los Angeles, where I was advised by Prof. Guy Van den Broeck. I am currently visiting Prof. Mathias Niepert's lab at the University of Stuttgart.
Research Interests
My primary research focus is deep generative models (diffusion models [1,2,3], probabilistic circuits [5,6,7], variational autoencoders [4]). Beyond understanding and mitigating the fundamental challenges to good modeling performance [1,6,8], I am especially interested in efficient exact/approximate inference with guarantees for various deep generative models, from both theoretical [9] and empirical [7,10] perspectives.
Prospective Students
I am actively looking for motivated and curious individuals to join my research group. Multiple positions are available in my lab at NUS for PhD students, postdoctoral researchers, and research interns (both on-site and remote). If you are interested in working on topics including generative modeling, reasoning, and tractable inference, I would love to hear from you.
To apply, please send an email to anjiliu219@gmail.com. Use the subject line to specify the position you are applying for (e.g., [PhD], [Postdoc], or [Intern]), and include the following materials:
- Your CV and transcript;
- (Optional) A short research statement describing your interests;
- (Optional) One research paper you have authored.
I welcome applications from individuals with diverse backgrounds and levels of experience.
[Important] For prospective PhD students interested in the Spring 2026 intake, the application deadline is June 15th, 2025. In addition to emailing me, please also submit your application through the official website.
Research Directions
Generative AI has become a transformative paradigm that enables machines to produce high-quality content such as images, language, and audio. However, beyond creating compelling and coherent outputs, these systems must reason: they must steer their generations to satisfy specific properties. While sound reasoning techniques from classical symbolic AI can rigorously guarantee such properties, they are often computationally prohibitive and difficult to scale. As a result, many recent approaches rely on scalable yet unsound methods, such as chain-of-thought prompting, which prioritize efficiency over rigorous correctness.
My research aims to design generative AI models that serve as drop-in replacements for existing models such as autoregressive Transformers and diffusion models, with the distinguishing capability of sound reasoning. This enables high-fidelity yet controllable generation that aligns with user requests or domain constraints. Toward this goal, I pursue research in the following directions:
- Advancing and designing tractable deep generative models. Are there generative models that are expressive enough and at the same time support sound reasoning? Perhaps surprisingly, the answer is yes! I am among the first to enhance the expressiveness of Probabilistic Circuits, a class of models known for their ability to compute probabilistic queries exactly and efficiently. My work has pushed these models from underfitting simple tabular data to achieving performance competitive with autoregressive and diffusion models on image and text modeling tasks (e.g., [8,11]). Building on this progress, I aim to further explore the frontier of tractable and expressive generative models, making them more capable and broadly applicable in reasoning-intensive domains.
- Demonstrating the benefits of tractable reasoning. While expressiveness is often viewed as the most important aspect of generative models, I argue that their ability to reason tractably is equally critical. In many reasoning-demanding tasks, such as lossless data compression [7], controlled image generation [12], and population genetics studies [13], the capacity to compute the desired inference query is essential. When a model lacks this capability, the resulting approximation error can outweigh any gains achieved through increased expressiveness. Looking ahead, I aim to make generative models more suitable for such applications, either by improving their inherent reasoning capabilities or by designing better exact/approximate inference algorithms.
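To make "tractable inference" concrete, here is a toy, self-contained sketch of how a small probabilistic circuit (a weighted sum of products of univariate leaf distributions) answers a marginal query exactly with a single feed-forward pass. This example is purely illustrative: it is not drawn from any of the cited papers, and all parameter values are made up.

```python
# Toy probabilistic circuit over two binary variables X1, X2:
# a sum node (2-component mixture) over two product nodes,
# each a product of Bernoulli leaves. All numbers are illustrative.

def leaf(p, value):
    """Bernoulli leaf: returns P(X = value); value=None marginalizes X out."""
    if value is None:
        return 1.0  # a normalized Bernoulli sums to 1 over both states
    return p if value == 1 else 1.0 - p

def circuit(x1, x2):
    """Evaluate the circuit bottom-up in a single feed-forward pass."""
    comp1 = leaf(0.9, x1) * leaf(0.2, x2)  # product node, component 1
    comp2 = leaf(0.1, x1) * leaf(0.7, x2)  # product node, component 2
    return 0.6 * comp1 + 0.4 * comp2       # sum node with weights 0.6 / 0.4

# Joint query P(X1=1, X2=0): evaluate all leaves at the evidence.
p_joint = circuit(1, 0)

# Exact marginal P(X1=1): evaluate the SAME circuit with the X2 leaves
# replaced by 1 -- no enumeration over X2's states is needed.
p_marginal = circuit(1, None)
assert abs(p_marginal - (0.6 * 0.9 + 0.4 * 0.1)) < 1e-9  # = 0.58
```

The key point is that the marginal costs one circuit evaluation, linear in circuit size; for intractable models the same query generally requires summing (or sampling) over exponentially many completions.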
Beyond these directions, I am also interested in several related questions at the intersection of modeling and reasoning. One example is the interplay between learning to reason and having intrinsic reasoning capabilities.
News
- 2025.03: Can discrete diffusion models generalize well to arbitrary conditional generation tasks? We demonstrate a significant gap and propose an improved architecture, Tracformer, in our paper accepted to ICML 2025.
- 2025.01: Our recent work on improving the few-step generation performance of discrete diffusion models has been accepted to ICLR 2025. Check it out at https://arxiv.org/pdf/2410.01949.
- 2024.09: Check out our recent work accepted to NeurIPS 2024, which demonstrates the importance of performing tractable inference in offline reinforcement learning.
- 2024.08: I gave an invited talk on Probabilistic Circuits at Prof. Steffen Staab's group at the University of Stuttgart.
- 2024.06: I will co-organize the Workshop on Open-World Agents at NeurIPS 2024.
- 2024.05: Our paper describing the technical details of PyJuice has been accepted to ICML 2024.
Education
- 2020.09 - 2025.06, Computer Science PhD student at UCLA, United States
- 2015.09 - 2019.06, Bachelor's degree in Automation, Beihang University, China
Invited Talks
- 2025.01, Towards Controllable Generative AI with Intrinsic Reasoning Capabilities, ELPIS lab, TU Munich, Germany
- 2024.08, Scaling Up Tractable Probabilistic Circuits for Inference-Demanding Application, University of Stuttgart, Germany
- 2023.07, Tractable Probabilistic Circuits, Dagstuhl, Germany
- 2023.05, Scaling Up Probabilistic Circuits by Latent Variable Distillation, ICLR oral presentation
- 2022.02, Tractable Probabilistic Circuits, Peking University, China
PyJuice
I am the main developer of PyJuice, which enables fast and scalable training and inference of Probabilistic Circuits. PyJuice has been used to train state-of-the-art PCs [8] and has supported many related projects. Feel free to give it a try!
Teaching
- Lecturer (with Mathias Niepert), Reinforcement Learning, University of Stuttgart, Spring 2025
- Lecturer (with Mathias Niepert), Introduction to AI, University of Stuttgart, Winter 2024
- Teaching assistant, Reinforcement Learning, University of Stuttgart, Spring 2024
Services
- PC member of ICML, NeurIPS, ICLR, AISTATS, AAAI
- Co-organizer of the Workshop on Open-World Agents at NeurIPS 2024
Publications
Below is a list of selected publications. Please refer to my Google Scholar page for the full list of publications.

A Tractable Inference Perspective of Offline RL
Xuejie Liu*, Anji Liu*, Guy Van den Broeck, Yitao Liang
NeurIPS 2024 / Paper / Code

OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents
Zihao Wang, Shaofei Cai, Zhancun Mu, Haowei Lin, Ceyao Zhang, Xuejie Liu, Qing Li, Anji Liu, Xiaojian Ma, Yitao Liang
NeurIPS 2024 / Paper / Website / Code

Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents
Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, Yitao Liang
NeurIPS 2023 (Best paper award at the TEACH workshop at ICML 2023) / Paper / Code

Sparse probabilistic circuits via pruning and growing
Meihua Dang, Anji Liu, Guy Van den Broeck
NeurIPS 2022 (Oral; top 1.9%) / Paper / Code

Tractable and Expressive Generative Models of Genetic Variation Data
Meihua Dang, Anji Liu, Xinzhu Wei, Sriram Sankararaman, Guy Van den Broeck
RECOMB 2022 / Paper / Code

A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference
Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van den Broeck
NeurIPS 2021 (Oral; top 0.6%) / Paper / Code

Tractable regularization of probabilistic circuits
Anji Liu, Guy Van den Broeck
NeurIPS 2021 (Spotlight; top 3.7%) / Paper / Code

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration
Anji Liu, Yitao Liang, Guy Van den Broeck
AAMAS 2020 / Paper / Code