Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in International Conference on Neural Information Processing (ICONIP), 2022
We reduce the cost of generative rehearsal for continual learning by modulating the frequency of rehearsal based on the depth of the network.
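A minimal sketch of the scheduling idea (the exponential schedule and names here are assumptions, not the paper's exact rule): deeper parts of the network are rehearsed less often, so most steps skip the expensive generative replay for them.

```python
# Hypothetical depth-dependent rehearsal schedule: the interval between
# rehearsal updates grows with layer depth (assumed exponential here).

def rehearsal_interval(depth: int, base_interval: int = 1) -> int:
    """Steps between rehearsal updates for a layer at the given depth."""
    return base_interval * (2 ** depth)

def should_rehearse(step: int, depth: int) -> bool:
    """True if the layer at `depth` receives a rehearsal update at `step`."""
    return step % rehearsal_interval(depth) == 0

# Example: with 4 depths, depth 0 rehearses every step while depth 3
# rehearses only every 8th step, cutting generative-replay cost.
for step in range(1, 9):
    active = [d for d in range(4) if should_rehearse(step, d)]
    print(f"step {step}: rehearse depths {active}")
```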
Published in International Conference on Neural Information Processing (ICONIP), 2022
We propose a simple regularizer that selectively increases the diversity of GAN outputs where variety is desired.
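A hedged PyTorch sketch in the spirit of that description (not the paper's exact loss): a mode-seeking-style term that rewards output distance between two latents, gated by a per-sample mask marking where variety is desired.

```python
import torch

def selective_diversity_loss(G, z1, z2, cond, want_diversity, eps=1e-8):
    """G: generator; z1, z2: two latent batches; cond: conditioning;
    want_diversity: mask in [0, 1], 1 where variety is desired."""
    out1, out2 = G(z1, cond), G(z2, cond)
    # Ratio of output distance to latent distance (mode-seeking style).
    d_out = (out1 - out2).flatten(1).norm(dim=1)
    d_z = (z1 - z2).flatten(1).norm(dim=1)
    ratio = d_out / (d_z + eps)
    # Maximize the ratio only on samples where diversity is wanted.
    return -(want_diversity * ratio).mean()

# Toy usage with an identity "generator" on 2-D latents.
G = lambda z, cond: z
z1, z2 = torch.randn(4, 2), torch.randn(4, 2)
mask = torch.tensor([1.0, 1.0, 0.0, 0.0])  # diversity wanted on first two
print(selective_diversity_loss(G, z1, z2, None, mask))
```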
Published in Advances in Neural Information Processing Systems (NeurIPS), 2023
B4B is an active defense against encoder model stealing.
Published in Winter Conference on Computer Vision (WACV), 2024
We design a fair evaluation framework for membership inference on Stable Diffusion, apply existing and new attacks, and show prior setups overestimate success while true membership detection remains difficult.
Published in European Conference on Artificial Intelligence (ECAI), 2024
We present efficient model-stealing attacks tailored to inductive graph neural networks.
Published in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2025
LGR-AD models the generation process as a distributed system of interacting agents, each representing an expert diffusion model. These agents dynamically adapt to varying conditions and collaborate through a graph neural network that encodes their relationships and performance metrics.
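A structural sketch of the collaboration step (shapes and names are hypothetical): each node is an expert-diffusion agent described by a feature vector such as its performance metrics, and one round of message passing over the agent graph mixes neighbors' features before coordination decisions.

```python
import numpy as np

def message_pass(features: np.ndarray, adj: np.ndarray,
                 weight: np.ndarray) -> np.ndarray:
    """One GNN layer: mean-aggregate neighbor features, then transform."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    aggregated = (adj @ features) / deg
    return np.maximum(aggregated @ weight, 0.0)  # ReLU

# Toy graph: 3 agents, fully connected, 4-dim performance features.
rng = np.random.default_rng(0)
adj = np.ones((3, 3)) - np.eye(3)
features = rng.normal(size=(3, 4))
weight = rng.normal(size=(4, 4))
print(message_pass(features, adj, weight))
```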
Published in International Conference on Machine Learning (ICML), 2025
We show that image autoregressive models (IARs) are empirically less private than diffusion models. We introduce the first membership inference attack tailored to IARs and apply membership inference, dataset inference, and sample extraction to reveal their vulnerability.
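For illustration only, a generic membership-inference decision rule (a loss threshold); the IAR-specific attack in the paper uses stronger signals than this:

```python
import numpy as np

def mia_threshold(losses: np.ndarray, tau: float) -> np.ndarray:
    """Predict 'member' where the model's per-sample loss is below tau."""
    return losses < tau

# Toy demo: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.normal(1.0, 0.3, size=500)
nonmember_losses = rng.normal(1.6, 0.3, size=500)
tau = 1.3
tpr = mia_threshold(member_losses, tau).mean()
fpr = mia_threshold(nonmember_losses, tau).mean()
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```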
Published in ICLR Workshop on Building Trust in Language Models and Applications, 2025
We investigate whether LLMs implicitly encode safety information, introducing a training-free moderation method that leverages the hidden states of an LLM to detect unsafe inputs.
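A minimal, hypothetical sketch of training-free moderation from hidden states (the prototype construction and scoring rule are assumptions, not the paper's exact method): score an input by comparing its hidden state to prototypes built from a handful of labeled safe and unsafe prompts.

```python
import numpy as np

def prototype(states: np.ndarray) -> np.ndarray:
    """Mean hidden state over a small set of labeled prompts."""
    return states.mean(axis=0)

def unsafety_score(h: np.ndarray, proto_safe: np.ndarray,
                   proto_unsafe: np.ndarray) -> float:
    """Cosine-similarity margin: positive means closer to 'unsafe'."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(h, proto_unsafe) - cos(h, proto_safe)

# Toy usage with random vectors standing in for LLM hidden states.
rng = np.random.default_rng(0)
safe = rng.normal(size=(8, 16)); unsafe = rng.normal(size=(8, 16)) + 1.0
h = rng.normal(size=16) + 1.0  # resembles the unsafe cluster
flagged = unsafety_score(h, prototype(safe), prototype(unsafe)) > 0.0
print("flagged as unsafe:", flagged)
```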
Published in Conference on Computer Vision and Pattern Recognition (CVPR), 2025
We show that existing membership inference attacks are ineffective for large diffusion models and we propose CDI, a dataset inference approach that aggregates signals across many samples to reliably detect copyrighted training data with over 99% confidence.
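A hedged sketch of the aggregation idea behind dataset inference (CDI's actual features and statistical test may differ): per-sample membership signals are weak individually, but a one-sided test over many samples can separate a suspect set from a held-out control set with high confidence.

```python
import numpy as np
from scipy import stats

def dataset_inference(scores_suspect, scores_control, alpha=0.01):
    """One-sided Welch t-test: do suspect samples score higher overall?"""
    t, p = stats.ttest_ind(scores_suspect, scores_control,
                           equal_var=False, alternative="greater")
    return p < alpha, p

# Toy demo: members score only slightly higher per sample on average,
# yet over 1,000 samples the aggregate test is decisive.
rng = np.random.default_rng(0)
members = rng.normal(0.05, 1.0, size=1000)    # suspect (trained-on) set
nonmembers = rng.normal(0.0, 1.0, size=1000)  # held-out control set
decided, p = dataset_inference(members, nonmembers)
print(f"trained-on: {decided} (p = {p:.3g})")
```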
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk, note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.