Pin-Yu Chen (陳品諭)

Principal Research Staff Member, IBM Research AI; MIT-IBM Watson AI Lab; RPI-IBM AIRC

IBM Thomas J. Watson Research Center, NY, USA

Links: Twitter · Google Scholar profile · CV · Bio

Contact: pinyuchen.tw at gmail.com (primary reviewer account), pin-yu.chen at ibm.com

- I am a Principal Research Scientist in the Trusted AI Group and a PI of the MIT-IBM Watson AI Lab at the IBM Thomas J. Watson Research Center. I am also the Chief Scientist of the RPI-IBM AI Research Collaboration program. My recent research focuses on adversarial machine learning of neural networks for robustness and safety, and more broadly, on making AI trustworthy. Here is my bio. Check out my research vision and portfolio.

- My research contributes to the IBM Adversarial Robustness Toolbox, AI Explainability 360, AI Factsheets 360, and Watson Studio.

- I am open to collaboration with highly motivated researchers!

- I wrote the book "Adversarial Robustness for Machine Learning" with Cho-Jui Hsieh.

- Workshop organizer (selected): ICML('22,'23), KDD('19-'22), MLSys'22, NeurIPS'21

- Tutorial presenter (selected): NeurIPS'22, AAAI('24,'23,'22), CVPR('23,'21,'20), IJCAI'21, MLSS'21, ECCV'20 

- Area Chair/Senior PC: NeurIPS, ICML, AAAI, IJCAI, AISTATS, PAKDD, ICASSP, ACML

- Technical Program Committee: IEEE S&P, ACM CCS, IEEE Signal Processing (MLSP & ASPS)

- Editor: TMLR 

Recent Events

Featured Talks

Funded Research Projects

Selected Awards and Honors

New Preprints

Selected Publications

I. Adversarial Machine Learning and Robustness of Neural Networks

- Attack & Defense

- Robustness Evaluation, Verification, and Certification

- Applications of Adversarial Machine Learning (e.g., model reprogramming)

II. Foundation Models and Generative AI (e.g., PEFT, Prompting, Safety, Red-Teaming)

III. Cyber Security & Network Resilience

IV. Graph Learning and Network Data Analytics

V. Event Propagation Models in Networks

VI. Optimization Methods and Algorithms for Machine Learning and Signal Processing

VII. Interpretability, Explainability, Fairness, and Causality for Machine Learning

VIII. Deep Learning and Generalization

Technical Reports

[T11] Vijay Arya, Rachel KE Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C Hoffman, Stephanie Houde, Q Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R Varshney, Dennis Wei, and Yunfeng Zhang. “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques,”

[T10] Rise Ooi, Chao-Han Huck Yang, Pin-Yu Chen, Víctor Eguíluz, Narsis Kiani, Hector Zenil, David Gomez-Cabrero, Jesper Tegnér, “Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning”

[T9] Sijia Liu, Pin-Yu Chen, Alfred Hero, and Indika Rajapakse, “Dynamic Network Analysis of the 4D Nucleome”

[T8] Sheng-Chun Kao*, Chao-Han Huck Yang*, Pin-Yu Chen, Xiaoli Ma, and Tushar Krishna, “Reinforcement Learning based Interconnection Routing for Adaptive Traffic Optimization,” poster paper at IEEE/ACM International Symposium on Networks-on-Chip (NOCS), 2019 (*equal contribution)

[T7] Chia-Yi Hsu, Pin-Yu Chen, and Chia-Mu Yu, “Characterizing Adversarial Subspaces by Mutual Information,” poster paper at AsiaCCS, 2019

[T6] Pin-Yu Chen, Sutanay Choudhury, Luke Rodriguez, Alfred O. Hero, and Indrajit Ray, “Enterprise Cyber Resiliency Against Lateral Movement: A Graph Theoretic Approach,” technical report for a book chapter in “Industrial Control Systems Security and Resiliency: Practice and Theory,” Springer, 2019

[T5] Sijia Liu and Pin-Yu Chen, “Zeroth-Order Optimization and Its Application to Adversarial Machine Learning,” IEEE Intelligent Informatics Bulletin (invited paper)

[T4] Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh, “Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning”

[T3] Yash Sharma and Pin-Yu Chen, “Bypassing Feature Squeezing by Increasing Adversary Strength”

[T2] Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song, “Towards Mitigating Audio Adversarial Perturbations”

[T1] Pin-Yu Chen, Meng-Hsuan Sung, and Shin-Ming Cheng, “Buffer Occupancy and Delivery Reliability Tradeoffs for Epidemic Routing”

Patents

[PA43] Certification-based Robust Training by Refining Decision Boundary

[PA42] Temporal Action Localization with Mutual Task Guidance

[PA41] Self-supervised semantic shift detection and alignment

[PA40] Neural capacitance: neural network selection via edge dynamics

[PA39] Protein Structure Prediction using Machine Learning

[PA38] Testing adversarial robustness of systems with limited access

[PA37] Counterfactual Debiasing Inference for Compositional Action Recognition

[PA36] Image Grounding with Modularized Graph Attention Networks

[PA35] Compositional Action Machine Learning Mechanisms

[PA34] Determining Analytical Model Accuracy with Perturbation Response

[PA33] Model-Agnostic Input Transformation for Neural Networks

[PA32] Decentralized Policy Gradient Descent and Ascent for Safe Multi-agent Reinforcement Learning

[PA31] Embedding-Based Generative Model for Protein Design

[PA30] Distributed Adversarial Training for Robust Deep Neural Networks

[PA29] Generating Unsupervised Adversarial Examples for Machine Learning

[PA28] Self-supervised semantic shift detection and alignment

[PA27] Transfer learning with machine learning systems

[PA26] Summarizing Videos Via Side Information

[PA25] Detecting Trojan Neural Networks

[PA24] State-augmented Reinforcement Learning

[PA23] Query-based Molecule Optimization and Applications to Functional Molecule Discovery

[PA22] Efficient Search of Robust Accurate Neural Networks

[PA21] Arranging content on a user interface of a computing device

[PA20] Filtering artificial intelligence designed molecules for laboratory testing

[PA19] Training robust machine learning models

[PA18] Robustness-aware quantization for neural networks against weight perturbations

[PA17] Inducing Creativity in an Artificial Neural Network

[PA16] Interpretability-Aware Adversarial Attack and Defense Method for Deep Learning

[PA15] Mitigating adversarial effects in machine learning systems

[PA14] Designing and folding structural proteins from the primary amino acid sequence

[PA13] Contrastive explanations for images with monotonic attribute functions

[PA12] Efficient and secure gradient-free black box optimization

[PA11] Explainable machine learning based on heterogeneous data

[PA10] Computational creativity based on a tunable creativity control function of a model

[PA9] Integrated noise generation for adversarial training

[PA8] Framework for Certifying a lower bound on a robustness level of convolutional neural networks 

[PA7] Adversarial input identification using reduced precision deep neural networks

[PA6] Model agnostic contrastive explanations for structured data

[PA5] Contrastive explanations for interpreting deep neural networks

[PA4] Computational Efficiency in Symbolic Sequence Analytics Using Random Sequence Embeddings

[PA3] Graph similarity analytics

[PA2] Testing adversarial robustness of systems with limited access

[PA1] System and methods for automated detection, reasoning and recommendations for resilient cyber systems

Tutorial Presenter

Service

Editorial Board

Senior Members

Technical Committee

Area Chair/Senior PC

Featured conference reviewers

Featured journal reviewers

Mentorship

Students whose PhD thesis committees I serve on:

Internship

Fun and Proud Fact: My Erdős number is 4 (through two distinct paths)!