Current Research:

I am currently a post-doctoral researcher at the University of Potsdam, Germany. I was previously a post-doctoral research fellow at the University of Tübingen, Germany, and completed my doctoral studies at the University of California, Berkeley. My research focuses on learning algorithms, particularly in the context of security-sensitive application domains. I investigate the vulnerability of learning algorithms to security threats and how resilient learning techniques can mitigate those threats. I was a co-chair of the ACM Workshop on Artificial Intelligence and Security (AISec) in 2012 and 2013 and a co-organizer of the 2012 Dagstuhl Workshop entitled “Machine Learning Methods for Computer Security”. The following are my current research interests:

Security in Machine Learning

In this project, we are studying the effect a malicious user can have on statistical learning techniques used in security-sensitive environments.
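
To make the threat concrete, here is a minimal Python sketch, assuming a toy centroid-based anomaly detector rather than any of the project's actual models: the attacker injects points that sit just inside the current decision boundary, gradually dragging the retrained centroid toward a target point until that point is no longer flagged. All data, thresholds, and names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy benign training data clustered around the origin.
    benign = rng.normal(loc=0.0, scale=1.0, size=(20, 2))

    def fit_centroid(data):
        # The "learner": the model is simply the mean of the training data.
        return data.mean(axis=0)

    def is_anomalous(point, centroid, radius=3.0):
        # Flag anything farther than `radius` from the learned centroid.
        return np.linalg.norm(point - centroid) > radius

    target = np.array([6.0, 0.0])        # point the attacker wants accepted
    print("before attack:", is_anomalous(target, fit_centroid(benign)))  # True

    # Poisoning loop: each injected point lies just inside the current
    # boundary (so it is not itself rejected at injection time) but pulls
    # the mean toward the target when the detector is retrained.
    data = benign.copy()
    for _ in range(80):
        centroid = fit_centroid(data)
        if not is_anomalous(target, centroid):
            break
        direction = (target - centroid) / np.linalg.norm(target - centroid)
        data = np.vstack([data, centroid + 2.9 * direction])

    print("after attack:", is_anomalous(target, fit_centroid(data)))     # False
    print("points injected:", len(data) - len(benign))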

SAT-based DTP

This project has focused on developing solvers for large instances of Disjunctive Temporal Problems (DTPs) by converting them into a SAT representation.
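
As a rough illustration of the underlying idea, and not the encoding actually used by our solvers, the Python sketch below treats each disjunct, a difference constraint x - y <= b, as a Boolean choice: a satisfying selection picks at least one disjunct per DTP constraint, and the selected difference constraints must be mutually consistent, which a Bellman-Ford negative-cycle check decides. A brute-force enumeration stands in for the SAT back end here, and the toy DTP instance is invented for the example.

    from itertools import product

    # A toy DTP: each constraint is a disjunction of difference constraints
    # (x, y, b) meaning  x - y <= b.  Variables are named by strings.
    dtp = [
        [("b", "a", 10), ("a", "b", -20)],   # b - a <= 10  OR  a - b <= -20
        [("c", "b", 5)],                     # c - b <= 5
        [("a", "c", -12), ("c", "a", 3)],    # a - c <= -12 OR  c - a <= 3
    ]

    def consistent(difference_constraints):
        # Bellman-Ford negative-cycle check on the constraint graph:
        # an edge y -> x with weight b encodes x - y <= b.
        nodes = {v for x, y, _ in difference_constraints for v in (x, y)}
        dist = {v: 0.0 for v in nodes}        # virtual source at distance 0
        for _ in range(len(nodes)):
            for x, y, b in difference_constraints:
                if dist[y] + b < dist[x]:
                    dist[x] = dist[y] + b
        # If any constraint can still be tightened, there is a negative
        # cycle and the selection is temporally inconsistent.
        return all(dist[y] + b >= dist[x] for x, y, b in difference_constraints)

    def solve(dtp):
        # Stand-in for the SAT search: try each way of picking one disjunct
        # per constraint and keep the first consistent selection.
        for selection in product(*dtp):
            if consistent(list(selection)):
                return selection
        return None

    print(solve(dtp))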

Clustering with Pairwise Constraints

This project explored the use of pairwise constraints between data points in clustering algorithms. Each constraint indicated whether a pair of points belonged to the same cluster or to different clusters. As several image applications have demonstrated, such constraints can substantially improve clustering quality. Our contribution was a new sampling algorithm that incorporates these constraints.
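
For illustration only, here is a minimal constrained k-means sketch in the spirit of COP-k-means; it is not the sampling algorithm developed in this project. Must-link and cannot-link pairs are given as index pairs, and each point is assigned to the nearest centroid that does not violate a constraint against an already-assigned point. The data and constraint sets are made up.

    import numpy as np

    def violates(i, cluster, labels, must_link, cannot_link):
        # True if putting point i into `cluster` breaks a pairwise
        # constraint against an already-assigned point.
        for a, b in must_link:
            if i in (a, b):
                other = b if i == a else a
                if labels[other] != -1 and labels[other] != cluster:
                    return True
        for a, b in cannot_link:
            if i in (a, b):
                other = b if i == a else a
                if labels[other] == cluster:
                    return True
        return False

    def constrained_kmeans(X, k, must_link, cannot_link, n_iter=20, seed=0):
        # Toy COP-k-means: each point takes the nearest centroid that does
        # not violate a constraint (infeasible points stay unassigned).
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        labels = np.full(len(X), -1)
        for _ in range(n_iter):
            labels[:] = -1
            for i, x in enumerate(X):
                for c in np.argsort(np.linalg.norm(centroids - x, axis=1)):
                    if not violates(i, c, labels, must_link, cannot_link):
                        labels[i] = c
                        break
            for c in range(k):
                if np.any(labels == c):
                    centroids[c] = X[labels == c].mean(axis=0)
        return labels

    # Two noisy groups, a must-link within the first group, and a
    # cannot-link across the groups.
    X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (5, 2)),
                   np.random.default_rng(2).normal(3, 0.5, (5, 2))])
    print(constrained_kmeans(X, k=2, must_link=[(0, 1)], cannot_link=[(0, 5)]))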

Adaptive Protection Environment

This project uses machine learning techniques to identify viruses in email traffic.
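
As a sketch of the general approach rather than a description of the actual system, the example below labels email messages as infected or clean with a tiny Bernoulli naive Bayes over hand-picked binary features (executable attachment, alarmist subject, unknown sender). The feature set, data, and function names are all invented for illustration.

    import math

    def features(msg):
        # Illustrative binary features extracted from an email message.
        return {
            "exe_attachment": msg["attachment"].endswith((".exe", ".scr", ".pif")),
            "urgent_subject": "urgent" in msg["subject"].lower(),
            "unknown_sender": not msg["sender_known"],
        }

    def train(messages, labels):
        # Bernoulli naive Bayes with Laplace smoothing: per-class priors
        # and per-class feature rates.
        classes = sorted(set(labels))
        names = list(features(messages[0]))
        model = {}
        for c in classes:
            msgs_c = [m for m, y in zip(messages, labels) if y == c]
            rates = {n: (sum(features(m)[n] for m in msgs_c) + 1) / (len(msgs_c) + 2)
                     for n in names}
            model[c] = (len(msgs_c) / len(messages), rates)
        return model

    def predict(model, msg):
        # Pick the class with the highest log-posterior for this message.
        f = features(msg)
        def log_posterior(c):
            prior, rates = model[c]
            return math.log(prior) + sum(
                math.log(rates[n] if f[n] else 1 - rates[n]) for n in rates)
        return max(model, key=log_posterior)

    train_msgs = [
        {"attachment": "invoice.exe", "subject": "URGENT: open now", "sender_known": False},
        {"attachment": "photo.scr", "subject": "check this out", "sender_known": False},
        {"attachment": "report.pdf", "subject": "weekly report", "sender_known": True},
        {"attachment": "notes.txt", "subject": "meeting notes", "sender_known": True},
    ]
    model = train(train_msgs, ["virus", "virus", "clean", "clean"])
    test = {"attachment": "update.exe", "subject": "urgent update", "sender_known": False}
    print(predict(model, test))   # expected: "virus"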

Duke Landmine Detection project


Talks

Below is a list of the research talks I have given, with slides where available.

Conference Talks

  1. Microbagging Estimators: An Ensemble Approach to Distance-weighted Classifiers at the 3rd Asian Conference on Machine Learning (ACML), Taipei, November 2011.
  2. Understanding the Risk Factors of Learning in Adversarial Environments at the 4th ACM Workshop on Artificial Intelligence and Security (AISec), Chicago, IL, October 2011.
  3. Classifier Evasion: Models and Open Problems at the Workshop on Privacy and Security Issues in Data Mining and Machine Learning, 2011.
  4. Near-Optimal Evasion of Convex-Inducing Classifiers at the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
  5. Open Problems in the Security of Learning at the First ACM Workshop on Security and Artificial Intelligence (AISec), Chicago, IL, November 2008. [slides]
  6. Exploiting Machine Learning to Subvert Your Spam Filter at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET'08), San Francisco, CA, April 15, 2008. [slides | audio]
  7. CircuitTSAT: A Solver for Large Instances of the Disjunctive Temporal Problem at the International Conference on Automated Planning and Scheduling (ICAPS), Sydney, Australia, 2008. [slides]
  8. Revisiting Probabilistic Models for Clustering with Constraints at the International Conference on Machine Learning (ICML), Corvallis, Oregon, 2007. [slides]
  9. Bounding an Attack's Complexity for a Simple Learning Model at the First Workshop on Tackling Computer Systems Problems with Machine Learning Techniques (SysML), Saint-Malo, France, June 2006. [slides]