MartinPawel/README.md

Hi there!


Hi! I am Martin - a Postdoc @ Harvard University.
I work on unlearning as well as privacy & robustness issues of foundation models.
Previously, I was a PhD student @ University of Tübingen (Germany) & interned @ JP Morgan AI Research (UK).

Pinned

  1. CounterfactualDistanceAttack

     "On the Privacy Risks of Algorithmic Recourse". Martin Pawelczyk, Himabindu Lakkaraju* and Seth Neel*. In International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, 2023.

     Jupyter Notebook

  2. In-Context-Unlearning

     "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*. ICML 2024.

     Jupyter Notebook

  3. ProbabilisticallyRobustRecourse

     "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness". M. Pawelczyk, T. Datta, J. v.d. Heuvel, G. Kasneci, H. Lakkaraju. International Conference on Learning Repr…

     Python

  4. carla-recourse/CARLA

     CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms

     Python

  5. c-chvae

     Python

  6. OpenXAI (forked from AI4LIFE-GROUP/OpenXAI)

     OpenXAI: Towards a Transparent Evaluation of Model Explanations

     JavaScript