Ruth Fong
2020 – today
- 2023
  - [c15] Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández: "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CHI 2023: 250:1-250:17
  - [c14] Indu Panigrahi, Ryan Manzuk, Adam Maloof, Ruth Fong: Improving Data-Efficient Fossil Segmentation via Model Editing. CVPR Workshops 2023: 4829-4838
  - [c13] Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky: Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability. CVPR 2023: 10932-10941
  - [c12] Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández: Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application. FAccT 2023: 77-88
  - [c11] Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky: Gender Artifacts in Visual Datasets. ICCV 2023: 4814-4825
  - [i21] Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky: UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs. CoRR abs/2303.15632 (2023)
  - [i20] Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández: Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application. CoRR abs/2305.08598 (2023)
- 2022
  - [c10] Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky: HIVE: Evaluating the Human Interpretability of Visual Explanations. ECCV (12) 2022: 280-298
  - [e1] Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek: xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers. Lecture Notes in Computer Science 13200, Springer 2022, ISBN 978-3-031-04082-5
  - [i19] Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky: ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features. CoRR abs/2206.07690 (2022)
  - [i18] Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky: Gender Artifacts in Visual Datasets. CoRR abs/2206.09191 (2022)
  - [i17] Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky: Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability. CoRR abs/2207.09615 (2022)
  - [i16] Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández: "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CoRR abs/2210.03735 (2022)
  - [i15] Indu Panigrahi, Ryan Manzuk, Adam Maloof, Ruth Fong: Improving Fine-Grain Segmentation via Interpretable Modifications: A Case Study in Fossil Segmentation. CoRR abs/2210.03879 (2022)
  - [i14] Devon Ulrich, Ruth Fong: Interactive Visual Feature Search. CoRR abs/2211.15060 (2022)
- 2021
  - [c9] Mandela Patrick, Yuki Markus Asano, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, Andrea Vedaldi: On Compositions of Transformations in Contrastive Self-Supervised Learning. ICCV 2021: 9557-9567
  - [i13] Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky: HIVE: Evaluating the Human Interpretability of Visual Explanations. CoRR abs/2112.03184 (2021)
- 2020
  - [b1] Ruth Fong: Understanding convolutional neural networks. University of Oxford, UK, 2020
  - [c8] Diego Marcos, Ruth Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, Devis Tuia: Contextual Semantic Interpretability. ACCV (4) 2020: 351-368
  - [c7] Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi: There and Back Again: Revisiting Backpropagation Saliency Methods. CVPR 2020: 8836-8845
  - [c6] Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek: xxAI - Beyond Explainable Artificial Intelligence. xxAI@ICML 2020: 3-10
  - [c5] Iro Laina, Ruth Fong, Andrea Vedaldi: Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning. NeurIPS 2020
  - [i12] Mandela Patrick, Yuki Markus Asano, Ruth Fong, João F. Henriques, Geoffrey Zweig, Andrea Vedaldi: Multi-modal Self-Supervision from Generalized Data Transformations. CoRR abs/2003.04298 (2020)
  - [i11] Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi: There and Back Again: Revisiting Backpropagation Saliency Methods. CoRR abs/2004.02866 (2020)
  - [i10] Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian K. Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensold, Cullen O'Keefe, Mark Koren, Théo Ryffel, J. B. Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung: Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. CoRR abs/2004.07213 (2020)
  - [i9] Diego Marcos, Ruth Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, Devis Tuia: Contextual Semantic Interpretability. CoRR abs/2009.08720 (2020)
  - [i8] Iro Laina, Ruth C. Fong, Andrea Vedaldi: Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning. CoRR abs/2010.14551 (2020)
  - [i7] Kurtis Evan David, Qiang Liu, Ruth Fong: Debiasing Convolutional Neural Networks via Meta Orthogonalization. CoRR abs/2011.07453 (2020)
2010 – 2019
- 2019
  - [c4] Ruth Fong, Mandela Patrick, Andrea Vedaldi: Understanding Deep Networks via Extremal Perturbations and Smooth Masks. ICCV 2019: 2950-2958
  - [c3] Ruth Fong: Occlusions for Effective Data Augmentation in Image Classification. ICCV Workshops 2019: 4158-4166
  - [p1] Ruth Fong, Andrea Vedaldi: Explanations for Attributing Deep Neural Network Predictions. Explainable AI 2019: 149-167
  - [i6] Ruth Fong, Mandela Patrick, Andrea Vedaldi: Understanding Deep Networks via Extremal Perturbations and Smooth Masks. CoRR abs/1910.08485 (2019)
  - [i5] Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Hakan Bilen, Andrea Vedaldi: NormGrad: Finding the Pixels that Matter for Training. CoRR abs/1910.08823 (2019)
  - [i4] Ruth Fong, Andrea Vedaldi: Occlusions for Effective Data Augmentation in Image Classification. CoRR abs/1910.10651 (2019)
- 2018
  - [c2] Ruth Fong, Andrea Vedaldi: Net2Vec: Quantifying and Explaining How Concepts Are Encoded by Filters in Deep Neural Networks. CVPR 2018: 8730-8738
  - [i3] Ruth Fong, Andrea Vedaldi: Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks. CoRR abs/1801.03454 (2018)
- 2017
  - [c1] Ruth C. Fong, Andrea Vedaldi: Interpretable Explanations of Black Boxes by Meaningful Perturbation. ICCV 2017: 3449-3457
  - [i2] Ruth Fong, Walter J. Scheirer, David D. Cox: Using Human Brain Activity to Guide Machine Learning. CoRR abs/1703.05463 (2017)
  - [i1] Ruth Fong, Andrea Vedaldi: Interpretable Explanations of Black Boxes by Meaningful Perturbation. CoRR abs/1704.03296 (2017)
last updated on 2025-01-20 22:49 CET by the dblp team
all metadata released as open data under CC0 1.0 license