Show & Tell 39.2 — ConceptExplainer: TCAV, Concept Bottleneck Models & Network Dissection #795
web3guru888 started this conversation in Show and tell · Replies: 0 comments
ConceptExplainer — Algorithm Deep Dive
TCAV (Testing with Concept Activation Vectors)
Kim et al. (2018) introduced TCAV to answer: "How important is concept C to the prediction of class K?" A Concept Activation Vector (CAV) is learned as the normal to a linear classifier that separates activations of concept examples from activations of random examples at a chosen layer. The TCAV score is then the fraction of class-K inputs whose class logit has a positive directional derivative along the CAV.
Key insight: a TCAV score of 0.8 for the concept "striped" and the class "zebra" means 80% of zebra images have positive sensitivity to "striped."
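The procedure above can be sketched end to end. This is a minimal illustration with synthetic activations and synthetic gradients (real TCAV would extract both from a trained network); the dimensions, sample counts, and the `concept_dir` variable are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 16  # activation dimension (hypothetical)

# Synthetic layer activations: concept examples are shifted along a hidden direction.
concept_dir = rng.normal(size=d)
concept_acts = rng.normal(size=(100, d)) + concept_dir
random_acts = rng.normal(size=(100, d))

# 1. Train a linear classifier separating concept vs. random activations;
#    the CAV is the (normalized) normal to its decision boundary.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. Directional derivative: gradient of the class-K logit w.r.t. the layer
#    activations, dotted with the CAV. Here the gradients are synthetic,
#    biased toward the concept direction to mimic a concept-sensitive class.
grads = rng.normal(size=(50, d)) + 0.5 * concept_dir

# 3. TCAV score = fraction of class-K inputs with positive sensitivity.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

In practice one also trains multiple CAVs against different random sets and applies a statistical test to reject spurious concepts, as described in the TCAV paper.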
Concept Bottleneck Models (Koh et al. 2020)
Benefits:
1. Inherently interpretable: concept activations can be inspected directly.
2. Human intervention: experts can correct concept predictions at test time.
3. Modular: concept predictors can be swapped independently.
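A minimal sketch of the bottleneck structure and test-time intervention, using two linear stages on synthetic data. The `predict` helper and the data-generating process are illustrative assumptions, not the ConceptExplainer implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic data: inputs x, three binary concepts derived from x, and a
# label y that is a function of the concepts alone (the bottleneck premise).
X = rng.normal(size=(500, 8))
C = (X[:, :3] + 0.1 * rng.normal(size=(500, 3)) > 0).astype(int)
y = (C.sum(axis=1) >= 2).astype(int)

# Concept predictors g: x -> c (one classifier per concept).
concept_models = [LogisticRegression().fit(X, C[:, j]) for j in range(3)]

# Label predictor f: c -> y, trained on ground-truth concepts.
label_model = LogisticRegression().fit(C, y)

def predict(x, interventions=None):
    """Predict through the bottleneck; `interventions` maps concept index -> value."""
    c_hat = np.array([[float(m.predict(x)[0]) for m in concept_models]])
    if interventions:  # an expert overrides mispredicted concepts at test time
        for j, v in interventions.items():
            c_hat[0, j] = v
    return int(label_model.predict(c_hat)[0]), c_hat[0]

x_test = rng.normal(size=(1, 8))
pred, concepts = predict(x_test)                      # normal path
pred_fixed, _ = predict(x_test, interventions={0: 1})  # corrected concept 0
```

Because the label predictor only ever sees the concept vector, correcting one concept value propagates cleanly to the final prediction, which is what makes intervention well-defined in a CBM.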
Prototype-Based Explanations
Integration with Existing Phases
See issue #789 for full spec.