Visualizing and Interacting with Concept Hierarchies

03/11/2013
by   Michel Crampes, et al.

Concept Hierarchies and Formal Concept Analysis (FCA) are theoretically well-grounded and extensively tested methods. They rely on line diagrams called Galois lattices for visualizing and analysing object-attribute sets. Galois lattices are visually appealing and conceptually rich for experts, but their concept-oriented overall structure brings important drawbacks: non-experts find it hard to interpret what they show, navigation is cumbersome, interaction is poor, and scalability is a severe bottleneck for visual interpretation even for experts. In this paper we introduce semantic probes as a means to overcome many of these problems and to extend the usability and application possibilities of traditional FCA visualization methods. Semantic probes are visual, user-centred objects that extract and organize reduced Galois sub-hierarchies. They are simpler and clearer, and they provide better navigation support through a rich set of interaction possibilities. Since probe-driven sub-hierarchies are limited to the user's focus, scalability stays under control and interpretation is facilitated. After several successful experiments, a number of applications are being developed; the remaining problem is finding a compromise between simplicity and conceptual expressivity.
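In FCA, each formal concept of a binary object-attribute context is an (extent, intent) pair: a maximal set of objects together with the maximal set of attributes they all share; the set of all such pairs forms the Galois lattice the abstract refers to. As a rough illustration (the toy context and the naive enumeration below are illustrative assumptions, not taken from the paper), one can sketch the concept derivation like this:

```python
from itertools import combinations

# Toy object-attribute context (illustrative assumption, not from the paper).
context = {
    "cat":    {"fur", "four_legs"},
    "dog":    {"fur", "four_legs", "barks"},
    "parrot": {"feathers", "flies"},
}
attributes = set().union(*context.values())

def common_attributes(objs):
    """Intent: attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

def objects_with(attrs):
    """Extent: objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def formal_concepts():
    """Naively enumerate all (extent, intent) pairs of the context
    by closing every subset of objects (fine for toy contexts only)."""
    concepts = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            intent = common_attributes(set(combo))
            extent = objects_with(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "->", sorted(intent))
```

Ordering these concepts by extent inclusion yields the Galois lattice; the scalability problem the abstract describes stems from the fact that the number of concepts can grow exponentially with the context size, which is what probe-driven sub-hierarchies are meant to keep in check.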


