The DeepTree Exhibit: Visualizing the Tree of Life to Facilitate Informal Learning

Florian Block, Michael S. Horn, Brenda Caldwell Phillips, Judy Diamond, E. Margaret Evans, and Chia Shen, Senior Member, IEEE

Fig. 1. The DeepTree Exhibit: main view of the tree of life, image reel, and action button (left). Three kids collaboratively exploring the DeepTree (middle). Special learning activity for common descent, inheritance, and traits (right).

Abstract—In this paper, we present the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life, developed to facilitate collaborative learning of evolutionary concepts. We describe an iterative process in which a team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. We present the importance of designing the interactions and the visualization hand-in-hand in order to facilitate active learning. The outcome of this process is a fractal-based tree layout that reduces visual complexity while being able to capture all life on earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. We present an evaluation showing that the large dataset encouraged free exploration, triggers emotional responses, and facilitates visitor engagement and informal learning.

Index Terms—Informal science education, collaborative learning, large tree visualizations, multi-touch interaction.

• Florian Block, Brenda Caldwell Phillips, and Chia Shen are with Harvard University. E-mail: {fblock, bcphillips, cshen}@seas.harvard.edu.
• Judy Diamond is with University of Nebraska. E-mail: jdiamond1@unl.edu.
• E. Margaret Evans is with University of Michigan. E-mail: evansem@umich.edu.
• Michael S. Horn is with Northwestern University. E-mail: michael-horn@northwestern.edu.

Manuscript received 31 March 2012; accepted 1 August 2012; posted online 14 October 2012; mailed on 5 October 2012. For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org.

1 INTRODUCTION

The design of information visualizations to support science learning in museums must strike a balance between scientific validity to educate, artistry to entice, and playfulness to engage. This form of visualization in public spaces is different from casual information visualization [46] in that museums often have specific scientific learning goals from the onset, requiring close collaboration among experts in a variety of disciplines. It also differs from visualizations designed for domain experts or analysts [24] in that the users of the visualization system are mostly novices with a diverse range of experiences and backgrounds.

In this paper, we present the DeepTree exhibit (cf. Fig. 1), an interactive visualization of the Tree of Life that illustrates the phylogenetic relationship of all life on earth. The DeepTree is part of a larger NSF-funded project called Life on Earth [1] with the aim of helping the public learn key concepts of biological evolution in an informal science education (ISE) setting. The project is multi-disciplinary, consisting of two computer scientists, one learning scientist, two cognitive developmental psychologists, one museum curator, and five external science advisors. The DeepTree offers visitors an interactive visualization of the Tree of Life as a vehicle to grasp important evolutionary concepts including relatedness, biodiversity, common descent, and shared traits. The exhibit utilizes a multi-touch tabletop, and is designed for collaborative learning in museums. The DeepTree was tested and evaluated at the Harvard Museum of Natural History throughout the design and development process (from April 2011 to March 2012).

Our contribution in this work is three-fold: First, we present an analysis of the problem domain, deriving challenges and key questions. Second, we describe the design and implementation of a fractal tree layout algorithm based on a relative coordinate system and a custom-built rendering engine that provides seamless navigation through tree structures of unlimited size and depth. We developed this tree layout in conjunction with an interaction system and a multi-touch interface that allow a lay person to freely explore the tree, as well as to navigate to various points of interest enriched with learning content. The design of our exhibit is based on principles of informal science learning. Thirdly, we present insights collected from 18 months of iterative design, testing, and evaluation. We provide concrete lessons learned and guidelines for visualization and UI design for informal science education settings. We also highlight challenges faced when applying information visualization methodology to informal learning designs, and provide indicators demonstrating that the size of the tree structure increases engagement, triggers emotional responses, and may provide a beneficial context for visitor learning.

2 THREE CHALLENGES

Four distinctive groups of stakeholders are usually involved in the development of an informal science exhibit: (1) designers and developers, (2) scientists and museum curators, (3) end users, and (4) learning researchers and evaluators. In our case the designers are information visualization and human-computer interaction specialists; the scientists are biologists; the users are museum visitors; and the evaluators are learning scientists and cognitive psychologists. This inter-disciplinary scenario gives rise to a set of design challenges for InfoVis.

2.1 Challenge 1: Users are not domain experts.

With the ubiquity of increasingly large hierarchical data sets, a significant body of research has focused on the visualization of large tree structures. A series of challenges have been addressed to optimize usage of screen real-estate [19,42,43,45,49,50,53], to provide a good global overview of the complex dataset [19,54,50], to facilitate effective navigation [34,36,43,45,50,51,52], and to support the comparison and analysis of large tree structures [21,30,40,51]. The majority of the proposed solutions have been driven by the requirements of expert audiences and professional domain tasks. As the DeepTree is for a lay audience in informal science education, our tree visualization was subjected to different design criteria and required different solutions. First, visitors cannot be assumed to be familiar with the underlying dataset (even the phylogenetic tree representation may be foreign to visitors); thus, in contrast to maximizing the amount of elements on the screen, we must prioritize aesthetics to attract visitors [25], and provide visual clarity so visitors can easily recognize the tree itself, its visual components, and its meaning in the context of evolution.
Secondly, instead of navigating the tree as efficiently as possible, we want to utilize animated navigation techniques that purposefully unfold each branching structure in order to convey the sense of scale of the tree of life and of biodiversity. Thirdly, interaction is needed to systematically and subtly guide visitors in the learning and discovery process, in addition to affording walk-up-and-use as described in [25].

2.2 Challenge 2: Domain experts are not users.

Using information visualization for science learning also requires us to take extra care with the visual representation of the tree structure layout itself. Biologists illustrate phylogenetic trees in many different ways, ranging from a ladder or diagonal branching pattern to a rectangular tree or a circular tree [20], depending on whether they are drawing on a blackboard, sketching on paper, making a PowerPoint slide, or writing a scientific paper. For them, convenience (e.g., sketching) and space limitations (e.g., to publish a tree in a journal paper) might dictate the layout of the tree. Our biological science advisers can validate whether the phylogenetic trees we visualize are scientifically correct, but they do not have the expertise to fully judge whether certain layouts are optimal for a learner. In this case, we need to base our visual designs on recent research on novice understanding of phylogeny [11,20].

2.3 Challenge 3: Guided free-choice interaction.

In the museum learning literature, Planned Discovery (PD) and Active Prolonged Engagement (APE) are two interactive exhibit paradigms pioneered and carefully examined by the San Francisco Exploratorium [29]. PD exhibits lead visitors through a set of prescribed scientific phenomena with directive labels, while APE exhibits are open-ended and experiential, enabling visitors to become active participants in the construction of meaning through the use of an exhibit. In this problem space, interaction design plays a central role for information visualization in providing visitors with learning opportunities. Getting it right can give the visitor the ability to actively engage in the interactive visualization. One challenge is to not only allow visitors to interact with the encoded data, but also to enable multiple ways to move freely through the visualization in order to understand the learning content. Another challenge is to design the interaction to enable learning at user-selected levels, so that the system provides guidance for novices and depth for experts, while leading both to new inquiries and discoveries. In the rest of this paper, we summarize how we addressed these three challenges in the design, development, and evaluation of the DeepTree exhibit.

3 RITE FOR DESIGN AND EVALUATION

While information visualization [39], software engineering [15], exhibit design [29], and learning sciences [18] all advocate an iterative, or "spiral", approach to designing interactive systems, no existing methodology sufficiently addresses our three challenges. In the absence of single disciplinary experts who can continuously evaluate the efficacy of our visualization, we needed a process that could equally involve input from all four groups of stakeholders. We utilized an adapted process of Rapid Iterative Testing and Evaluation [38] (RITE) to drive the development of the DeepTree exhibit. RITE proposes rapid iterations of design driven by expert observations, in a fashion similar to formative evaluations.
We were able to exhibit our interactive prototype in our partner museum, and let museum visitors interact on a walk-up-and-use basis. We obtained IRB approval to collect field notes and record video of visitor interaction for internal analysis (a sign pointed out that video recording was in progress). Our formative evaluator also obtained feedback from the visitors on their experience with our exhibit as they were leaving. Deployments between iterations varied in length, but were typically one week and involved approximately 20-40 users. A new iteration was begun when it became clear that our design goals were not yet met, or when software bugs prevented meaningful observations. Twelve iterations were conducted over the course of a year, with over 250 visitors observed in total.

RITE has a series of advantages. First, it allowed us to run all experiments in the museum setting where our exhibit would be installed, with visitors who spontaneously interacted with our system – which is key to ensuring ecological validity. Secondly, in contrast to controlled experiments, the methodology is robust to (and in fact encourages) changes in design during the process, allowing us to quickly and continually improve user experience and learning content and make remedial modifications, rather than continue throughout a study period with a possibly flawed system (as advocated by [55]). Thirdly, collected observational data (video/audio recordings) can be independently analyzed by our computer scientists regarding UI usage, by our learning psychologists to extract indicators for learning outcomes, and by our curator to judge issues related to user engagement.

We extended RITE with two additional assessment metrics: 1) measures of active prolonged engagement (APE) [26,33], in which visitor engagement is derived based on dwell times and other interaction measures; and 2) discourse analyses, in which conversations of selected groups were transcribed, coded, and analyzed for learning indicators. While these metrics take more time than expert observations (but less than full learning studies), they can be flexibly integrated into the RITE process.

4 DOMAIN PROBLEM AND DATA CHARACTERIZATION

The requirements for the DeepTree exhibit are to create a collaborative (R1) and interactive (R2) exhibit that uses a visualization of the Tree of Life (R3) as a platform to help the wider public to learn about evolution (R4). The specific learning goals were further specified by our learning scientists as follows:

LG1 All life on earth is related.
LG2 Biodiversity on earth is vast.
LG3 Relatedness comes from common descent.
LG4 Species inherit shared traits from common ancestors.
LG5 Evolution is ongoing and happens over very long periods of time.

To inform our design, we translated the requirements and learning goals into a set of more specific design goals:

G1 The tree rendering should be a) visually appealing, b) clearly show its components and minimize visual complexity, and c) have an easy to use interface.
G2 Allow visitors to freely and seamlessly explore the tree of life.
G3 Provide multiple entry points to engage with specific learning content.
G4 Encourage multiple visitors to collaborate and work together when interacting with the exhibit.
G5 The tree conveys the idea that a) its leaves represent "life", b) the tree includes "all" life, and c) the tree's branching pattern connects all leaves.
G6 The tree conveys its enormous size.
G7 Any two leaf nodes "meet" on an internal node that is the deepest common "parent" within the tree structure (the most recent, in terms of time).
G8 Internal nodes represent evolutionary innovations (traits) that through inheritance are passed down to all their children.
G9 Time should be represented in the tree.

To build an interactive visualization that can achieve these learning goals requires a substantial dataset. We used data from a combination of four publicly available biological databases: (1) The Tree of Life Web Project (tolweb.org) [7] database is our primary phylogenetic tree dataset. It represents the result of years of continuing collaboration by hundreds of scientists across the world. The data contains the phylogenetic tree itself, describing over 70,000 species (terminal branches) and 20,000 internal nodes with 123 levels of depth – defining the relationship amongst all species. The web portal [7] also provides about 8,000 images for selected species. The database lacks many common names of species, species images, and times of divergence that are all required for our learning goals. We therefore further collected and merged data from three additional datasets. (2) Eol.org [2] catalogues over 1.6 million species along with imagery and common names. (3) NCBI [5] is a large database containing over 347,649 taxa and a large set of common names. (4) Timetree.org [8] provides estimates for the time of divergence of any two species, from which we derive approximations for our internal nodes. Through the respective web APIs of these databases, we walked through our base tree from tolweb.org and queried an additional 10,000 common names, 40,000 images, and 250 timestamps for selected important internal nodes within our tree.
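To make this enrichment step concrete, the sketch below shows a depth-first walk over a base tree (as nested dicts with name/children keys) that fills in missing common names and images from supplementary sources. This is a minimal reconstruction of the process described above; the endpoint URLs and the response field are placeholders, not the actual tolweb.org, eol.org, NCBI, or timetree.org APIs.

```python
# Minimal sketch of the enrichment walk. Endpoints and the "value" response
# field are hypothetical placeholders, not the real supplementary APIs.
import json
import urllib.parse
import urllib.request

SOURCES = {
    "common_name": "https://example.org/api/common-name?species={}",
    "image": "https://example.org/api/image?species={}",
}

def lookup(kind, scientific_name):
    """Query one supplementary source; return None on any failure."""
    url = SOURCES[kind].format(urllib.parse.quote(scientific_name))
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("value")  # assumed response shape
    except Exception:
        return None

def enrich(node):
    """Depth-first walk over the base tree, merging in missing attributes."""
    for attr in ("common_name", "image"):
        if node.get(attr) is None:
            node[attr] = lookup(attr, node["name"])
    for child in node.get("children", []):
        enrich(child)

# Usage: enrich(base_tree) after loading the tolweb.org tree as nested dicts.
```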
5 RELATING TO PRIOR TREE VISUALIZATIONS

A list of around 230 tree visualizations can be found in [35]. The most relevant to our work are visualizations a) tailored for lay audiences, b) designed for large trees, and c) visualizations of phylogenetic trees.

5.1 Tree visualization for the general public

Static phylogenetic trees are ubiquitously used in museums and schoolbooks ([37] provides a good overview). Most strikingly, educational tree visualizations make heavy use of color, rich imagery, and easy to understand labels, which are also reflected in our visualization. For the purpose of illustration, most of these examples either capture only a small selection of species (contrasting with G5 & G6), or show complexity but do not go down to the species level [10]. Another issue is that "organic"-looking tree illustrations do not map time / succession of nodes to a clear axis [10], making it hard to trace relationships between species (G7, G8), as well as to extract the direction of time (G9). EMDialog [25] shows an interactive tree for an art museum – and thus targets the general public – but its layout and interaction techniques were not designed for large phylogenetic trees, nor for multi-user interaction. Visualizations of family trees targeted at lay people [9,17] make relationships between nodes in the tree very apparent, but are not designed to scale up to thousands of nodes. Involv [33] is an interactive visualization of the Linnaean Taxonomy, which contains over 1.6 million species. Involv utilizes Voronoi treemaps, which, just like all other treemaps, suffer from the fact that the underlying hierarchy is hard to discern [45,48,54]. Generally, we considered visualizations based on treemaps unsuited for our exhibit, as a central requirement was to clearly visualize the nested branching relationships between all species (G6, G8 & G9).

5.2 Visualizing large trees

To a lay observer, many existing visualizations of large trees do not "look like trees". This includes radial and hyperbolic trees [12,21,36,54], treemaps [33,49,53], and other ways of depicting hierarchies that are prone to look "unconventional" to the non-expert, such as Information Pyramids [12], visualizations of hierarchical blood vessels [13], "island"-like 3D visualizations [14], point-based tree representations [50], or FlexTrees [51]. Another problematic artifact of rendering large trees is that, when zoomed out, large portions of the tree structure can merge into solid areas [34,40], making it hard for a lay person to recognize the tree or parse its structure. From a learning perspective, our visualization should be immediately and continuously recognizable as a tree structure; further, its branching pattern and internal relationships should be easy to discern (G1b). To satisfy both aesthetics (G1a) and visual clarity (G1b), we also dismissed literal "botanic" visualizations of trees [30] and settled on simple, but aesthetically pleasing, Bezier curves, similar to [31]. Much work is concerned with optimizing screen usage [19,42,45,50,53]. While this is desirable from the standpoint of an expert, for a layperson it constitutes overwhelming complexity (G1b). Also, instead of optimizing the performance of navigation and node retrieval [34,45,52], we chose a fly-through algorithm that purposefully unfolds the tree branching structure in sequence in order to create a sense of the scale of the underlying dataset (G6). The stark contrast between our requirements and those of expert audiences is reflected in the layout of the DeepTree, which in prior work was discarded for its inefficiency regarding space use and navigation (cf. SpaceTree [45], Fig. 9).

5.3 Visualizations of large phylogenies

Visualizations of large phylogenetic trees appear to be exclusively designed for professional audiences and domain tasks, such as those that visualize multiple traits [31], compare large trees [40], visualize clusters [21], and provide tree editing [34]. None of these examples seemed to provide solutions that catered to our requirements. Navigation of the tree of life has been a long-standing challenge [44].

6 VISUAL ENCODING AND INTERACTION DESIGN

While presenting visual encoding and interaction design separately, both components were tightly entwined throughout the development process in order to ensure key learning steps for visitors. We will highlight these inter-dependencies throughout the subsections.

6.1 Rendering the Tree of Life

Our DeepTree visual design is based on recent research examining how learners comprehend phylogenetic trees that have been illustrated in textbooks [20] and museum exhibits [11], pinpointing problems leading to misconceptions and misinterpretations of the underlying scientific hypotheses.
In concurrence with related work [11,20], we found that different tree depictions may induce different conceptual interpretations of the underlying structure. First, the way we illustrated the branches had an impact on the way visitors perceived the tree. We started with a very abstract rectangular layout (Fig. 2a), which did not satisfy our need for aesthetics (G1a); on the other end of the spectrum, we experimented with "organic" looking branches (Fig. 2d), similar to [30], but our learning psychologists wanted to avoid a too literal interpretation by the visitors, while still conveying the idea that the tree of life is in fact an abstract scientific model (G1b, G5c). To strike a balance between visual appeal and meaningful representation, we tried different types of Bezier curves. "Elbow"-like curves (Fig. 2b) were visually appealing, but feedback from science educators, as well as indicators from visitor observations, showed that these curves convey a "sudden" split of an ancestral species, while in reality speciation is a gradual process. In our final exhibit, we use curves as shown in Fig. 2c. These curves seemed to strike a good balance between attractiveness, conveying an abstract impression, and illustrating the gradual nature of speciation.

The placement of species images in the tree also had conceptual impact. Initially, we had no images at all, which led to a clean, but "empty" look. Adding images clearly increased visual engagement and provided a crucial motivator for free exploration, but the placement of the images also led to problematic misconceptions. Fig. 2e shows one of our initial placements: tolweb.org assigns a sample of representative species to each internal node, which we positioned at the branching points. On the positive side, the images gave internal nodes more meaning and guided visitors' exploration: as images reoccur through zooming in, people can find their preferred species in the tree (for example, one kid played a game of "chasing the monkey"). On the downside, placing the pictures on the internal node seemed to convey the idea that these species were "already alive" at that point in evolutionary history, which is wrong (the internal node represents common ancestors that lived in the past).

Our final layout is shown in Fig. 2f. We anchor our pictures to the fixed canopy line, where they are constantly visible, while providing illustrative "directions" or "pointers" to the species' positions in the tree. We display two types of image pointers: as soon as a terminal node comes into view, the species' image "sprouts" out of its location. Additionally, we permanently display 200 "signpost" species in the tree, which are scaled to convey a sense of distance: as visitors zoom deeper into the tree, these signposts grow in size. This way of positioning the pictures brought the illustration of all current "life" to the conceptually correct location in the tree, avoiding the mentioned misconception, while emphasizing the value of the pictures in terms of aiding navigation and motivating free exploration. The pictures are a major attractor in our exhibit, which led us to design a navigation technique around the images as well.

Fig. 2. Different ways of drawing branches (a-d) and positioning images (e&f).
In order to reinforce the role of time in the tree (G9), we labeled 200 important internal nodes with their estimated time. We also only show labels for those nodes that exceed a certain screen size to minimize the amount of text that is simultaneously visible, reducing screen clutter (G1b).

6.2 Tree layout algorithm

The tree layout used by the DeepTree is shown in Fig. 3 (left). It is based on SpaceTree's [45] "continuously scaled tree", using fixed progressive scaling of the nodes. The principle governing this layout is that all children are fully contained within the width of the parent. This "fractal" rule leads to an exponential decrease of bounding box width based on the node's level within the tree. For our purpose, this layout had several advantages. First, because of the rapid decrease of node size, only a few nodes are visible at a time, as the lower levels rapidly shrink into sub-pixel "singletons". This allowed us to maintain a clean, bare, and intuitive look (G1) at all times. Secondly, the branches are laid out in a consistent coordinate system through which we can seamlessly zoom and pan (G2) – this was preferred by our learning experts over a layout with distortions (such as hyperbolic trees [41]) or frequent changes (such as expandable trees [45]), in order to avoid alternative interpretations by the learners. Due to the fractal nature of the layout, the same visual qualities apply to any given view, as the pattern of nested children continuously repeats itself. This enabled us to allow free exploration (G2) without compromising visual clarity and consistency (G1).

However, due to the size of our tree structure, which currently has 123 levels at its deepest point and will undoubtedly continue to grow, we ran into accuracy problems with the structure on which the bounding boxes of our nodes were based. As we continuously sub-divide the available width of a node to accommodate its children, we also continuously decrease the accuracy of the floating point representation of our bounding boxes. If we assume a perfectly bifurcated tree, in which every node has exactly two children, we would exhaust our floating point accuracy after about 52 levels (a double-precision floating point number allocates 52 bits to the fraction), preventing us from further subdividing space for contained nodes. Accuracy problems at the implementation level should be a concern for all fractal algorithms (such as [36][42][52]); however, we could not find any reference to this problem in prior work.

Rendering "all life" – and thus being able to render both large and deep trees – was central to G5b & G6. Furthermore, we wanted a scalable layout and rendering engine that could accommodate future changes of the Tree of Life. Our solution was to implement a layout and rendering engine that is based on relative bounding boxes (a full technical description is available for download [3]): the bounding box of each node is expressed relative to the top left corner of its parent's bounding box, and as multiples of the parent's bounding box's width. In a first iteration, we calculated a relative distribution of nodes as shown in Fig. 3, left. However, our learning scientists criticized this layout, as it positions species at different heights – which reinforces the misconception that some species are less evolved, while others are "higher" organisms. Top-aligning the tree removed this issue, as it moves every terminal node to the same vertical level (cf. Fig. 3, right). It also correctly conveyed time (G9): the "canopy" embodies species that are alive today.

Fig. 3. Visualization of the DeepTree layout algorithm: children are contained within the width of the parent node; (right) top-aligned.
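The sketch below illustrates both halves of this scheme: a layout pass that stores each child's bounds as fractions of its parent's width (divided equally among siblings here, a simplification of the actual distribution), and a short demonstration of the accuracy limit for a perfectly bifurcated tree. It is our reconstruction of the published description, not the exhibit's code.

```python
# Relative bounding-box layout: each node's bounds are stored as
# (x offset, y offset, width), all as multiples of the parent's width.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str = ""
    children: list = field(default_factory=list)
    rel: tuple = (0.0, 0.0, 1.0)  # (x, y, width) relative to the parent

def layout(node, level_gap=0.4):
    """Top-aligned fractal rule: children share the parent's width equally.
    (The real layout may weight widths by subtree size; equal shares keep
    the sketch short.)"""
    n = len(node.children)
    for i, child in enumerate(node.children):
        child.rel = (i / n, level_gap, 1.0 / n)
        layout(child, level_gap)

# Accuracy limit: halve the width per level and add it to an absolute
# coordinate near 1.0 until the sum becomes indistinguishable from 1.0.
x, w, level = 1.0, 1.0, 0
while x + w != x:
    w /= 2.0
    level += 1
print(level)  # 53: consistent with the ~52-level limit (52 fraction bits)
```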
6.3 DeepTree Rendering Engine

Rendering any portion of our tree requires three basic steps: First, we choose the lowest node that has to be visible – initially the root of the tree – and assign it absolute bounds in a virtual coordinate system (Fig. 4, left); second, based on the absolute bounds of this render root and the relative definitions of the children's bounding boxes, we recursively calculate the absolute bounds of its children (Fig. 4, center); third, based on a viewport defined in the virtual coordinate system, we transform the virtual bounding boxes into screen space (Fig. 4, right). We can terminate the recursive calculation of bounding boxes when the size of a node's bounding box is sub-pixel, or when it is horizontally outside of the viewport.

Fig. 4. Projecting the relative coordinate system to the absolute screen coordinate system.

We can seamlessly navigate through the tree by translating and/or scaling the virtual viewport at each frame (the basis for G2 and G6), while applying two constraints. First, more detail can only be found at the very top of the tree – the canopy. Thus we always scale the viewport around the canopy, which causes the canopy to remain at the same vertical screen coordinate. Second, panning of the viewport is limited to the x-axis. These constraints had several benefits for us: 1) a portion of the canopy of the tree – the space in the tree where all the "life" is – would always be visible and at a consistent screen location, making it easy for visitors to keep it in focus and use it as a navigational aid (G5a); 2) they enabled a simple input gesture for manual navigation (G1 & G2).

As we zoom into the tree, portions of the tree fall outside the viewport (cf. Fig. 5, left, red highlight). When the viewport changes, we define the deepest parent of all visible nodes as the new render root (Fig. 5, left, "candidate"). The absolute virtual bounds of the new render root, as determined in the previous render pass, are set as the new initial bounding box for the described calculations (Fig. 5, right). Additionally, both the viewport and the initial bounding box are multiplied by a factor, so that the accuracy of the floating point structures holding the bounding box is reset. This ensures that the structures holding the bounding boxes can always be sub-divided, at least until bounding boxes surpass pixel size and the cut-off criterion is met, respectively. An equivalent process can be applied when zooming out. It is important to note that a root transfer is not visible to the viewer, as it simply recalibrates the viewport around a new visible root, which stays in a fixed screen location before and after the transfer. This was essential to support G2. The described rendering engine allows us to render and seamlessly navigate trees with unlimited depth and size (G5b & G6).

Fig. 5. Root transfer.
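A minimal sketch of this render pass and the root transfer, reusing the Node convention (children, relative bounds) from the layout sketch above; the culling and cutoff rules are simplified, and the code is a reconstruction under the description above rather than the exhibit's engine.

```python
# Render pass: expand absolute virtual bounds from the render root, cull
# sub-pixel or off-viewport subtrees, and project to screen space.
# Assumes the Node class (children, rel) from the layout sketch above.

def render(node, ax, ay, aw, viewport, out):
    """viewport = (vx, vw, scale): left edge and width in virtual units,
    plus pixels per virtual unit."""
    vx, vw, scale = viewport
    if aw * scale < 1.0:                  # bounding box is sub-pixel: stop
        return
    if ax + aw < vx or ax > vx + vw:      # horizontally outside the viewport
        return
    out.append((node, (ax - vx) * scale, ay * scale, aw * scale))
    for child in node.children:
        rx, ry, rw = child.rel            # fractions of the parent's width
        render(child, ax + rx * aw, ay + ry * aw, rw * aw, viewport, out)

def transfer_root(new_root_bounds, viewport, factor):
    """Root transfer: re-anchor on the deepest fully visible parent and
    multiply both its bounds and the viewport by `factor`, resetting the
    floating-point headroom."""
    x, y, w = new_root_bounds
    vx, vw, scale = viewport
    return (x * factor, y * factor, w * factor), \
           (vx * factor, vw * factor, scale / factor)
```

Because the new root's bounds and the viewport are scaled by the same factor, every projected screen coordinate is unchanged, which is why the transfer is invisible to the viewer.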
7 INTERACTING WITH THE DEEPTREE

The user interface of the DeepTree consists of three major components (cf. Fig. 1, left): the first is the main tree visualization, in which we provide basic interaction techniques to explore the tree; the second component, on the very right, is a scrolling image reel containing 200 species, which serves as the first entry point for learning; the third component is an "Action" button, centrally overlaying the image reel. Tapping the button reveals a sub-menu with three items: "Relate", "Find", and "Return". Each of these components is described in the following subsections.

7.1 Basic Interaction Techniques

Interaction techniques should work equally well for a single visitor and for a group of visitors in museum settings. To satisfy G1 and G2, we wanted to provide simple and easy to understand means of navigating the tree. Initially, we drew on established gestures from multi-touch devices such as the iPad, where moving a single touch pans the view, and moving two touches away from or towards each other zooms in and out, respectively. These gestures, however, do not scale well to a multi-user scenario. As we cannot distinguish between touches of different users, two users trying to pan with a single finger look, to our touch mechanism, identical to a single user using two touches. Also, two users trying to zoom into different areas of the tree create a conflict, as both compete for adjusting the viewport in different ways.

To solve this problem, we utilize the fact that the canopy of our tree is fixed at a constant vertical screen coordinate. We created the following input metaphor: moving a finger left or right pans the tree left or right; moving a finger down "pulls" the tree downwards, revealing more branches at its canopy; moving a finger up "pushes" the tree upwards, shrinking detail and revealing nodes that are below the current view. Fig. 6 illustrates this metaphor. As both zooming and panning are expressed as a single directional gesture, we can also enable "flicking", causing the motion of the tree to speed up until its momentum is depleted.

Fig. 6. Zooming and panning with a single finger gesture.

This simple control for zooming and panning "scales" relatively well to multiple users: as soon as we sense multiple fingers, we simply base our pan and zoom on the geometric center of all touches (the average of all coordinate vectors). This enforces social cooperation and negotiation (G4): if two fingers move away from each other, they cancel each other out. If all users cooperate and "pull" in the same direction, the view is updated accordingly.

Our observations showed that visitors would spontaneously tap the images along the canopy, prompting us to turn this attraction into a complementary navigational aid. Initially, we experimented with automatically flying to a species once tapped; however, this method was discarded, as frequent accidental touches would trigger undesired effects. In our final exhibit, tapping only invokes a tooltip, prompting the user to hold the species. Once an image is held, it causes the tree to automatically zoom in towards the respective species. If the finger is released, the zoom stops.
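A minimal sketch of this centroid rule, assuming touch positions sampled on consecutive frames and an illustrative zoom constant: the geometric center of all touches drives both pan and zoom, so opposing drags cancel out while cooperative drags add up.

```python
# Centroid-based pan/zoom (illustrative constants, not the exhibit's
# tuned values).

def gesture_delta(prev, cur, zoom_per_px=1.01):
    """prev/cur: lists of (x, y) positions of the same touches on two
    consecutive frames. Returns (pan_dx, zoom_factor)."""
    n = len(cur)
    dx = sum(x for x, _ in cur) / n - sum(x for x, _ in prev) / n
    dy = sum(y for _, y in cur) / n - sum(y for _, y in prev) / n
    # Horizontal centroid motion pans; vertical motion "pulls"/"pushes"
    # the tree, i.e. zooms around the fixed canopy line.
    return dx, zoom_per_px ** dy

# Two users dragging toward each other cancel out: no pan, no zoom.
print(gesture_delta([(0, 0), (100, 0)], [(10, 0), (90, 0)]))  # (0.0, 1.0)
```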
7.2 Learning Entry Points

7.2.1 200 Signposts

While providing free exploration, we also wanted to introduce a series of entry points that would "lure" visitors to important points in the tree (PD), which were enriched with learning activities (G3). First, we added a scrolling image reel that contains 200 species. The species were selected by our evolutionary biologists and museum curators to represent major evolutionary groups, and to lead the visitors to areas in the tree that had additional learning content. The list can be scrolled manually by vertically sliding a finger along its elements. If a user taps a tile, it shows an animated arrow, prompting the user to drag the tile onto the tree (Fig. 7b). Once dragged, the tile detaches from the reel and starts showing a "chord" connecting the dragged tile with the actual position of the respective species in the tree (Fig. 7c). The current view automatically starts zooming towards the respective species, but only as long as the tile is held. If tiles are left untouched for several seconds, they snap back into their place in the reel to prevent screen clutter (in response to kids frequently pulling out as many tiles as they could).

7.2.2 Relating Species

While visitors could effectively engage with our exhibit through free exploration, two of our core concepts – Common Ancestry and Shared Traits – and the corresponding design goals G7 and G8 were not commonly discerned by our museum audience. In response, we introduced a "Relate" function through the "Action" menu (Fig. 7a), which allows users to drag any two species from the reel into a target UI slot (Fig. 7d), causing the tree to automatically fly to the most recent common ancestor of both species. Upon arrival, the lineages of both species are visually highlighted and a bouncing button appears at the location of the most recent common ancestor. This is the entry point for a separate activity – the trait display – as shown in Fig. 1, right. Upon tapping the button, the current view is shrunk into the top-left corner of the display, revealing a simplified tree that shows selected important groups and the two selected species. Bouncing buttons appear at a series of shared ancestors that, when tapped, "flood" the respective groups with color. Information bubbles appear that point out that all highlighted organisms have inherited a special trait from a shared ancestor. A "learn more" button can be tapped to get a description of the trait's evolutionary meaning (e.g., "Jaws are used for hunting and eating"), as well as illustrations of the trait in members of this group.
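The flight target of "Relate" is the most recent common ancestor of the two dropped species, which can be found as the deepest node shared by both species' paths to the root. A minimal sketch over a child-to-parent map (the tiny taxonomy below is illustrative, not the exhibit's dataset):

```python
# "Relate" target lookup: the deepest node shared by both root paths.

def path_to_root(node, parent_of):
    """Return [node, parent, ..., root] via a child -> parent map."""
    path = [node]
    while node in parent_of:
        node = parent_of[node]
        path.append(node)
    return path

def most_recent_common_ancestor(a, b, parent_of):
    on_a_path = set(path_to_root(a, parent_of))
    # Walking up from b, the first node that also lies on a's root path is
    # the deepest shared ancestor, i.e. the most recent in time.
    for node in path_to_root(b, parent_of):
        if node in on_a_path:
            return node
    return None

# Toy data: humans and chimps meet far more recently than humans and bananas.
parent_of = {"human": "primates", "chimp": "primates",
             "primates": "eukaryotes", "banana": "eukaryotes"}
print(most_recent_common_ancestor("human", "chimp", parent_of))   # primates
print(most_recent_common_ancestor("human", "banana", parent_of))  # eukaryotes
```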
7.2.3 Fly Me There

Based on visitor feedback, we also provided a "Find" function, which allows users to automatically fly to a species selected from the image reel.

7.2.4 Animation as attractor and "encoding"

Animation can attract attention, support object consistency, and be emotionally engaging, which were all desirable attributes for our exhibit [23]. Both our "Relate" and "Find" functions are based on an animated flight through our tree space. This fly-through takes varying amounts of time, from a fraction of a second when one flies to a close-by species, up to several seconds when navigating between species that are separated by high numbers of branching points. We found that, apart from attracting and engaging visitors, this animation had several learning effects. First, when making multiple successive "Relate" queries (e.g., Human-Chimp, Human-Banana), visitors make inferences regarding the relative "closeness" of both species based on the direction of the fly-through: if the tree zooms out, the two species being compared are more distantly related than the previous pair, and vice versa (G7). We saw this type of inference frequently, for example, in successive comparisons of humans with other species. The length of the fly-through was also effective in conveying the size of the tree and the vastness of biodiversity (G6). Additionally, the path of the fly-through was chosen to fly from species to species via their common ancestor – enforcing G7. As the animated viewport comes to a temporary halt before it starts accelerating again, visitors are also able to read the time label of the common ancestor, which adds the concept of deep time to the automatic fly-through (G9).

7.2.5 Multi-user interface

Similar to our basic interaction technique, which scaled to larger groups by enforcing cooperation, we wanted to provide a touch interface for accessing our functions that would adhere to similar principles. In our initial designs we used buttons – hence tapping – as the primary mode of navigating to points of interest. However, buttons were prone to accidental activation [47] (e.g., through sleeves), and even when a tap was intended, other collaborators often lacked awareness of cause-and-effect in our interface, as tapping is easy to overlook. To remedy this issue we introduced the "slot-tile-drag" interface metaphor that is used for our Find and Relate actions. First, dragging is less prone to being accidentally invoked than tapping. Dragging elements towards a target slot in the center of the screen is also easier to detect by all bystanders. It is also possible to anticipate and intervene, for instance, by physically blocking hands, or by covering the target slot with a hand. Generally, we found this system to work very well in practice, as it encouraged consensual navigation of the tree. Regardless of its suitability, tapping was the most common spontaneous way of interacting with graphical elements, and was observed to also be the first type of interaction with our species tiles on the side. Consequently, we used the tap modality to invoke visual instructions: if a tile is tapped, an animated arrow shows up for a few seconds, and if a slot is tapped, a "shadow hand" performs a drag motion from the reel to the slot. This was usually sufficient instruction for users to drag the species tiles out onto the tree. We can generalize our UI design principles as follows:

• Tapping is the most common spontaneous way of interacting with graphical controls, so buttons should be used for all local actions that do not affect the experience of all participants.
• Tapping should not be used for actions that trigger global changes, as it is a) prone to accidental touches and unintended actions and b) easily goes unnoticed.
• Dragging visual elements to the center of the screen is the most suitable form of triggering a global change, as it enables anticipation and the possibility of intervention.
• Dwelling can be used for elements that clutter the screen, as it requires active attendance to maintain an effect.
Based on these criteria, the final interface presented in this paper enabled the majority of visitors and visitor constellations to effectively and collaboratively interact with the DeepTree after an acceptable learning curve – it took some visitors a few seconds to learn about the action button and dragging the species tiles, but even little children could perform the required interactions with relative ease.

8 EVALUATION

In this section, we present an analysis of our observational data, which was conducted as part of our extended RITE method. We recorded conversations of 18 visitor groups to extract insights with regard to engagement and group discourse (6 multi-generational groups, 7 child dyads, 2 young adult dyads, 2 single older adults, and 1 single child). Most of the groups approached the table on their own. We then observed free exploration, and presented visitors with posttest questions. All interactions were recorded, and utterances were transcribed and coded for 4 selected groups. We do not include discussions of demographics or group constellations in museums, as these are described in depth in [16,25].

Fig. 7. DeepTree UI: a) action menu, b) image reel, c) chord to species location, and d) Relate dialog.

8.1 User-Selected Level of Learning

A general observation we made was that the large dataset serves different purposes for different levels of learning. To visitors with basic biological knowledge, the size of the large tree encourages free exploration and conveys the vast size of biodiversity. Many college students have been exposed to phylogenies before, and are well versed in evolutionary biology. For this group of visitors, the tree allows them to see a seemingly complete tree of life for the first time. It is important to note that the size of the tree seemed to enable more advanced learners to make deeper connections with their preexisting knowledge, while not hindering beginners in their discovery of more general and basic information. For example, when asked to describe the exhibit following exploration, beginners made comments such as "How anything, like anything, can relate to anything...like, for example, humans can relate to anything like a banana or coral fish" and "How far they go back...how long ago they have a common ancestor." Advanced learners made comments such as "The relation between humans and bananas is something I never would have thought about" and "You learn in class that everything is connected...we all came from this little cell that started somewhere, but to think of it this way, we are all connected...from that direct line." Both beginners and advanced learners commented on the deep time displayed: "Whoa! Fourteen million years ago...that's a long time ago!" and "We're going back to the distinction between animals and plants so we are going back a very, very, very long way."

8.2 Engagement

Groups spent an average of 8:30 minutes at the exhibit (range 3:50 – 15:40). Groups accessed our "Relate" and "Find" functions between 1 and 6 times.

Engagement through free exploration of a large dataset. The measured dwell times exceeded those of regular exhibits, and indicate that our exhibit could facilitate active prolonged engagement (APE) [29]. Children, in particular, engaged in manual exploration of the tree.
Over the course of our iterative design, we established that the size of the tree and the enrichment of the data with images and common names are both essential to facilitate exploration. In previous iterations, our dataset had significantly less imagery and fewer common names. We also tested smaller trees (same layout and rendering engine, but containing only our 200 "signpost" species). While we did not continuously measure dwell times, our observations throughout several iterations showed a clear qualitative improvement in the level of engagement when providing an enriched large dataset.

Striking a balance between APE and PD. All groups used our "Relate" and "Find" functions at least once, which led to planned discovery (PD) of our trait display. As these actions were triggered through visitors' own initiative, and the parameters were chosen freely, we prefer the term Guided Discovery (GD), as it may appear to the user as if they bumped into the content by free choice.

8.3 Discourse Analysis

Based on four selected groups (1 multigenerational group, 1 child dyad, 2 young adult dyads), we coded 264 utterances in total. The utterances were categorized into biological content (23%), questions (16%), affective responses (9%), and other, e.g., UI statements (52%). We further analyzed all biological content, as shown in Table 1. The discourse analysis provided two more indicators:

Conveying our learning goals. The discourse analysis indicated that we had brought all core evolutionary concepts to the visitors' attention (LG1, LG3, LG4 and LG5), albeit at different intensities: relatedness (LG1) was the most prominent topic, which we attribute to the