Engaging Audiences in Virtual Museums by Interactively Prompting Guiding Questions

Virtual museums aim to promote access to cultural artifacts. However, they often face the challenge of getting audiences to read and understand a large amount of information in an uncontrolled online environment. Inspired by successful practices in physical museums, we investigated the use of guiding questions to engage audiences in virtual museums. To this end, we first identified how to construct questions that are likely to attract audiences, through domain expert interviews and by mining culture-related posts in a popular question and answer community. Then, in terms of how proactively a question attracts users' attention, we designed two mechanisms for interactively prompting questions: active and passive. Through an online experiment with 150 participants, we showed that interactive guiding questions encourage browsing and improve content comprehension. We further conducted a qualitative comparison to examine why they are useful, and obtained insights about the influence of question category and interaction mechanism.


1. Introduction

A virtual museum, or digital museum, collects, organizes, and displays digital artifacts online, and is mainly created for educational and entertainment purposes (Falk et al., 1998; Museum, 2018d; Schweibenz, 1998; Carrozzino et al., 2018). Compared to a physical museum, a virtual museum is not limited by geographical location and can provide a richer user experience to fulfill its instructional and recreational functions (Sun, 2013; Shan, 2018). A recent report shows that digital museum visitors have outnumbered those of real museums (Hawkey, 2004), which indicates the increasing popularity of this new form of interaction with cultural artifacts. In China, for example, economic growth has generated a new surge of public interest in tangible and intangible cultural properties (Romero Jr, 2014). Chinese museums are thus motivated to cultivate curiosity about the history and diversity of China with the help of digital and web technologies (Romero Jr, 2014). For instance, the Palace Museum alone has digitalized about one million artifacts into high-resolution images and put them online for people to browse at will (Shan, 2018).

However, providing free access to digital artifacts does not ensure that virtual museums sufficiently engage public audiences in appreciating cultural values. It is particularly challenging to get them to read and understand a large amount of information in an uncontrolled online environment (Schweibenz, 1998). Although existing features such as 3D navigation (Kiourt et al., 2016), content searching (Skov and Ingwersen, 2014), and virtual reality (VR) (Cheng et al., 2017) enhance the sense of immersion, audiences in a virtual museum may still feel disoriented among the myriad of content, not knowing what might interest them (Bonis et al., 2009; Sun, 2013; Rayward and Twidale, 1999). In the context of physical museums, researchers and practitioners show that a good tutor can encourage visitors to interact more with exhibits and understand them better by asking guiding questions at a proper time, in a proper way (of Natural History, 2018). Even displaying inspiring questions without answers can spark visitors’ curiosity and attract them to stay longer (Roberts et al., 2018). Figure 1 (a) presents a real-world example, where museum staff posted questions at the entrance of a recent exhibition on the Dunhuang Caves to engage visitors (Academy, 2018b).

Figure 1. Left: guiding questions in an exhibition on the Dunhuang Caves. Right: one exhibition in our study that uses interactive guiding questions. ((a) “Which Dunhuang mural was the first one copied by Chang Shuhang?”; (b) “Will the Yuan Dynasty inherit the tradition of the Song Dynasty?”)

Inspired by these successful practices in physical museums, we propose to engage audiences in virtual museums through interactively prompting guiding questions (Figure 1 (b)). In this paper, we use virtual museums of Chinese cultural artifacts as a case to examine our proposed approach. We aim to answer the following research questions (RQs):

  1. RQ1. How can we construct guiding questions that are more likely to engage audiences in virtual museums?

  2. RQ2. How would prompting guiding questions influence audiences’ behaviors?

To address RQ1, we first conducted interviews with experts to collect their opinions on virtual museums and how to engage audiences. Then we analyzed cultural artifact-related posts shared in a popular question and answer (QA) community to derive how to construct questions that are likely to attract general audiences in a museum context (Anderson et al., 2001). Meanwhile, in terms of how proactively to attract users’ attention (Bowman et al., 2010), we designed two interaction mechanisms for prompting guiding questions: active and passive. The active mechanism prompts questions while audiences watch the exhibit, which may distract their attention. The passive one prompts questions after audiences watch the current exhibit, to reduce interruptions. To investigate the efficacy of our question and mechanism design on audience engagement (RQ2), we conducted an online between-subject experiment with 150 participants. Results showed that having interactive guiding questions encourages users to browse significantly more exhibits and considerably improves content comprehension. Interestingly, passive prompting got visitors to go both farther and deeper in virtual museums, while active prompting only led them farther. Finally, we invited another 16 participants to compare the two mechanisms and share their thoughts and preferences. Apply questions seem to work particularly well in the active setting, because they do not require the higher level thinking that may distract audiences’ attention (Bowman et al., 2010). In contrast, analyze, evaluate, and create questions are more effective in the passive setting, due to the cognitive process for sparking curiosity (Anderson et al., 2001; Litman, 2005; Loewenstein, 1994). The contributions of this paper are:

  1. Through interviews with experts and the analysis of a social QA community, we extracted guidelines for how to construct questions to attract online audiences.

  2. We designed two interaction mechanisms for prompting guiding questions. Through an online experiment, we showed the efficacy of using interactive guiding questions to engage audiences in virtual museums.

  3. Through a qualitative comparison study, we obtained insights about the influence of question category and interaction mechanism.

The remainder of the paper is organized as follows. We first review the background and related works. Then we present the overall flow and connections of our studies. Following that, we present each study and its findings in more detail, including the domain expert interviews, the QA community analysis, the interaction mechanism design, the online experiment on guiding questions, and the qualitative comparison of the two interaction mechanisms. Finally, we discuss the design considerations derived from this study and the limitations of this work.

2. Background and Related Works

2.1. Museum and Virtual Museum

A museum is “a non-profit, permanent institution in the service of society and its development, open to the public, which acquires, conserves, researches, communicates and exhibits the tangible and intangible heritage of humanity and its environment for the purposes of education, study and enjoyment” (of Museums, 2008). This definition, given by the International Council of Museums (ICOM) (of Museums, 2008), shows the main responsibility of museums: protecting and promoting the sense of cultural heritage to the public (of Museums, 2008; Sun, 2013). The extension of physical museums to virtual museums continues this mission (Sun, 2013; Rusu et al., 2017). Moreover, putting digitalized cultural artifacts online has multiple benefits. From museums’ perspective, it alleviates congestion in exhibition halls and helps protect and communicate cultural artifacts better (Shan, 2018). For instance, the Palace Museum receives about 80,000 visitors per day on average, which poses great challenges for managing the museum and providing personalized visiting and learning experiences (Shan, 2018). From visitors’ perspective, people can access digital artifacts without the limitations of geographical location and time. In addition, with the development of new media technology, such as low-cost VR devices, new experiences can be created in an affordable way (Novati et al., 2005). The recent development of virtual museums aims to build museums in cyberspace (Schweibenz, 1998), where digital artifacts are not limited to a specific region or culture. For example, the Google Arts and Culture project (Seales et al., 2013) integrates digital artifacts from different museums, forming a large-scale online platform for people to explore.

2.2. Practice in China

In China, with recent economic development, people have been paying more attention to culture-related activities. As explained in (Romero Jr, 2014), because China is a diverse country in terms of culture, ethnic groups, and languages, people are interested in learning about and communicating different cultural experiences. The government has also invested heavily: as reported in (Romero Jr, 2014), more than 50 museums were built in the past 10 years in a single small county.

In terms of virtual museums, the development in China can be divided into three stages (Li, 2008): online museums, digitalization of physical museums, and virtual museums. Starting from the 1990s, some museums spontaneously began to build websites to publish activity news, which constitutes the first stage: online museums (Li, 2008). In the second stage, museums gradually digitalized their artifacts on a small scale and put the digital content online for people to browse. Finally, the virtual museum stage is still ongoing; it aims to build museums in cyberspace, where the digital content is not limited to one museum but spans the whole knowledge space all over the world. For example, the digital project of the Dunhuang Academy (Academy, 2018a) aims to collect and organize all information about Dunhuang, not limited by physical space. Led by the Palace Museum, many museums have started digitalizing their tangible artifacts and putting digital content online, such as online exhibitions, video lectures, and documentaries (Shan, 2018). Despite this progress, displaying plain collections can hardly draw people’s interest or guide them properly (Rusu et al., 2017; Seales et al., 2013). Therefore, engaging audiences to understand and appreciate cultural values in virtual museums remains a challenging problem.

2.3. Engaging in Museums and Virtual Museums

In general, two approaches can be investigated to engage visitors in museums or virtual museums: increasing the sense of immersion and providing guidance. In physical museums, various new media technologies have been investigated to increase the sense of immersion (Bruns et al., 2007; Muntean et al., 2017). In (Bruns et al., 2007), a mobile augmented reality (AR) application is proposed to help people locate artifacts in physical museums; visitors can explore the museum space by interactively connecting information on mobile devices with artifacts in the real world. In (Muntean et al., 2017), Muntean et al. investigate a storytelling mechanism to engage visitors, where users learn cultural values by exploring real artifacts in the physical space and media content on a tablet. Similarly, in virtual museums, various technical approaches to increase immersion have been applied (Kiourt et al., 2016; Cheng et al., 2017; Skov and Ingwersen, 2014). However, because of the lack of opportunities to explore a physical space, engaging audiences in virtual museums is more challenging (Rusu et al., 2017; Seales et al., 2013).

Although increasing the sense of immersion can engage visitors, proper guidance is needed to help them understand and appreciate cultural values (McTavish, 2006; Swartout et al., 2010; Rayward and Twidale, 1999). Empirical experience from experts suggests that personalized tutors who conduct conversations through asking questions can help engage visitors in appreciating cultural values (of Natural History, 2018). For instance, in (Swartout et al., 2010), Swartout et al. show that virtual tutors with language interaction can help children learn science and technology in a physical museum. More recently, drawing on the psychology theory of curiosity (Litman, 2005; Loewenstein, 1994), Roberts et al. show that displaying big questions in a physical museum can effectively draw visitors’ attention and increase their engagement with cultural artifacts (Roberts et al., 2018). These successful practices of using guiding questions to engage visitors in physical settings suggest the possibility of engaging audiences in virtual museums as well. But more care must be taken in terms of how to construct questions and how to interact with users in an uncontrolled online environment.

2.4. Question Study

Asking questions is an important way for people to seek information. Moreover, a question asked selectively, at a proper time and in a proper way, can mean much more. Existing research shows that questions can attract attention (Roberts et al., 2018), inspire creative thinking (Rothe et al., 2017), provoke informative answers (Hawkins et al., 2015), and lead to in-depth comprehension (Cohen, 1976). For constructing questions computationally, one approach is data-driven (Du et al., 2017a), but it tends to produce factual knowledge questions instead of creative ones (Du et al., 2017a). Another approach formulates question construction as an optimization problem and builds the best question for a given context (Rothe et al., 2017). The Rational Speech Act (RSA) framework models question asking and answering as a Bayesian reasoning process (Frank and Goodman, 2012; Hawkins et al., 2015) and can provide reasonable explanations of question construction. But like the optimization approach, it is domain specific and not well suited to inspiring creative and in-depth thinking in the context of cultural artifacts (Rothe et al., 2017; Frank and Goodman, 2012; Hawkins et al., 2015).

From the perspective of Human-Computer Interaction (HCI), we study how users perceive questions. In terms of cultural content appreciation, we investigate how to ask questions that engage listeners in appreciating cultural values. In particular, we analyze an existing social QA platform to find patterns of what makes a question good at engaging audiences.

2.4.1. QA Community

Emerging social QA communities, such as Quora (Quora, 2018) and Zhihu (Zhihu, 2018), provide platforms for people to share knowledge and experience. On QA websites, users can ask questions, answer others’ questions, or simply read others’ questions and answers. The more interesting a question is, the more likely people are to view and answer it. Analyzing QA communities can thus provide insights into how to engage ordinary people in appreciating cultural content.

3. Overall Flow of the Study

Figure 2. The overall flow of the study in this paper.

The overall flow of the study is summarized in Figure 2. We started with domain expert interviews (a) to obtain opinions on engaging audiences in virtual museums. Then we turned to Zhihu (Zhihu, 2018), the largest Chinese social QA community, to explore how to construct questions that engage visitors (b). Meanwhile, in terms of how proactively to attract users’ attention (Bowman et al., 2010), we designed two mechanisms for interactively prompting questions: active and passive (c). To see the effects of guiding questions, we conducted an online experiment (d): we built two online exhibitions, equipped them with or without guiding questions, ran a between-subject experiment, and analyzed the results quantitatively. Furthermore, we conducted a qualitative comparison to obtain more insights in terms of question category and interaction mechanism (e). All studies were conducted in Chinese, except one expert interview conducted in English, and scripts were translated into English for discussion and summarization.

4. Domain Expert Interview

To seek professional suggestions on engaging audiences in virtual museums, we conducted semi-structured interviews with two domain experts in organizing museum exhibitions (E1, E2). Each interview session lasted about one hour.

E1 is an executive officer in our university library and is in charge of organizing exhibitions at the university. He presented his experience of organizing a Calligraphic Art exhibition there. The interview was conducted in English and on campus. E2 is a team leader of a cultural relics protection service in Chongqing, China, with experience conducting several online and offline exhibitions for children. During the interview, E2 shared her suggestions on engaging audiences in online exhibitions. The interview was conducted remotely through instant messages. We summarize our main findings below.

4.1. Make Content Interesting and Easy to Understand

E2 suggested designing exhibitions from children’s point of view to make them easy to understand and interesting. General audiences have different backgrounds and educational levels, and may not have enough expertise to understand the various concepts and terminologies in an exhibition. It is therefore important to make the content of online exhibitions “understandable”.

4.2. Help Audiences Memorize Some Keywords

Both E1 and E2 confirmed the importance of making audiences learn from exhibitions. E2 suggested extracting the key elements of the exhibition; based on these elements, questions can be derived to help audiences memorize keywords, which encourages them to “recall” the exhibition.

4.3. Leave Time for Audiences

Overwhelmed by a large amount of information, audiences can easily feel disoriented. E1 introduced his experience of combining online and offline settings to leave time for audiences to digest the content: “So if they find some interesting things in the gallery, they can further find more details on our ‘exhibits’ web page.”

4.4. Interact with Audiences

E1 and E2 thought that interacting with audiences is the key to engaging them; in this way, people feel they are communicating with the exhibition. For example, E1 described inviting artists to provide “face-to-face talks, demonstrations, guided tours, etc.” to deepen audiences’ learning about exhibitions: “In this way our exhibitions engage and interact with visitors.”

From the interviews, we can see that both content and interaction are important for engaging audiences. The content should be “understandable” and “interesting”, and we should “interact” with audiences and “leave time” for them to think. Moreover, as pointed out by E2, prompting questions can help audiences memorize the “keywords” of an exhibition. Therefore, we consider two dimensions for designing our guiding questions: what types of questions to ask and how to interactively prompt them. To explore what types of questions attract audiences, we analyzed a popular QA community; we consider how to prompt questions from the aspect of attracting audiences’ attention. More details are given in the following sections.

5. QA Community Analysis

We analyzed a popular social QA community, Zhihu (Zhihu, 2018), to explore how to construct questions that engage audiences online. In Zhihu, users can post, answer, and follow questions; an attractive question gets more answers and followers. Each question in Zhihu is classified into one or more topics, either by posters or by dedicated editors, so a question can be seen as annotated into different categories. We examined the hierarchy of all topics relevant to museums. After regular group meetings and discussions, we chose the topic cultural artifact (文物) and mined all questions under it. In Zhihu, the topic cultural artifact covers concrete subtopics such as Chinese painting, porcelain, bronzes, furniture, and buildings, as well as abstract ones like cultural heritage, preservation of cultural relics, and the history of cultural artifacts, which makes it suitable for our analysis.

5.1. Data Statistics

By May 20th, 2018, we had collected 1041 questions under the topic cultural artifact, with 3291 answers in total (about 3.2 answers per question on average). The answer number and follower number are linearly correlated (Pearson coefficient: 0.70, p-value: 0), and a portion of the questions have more followers than answers.
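As an illustration, the correlation statistics above can be computed with a few lines of Python. This is a minimal sketch, not the authors' analysis code; it assumes the crawled data is available as (answer count, follower count) pairs, and the sample values are placeholders.

```python
# Minimal sketch of the Section 5.1 statistics, assuming the crawled Zhihu
# data is a list of (answer_count, follower_count) pairs. The three sample
# pairs below are placeholders, not real data.
from scipy.stats import pearsonr

counts = [(12, 35), (3, 4), (40, 120)]  # one pair per crawled question

answers = [a for a, _ in counts]
followers = [f for _, f in counts]

# Linear correlation between answer number and follower number.
r, p = pearsonr(answers, followers)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# Portion of questions with more followers than answers.
portion = sum(f > a for a, f in counts) / len(counts)
print(f"Followers > answers for {portion:.1%} of questions")
```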

5.2. Question Category

What types of questions may attract audiences and lead them to think? We analyzed the most followed questions to find patterns. In particular, we selected the questions with the largest follower numbers and applied thematic analysis (Braun and Clarke, 2006) to code them. We used Bloom’s revised taxonomy of cognitive processes (Anderson et al., 2001) as basic codes. The taxonomy contains six categories, in verb form, for evaluating students’ mastery of knowledge: remember, understand, apply, analyze, evaluate, and create. Classifying questions into these cognitive categories can be seen as ordering them by increasing level of inspired thinking. To annotate the questions, two authors first familiarized themselves with the crawled questions and Bloom’s revised taxonomy. Following the question construction guidelines of Bloom’s revised taxonomy (of UCD Teaching and at University College Dublin, 2016; Lord and Baviskar, 2007), we gave an argument to support each coding decision; for instance, the argument for apply could be reasoning with personal experience.

Figure 3. Box plot of the follower numbers of the coded questions (follower number on a logarithmic scale).

The summarized box plot is shown in Figure 3. Generally, higher level questions in Bloom’s revised taxonomy draw more interest. Higher level questions that have only a few followers usually lack interesting key elements or are short questions without much background information, such as “I want to study clinical medicine, but I am studying cultural relics protection. Is there any cultural relic protection direction associated with medicine?” (analyze, plain key element) and “Is the Chinese clothing made by bamboo knots good?” (analyze, short question). Therefore, when designing questions for our experiment, we prioritized higher level questions with interesting key elements and background information.

5.3. How to Construct Guiding Questions?

Based on the thematic analysis, we summarized typical templates for the different question categories. In addition, we summarized common question features, such as what the question asks for and how many key elements it contains, as arguments for question construction. To make the templates easy to use, we first simplified the questions by removing meaningless decorations and paraphrasing them into simple structures, and we removed uncommon templates with only a few examples. Referring to the arguments and templates, new guiding questions in different categories can be constructed; a small sketch of this procedure is given after the template lists below.

5.3.1. Remember

Questions in this category are usually short questions without any background information. Each question has only one key element, e.g., an unknown item, a place, a cost, or a time. For instance, “What is Hanfu?” (Hanfu is the historical traditional dress of the Han Chinese.) Several typical templates are:

  • What is … ?

  • Where is … ?

  • How much is … ?

  • When is … ?

5.3.2. Understand

Questions in this category ask for people’s attitude, feelings, or thoughts about something. For example, “what kind of mood do most people who like Hanfu have?” Several typical templates are:

  • What is your/somebody’s attitude towards … ?

  • What mood do/does … have?

  • What is your opinion about … ?

5.3.3. Apply

People usually ask for an approach, a personal experience, or a recommendation. For instance, “what kind of experience is having a puppet?” Several templates are:

  • How … ?

  • Can you recommend … ?

  • What kind of experience is … ?

  • What is … used for?

5.3.4. Analyze

Questions in this category usually contain more than one key element. To follow them, people need to do reasoning or inference. For instance, “why do we spend a lot of money to recover lost artifacts?” Several templates are:

  • Why should we … ?

  • What is the reason … ?

  • Is … the same to … ?

  • Is … suitable/good/bad/true/false?

  • What can we do if … ?

5.3.5. Evaluate

Questions in this category ask for a judgment of a statement. For instance, “how do you think about the school classifying Hanfu as fancy dress?” Several typical templates are:

  • How do you think … ?

  • How to evaluate … ?

  • …, is it true/false?

5.3.6. Create

In this category, people usually imagine some scenarios, and ask for potential results. For example, “imagine building a museum belonging to our generation hundreds of years later, what will be there?” Several templates are:

  • Imagine …, what will be … ?

  • If …, can … ?

  • Will it be … if … ?
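As referenced above, the following is a minimal Python sketch of how the arguments and templates can be combined into new guiding questions. The template strings are condensed from the lists above; the `construct_question` helper and the example key element are our own hypothetical illustration, not tooling from the study.

```python
# Minimal sketch of template-based question construction (Section 5.3).
# Templates are condensed from the paper's lists; construct_question and
# the example inputs are hypothetical illustrations.
TEMPLATES = {
    "remember":   ["What is {e}?", "Where is {e}?", "How much is {e}?"],
    "understand": ["What is your attitude towards {e}?",
                   "What is your opinion about {e}?"],
    "apply":      ["Can you recommend {e}?",
                   "What kind of experience is {e}?",
                   "What is {e} used for?"],
    "analyze":    ["Why should we {e}?", "What is the reason for {e}?"],
    "evaluate":   ["How do you think about {e}?", "How to evaluate {e}?"],
    "create":     ["Imagine {e}, what will be there?"],
}

def construct_question(key_element: str, category: str, index: int = 0) -> str:
    """Fill an interesting key element extracted from a slide into a template."""
    return TEMPLATES[category][index].format(e=key_element)

# A "create" question for a slide about future museums:
print(construct_question(
    "building a museum belonging to our generation hundreds of years later",
    "create"))
```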

6. Interaction Mechanism

According to the domain expert interviews, interaction is an important element for engaging audiences. We consider it from the attention aspect (Bowman et al., 2010): when a question is prompted, do audiences devote all of their attention to it, or only part? In particular, we designed two interaction mechanisms: active and passive. The active mechanism prompts questions more proactively, while audiences watch the exhibit, so audiences need to multitask between the exhibit and a prompted question. The passive one allows audiences to pay more attention to the exhibit and only prompts questions when they change exhibits.

The flowcharts of the two mechanisms are shown in Figure 4. The main difference lies at the interaction moment when users try to leave the current slide. The active mechanism prompts questions while users browse the current exhibit, based on preset time slots, but does not interrupt them when they try to leave; audiences therefore need to multitask. The passive one does not interrupt users while they browse the current exhibit, but users need to view all the questions before they can change exhibits; in this way, they can pay more attention to both questions and exhibits.

For implementation, each question has a preset start time $t_s$ and a preset end time $t_e$. For the active mechanism, a question is prompted at $t_s$ and hidden at $t_e$. For the passive mechanism, the prompting start time depends on users' input; once a question is prompted, it is hidden after a period of $t_e - t_s$.

Figure 4. The flowcharts of the active mechanism (a) and the passive mechanism (b). Each question has a prompting start time $t_s$ and an end time $t_e$.
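To make the timing logic concrete, the sketch below models the two mechanisms in Python. It is a minimal sketch under our own naming assumptions (the `Question` fields and function names are hypothetical); only the $t_s$ / $t_e$ timing behavior follows the description above.

```python
# Minimal sketch of the two prompting mechanisms (Section 6). Names are
# hypothetical; only the t_s / t_e timing follows the paper's description.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Question:
    text: str
    t_start: float  # preset prompting start time t_s (seconds into the slide)
    t_end: float    # preset prompting end time t_e

def active_is_visible(q: Question, elapsed: float) -> bool:
    """Active: prompt during browsing, so users multitask with the exhibit."""
    return q.t_start <= elapsed < q.t_end

def passive_prompts(pending: List[Question], leaving: bool) -> List[Tuple[Question, float]]:
    """Passive: never interrupt browsing; when the user tries to leave the
    slide, show each pending question for a duration of t_e - t_s."""
    if not leaving:
        return []
    return [(q, q.t_end - q.t_start) for q in pending]
```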

7. Online Experiment

To see how interactive guiding questions influence audiences’ behaviors, we conducted an online between-subject experiment with 150 participants. In particular, we designed two online exhibitions and equipped each with three versions of the interaction mechanism: baseline, passive, and active. The passive and active versions follow the design in Section 6; the baseline version has no guiding questions.

7.1. Online Exhibition Design

We chose two unfamiliar topics, Chinese ritual bronzes (Wikipedia, 2018b) and Chinese painting (Wikipedia, 2018a), and designed two online exhibitions in Chinese. This unfamiliarity makes engaging audiences more challenging. For the layout, we referred to three typical online exhibitions (Museum, 2018b, a, c) from the Palace Museum. An online exhibition includes a menu bar for navigating to different themes, and under each theme, visitors can navigate a series of slides horizontally.

Figure 5. Typical layout of our online exhibitions: (a) menu bar, (b) text, (c) image asset, (d) navigation arrow, (e) exit, (f) dialog box, (g) navigation button.

With similar complexity in terms of the number of captions and images, we designed the content of the two exhibitions. The image assets and explanatory text were taken from Wikipedia (Wikipedia, 2018a, b) and from Google search results with proper copyrights. The layout of the two exhibitions is identical. As shown in Figure 5, users can navigate the exhibition to different themes through the menu bar (a); for each button, users can hover over it to see the full description. Under each theme there are several slides, and each slide contains a text introduction (b) and an image asset (c). When users click (c), a dialog box (f) pops up with a more detailed introduction of the artifact, and users can view further artifacts through the navigation buttons (g). The slides under a theme can be navigated with the left and right arrows (d). When a guiding question pops up, the position it intends users to click wobbles for a moment to attract attention, such as the arrow (d) in Figure 5. Users can leave the exhibition by clicking (e). We refer to the vertical complexity of an exhibition as the number of themes, the horizontal complexity as the maximal number of slides across themes, and the depth complexity as the maximal number of images contained in an image asset after clicking; the two exhibitions were designed to be comparable along these dimensions.

Context | Question | Category
Painting of Luoshenfu (洛神赋) | Can you imagine how Cao Zhi’s (曹植) poem “Luoshenfu” could be expressed in painting? | Create
Inscription (铭文) | In general, bronzes with inscriptions are more precious. What do you think about inscriptions? | Understand
Ding (鼎) | How did the ancients cook? | Apply
Erlitou culture (二里头文化) | What is Erlitou culture? | Remember

Table 1. Typical questions used in the two exhibitions.

7.2. Question Design

For each slide of the exhibition, we designed one or two questions to guide users to browse along the vertical, horizontal, or depth direction. Each question is placed at a different position to inspire visitors to click: at the image position for the depth direction, at the arrow position for the horizontal direction, and at the menu position for the vertical direction.

Referring to the construction guidelines in Section 5, we wrote questions in different categories according to the slide content. In particular, as suggested by E2 in Section 4, we first extracted the key elements of the current slide, such as items, actions, events, and statements. We selected one interesting element and assigned its question category according to the arguments of the different categories in Section 5. In addition, we prioritized higher level questions in Bloom’s revised taxonomy with detailed background information to inspire thinking. After assigning the category, we chose a suitable template in that category and constructed a question with the key element. After multiple rounds of designing, discussing, and revising, we created 135 questions in total for the two exhibitions across the six categories. For comparing effects in the following studies, we further grouped them into three levels of inspiring thinking: low (remember, understand), middle (apply), and high (analyze, evaluate, create), similar to the classification in (Lord and Baviskar, 2007). Low level questions only ask people to recall or interpret something, which requires the lowest level of thinking. Middle level questions require people to apply one concept to another, which requires a middle level of thinking. High level questions require more complex reasoning, evaluating, or creating, which requires the highest level of thinking. Several typical questions are shown in Table 1.

7.3. Experiment

We conducted a between-subject experiment to minimize learning effects (Lazar et al., 2017) and measured participants’ browsing behavior and responses to the designed online exhibitions. Each participant interacted with one version of the two online exhibitions, and we counterbalanced the order of the two exhibitions. We treated the guidance version as the independent variable and evaluated it in terms of browsing behavior, content comprehension, and exhibition experience. We hypothesize that:

  1. H1. The guiding questions (passive and active) will encourage audiences to browse virtual museums more.

  2. H2. The guiding questions (passive and active) will help audiences comprehend the content better.

Whether having interactive guiding questions improves user experience is hard to predict, and without users’ feedback, the exhibition experience under the two interaction mechanisms is also hard to compare. We explore these through the experiment and further compare the two mechanisms in a qualitative study.

7.3.1. Participants

We recruited participants by sending advertisements through platforms including a Chinese survey platform, WJX (www.wjx.cn), and crowdsourcing communities in QQ (a popular Chinese instant messaging application in which people can create interest groups; im.qq.com). Each participant received 2 CNY for their work; alternatively, we helped them fill out their own questionnaires in exchange for their participation in our experiment. In total, 150 participants took part in the study (86 females; average age 21.8, SD 5.65), with 50 participants per version.

7.3.2. Procedure

We first showed participants an introduction webpage describing the procedure of the overall experiment, followed by a questionnaire with six questions testing their background knowledge of the exhibition content. Each participant then browsed the two online exhibitions; we counterbalanced the order of the two exhibitions and recorded browsing behavior. After each exhibition, participants filled out a questionnaire reporting their browsing experience. At the end of the experiment, participants filled out another questionnaire with 16 questions testing their comprehension of the content. For 32 of the participants recruited through QQ, we also conducted an interview asking for feedback on the exhibition and question experience.

7.4. Analysis and Results

One-way MANOVA shows a statistically significant difference among the three versions in terms of browsing behavior, content comprehension, and exhibition experience (Wilks’ lambda). We summarize the detailed statistical analyses below.

7.4.1. Browsing Behavior: Click

We count the click number of the two exhibitions as $N = \sum_{i}\sum_{j} c_{ij}$, where $c_{ij} = 1$ if users view the page and $c_{ij} = 0$ otherwise; here $i$ is the index of the two exhibitions and $j$ ranges over the horizontal, vertical, and depth positions of the exhibition. The breadth click number is the same sum restricted to horizontal and vertical positions, and the depth click number is the sum restricted to depth positions. To test whether the guiding questions influence users’ browsing behavior, we define the guidance count as $G = \sum_{k} g_k$, where $g_k$ is an indicator: $g_k = 1$ if the next page the user clicks is the page that the $k$-th prompted question indicated, and $g_k = 0$ otherwise. Therefore, if the guidance count of the passive or active version is higher than that of the baseline, the guiding questions influenced users’ browsing behavior.
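The sketch below shows one way these counts could be computed from a click log. It is a minimal sketch under assumed data structures: the `ClickEvent` fields are hypothetical, chosen only to mirror the definitions above, and this is not the authors' logging code.

```python
# Minimal sketch of the click metrics (Section 7.4.1). The ClickEvent fields
# are hypothetical; only the breadth/depth/guidance definitions follow the text.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ClickEvent:
    page: str
    direction: str                  # "horizontal", "vertical", or "depth"
    question_target: Optional[str]  # page a visible question pointed to, if any

def click_metrics(log: List[ClickEvent]) -> dict:
    breadth = sum(e.direction in ("horizontal", "vertical") for e in log)
    depth = sum(e.direction == "depth" for e in log)
    # Guidance count: clicks landing on the page a prompted question indicated.
    guided = sum(e.question_target == e.page
                 for e in log if e.question_target is not None)
    return {"all": len(log), "breadth": breadth, "depth": depth, "guided": guided}
```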

Figure 6. Means and standard errors of click number in terms of all pages, breadth direction, depth direction, and guidance count.

The summarized result is shown in Figure 6. One-way ANOVA shows a significant effect on click number. In addition, Bonferroni post-hoc tests show that the click number of the passive version is significantly higher than that of the baseline version, while the differences between the passive and active versions and between the active and baseline versions are not significant.

There is a significant effect on breadth click number (one-way ANOVA). Bonferroni post-hoc tests show that the passive version is significantly higher than the baseline version and the active version is marginally higher than the baseline version, while the active and passive versions are not significantly different.

Similarly, there is a significant effect on depth click number (one-way ANOVA). Interestingly, Bonferroni post-hoc tests show that the depth click number of the passive version is significantly higher than both the baseline version and the active version, while the active and baseline versions are not significantly different.

The effect on guidance count is also significant (one-way ANOVA). Bonferroni post-hoc tests show that the passive version is significantly higher than the baseline version and the active version is marginally higher than the baseline version, while the active and passive versions are not significantly different.
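For concreteness, the ANOVA and Bonferroni post-hoc procedure used throughout this section could be run as in the following sketch. The per-group click counts are randomly generated placeholders, not our experimental data.

```python
# Minimal sketch of the ANOVA + Bonferroni procedure (Section 7.4). The
# per-group data below is randomly generated placeholder data only.
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
groups = {
    "baseline": rng.poisson(20, 50),  # 50 placeholder click counts per group
    "active":   rng.poisson(23, 50),
    "passive":  rng.poisson(26, 50),
}

# One-way ANOVA across the three versions.
F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.3g}")

# Bonferroni post-hoc: pairwise t-tests at a corrected significance level.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # 0.05 / 3
for a, b in pairs:
    t, p_pair = ttest_ind(groups[a], groups[b])
    verdict = "significant" if p_pair < alpha else "not significant"
    print(f"{a} vs {b}: p = {p_pair:.3g} ({verdict})")
```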

These results indicate that our guiding questions generally encourage users to click and browse more of the online exhibition content, especially with the passive interaction mechanism (H1 partially supported). In addition, it is interesting that the active version does not seem to influence users’ depth clicking behavior, while the passive version influences both depth and breadth clicking. We discuss this further in the qualitative comparison.

low middle high
active 10.0
passive 12.0
Table 2. The summarized ratios of the average click number per question category (low: remember, understand; middle: apply; high: analyze, evaluate, create).

7.4.2. Browsing Behavior: Guidance

We further investigate browsing behavior in terms of question category and interaction mechanism together. Because the number of questions differs across categories and their positions are not consistent, a statistical analysis is difficult; we therefore examine the guidance count qualitatively across question categories and interaction mechanisms. In particular, we calculate the ratio of the average click number per question category as $r = \frac{1}{n}\sum_{u}\sum_{k} g_{uk}$, where $g_{uk} = 1$ if user $u$ views the page indicated by question $k$ in the category and $g_{uk} = 0$ otherwise, and $n$ is the number of questions in the category. We use the low, middle, and high levels defined previously; each level covers a roughly similar number of questions. The result is shown in Table 2.
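A minimal sketch of this ratio computation follows. The input format is our own assumption: one (category, guided clicks) pair per question, where the guided click count is summed over all participants.

```python
# Minimal sketch of the per-level guidance ratio (Section 7.4.2). The input
# format is hypothetical: one (category, guided_clicks) pair per question,
# where guided_clicks counts, over all participants, the clicks that landed
# on the page the question indicated.
from collections import Counter

LEVEL = {"remember": "low", "understand": "low", "apply": "middle",
         "analyze": "high", "evaluate": "high", "create": "high"}

def guidance_ratios(per_question):
    clicks, n_questions = Counter(), Counter()
    for category, guided_clicks in per_question:
        level = LEVEL[category]
        n_questions[level] += 1
        clicks[level] += guided_clicks
    # Average guided click number per question, per thinking level.
    return {level: clicks[level] / n_questions[level] for level in n_questions}

print(guidance_ratios([("apply", 12), ("create", 9), ("remember", 5)]))
```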

It is worth pointing out that different question categories interact differently with the interaction mechanisms. Generally, a higher level has a bigger ratio than a lower level, which shows that questions that inspire people to think encourage them to browse more. However, for the active version, the middle level seems to play a more important role than the high level. We discuss this in detail in Section 8.

7.4.3. Content Comprehension

One-way ANOVA shows no significant effect on the pre-test score, which implies that the samples of the three versions have similar background knowledge of the exhibition content. However, there is a significant effect on the post-test score. In particular, as shown in Figure 7, Bonferroni post-hoc tests show that the score of the passive version is significantly higher than that of the baseline version, while the active version is not significantly different from either the baseline or the passive version, although there is a trend that the active version scores higher than the baseline. This result indicates that the passive mechanism can help general audiences comprehend and recall exhibition content better (H2 partially supported). “I think the guiding questions are useful. Sometimes I just ignore some concepts. But if it prompts a question to remind me, I would then think about the answer and look at the text more carefully.”–P120 (passive version).

Figure 7. Means and standard errors of the post-test score.

7.4.4. Exhibition Experience

We measured the exhibition experience in terms of engagement, rewardingness, satisfaction, and preference for the three versions on a 7-point Likert scale. In particular, we asked participants four questions after each exhibition: 1) engagement: please indicate how engaged you were in the exhibition; 2) rewardingness: please indicate how rewarding browsing the exhibition was, for example, learning new knowledge or deepening your understanding of some concepts; 3) satisfaction: please indicate how satisfied you were with the exhibition; 4) preference: please indicate how much you liked the exhibition.

One-way MANOVA shows no significant effects on the four measurements, which indicates that the guiding questions do not significantly interfere with the exhibition experience. However, there is a trend that the passive version makes people feel more engaged, rewarded, and satisfied, and is more preferred, than the active version.

The non-significant effect may be due to users’ differing preferences. In our interviews, the reported experience differed from individual to individual. For example, for the passive version, although most participants thought the questions inspired them to read more of the exhibits, others treated them as an interruption. “It is not necessary to prompt questions for each slide; it is too annoying. Sometimes I just do not want to read (depth direction).”–P115 (passive version). We investigate this further in the following qualitative comparison section.

8. Qualitative Comparison

From the online experiment, we can see that the active and passive mechanisms influence users’ behaviors differently. In addition, the interaction mechanism seems to correlate with question category (Table 2). To further compare the two designs and obtain insights on interaction mechanism and question category, we conducted a qualitative study with another 16 participants (11 females). The participants were recruited through crowdsourcing communities in QQ groups, following a procedure similar to the previous experiment, including the pre-test. In this study, each participant first browsed the two exhibitions, one with the active version and one with the passive version; the order of the two exhibitions and the two versions was counterbalanced. We then asked participants to compare the two interaction mechanisms and conducted a semi-structured interview with each participant to collect feedback. The reported pros and cons of the two versions are summarized in Table 3.

Version | Pros | Cons
active | easy to interpret, attention grabbing | interruptive
passive | leaves time for browsing | non-intuitive, requires extra interaction (perceived)

Table 3. Summarized reasons for the different preferences regarding the two interaction mechanisms.

8.1. Analysis of Interaction Mechanism

8.1.1. Interruption

On many occasions, participants want a period of time to read the current content they are interested in. Therefore, consistent with our quantitative analyses in the previous study, most participants felt the passive version was better. For example, some participants described the guiding questions in the active version as an “interruption”. “I think prompting questions actively is too straightforward, and it interrupts me while reading. The second version (passive) is better; I can look at the content longer.”–P3.

8.1.2. Input

Multitasking between watching exhibits and guiding questions is more convenient for users who want to see the questions. For example, some participants treated the questions as important clues for understanding the exhibition and interpreted the active version as “intuitive” and “attention grabbing”. “I like the first one (active version). The second one is too complex, not intuitive. The first one is better; it is more common, and shows me guidance more directly.”–P5. In contrast, the passive version triggers guiding questions when users try to leave the current exhibit, which is perceived as requiring “extra operations”.

Prompting questions while users watch the current exhibit may interrupt them, which leads to poorer comprehension (Figure 7). This aligns with previous work showing that multitasking lowers students’ reading performance (Bowman et al., 2010). However, because it is difficult to determine when users finish the current exhibit, extra operations may be needed to see the guiding questions, as in the passive version, which can annoy users. Balancing guidance and interruption is thus critical when designing interaction mechanisms for guiding questions.

8.2. Analysis of Question Category

8.2.1. Align with Audiences’ Understanding

As mentioned in Section 7, it is interesting that in the active version, middle level questions (apply) encourage audiences to browse more than high level ones (analyze, evaluate, create; Table 2). From users’ feedback, one possible reason is that questions in this category happen to align with audiences’ own understanding. “I like the second one (active version), maybe because sometimes it prompts a question that happens to be the one I want to ask. And when it appears at a proper time, it inspires me to read deeply.”–P10. Because questions in this category require lower level thinking than high level questions, the interruption brought by the active interaction is not severe, i.e., people have time to digest the questions.

8.2.2. Inspiring Thinking

For the passive version, where people have more time to read the content, higher level questions play a more important role, as shown in Table 2. In other words, when people have time to digest the content, it is more important to inspire them to think. This may also explain the phenomenon in Section 7 that the passive version has a much bigger depth click number than the others.

The result confirms that the curiosity theory from psychology (Litman, 2005; Loewenstein, 1994) for engaging audiences applies in online settings, which is consistent with previous work in physical museums (Roberts et al., 2018). But designers should handle the relationship between curiosity and attention with more care.

9. Discussion

In this section, we discuss several design considerations derived from our study and limitations of this work.

9.1. Design Consideration

9.1.1. Use Interesting Language to Illustrate Content

Making the content of virtual museums easy to understand is not enough; it is better to use interesting language to attract audiences. In our online exhibition design, we used text descriptions from Wikipedia, which are usually considered easy to understand. However, participants in the online experiment still complained that the content was “dull” and “not interesting”. As suggested by E2, creating exhibits from children’s perspective could be a more appropriate way to illustrate cultural content for public education.

9.1.2. Guide Audiences When Necessary

To change audiences’ behaviors, interfering with them while they watch exhibits is unavoidable. Interestingly, our online experiment shows that the two interaction mechanisms do not significantly influence participants’ experience. Therefore, to encourage browsing and improve content comprehension, it seems safe to use necessary interference to guide audiences.

9.1.3. Maintain the Freshness of Interaction

Although guiding questions do not significantly harm the overall experience, popping up questions in a uniform way for all slides can still annoy users. For instance, in our qualitative comparison study, participants described “always popping out questions” as “tedious” and “not necessary”. To maintain the freshness of guiding questions, it may help to use different interaction mechanisms and switch between them.

9.2. Limitation

Our work has several limitations. First, we only used Chinese cultural artifacts as an example and conducted our experiment in China; generalizing the principles of question construction and interaction mechanism requires more systematic studies. Second, the start and display times of questions in the active mechanism are fixed; if we could measure audiences’ engagement level (Andujar and Gilbert, 2013; Sun et al., 2017) and prompt questions accordingly, a better experience could be expected. Third, constructing guiding questions still requires manual work; using the summarized guidelines to build a larger dataset and train question generation models (Du and Cardie, 2017; Du et al., 2017b; Lee et al., 2018) could enable deploying our method at a larger scale (Seales et al., 2013). Fourth, our target audiences are ordinary people; for special groups such as children, the elderly, and special interest visitors (Skov and Ingwersen, 2014), more work is needed.

10. Conclusion

We conducted a series of studies to understand how to interactively prompt guiding questions to engage audiences in virtual museums, using Chinese cultural artifacts as a case to examine our approach. We derived guidelines on how to construct questions that inspire different levels of thinking. Through an online experiment and a qualitative comparison study, we obtained insights about the influence of question category and interaction mechanism. Future work includes automatic question construction and deploying and testing the method more broadly.

References

  • Academy (2018b) Dunhuang Academy. July 11 - October 22, 2018b. Exhibition: Digital Dunhuang - Tales of Heaven and Earth (Jointly Organised by Hong Kong Heritage Museum and the Dunhuang Academy). Hong Kong. http://www.info.gov.hk/gia/general/201807/10/P2018070900434.htm?fontSize=1
  • Academy (2018a) Dunhuang Academy. Retrieved September 4, 2018a. Dunhuang Academy Website. http://en.dha.ac.cn
  • Anderson et al. (2001) Lorin W Anderson, David R Krathwohl, Peter W Airasian, Kathleen A Cruikshank, Richard E Mayer, Paul R Pintrich, James Raths, and Merlin C Wittrock. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. White Plains, NY: Longman (2001).
  • Andujar and Gilbert (2013) Marvin Andujar and Juan E. Gilbert. 2013. Let’s Learn!: Enhancing User’s Engagement Levels Through Passive Brain-computer Interfaces. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’13). ACM, New York, NY, USA, 703–708. https://doi.org/10.1145/2468356.2468480
  • Bonis et al. (2009) Bill Bonis, John Stamos, Spyros Vosinakis, Ioannis Andreou, and Themis Panayiotopoulos. 2009. A Platform for Virtual Museums with Personalized Content. Multimedia tools and applications 42, 2 (2009), 139–159.
  • Bowman et al. (2010) Laura L. Bowman, Laura E. Levine, Bradley M. Waite, and Michael Gendron. 2010. Can Students Really Multitask? An Experimental Study of Instant Messaging While Reading. Computers & Education 54, 4 (2010), 927 – 931. https://doi.org/10.1016/j.compedu.2009.09.024
  • Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using Thematic Analysis in Psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.
  • Bruns et al. (2007) E. Bruns, B. Brombach, T. Zeidler, and O. Bimber. 2007. Enabling Mobile Phones To Support Large-Scale Museum Guidance. IEEE MultiMedia 14, 2 (April 2007), 16–25. https://doi.org/10.1109/MMUL.2007.33
  • Carrozzino et al. (2018) Marcello Carrozzino, Marianna Colombo, Franco Tecchia, Chiara Evangelista, and Massimo Bergamasco. 2018. Comparing Different Storytelling Approaches for Virtual Guides in Digital Immersive Museums. In International Conference on Augmented Reality, Virtual Reality and Computer Graphics. Springer, 292–302.
  • Cheng et al. (2017) Alan Cheng, Lei Yang, and Erik Andersen. 2017. Teaching Language and Culture with a Virtual Reality Game. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 541–549. https://doi.org/10.1145/3025453.3025857
  • Cohen (1976) Ruth Cohen. 1976. Learning to Ask Questions. ERIC (1976).
  • Du and Cardie (2017) Xinya Du and Claire Cardie. 2017. Identifying Where to Focus in Reading Comprehension for Neural Question Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2067–2073.
  • Du et al. (2017a) Xinya Du, Junru Shao, and Claire Cardie. 2017a. Learning to Ask: Neural Question Generation for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 1342–1352. https://doi.org/10.18653/v1/P17-1123
  • Du et al. (2017b) Xinya Du, Junru Shao, and Claire Cardie. 2017b. Learning to Ask: Neural Question Generation for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 1342–1352. https://doi.org/10.18653/v1/P17-1123
  • Falk et al. (1998) John H Falk, Theano Moussouri, and Douglas Coulson. 1998. The Effect of Visitors’ Agendas on Museum Learning. Curator: The Museum Journal 41, 2 (1998), 107–120.
  • Frank and Goodman (2012) Michael C Frank and Noah D Goodman. 2012. Predicting Pragmatic Reasoning in Language Games. Science 336, 6084 (2012), 998–998.
  • Hawkey (2004) Roy Hawkey. 2004. Learning with Digital Technologies in Museums, Science Centres and Galleries. https://telearn.archives-ouvertes.fr/hal-00190496 A NESTA Futurelab Research report - report 9.
  • Hawkins et al. (2015) Robert X. D. Hawkins, Andreas Stuhlmüller, Judith Degen, and Noah D. Goodman. 2015. Why Do You Ask? Good Questions Provoke Informative Answers. In CogSci.
  • Kiourt et al. (2016) Chairi Kiourt, Anestis Koutsoudis, and George Pavlidis. 2016. DynaMus: A Fully Dynamic 3D Virtual Museum Framework. Journal of Cultural Heritage 22 (2016), 984 – 991. https://doi.org/10.1016/j.culher.2016.06.007
  • Lazar et al. (2017) Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. 2017. Research Methods in Human-Computer Interaction. Morgan Kaufmann.
  • Lee et al. (2018) C. Lee, T. Chen, L. Chen, P. Yang, and R. T. Tsai. 2018. Automatic Question Generation from Children’s Stories for Companion Chatbot. In 2018 IEEE International Conference on Information Reuse and Integration (IRI). 491–494. https://doi.org/10.1109/IRI.2018.00078
  • Li (2008) Wenchang Li. 2008. 发展中的中国数字化博物馆[The Developing Virtual Museums in China]. International Museum 1 (2008), 61–69.
  • Litman (2005) Jordan Litman. 2005. Curiosity and the Pleasures of Learning: Wanting and Liking New Information. Cognition & Emotion 19, 6 (2005), 793–814.
  • Loewenstein (1994) George Loewenstein. 1994. The Psychology of Curiosity: A Review and Reinterpretation. Psychological Bulletin 116, 1 (1994), 75.
  • Lord and Baviskar (2007) Thomas Lord and Sandhya Baviskar. 2007. Moving Students from Information Recitation to Information Understanding-Exploiting Bloom’s Taxonomy in Creating Science Questions. Journal of College Science Teaching 36, 5 (2007), 40.
  • McTavish (2006) Lianne McTavish. 2006. Visiting the Virtual Museum: Art and Experience Online. New Museum Theory and Practice: An Introduction (2006), 226–246.
  • Muntean et al. (2017) Reese Muntean, Alissa N. Antle, Brendan Matkin, Kate Hennessy, Susan Rowley, and Jordan Wilson. 2017. Designing Cultural Values into Interaction. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 6062–6074. https://doi.org/10.1145/3025453.3025908
  • Museum (2018a) The Palace Museum. Retrieved September 10, 2018a. 卡塔尔阿勒萨尼收藏展[Treasures from The Al Thani Collection]. www.dpm.org.cn/subject_althani/thealthanicollection.html
  • Museum (2018b) The Palace Museum. Retrieved September 10, 2018b. 故宫博物院藏傅抱石作品展[Online Exhibition of Baoshi Fu]. www.dpm.org.cn/topic/fubaoshi_welcome.html
  • Museum (2018c) The Palace Museum. Retrieved September 10, 2018c. 翡红翠绿在线展览[Online Exhibition of Jadeite]. www.dpm.org.cn/topic/feihongcuilv.html
  • Museum (2018d) Virtual Multimedia Museum. Retrieved August 22, 2018d. The ViMM Definition of A Virtual Museum. https://www.vi-mm.eu/2018/01/10/the-vimm-definition-of-a-virtual-museum
  • Novati et al. (2005) Gianluca Novati, Paolo Pellegri, and Raimondo Schettini. 2005. An Affordable Multispectral Imaging System for the Digital Museum. International Journal on Digital Libraries 5, 3 (2005), 167–178.
  • of Museums (2008) International Council of Museums. 2008. ICOM Definition of a Museum. http://archives.icom.museum/definition.html
  • of Natural History (2018) Harvard Museums of Natural History. Retrieved August 28, 2018. Engaging Museum Visitors: Casual Conversations Through Asking Questions. In Online content. https://www.nemanet.org/files/4813/8552/9230/Neurodiversity_1.pdf
  • of UCD Teaching and at University College Dublin (2016) Open Educational Resources of UCD Teaching and Learning at University College Dublin. 2016. How to Ask Questions that Prompt Critical Thinking. http://www.ucdoer.ie/index.php/How_to_Ask_Questions_that_Prompt_Critical_Thinking
  • Quora (2018) Quora. Retrieved September 10, 2018. Quora-A Place to Share Knowledge and Better Understand the World. www.quora.com
  • Rayward and Twidale (1999) W. Boyd Rayward and Michael B. Twidale. 1999. From Docent to Cyberdocent: Education and Guidance in the Virtual Museum. Archives and Museum Informatics 13, 1 (01 Mar 1999), 23–53. https://doi.org/10.1023/A:1009089906902
  • Roberts et al. (2018) Jessica Roberts, Amartya Banerjee, Annette Hong, Steven McGee, Michael Horn, and Matt Matcuk. 2018. Digital Exhibit Labels in Museums: Promoting Visitor Engagement with Cultural Artifacts. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 623, 12 pages. https://doi.org/10.1145/3173574.3174197
  • Romero Jr (2014) Aldemaro Romero Jr. 2014. Scholar Explains Recent Museum Boom in China. The Edwardsville Intelligencer (2014), 3.
  • Rothe et al. (2017) Anselm Rothe, Brenden M Lake, and Todd Gureckis. 2017. Question Asking as Program Generation. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 1046–1055.
  • Rusu et al. (2017) Cristian Rusu, Virginia Zaraza Rusu, Patricia Muñoz, Virginica Rusu, Silvana Roncagliolo, and Daniela Quiñones. 2017. On User eXperience in Virtual Museums. In Social Computing and Social Media. Human Behavior, Gabriele Meiselwitz (Ed.). Springer International Publishing, Cham, 127–136.
  • Schweibenz (1998) Werner Schweibenz. 1998. The “Virtual Museum”: New Perspectives For Museums to Present Objects and Information Using the Internet as a Knowledge Base and Communication System. In ISI.
  • Seales et al. (2013) W. B. Seales, S. Crossan, M. Yoshitake, and S. Girgin. 2013. From Assets to Stories via the Google Cultural Institute Platform. In 2013 IEEE International Conference on Big Data. 71–76. https://doi.org/10.1109/BigData.2013.6691673
  • Shan (2018) Jixiang Shan. Retrieved July 20, 2018. The Countenance of Public Cultural Facilities - The Palace Museum as an Example. In The “Palace Museum Academy” Talk Series (recorded video accessible at https://www.youtube.com/watch?reload=9&v=Fo4jO3XmIXg&feature=youtu.be). Hong Kong. http://www.info.gov.hk/gia/general/201703/20/P2017031700754.htm
  • Skov and Ingwersen (2014) Mette Skov and Peter Ingwersen. 2014. Museum Web Search Behavior of Special Interest Visitors. Library & Information Science Research 36, 2 (2014), 91–98.
  • Sun (2013) Jing Sun. 2013. From “Telling” to “Engaging”: A Brief Study of the Educational Role of Museum in China. Procedia - Social and Behavioral Sciences 106 (2013), 1242 – 1250. https://doi.org/10.1016/j.sbspro.2013.12.139 4th International Conference on New Horizons in Education.
  • Sun et al. (2017) Mingfei Sun, Zhenjie Zhao, and Xiaojuan Ma. 2017. Sensing and Handling Engagement Dynamics in Human-Robot Interaction Involving Peripheral Computing Devices. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 556–567. https://doi.org/10.1145/3025453.3025469
  • Swartout et al. (2010) William Swartout, David Traum, Ron Artstein, Dan Noren, Paul Debevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth Narayanan, Diane Piepol, Chad Lane, Jacquelyn Morie, Priti Aggarwal, Matt Liewer, Jen-Yuan Chiang, Jillian Gerten, Selina Chu, and Kyle White. 2010. Ada and Grace: Toward Realistic and Engaging Virtual Museum Guides. In Intelligent Virtual Agents. Springer Berlin Heidelberg, Berlin, Heidelberg, 286–300.
  • Wikipedia (2018a) Wikipedia. Retrieved August 2, 2018a. 中国画[Chinese Painting]. https://zh.wikipedia.org/wiki/%E4%B8%AD%E5%9B%BD%E7%94%BB
  • Wikipedia (2018b) Wikipedia. Retrieved August 2, 2018b. 中国青铜器[Chinese Ritual Bronzes]. https://zh.wikipedia.org/wiki/%E4%B8%AD%E5%9B%BD%E9%9D%92%E9%93%9C%E5%99%A8
  • Zhihu (2018) Zhihu. Retrieved September 10, 2018. 知乎[Zhihu-A Chinese Question and Answer Website]. www.zhihu.com