How linguistic descriptions of data can help the teaching-learning process in higher education. Case study: artificial intelligence

11/27/2017 ∙ by Clemente Rubio-Manzano, et al. ∙ Universidad del Bío-Bío

Artificial intelligence is a central topic in the computer science curriculum. Since 2011, a project-based learning methodology based on computer games has been designed and implemented in the artificial intelligence course at the University of the Bío-Bío. The project aims to develop software-controlled agents (bots) programmed with the heuristic algorithms seen during the course. This methodology allows us to obtain good learning results; however, several challenges have been found during its implementation. In this paper we show how linguistic descriptions of data can help to provide students and teachers with technical and personalized feedback about the algorithms learned. An algorithm behavior profile and a new Turing test for computer game bots, both based on linguistic modelling of complex phenomena, are also proposed in order to deal with these challenges. To show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors; its incorporation into the assessment process allows us to improve the teaching-learning process.



1 Introduction

Feedback is an indispensable component of an effective teaching and learning environment in education HattieTimperley2007 ; NicolMacfarlaneDick2006 ; SarsarHarmon2017 ; VandeKleij2012 . Additionally, personalization makes it possible to deliver the feedback that is most appropriate for each user's expertise, cognitive abilities, current mood and attentiveness Hamilton2009 . However, providing students with personalized and immediate feedback is a complex task, and because of the large number of students it is usually a standardized process (every student receives the same feedback, e.g., knowledge of the correct response) Hamilton2009 . In the teaching of writing, for example, truly immediate feedback is impractical Lavolette2015 .

A possible solution to this limitation is to employ computer-assisted assessment (CAA). CAA is a longstanding problem that has attracted interest from the research community since the sixties and has not been fully solved yet Perez2005 . Its main aim is to study how the computer can help in the evaluation of students' learning process Marin2004 . The literature reports several advantages:

  • It provides educators with didactic advantages VandeKleij2012 .

  • It provides students with immediate information in a timely manner and it is particularly useful when the number of students is high and resources are scarce Hamilton2009 .

  • It is a quick way of providing feedback and it reduces the teacher's workload Lavolette2015 .

  • It can be personalized, hence allowing the assessment process to be enhanced from both the teacher's and the students' points of view.

Automatic assessment methods can be grouped into five main categories: statistical, natural language processing (NLP), information extraction (IE), clustering and integrated approaches NoorbehbahaniKardan2011 . Several examples of successful applications can be found in the literature:

  • Automatic assessment of summaries for intelligent tutoring systems Perez2005 ;

  • Automatic generation of formative feedback in the university classroom, using concept maps to scaffold students' writing Lachner2017 ;

  • A framework to provide students with feedback on algebra homework in middle-school classrooms Fyfe2016 ;

  • Automatic test-based assessment of programming Douce2005 ;

  • Automatic assessment of free text answers using a modified BLEU algorithm NoorbehbahaniKardan2011 .

  • Feedback for serious computer games to provide learners with useful and immediate information about the player’s performance Burgos2009 .

The use of CAA in an undergraduate artificial intelligence course can be very beneficial when project-based learning is employed as the teaching-learning methodology. An important skill to be acquired by undergraduate students of artificial intelligence courses is a better understanding of the different kinds of heuristic algorithms that exist for implementing computer game bots. In such a project, each student individually 111For us, the projects must be developed individually because team-based learning can have certain limitations when applied to acquiring programming skills; however, this discussion is out of the scope of this paper must design and implement a computer game by programming the artificial intelligence of the various agents (bots) acting in the virtual world. This kind of project can be seen as real computer game-based learning Prensky2007 .

Computer game-based learning 222Note that we do not use computer games for learning: students design and implement a computer game, that is, the computer game is the result; it is not used as a pedagogical resource, which is a type of CAA tool, was selected as a learning strategy because video games are now used as new and powerful platforms for teaching and learning. In fact, the development of video games is currently a very motivating topic for computer science students.

In this context, the classical assessment of a computer game-based learning project consists in checking whether the bots developed by the students are correctly designed and implemented. This process has important flaws:

  1. It is a time-consuming task, mainly due to the excessive time required by the teacher to check each project's functionality. This becomes a serious problem when the number of students is high and there is only one teacher.

  2. It is a complex task, mainly due to the difficulty of evaluating many important details about the implementation, which are usually missed in an execution trace: quantity (memory occupied, iterations performed, data structure used, etc.) and quality (how good the artificial intelligence agent is at capturing coins in the virtual world: is it fast, brave, intelligent?). Additionally, both quantity and quality are difficult to capture due to the nature of computer algorithms: they run very fast, the debugger generates a lot of information that is difficult to understand, and a large amount of data is generated during the execution of a program.

  3. It makes individual project-based learning impossible, or at least very difficult, to carry out.

In the literature, some works have proposed CAA tools for learners and/or teachers based on linguistic descriptions of the data generated during the learning process: automatic textual reporting in learning analytics dashboards Ramos-Soto2017 , feedback reports for students based on several performance factors Gkatzia2013 , and reports describing the learner's rating in a specific learning activity Sanchez2012 .

In RubioTrivino2016 , linguistic descriptions were used to improve player experience in a computer game called YADY (Your Actions Define You). There are remarkable differences with respect to the present work: while in RubioTrivino2016 the feedback aims to improve player experience, here the feedback aims to support the teaching-learning process.

In this paper, we propose a methodology and data-driven software to automatically generate personalized and technical feedback from the data produced during the execution of heuristic algorithms. A combination of three computational techniques is proposed, namely: bot behaviour analysis, computational perception networks and template-based natural language generation. The idea is that each student gets immediate, technical and personalized feedback about the faults committed during the development of the project, and can thus learn about the heuristic algorithms employed for programming computer game bots.

On the other hand, this approach is very beneficial to teachers, since it allows them to:

  1. Save time for evaluating other aspects of the projects, which implies a better understanding of them.

  2. Enhance the classical assessment process by providing students with personalized and technical feedback.

  3. Support individual project-based learning, enabling closer tracking of the projects and the opportunity to focus on students' weak skills and strengthen them.

In order to show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors, following the phases and steps indicated in the methodology and the software specification (see the User's Manual in the Appendix). This portal has been incorporated into the teaching-learning process: each student can now consult the feedback at any time, compare different kinds of algorithms for programming computer game bots, and establish his/her own work plan for learning. Additionally, behavior profiles for computer game bots and human players allow us to compare the quality of the algorithms designed by students by using an adaptation of the Turing test presented in Hingston2009 .

The structure of the paper is as follows. Section 2 introduces several general concepts regarding project-based learning in artificial intelligence and provides a brief review of the state of the art in the disciplines involved. Then, in section 3, a methodology for incorporating linguistic descriptions of data is proposed and applied to the AI projects. Section 4 details the software architecture for providing teachers and students with personalized and technical feedback. Afterwards, section 6 explains the experimentation and evaluation carried out on the students' projects by employing an adaptation of the Turing test. Finally, section 7 provides some concluding remarks.

2 Preliminary Concepts

2.1 Linguistic Descriptions of Data and Natural Language Generation

Linguistic Description of Data (LDD) aims to automatically produce expressions that convey the most relevant information contained (and usually hidden) in data. It uses a number of modelling techniques taken from the soft computing domain (fuzzy sets and relations, linguistic variables, etc.) that are able to adequately manage the inherent imprecision of natural language in the generated texts Ramos-Soto2015b . LDD models and techniques have been used for textual reporting in a number of application domains, such as deforestation analysis Conde-Clemente2017a , big data Conde-Clemente2017b , advice for saving energy at home Conde-Clemente2017c , self-tracking of physical activity Sanchez-Valdes2015 , cosmology Sanchez-Valdes2013 ; ArguellesTrivino2013 , driving simulation environments Esciolaza2013 , air quality index textual forecasts Ramos-Soto2015a , and weather forecasts Ramos-Soto2013 .

Natural language generation (NLG) is a subfield of Artificial Intelligence (AI) which allows us to produce language as output on the basis of input data. NLG models and techniques have been applied for textual reporting in various domains, such as meteorological data Golberg1994 ; Coch1998 , care data Portet2009 , project management WhiteCaldwell1998 , and air quality BusemannHoracek1997 .

2.2 Restricted Equivalence Functions

A restricted equivalence function (REF) BBP06 is a function which allows us to establish a similarity between the elements of a domain. A REF can be formally defined as follows:

Definition 2.1

A REF, F: [0,1] x [0,1] → [0,1], is a mapping which satisfies the following conditions:

  1. F(x,y) = F(y,x) for all x,y ∈ [0,1]

  2. F(x,y) = 1 if and only if x = y

  3. F(x,y) = 0 if and only if x = 1 and y = 0, or x = 0 and y = 1

  4. F(x,y) = F(c(x),c(y)) for all x,y ∈ [0,1], c being a strong negation.

  5. For all x,y,z ∈ [0,1], if x ≤ y ≤ z, then F(x,z) ≤ F(x,y) and F(x,z) ≤ F(y,z)

For example, F(x,y) = 1 - |x - y| satisfies conditions (1)-(5) with c(x) = 1 - x for all x ∈ [0,1]. A similarity measure based on REFs between linguistic terms has recently been proposed in order to enhance the inference engine of Bousi Prolog Rubio2017 .
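The example above can be sketched in Java (class and method names are ours); the function below is the REF F(x,y) = 1 - |x - y| together with the strong negation c(x) = 1 - x used in condition (4):

```java
// Sketch of the example REF F(x, y) = 1 - |x - y| on [0, 1], together with
// the strong negation c(x) = 1 - x. Class and method names are our own.
public class Ref {
    // Restricted equivalence function: similarity of two degrees in [0, 1].
    public static double f(double x, double y) {
        return 1.0 - Math.abs(x - y);
    }

    // Strong negation c(x) = 1 - x.
    public static double neg(double x) {
        return 1.0 - x;
    }
}
```

With this REF, two membership degrees are maximally similar (value 1) exactly when they coincide, and minimally similar (value 0) only at the opposite ends of [0,1].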

2.3 Project-based learning in Artificial Intelligence

Since 2011, a project-based learning methodology based on computer games has been applied in the artificial intelligence course at the University of the Bío-Bío. This methodology aims to provide students with a better understanding of the heuristic algorithms which can be employed in real-world applications. To this end, the project aims to develop a computer game in which the bots' ability should be like that of human players, the programming skills and the ability to incorporate the algorithms into the computer game being very important competencies to be achieved as well.

In 2017 the project consisted in the development of a set of computer game bots, to be designed and implemented using the Java programming language. A computer game bot aims to remain inside a cell-based scenario for as long as possible. The student must take into account that a bot loses energy with each movement performed (1 point of energy every five seconds). Three opponent bots (also programmed by the students) will try to steal its energy, and a set of rewards providing bots with additional energy will be distributed over the scenario.

Figure 1: Project-based learning employed in the course of Artificial Intelligence at the University of the Bío-Bío

The first competence to be acquired by students in this project is the well-known algorithmic thinking competence, a pool of abilities for the construction, analysis, specification and understanding of algorithms for solving a given problem. Additionally, improving and adapting algorithms for given problems is an important ability to be acquired as well. In particular, the learning methodology employed in the artificial intelligence course at the University of the Bío-Bío is as follows (see Figure 1):

  1. A theoretical explanation of the heuristic algorithms is provided to students so that they can get a better understanding of them from a theoretical point of view.

  2. The algorithms must be implemented by the students using a particular programming language (currently Java) so that programming skills are acquired.

  3. Understanding behavior of the algorithm in a real-life context. The implementation performed by the student is incorporated into the computer game.

  4. Evaluation of the bots created by the students is performed by checking whether the bots act in a similar way to expert human players.

We are going to pay attention to the third and fourth items, because an important question here is how a student of an artificial intelligence course can be sure that the bots he/she designed and implemented work correctly when incorporated into the computer game. An informal way is to observe the bot and check that it performs all the functionalities.

A limitation of this process is that some details about the design and the implementation could be lost due to the large amount of data generated during the execution of the algorithm. This makes it difficult to get an optimal assessment of the projects, turning the pedagogical monitoring of the students into a complex task.

In order to address this flaw, the concept of "algorithm behavior profile" is proposed. This profile is formed by the linguistic descriptions automatically generated by analyzing the data produced during the algorithm's execution; "computer game bot behavior" is then defined as the behavior of a bot implemented by using heuristic algorithms.

Then, as LDD allows us to automatically generate a human player behavior profile RubioTrivino2016 , an algorithm behavior profile can be obtained in the same way. Using this idea, students can check whether the designed computer game bot has a behavior similar to that of the human player. In order to formally define this comparison, a Turing test based on LDD and REFs for comparing profiles is proposed and explained in detail in section 3.3.

3 Methodology for incorporating linguistic description of data in AI projects

Figure 2: General methodology for automatically generating personalized feedback by using linguistic reports

This methodology aims to provide us with a guide for creating a data-to-text system which transforms data into linguistic descriptions of the behavior of computer bots implemented by using heuristic algorithms. First, the data generated by the movements of the computer game bots are grouped into metrics. The values captured for each metric are clustered into linguistic terms. Next, a set of if-then rules is generated by aggregating these linguistic terms. The final result is an automatically generated bot behavior profile report.

The proposed methodology is formed by three phases: Bot’s Behaviour Analysis, Linguistic Descriptions of Data and Evaluation (see Figure 2).

3.1 Phase 1. Bots’s Behaviour Analysis

The aim of this phase is to analyze the actions performed by the intelligent agents in order to obtain a set of behavior patterns. To do this, the following steps can be followed:

  1. Selection of entities, attributes and interactions. The main features of the entities acting in the virtual world must be selected in order to capture useful information about which entities must be taken into account in the assessment process. To this end, entities, interactions and attributes must be identified and well established.

    In our case, four entities have been identified: agent, opponents, rewards and obstacles. The agent has three important attributes: position (x and y), energy, and the time employed to capture each reward. The rest of the entities and their attributes are shown in Figure 3.

    Entity (count) Attributes Interactions and their effects
    Agent (1) Position (x, y), Energy, Time Opponents (steal energy), Obstacles (protection), Rewards (gain energy)
    Opponent (3) Position Obstacles, Rewards
    Reward (4) Position Obstacles, Rewards
    Obstacle (M) Position Obstacles, Rewards
    Figure 3: Entities, attributes and interactions. The number in parentheses indicates how many instances of each entity appear in the scenario: one agent, three opponents, four rewards and M obstacles.
  2. Definition of metrics (quantity and quality). From the entities selected in the previous step, a set of metrics can be defined in order to analyze their behavior. We distinguish between quantity metrics and quality metrics. Quantity metrics provide us with information about the performance of the algorithms (memory occupied and iterations performed). Quality metrics, on the other hand, provide us with information about the behavior of the heuristic algorithms, that is, how good an algorithm is for implementing a computer game bot which should act like a human player. These metrics are defined from the entities and attributes identified in the previous item. Figure 4 shows these metrics and the corresponding descriptions.

    Metric Description
    Protection Number of obstacles between the agent and another entity; a rectangular area is created from the positions of the agent and that entity
    Distance Distance between two entities, computed from their positions
    Energy Energy of the player at an instant of time during the play session
    Time Time registered from the start of the play session to its end
    Reward captured True or false depending on whether a reward was captured at this instant of time
    Iterations Number of iterations performed in the execution of the heuristic algorithm (executed at each move)
    Memory Amount of memory required for the execution of the heuristic algorithm (executed at each move)
    Figure 4: Metrics defined for analyzing the behavior of the heuristic algorithms
  3. Definition of a computational procedure to capture numerical data. We design and implement a procedure that captures the data and assigns the values recorded during a play session to the metrics defined in the previous task. A simple data-capturing algorithm can be used to put the data into a structure that allows us to handle them efficiently.

    In our case, traces of execution (Figure 6) have been employed as the computational procedure for capturing and storing data. Trace recording, or tracing, is a commonly used technique in debugging and performance analysis. Concretely, trace recording implies the detection and storage of relevant events during run-time, for later off-line analysis. We use a trace recording which stores the metrics defined in the previous item. The result is stored in a text file containing the values for each metric defined in Figure 4. A set of execution traces can be found on the web portal (see the User's Manual in the Appendix).

  4. Detection of behaviour patterns. Basic behaviour patterns can be established on the captured input data. An agent behavior pattern is associated with actions, that is, when a set of actions is produced, a set of effects is triggered; e.g., when the opponent is close to the player, the player moves away from it, so player and opponent are related and could provide us with an interesting behaviour pattern. Note that patterns are related to the metrics defined in the previous item and should be created from them.

Behavior Pattern Description of the actions Metrics related
Attitude How the agent acts with respect to the reward; the distance between opponent and reward must be evaluated Distance
Situation How the agent acts with respect to the opponent; the energy and the protection must be evaluated Protection and Distance
Kind of move The result of a movement; the distances between the agent and the reward and the opponents must be evaluated Distance and Energy
Performance The performance of each movement; time and memory must be measured Memory and Iterations
Figure 5: Behavior patterns created from the metrics defined in Figure 4

This phase provides us with a complete set of behavior patterns derived from the actions performed by entities in the virtual world (see Figure 5). It has been designed and implemented by using a computational perception network (see Figure 8).

Figure 6: Trace of execution created from the values captured during the execution of the algorithms
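As an illustration of how such a trace file could be consumed, the sketch below reads one comma-separated trace row and keeps only its numeric fields, dropping non-numeric tokens such as booleans and key presses (this minimal parser is our own assumption; the actual trace layout is documented on the web portal):

```java
import java.util.Arrays;

// Minimal sketch (our own assumption) of a trace-line reader for phase 1:
// it keeps the numeric fields of one comma-separated trace row and drops
// non-numeric tokens such as booleans ("false") and key presses ("J").
public class TraceParser {
    public static double[] parseNumeric(String line) {
        return Arrays.stream(line.split(","))
                .map(String::trim)
                .filter(tok -> tok.matches("-?\\d+(\\.\\d+)?"))
                .mapToDouble(Double::parseDouble)
                .toArray();
    }
}
```

The resulting array can then be mapped onto the metrics of Figure 4 before the linguistic processing of phase 2.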

3.2 Phase 2. Linguistic Descriptions of Data in a 2D virtual world for automatically generating behavior profiles

The aim of this phase is to establish a cognitive model from the previously identified behavior patterns and to generate linguistic descriptions from it. To do this, the following steps can be followed:

  1. Selection of the behavior patterns to be studied. The behavior patterns defined in the previous module are analyzed and selected according to our particular interest in them, that is, according to which behavior patterns are important for creating the "algorithm behavior profile". For example, a particular sequence of movements is not relevant for us, but the reason for performing it is really important.

  2. Modeling of the selected behavior patterns. A cognitive model for treating the patterns computationally is established. Taxonomies, ontologies, linguistic terms, if-then rules or a combination of them could be used in this item without giving details about the implementation.

  3. Implementation of the behavior model. The aim of this task is to implement a computational solution for representing the behavior model of our problem. Details about the implementation should be given here. For example, an ad-hoc implementation could be employed, or a package for automatically generating linguistic descriptions could be used Conde-Clemente2017d ; it will depend on the kind of application to be developed. In our case, the PHP programming language has been employed because our objective was the development of a web platform and, in this context, PHP is a good alternative.

  4. Linguistic description generator. A linguistic description generator is designed and implemented, providing us with textual messages from the execution of the computational perception network. In our case, linguistic summaries of data based on fuzzy quantifiers have been employed, which allow us to summarize the data generated in the process.

This phase provides us with the behavior profile, a graphical and textual report describing the most relevant information about the behavior of the computer game bot during the execution of the algorithms used for its implementation.
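As a minimal, hypothetical sketch of the template-based generation in step 4 (the template wording below is our own, not the system's), a report sentence can be filled in from a quantifier, a behavior pattern and a linguistic value:

```java
// Hypothetical template-based sketch of one report sentence for a
// computational perception; the wording is our own assumption.
public class ReportTemplates {
    public static String sentence(String quantifier, String pattern, String value) {
        return String.format("In %s movements the bot's %s was %s.",
                quantifier, pattern, value);
    }
}
```

For example, the pair (dangerous, many) for the Situation perception would yield a sentence such as "In many movements the bot's situation was dangerous."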

3.3 Evaluation: A Turing Test for Computer Game Bots based on LDD and REFs

The analysis of algorithms for computer games is quite different from that in other AI topics: while in the classical approach the aim is to simulate near-optimal intelligent behaviour, in computer games the aim is to provide interesting opponents for human players, not optimal ones Soni2008 .

As we mentioned, the key to the evaluation in our artificial intelligence course is to develop computer game bots whose ability is similar to that shown by a human player.

How could this be formally evaluated? In Hingston2009 a variation of the Turing test was proposed, designed to test the ability of computer game bots to impersonate a human player. The Turing test for (computer game) bots is as follows: "Suppose we are playing an interactive video game with some entity. Could you tell, solely from the conduct of the game, whether the other entity was a human player or a bot? If not, then the bot is deemed to have passed the test".

This kind of Turing test is adapted to our methodology by using LDD and REFs. The aim of this new Turing test is to establish a formal and effective method of comparison between the human behavior profile generated for an expert human player and the bot behavior profiles generated by computer game bots implemented with heuristic algorithms. A heuristic algorithm is therefore near-behavioral (for us, an "optimal" algorithm) when its associated profile is similar to the expert human profile. This novel assessment process for heuristic algorithms will be detailed in section 5.
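A minimal sketch of how this profile comparison could work, assuming each profile assigns a degree in [0,1] to the same linguistic labels and the REF F(x,y) = 1 - |x - y| from section 2.2 is averaged over the labels (the aggregation by averaging is our assumption, not the paper's exact procedure):

```java
import java.util.Map;

// Hedged sketch of the REF-based profile comparison: both profiles map
// linguistic labels (e.g. "brave", "cautious") to degrees in [0, 1]; the
// similarity is the average of F(x, y) = 1 - |x - y| over the bot's labels.
// Averaging as the aggregation operator is our own assumption.
public class ProfileComparator {
    public static double similarity(Map<String, Double> bot, Map<String, Double> human) {
        if (bot.isEmpty()) return 0.0;
        double sum = 0.0;
        for (Map.Entry<String, Double> e : bot.entrySet()) {
            double h = human.getOrDefault(e.getKey(), 0.0);
            sum += 1.0 - Math.abs(e.getValue() - h);  // REF per label
        }
        return sum / bot.size();
    }
}
```

A bot whose similarity to the expert profile exceeds a chosen threshold would then be deemed to pass the test.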

4 A data-driven software architecture based on Linguistic Modelling of Complex Phenomena

The proposed software architecture is formed by four modules, which implement the phases and steps described in the methodology detailed above:

  1. Tracing module

  2. Computational Perception Network module

  3. Behavior Profile Report Generation module

  4. Evaluation module

4.1 Tracing module

The tracing module implements the functionality explained in phase 1, once data have been selected, captured and stored. The output of this module is a file containing the data captured during the execution of the algorithms implemented in the project (see the web portal for more details).

4.2 Computational Perception Network module

A computational perception network implements the functionality explained in phase 2. We use here the concept of a Declarative Computational Perception (DCP) network, which is inspired by the definition of CP network proposed in TrivinoSugeno2013 and allows us to model the problem in a declarative way. A declarative CP can be recursively defined as follows:

  • Base case. A CP = (A, Z), where A = (a1, ..., an) is a vector of linguistic expressions that represents the whole linguistic domain of the CP, and Z = (z1, ..., zm) is a set of real numbers from a particular domain; the value of each ai is calculated by aggregating one or several elements of Z.

  • Inductive case. A CP = (A, (CP1, ..., CPm)), where A = (a1, ..., an) is a vector of linguistic expressions whose values are calculated by aggregating, for each ai, one or several of the sub-perceptions CPj.

Note that the base case is produced when a CP is defined in terms of a set of real numbers belonging to a particular domain; i.e., a 1CP.

The recursive case is produced when a CP is defined in terms of linguistic terms from a set of CPs; i.e., a 2CP. We say that a set of sub-CPs completely defines a CP, or that a CP can be defined in terms of a set of sub-CPs.
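The recursive CP definition above can be sketched as a small data structure (class and field names are our own assumptions):

```java
import java.util.List;

// Sketch of the recursive CP definition: a CP pairs its linguistic domain A
// with the sub-perceptions it aggregates; a base-case 1CP has no
// sub-perceptions and is filled directly from numeric data. Names are ours.
public class Cp {
    public final List<String> labels;      // linguistic domain A = (a1, ..., an)
    public final List<Cp> subPerceptions;  // empty for a base-case 1CP
    public final double[] degrees;         // one validity degree per label

    public Cp(List<String> labels, List<Cp> subPerceptions) {
        this.labels = labels;
        this.subPerceptions = subPerceptions;
        this.degrees = new double[labels.size()];
    }

    public boolean isBaseCase() { return subPerceptions.isEmpty(); }
}
```

A 2CP such as Ability would then hold references to the sub-CPs (Attitude, Movement, ...) from which its degrees are aggregated.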

The computational perception network presented in RubioTrivino2016 is enhanced for the problem presented in section 2.3. In this case, additional variables must be considered, and hence the computational perception network must be extended; rules and templates must also be updated for these new requirements.

Currently, a computer game bot can stay in a safe, easy, dangerous or risky situation; this depends on three factors: its protection (low, normal, high) with respect to the opponent, its distance (close, normal, far) to the opponent, and the energy (low, normal, high) that the bot has at that moment. Four attitudes can be detected for a computer game bot: wise, brave, cautious and passive; these depend on two factors: the distance between the bot and the closest reward, and the distance between the opponent and the closest reward. A computer game bot can perform four types of movements: good, bad, scare, kamikaze; these depend on three factors: the distance between the player and the closest reward, the distance between the bot and the opponent, and the energy of the bot. The ability of a computer game bot depends on its attitude, the kind of movements performed and the time. The skill of a computer game bot depends on its attitude, the kind of movements performed and the situations detected. Finally, the resources perception captures the time and space required for the execution of the algorithm used for implementing the artificial intelligence of the bot. More formally, the computational network can be declaratively defined as follows (see Figure 8 and the appendix for more details).

  • CP_Situation = ((Safe, Easy, Dangerous, Risky), (CP_Protection, CP_Distance, CP_Energy)).

  • CP_Attitude = ((Wise, Brave, Cautious, Passive), (CP_Distance(bot, reward), CP_Distance(opponent, reward)))

  • CP_Movement = ((Good, Bad, Scare, Kamikaze), (CP_Distance(player, reward), CP_Distance(bot, opponent), CP_Energy))

  • CP_Ability = ((Expert, Intermediate, Basic, Dummy), (CP_Attitude, CP_Movement, CP_Time))

  • CP_Skill = ((Very_Skilled, Skilled, Improvable, Very_Improvable), (CP_Attitude, CP_Movement, CP_Situation))

  • CP_Resources = ((Very_Efficient, Inefficient, Very_Inefficient), (CP_Memory, CP_Iterations))

4.3 Behavior Profile Report Generation module

The system selects, among the available possibilities, the most suitable linguistic expressions in order to describe the input data.

We use these CPs in order to generate a summarization of the vector of linguistic expressions that represents the whole linguistic domain. This kind of CP allows us to obtain the total number of times each value occurred during the execution.

This kind of CP provides us with a set of variables, each with its associated value and a degree, which indicates the fuzzy average for a particular value. For example, a value for the CP Situation could be "safe" with one degree at one instant and "safe" with another degree at a later instant, and so on. Therefore, at the end of the execution, the value (in the example "safe") has been given a number of times with different degrees (of course, some of these degrees could be equal). Thus, the final degree is calculated by aggregating these degrees. For example, the following summaries can be obtained from different executions (see Figure 8).

The generation of the report is performed by using this set of summaries. For each CP, a linguistic description is created as a function of the pair (value, degree). Percentages are calculated for each value. The percentage is then transformed into a linguistic term of quantity as follows: "few" when it is below 33 percent; "several" when it is between 33 and 66 percent; "many" when it is above 66 percent. Then, we consider four cases:

  1. There exists a pair whose percentage is greater than 66 percent.

  2. There exists a pair whose percentage is greater than 33 percent.

  3. There are two pairs whose percentages are greater than 33 percent.

  4. There exists no pair whose percentage is greater than 33 percent.
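The percentage-to-quantity mapping described above can be sketched as follows (thresholds at 33 and 66 percent, as in the four cases; the exact boundary handling is our assumption):

```java
// Sketch of the percentage-to-quantifier mapping: below 33% -> "few",
// between 33% and 66% -> "several", above 66% -> "many". Boundary
// handling at exactly 33/66 is our own assumption.
public class Quantifiers {
    public static String term(double percentage) {
        if (percentage > 66.0) return "many";
        if (percentage > 33.0) return "several";
        return "few";
    }
}
```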

A complete example of behavior profile generation from execution data is detailed in Example 1.

Example 1

Suppose a row of the execution trace described in Figure 6:

1, 13, 4, 12, 2, 12, 3.60, 3.16, 1.41, 17, 5.0, 2.0, 1.0, 15.26, 17.08, 13.0, 13.89, 15995,false. 42, 924, J

The data are processed and grouped. Figure 7 shows the result. The second column contains the data captured and the third one the linguistic terms created from these data. Linguistic terms are implemented by using fuzzy sets (trapezoidal functions); membership degrees are also shown in Figure 7, with P = Position Player (1,13), O1 = Position Opponent 1 (3,16), O2 = Position Opponent 2 (4,12) and O3 = Position Opponent 3 (2,12).

Name | Data | Linguistic Term Generated
Distance(P,o1) | 3.60 | Close(3.60, 0, 0, 4, 7) = 1
Distance(P,o2) | 3.16 | Close(3.16, 0, 0, 4, 7) = 1
Distance(P,o3) | 1.41 | Close(1.41, 0, 0, 4, 7) = 1
Energy(P) | 17 | High(17, 10, 13, 100, 100) = 1
Protection(P,o1) | 5.0 | High(5.0, 4, 6, 380, 380) = 0.5
Protection(P,o2) | 2.0 | Normal(2.0, 1, 3, 3, 5) = 0.5
Protection(P,o3) | 1.0 | Low(1.0, 0, 0, 0, 2) = 0.5
Distance(P,R*) | 15.26 | High(15.26, 13, 16, 38, 38) = 0.75
Distance(o1,R*) | 17.08 | High(17.08, 13, 16, 38, 38) = 1
Distance(o2,R*) | 13.0 | Normal(13.0, 6, 9, 11, 14) = 0.33
Distance(o3,R*) | 13.89 | High(13.89, 13, 16, 38, 38) = 0.29
Time | 15995 | Small(15995, 0, 0, 90000, 150000) = 1
Iterations at this movement | 42 | Normal(42, 18, 30, 42, 54) = 1
Memory occupied (bytes) | 924 | Low(924, 0, 0, 768, 1280) = 0.69
Figure 7: Data captured during a trace execution and the linguistic terms associated
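The trapezoidal membership degrees in Figure 7 can be reproduced with a small helper (a sketch; the parameter order (a, b, c, d) gives the support [a, d] and the core [b, c]):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Close for Distance(P,o1): trapezoid(3.60, 0, 0, 4, 7) gives 1.0
# High for Distance(P,R*):  trapezoid(15.26, 13, 16, 38, 38) gives about 0.75
```

Applying the same helper to the other rows of Figure 7 reproduces the remaining degrees, e.g., Normal(13.0, 6, 9, 11, 14) on the falling edge yields about 0.33.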

Then each CP is instantiated with the values of the linguistic terms, and the if-then fuzzy rules are computed using the average as the aggregation operator (results obtained after processing the data in Figure 7):

Distance(A,R*)=(High,0.29) AND Distance(P,R)=(Normal,0.33) AND Energy=(High,1) THEN Attitude=(Cautious,0.54)

Protection=(Low,0.5) AND Distance(P,O)=(Close,1) AND Energy=(High,1) THEN Situation=(Dangerous,0.83)

Distance(J,R*)=(Normal,0.33) AND Distance(P,O)=(Close,1) AND Energy=(High,1) THEN Movement=(Bad,0.91)

Attitude=(Cautious,0.54) AND Situation=(Dangerous,0.83) AND Movement=(Bad,0.91) THEN Ability=(Dummy,0.82)

Attitude=(Cautious,0.54) AND Movement=(Bad,0.91) AND Time=(Small,1) THEN Skill=(Improvable,0.76)

Memory=(Low,0.69) AND Iteration=(Normal,1) THEN Resources=(Efficient,0.76)

Finally, the summation CPs are computed:

  • (wise,17.53), (brave,101.55), (cautious,14.05), (passive,10.78)

  • (risky,24.44), (dangerous,651.39), (safe,32.26), (easy,0)

  • (good,24.21), (scared,0), (kamikaze,94.82), (bad,48.01)

  • (skillful,7.56), (little skilled,0.72), (improvable,122.2), (very improvable,31)

  • (expert,38.48), (intermediate,0), (basic,31.93), (dummy,94.88)

  • (very efficient,41.42), (efficient,121.86), (inefficient,0), (very inefficient,15.33)
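As stated above, a rule's firing degree is obtained by averaging the degrees of its antecedents. The Cautious and Dangerous degrees in the rules listed earlier can be reproduced this way (a sketch; not every degree in the example is reproduced exactly by the plain average, so other aggregations may have been used for some CPs):

```python
def fire_rule(antecedent_degrees):
    """Aggregate antecedent membership degrees with the arithmetic mean."""
    return sum(antecedent_degrees) / len(antecedent_degrees)

# Attitude = (Cautious, 0.54): mean of High 0.29, Normal 0.33 and High 1.0
cautious = fire_rule([0.29, 0.33, 1.0])
# Situation = (Dangerous, 0.83): mean of Low 0.5, Close 1.0 and High 1.0
dangerous = fire_rule([0.5, 1.0, 1.0])
```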

From these summation CPs, and using the case selected for each one, a template is instantiated. An example of a template and its instantiation is shown in Figure 8. The sentences for each CP are detailed in the appendix.

Template: The bot showed <quantity> <attitude> attitudes. Definitely, <quantity> situations were <situation>. The bot proved to be capable of performing <quantity> <movement> movements. The bot displayed a/an <ability> skill level <quantity> times. The bot proved to be <skill> <quantity> times. During most of the execution, the measured use of resources demonstrates an operation that is <quantity> times <efficiency>.

Instantiation after the bot's play session: The bot showed several brave attitudes. Definitely, many situations were safe. The bot proved to be capable of performing several good movements. The bot displayed an expert skill level several times. The agent proved to be skillful several times. During most of the execution, the measured use of resources demonstrates an operation that is many times very efficient.
Figure 8: Automatic generation of a bot behavior profile: from trace of execution to linguistic descriptions
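The instantiation step in Figure 8 amounts to filling template slots with the quantifier and label chosen for each CP. A minimal sketch (the slot names and the shortened template are ours):

```python
# Hypothetical shortened template; slots are filled from the case
# selected for each computational perception.
TEMPLATE = ("The bot showed {q_att} {att} attitudes. "
            "Definitely, {q_sit} situations were {sit}. "
            "The bot proved to be capable of performing {q_mov} {mov} movements.")

report = TEMPLATE.format(q_att="several", att="brave",
                         q_sit="many", sit="safe",
                         q_mov="several", mov="good")
```

The resulting string matches the first three sentences of the instantiation shown in Figure 8.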

5 Experimentation and Evaluation

As mentioned, one of the most important objectives in AI is to create an agent that simulates human abilities. The bot behavior profile is compared with the human expert player profile (see Figure 9) by using a similarity measure based on REFs. The human player profile was created after an expert human player played the computer game, with the following result:

  • Attitude is mainly brave most of the time.

  • Situation is mainly safe most of the time.

  • Movements are mainly good most of the time.

  • The player is expert.

  • The player is skilled.

  • The use of computational resources is efficient in time and space.

The final grade (from 1 to 7) is computed by using the similarity between the human behavior profile and the bot's. The equation for calculating the final grade is as follows:

(1) Grade = 1 + S_at + S_si + S_mo + S_ab + S_sk + S_ef

  1. 1: the base point (every student receives 1 point as a minimum score; this is mandatory at the University of the Bío-Bío)

  2. S_at: the similarity between the human player's and the bot's attitude.

  3. S_si: the similarity between the human player's and the bot's situation.

  4. S_mo: the similarity between the human player's and the bot's movements.

  5. S_ab: the similarity between the human player's and the bot's ability.

  6. S_sk: the similarity between the human player's and the bot's skill.

  7. S_ef: the similarity between the human player's and the bot's efficiency.
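Reading Equation (1) together with the component list above, the grade is the mandatory base point plus the six similarity components; assuming each similarity lies in [0, 1] and contributes at most one point, this yields the 1-7 range (a sketch of our reading, not the verbatim implementation):

```python
def final_grade(similarities):
    """Chilean 1-7 scale: 1 base point plus one point per similarity
    component (attitude, situation, movement, ability, skill, efficiency),
    each assumed to lie in [0, 1]."""
    assert len(similarities) == 6
    return 1 + sum(similarities)
```

A bot identical to the expert profile (all similarities 1) receives a 7; a bot with no similarity at all still receives the mandatory minimum of 1.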

where each S is a similarity measure between computational perceptions. The following definition formalizes this measure.

Definition 5.1

Given two summation computational perceptions CP_h and CP_b, with percentage linguistic vectors u = (u_1, ..., u_n) and v = (v_1, ..., v_n) respectively, a similarity measure between CP_h and CP_b is defined as:

S(CP_h, CP_b) = (1/n) · Σ_{i=1}^{n} REF(u_i, v_i)

being REF a restricted equivalence function, e.g., REF(x, y) = 1 - |x - y|.

Example 2

Let CP_h and CP_b be two summation computational perceptions for the human player and the computer game bot, respectively:

  • (wise,122.35), (brave,289), (cautious,87.59), (passive, 8.75)

  • (wise,17.53), (brave,101.55), (cautious,14.05), (passive, 10.78)

Then, the percentage linguistic vectors are calculated for each CP by dividing by their totals, 507.69 and 143.91, respectively:

  • (wise,0.240), (brave,0.569), (cautious,0.172), (passive,0.017)

  • (wise,0.122), (brave,0.709), (cautious,0.097), (passive,0.075)

Now, the similarity can be calculated by applying the REF to each pair of components and averaging the results. The rest of the similarities are computed in a similar way. The final grade, together with the linguistic reports generated for the human player and for the bot designed by an anonymous student, is shown in Figure 9.
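Example 2 can be reproduced end to end; REF(x, y) = 1 - |x - y| is assumed here as the restricted equivalence function, and the function names are ours:

```python
def to_percentages(summation):
    """Normalize a label -> total dict into a percentage linguistic vector."""
    total = sum(summation.values())
    return {label: value / total for label, value in summation.items()}

def similarity(u, v):
    """Mean of REF(u_i, v_i) over components, with REF(x, y) = 1 - |x - y|."""
    return sum(1 - abs(u[label] - v[label]) for label in u) / len(u)

# Summation CPs for attitude from Example 2
human = {"wise": 122.35, "brave": 289.0, "cautious": 87.59, "passive": 8.75}
bot   = {"wise": 17.53, "brave": 101.55, "cautious": 14.05, "passive": 10.78}

s = similarity(to_percentages(human), to_percentages(bot))
```

Under this assumed REF, the attitude similarity comes out at about 0.90.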

Figure 9: Similarity between behavior profile reports: human player versus bot

6 Conclusions

In this paper, a novel and promising technology has been presented for automatically generating behavior profile reports and immediate feedback from the execution traces of heuristic algorithms.

The concept of the algorithm behavior profile has been proposed as a pedagogical tool for evaluating computer game bot quality. A Turing test based on the similarity between the bot behavior profile and the human player profile has been defined and implemented.

These pedagogical resources provide teachers with a useful tool for obtaining information about the quality of the heuristic algorithms designed by the students, which allows improving the teaching and learning process. The projects created by the students can be evaluated at any time from two points of view: quantity (performance of the algorithm in space and time) and quality (kinds of situations, movements, attitudes, abilities and skills).

As future work, we would like to incorporate our technology into other higher-education disciplines in order to provide personalized feedback.

Acknowledgments

This work has been done in collaboration with the research group SOMOS (SOftware-MOdelling-Science) funded by the University of the Bío-Bío.

Appendix A Definition of the Computational Perceptions

a.1 CP Situation

This CP is defined as follows: Situation = ((Safe, Easy, Dangerous, Risky), (Protection, Distance, Energy)), with Protection = ((low, intermediate, high), (0, 1, ..., 380)), where low(0,0,0,2), intermediate(1,3,3,5) and high(4,6,380,380); and Distance = ((close, normal, far), (0, 1, ..., 38)), with close(0,0,4,7), normal(6,9,11,14) and far(13,16,38,38).

Consequent (Situation) | Antecedent (Protection, Distance, Energy)
Risky | Intermediate, Close, Normal
Dangerous | Low, Close, Normal
Safe | Intermediate, Normal, Normal
Easy | Low, Normal, Normal
Dangerous | Low, Normal, Low
Dangerous | Normal, Close, Low
Dangerous | Normal, Normal, Low
Cases for CP Situation | Sentence
1 | Definitely, <quantity> situations were <situation>
2 | <quantity> situations were <situation>
3 | <quantity> situations were <situation>, although <quantity> situations were also <situation>
4 | Diverse situations were detected during most of the play session
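The CP Situation rule base above can be encoded directly as a lookup table from antecedent labels to the consequent label (a sketch; the names are written as in the table):

```python
# Rule base of CP Situation: (Protection, Distance, Energy) -> Situation
SITUATION_RULES = {
    ("Intermediate", "Close", "Normal"):  "Risky",
    ("Low", "Close", "Normal"):           "Dangerous",
    ("Intermediate", "Normal", "Normal"): "Safe",
    ("Low", "Normal", "Normal"):          "Easy",
    ("Low", "Normal", "Low"):             "Dangerous",
    ("Normal", "Close", "Low"):           "Dangerous",
    ("Normal", "Normal", "Low"):          "Dangerous",
}

def classify_situation(protection, distance, energy):
    """Return the Situation label for an antecedent triple, or None
    when no rule of the table matches."""
    return SITUATION_RULES.get((protection, distance, energy))
```

The other CPs (Attitude, Movement, Ability, Skill) admit the same encoding with their own antecedent tuples.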

a.2 CP Attitude

This CP is defined as follows: Attitude = ((Wise, Brave, Cautious, Passive), (Distance, Distance)), where each Distance = ((close, normal, far), (0, 1, ..., size(scenario))), with close(0,0,4,7), normal(6,9,11,14) and far(13,16,38,38), R* being the reward closest to the agent.

Consequent (Attitude) | Antecedent (Distance, Distance)
Wise | Close, Normal
Brave | Close, Close
Cautious | Normal, Close
Passive | Normal, Normal
Cases for CP Attitude | Sentence
1 | During most of the play session, the bot showed <attitude> attitudes
2 | The bot showed <quantity> <attitude> attitudes
3 | The bot showed <quantity> <attitude> attitudes, but it also showed <quantity> <attitude> attitudes
4 | The bot did not show a particular attitude during the play session

a.3 CP Movement

This CP is defined as follows: Movement = ((Good, Bad, Scared, Kamikaze), (Distance, Distance, Energy)), where each Distance = ((close, normal, far), (0, 1, ..., size(scenario))), with close(0,0,4,7), normal(6,9,11,14) and far(13,16,38,38), R* being the reward closest to the agent; and Energy = ((low, normal, high), (0, 1, ..., size(scenario))).

Consequent (Movement) | Antecedent (Distance, Distance, Energy)
Good | Close, Normal, Normal
Good | Close, Close, Low
Scared | Normal, Normal, Normal
Kamikaze | Close, Close, Normal
Bad | Normal, Close, Normal
Cases for CP Movement | Sentence
1 | Certainly, <quantity> of the movements performed by the bot were <movement>
2 | The bot proved to be capable of performing <quantity> <movement> movements
3 | The bot proved to be capable of performing <quantity> <movement> movements, but it also performed <quantity> <movement> movements
4 | The bot performs several kinds of movements indistinctly during the play session

a.4 CP Ability

This CP is defined as follows: Ability = ((Expert, Intermediate, Basic, Dummy), (Attitude, Movement, Time)), with Time = ((little, normal, large), (0, 1, ...)).

Consequent (Ability) | Antecedent (Attitude, Movement, Time)
Expert | Wise, Good, Small
Intermediate | Brave, Good, Normal
Basic | Passive, Bad, Much
Dummy | Passive, Scared, Much
Cases for CP Ability | Sentence
1 | Clearly, the bot displayed a/an <ability> player <quantity> times
2 | The bot displayed a/an <ability> player <quantity> times
3 | The bot displayed a/an <ability> player <quantity> times; however, <quantity> times it acted as a/an <ability>
4 | No kind of player has been identified

a.5 CP Skill

This CP is defined as follows: Skill = ((Very_Skilled, Skilled, Improvable, Very_Improvable), (Attitude, Movement, Situation)).

Consequent (Skill) | Antecedent (Attitude, Movement, Situation)
Very_Skilled | Wise, Good, Easy
Skilled | Cautious, Good, Safe
Improvable | Brave, Bad, Dangerous
Very_Improvable | Passive, Bad, Risky
Cases for CP Skill | Sentence
1 | Certainly, the bot proved to be <skill> <quantity> times
2 | The agent proved to be <skill> <quantity> times
3 | The agent proved to be <skill> <quantity> times; nevertheless, <quantity> times it proved to be <skill>
4 | No kind of skill can be proved in the current play session

Appendix B User’s Manual

This section explains in detail how to use the web platform for automatically generating human player and bot behavior profiles from execution traces. A quick test of the application can be performed by downloading example traces from the following URL:

http://youractionsdefineyou.com/assess/web/examples_traces

First, the user must access the URL:

http://www.youractionsdefineyou.com/assess

The main window shows two options: log in and register. Registration consists of entering an email address, user name, full name, RUT and password. A confirmation email is sent to the user if the registration is correct. Logging in consists of entering the user name and password. Second, a behavior profile report can be obtained by selecting and loading an execution trace file; the behavior profile report is then automatically generated. Additionally, the report can be exported as a PDF (see Figure 10).

Figure 10: Linguistic Report obtained from a trace of execution of a human expert player

References

  • (1) Arguelles, L., & Trivino, G. I-struve: Automatic linguistic descriptions of visual double stars. Engineering Applications of Artificial Intelligence, 26(9), 2083-2092.
  • (2) Bustince, H., Barrenechea, E., & Pagola, M. (2006). Restricted equivalence functions. Fuzzy Sets and Systems, 157, 2333-2346.
  • (3) Bergquist, W. H., & Phillips, S. R. A Handbook for Faculty Development. Volume 1, Council for the Advancement of Small Colleges.
  • (4) Burgos, D., Van Nimwegen, C., Van Oostendorp, H., & Koper, R. (2007). Game-based learning and the role of feedback. A case study.
  • (5) Busemann, S., & Horacek, H. Generating air quality reports from environmental data. In Proceedings of the DFKI Workshop on Natural Language Generation (pp. 15-21).
  • (6) Coch, J. Interactive generation and knowledge administration in MultiMeteo. In Proceedings of the Ninth International Workshop on Natural Language Generation (pp. 300-303).
  • (7) Conde-Clemente, P., Alonso, J. M., Nunes, E. O., Sanchez, A., & Trivino, G. New Types of Computational Perceptions: Linguistic Descriptions in Deforestation Analysis. Expert Systems with Applications.
  • (8) Conde-Clemente, P., Trivino, G., & Alonso, J. M. Generating automatic linguistic descriptions with big data. Information Sciences, 380, 12-30.
  • (9) Conde-Clemente, P., Alonso, J. M., & Trivino, G. Toward automatic generation of linguistic advice for saving energy at home. Soft Computing, 1-15.
  • (10) Conde-Clemente, P., Alonso, J. M., & Trivino, G. (2017, July). rLDCP: R package for text generation from data. In Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on (pp. 1-6). IEEE.
  • (11) Douce, C., Livingstone, D., & Orwell, J. Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing (JERIC), 5(3), 4.
  • (12) Eciolaza, L., Pereira-Fariña, M., & Trivino, G. (2013). Automatic linguistic reporting in driving simulation environments. Applied Soft Computing, 13(9), 3956-3967.
  • (13) Fyfe, E. R. Providing feedback on computer-based algebra homework in middle-school classrooms. Computers in Human Behavior, 63, 568-574.
  • (14) Gkatzia, D., Hastie, H., Janarthanam, S., & Lemon, O. Generating student feedback from time-series data using Reinforcement Learning. ENLG 2013, 115.
  • (15) Goldberg, E., Driedger, N., & Kittredge, R. I. Using natural-language processing to produce weather forecasts. IEEE Expert, 9(2), 45-53.
  • (16) Hamilton, I. R. (2009). Automating formative and summative feedback for individualised assignments. Campus-Wide Information Systems, 26(5), 355-364.
  • (17) Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
  • (18) Hingston, P. (2009). A turing test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games, 1(3), 169-186.
  • (19) Lachner, A., Burkhart, C., & Nückles, M. (2017). Formative computer-based feedback in the university classroom: Specific concept maps scaffold students' writing. Computers in Human Behavior, 72, 459-469.
  • (20) Lavolette, E., Polio, C., & Kahng, J. The accuracy of computer-assisted feedback and students’ responses to it. Language, Learning & Technology, 19(2).
  • (21) Marin, D. R. P. Automatic evaluation of users' short essays by using statistical and shallow natural language processing techniques. Doctoral dissertation, Universidad Autónoma de Madrid. http://www.ii.uam.es/dperez/tea.pdf
  • (22) Noorbehbahani, F., & Kardan, A. The automatic assessment of free text answers using a modified BLEU algorithm. Computers & Education, 56(2), 337-345.
  • (23) Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
  • (24) Pérez, D., Gliozzo, A. M., Strapparava, C., Alfonseca, E., Rodríguez, P., & Magnini, B. (2005, May). Automatic Assessment of Students’ Free-Text Answers Underpinned by the Combination of a BLEU-Inspired Algorithm and Latent Semantic Analysis. In FLAIRS conference (pp. 358-363).
  • (25) Portet, F., Reiter, E., Gatt, A., Hunter, J., Sripada, S., Freer, Y., & Sykes, C. Automatic generation of textual summaries from neonatal intensive care data. Artificial Intelligence, 173(7-8), 789-816.
  • (26) Prensky, M. Digital game-based learning (Vol. 1). St. Paul, MN: Paragon house.
  • (27) Ramos-Soto, A., Vazquez-Barreiros, B., Bugarín, A., Gewerc, A., & Barro, S. Evaluation of a Data-To-Text System for Verbalizing a Learning Analytics Dashboard. International Journal of Intelligent Systems, 32(2), 177-193.
  • (28) Ramos-Soto, A., Bugarín, A., Barro, S., Gallego, N., Rodríguez, C., Fraga, I., & Saunders, A. (2015, November). Automatic generation of air quality index textual forecasts using a data-to-text approach. In Conference of the Spanish Association for Artificial Intelligence (pp. 164-174). Springer International Publishing.
  • (31) Ramos-Soto, A., Pereira-Fariña, M., Bugarín, A., & Barro, S. (2015, August). A model based on computational perceptions for the generation of linguistic descriptions of data. In Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on (pp. 1-8). IEEE.
  • (32) Ramos-Soto, A., Bugarín, A., Barro, S., & Taboada, J. (2013, September). Automatic generation of textual short-term weather forecasts on real prediction data. In International Conference on Flexible Query Answering Systems (pp. 269-280). Springer Berlin Heidelberg.
  • (33) Rubio-Manzano, C., & Trivino, G. Improving player experience in Computer Games by using players’ behavior analysis and linguistic descriptions. International Journal of Human-Computer Studies, 95, 27-38.
  • (34) Rubio-Manzano, C. Similarity measure between linguistic terms by using restricted equivalence functions and its application to expert systems. 9th European Symposium on Computational Intelligence and Mathematics, Faro (Portugal), October 4th-7th, 2017.
  • (35) Sarsar, F., & Harmon, S. (2017) Student and Instructor Responses to Emotional Motivational Feedback Messages in an Online Instructional Environment TOJET, 16(1).
  • (36) Sánchez-Torrubia, M. G., Torres-Blanc, C., & Trivino, G. (2012). An approach to automatic learning assessment based on the computational theory of perceptions. Expert Systems with Applications, 39(15), 12177-12191.
  • (37) Sanchez-Valdes, D., & Trivino, G. Linguistic and emotional feedback for self-tracking physical activity. Expert Systems with Applications, 42(24), 9574-9586.
  • (38) Sanchez-Valdes, D., Alvarez-Alvarez, A., & Trivino, G. Linguistic description about circular structures of the Mars' surface. Applied Soft Computing, 13(12), 4738-4749.
  • (39) Soni, B., & Hingston, P. (2008, June). Bots trained to play like a human are more fun. In Neural Networks, 2008 (IJCNN 2008, IEEE World Congress on Computational Intelligence), IEEE International Joint Conference on (pp. 363-369). IEEE.

  • (40) Singh, R., Gulwani, S., & Solar-Lezama, A. (2013). Automated feedback generation for introductory programming assignments. ACM SIGPLAN Notices, 48(6), 15-26.
  • (41) G. Trivino, M. Sugeno. Towards linguistic descriptions of phenomena. International Journal of Approximate Reasoning, 54(1), 22-34.
  • (42) van der Kleij, F. M., Eggen, T. J., Timmers, C. F., & Veldkamp, B. P. Effects of feedback in a computer-based assessment for learning. Computers & Education, 58(1), 263-272.
  • (43) Weber, B. G., Mateas, M. A data mining approach to strategy prediction. In Computational Intelligence and Games, 2009. CIG 2009. IEEE Symposium on (pp. 140-147). IEEE.
  • (44) White, M., & Caldwell, T. (1998). A practical, extensible framework for dynamic text generation. In Proceedings of the Ninth International Workshop on Natural Language Generation (INLG) (pp. 238-247).