Quantitative analysis of approaches to group marking

10/31/2018 · Hugh Harvey et al. · University of Bristol

Group work, where students work on projects to overcome challenges together, has numerous advantages, including the learning of important transferable skills, a better learning experience and increased motivation. However, in many academic systems the advantages of group projects clash with the need to assign individualised marks to students. A number of different schemes have been proposed to individualise group project marks, including the marking of individual reflexive accounts of the group work and peer assessment. Here we explore a number of these schemes in computational experiments with an artificial student population. Our analysis highlights the advantages and disadvantages of each scheme and in particular reveals the power of a new scheme proposed here that we call pseudoinverse marking.


1 Introduction

Group projects, where a small group of students tackles a challenge, can enhance the learning experience [1, 11]. A central idea of this form of teaching is that the students help each other and learn to overcome problems together. This is beneficial as it allows students to develop transferable skills in project management and leadership, besides technical skills [11, 4].

To an assessor, group projects can present a considerable challenge [8, 4]. Many academic systems require the assessor to assign an individualised mark to each student participating in the course. These marks should not only be fair and unbiased, but should also take the transferable skills that the student acquired into account [12]. While the technical progress made by a group is generally easy to assess, e.g. on the basis of a project report, it is more difficult to judge the technical and collaborative skills of individual members of a group. In a typical setting, each student works on several shorter projects with different groups. The challenge is then to compute a student’s individualised mark from the set of group marks received.

A simple approach is to assign to each student the mean of the marks of the groups in which the student participated. However, as we show below, this has the undesirable effect of reducing the variance of the marks, producing a cluster around the mean. A number of alternatives exist: the assessors can individualise marks based on their own experience of the project, or they can ask the students to write reflexive accounts of the project, which are taken into account in the marking [6].

A recent review of group marking [5] noted that the most common approach to individualised marking is peer assessment, where students are asked to mark the other members of the groups they participated in, by assigning actual marks, e.g. [1, 8], or grouping them in categories, e.g. [14]. It has been noted that peer marking is an effective tool if used appropriately, but there can be some flaws when dealing with students who are either particularly generous or harsh with their assessment of peers [10].

Other options include self-assessment, where the students within a group are asked to perform an additional piece of work and a portion of their mark is assigned based on it. It has been noted that, while self-assessment is a simple approach, it often deviates significantly from the marker's assessment, partly due to factors beyond the scope of the project [18, 19].

An interesting scheme proposed in [13] offers another alternative, here called the Iterated Individual Weighting Factor (IWF-it) method. This method involves peer assessment as above, but then adjusts the weighting of each group member's opinion depending on how closely their assessment predicted the true mark of their group. This approach attempts to discount potential bias introduced into the peer assessments by inaccurate individuals, but is computationally expensive.

There has been an attempt by the Australian Learning and Teaching Council to simplify the process of marking group projects. This resulted in a software package called SPARK (Self and Peer Assessment Resource Kit) [9]. This software has been tested in a university-level engineering setting [20]. The test showed that, while the idea was sound, a high proportion of the test subjects were dissatisfied with the program's attempt to distribute the group marks.

Here, we consider the effectiveness of different approaches through quantitative analysis of fairness. Our analysis highlights two promising approaches: First, a particular peer assessment method in which each group member is asked to assess the value of each member of the group, in order to then fairly divide the marks between them. While there is a risk with this approach that marks can be manipulated by coordinated responses, it otherwise yields fair results from a transparent and easily understood procedure. Second, pseudoinverse marking can be implemented to eliminate the need for peer assessment. This approach uses the marks of each individual student for a series of projects and then computes the best estimate of the mark that the student should receive. This approach avoids the risk of strategic coalitions and the administrative effort of peer assessment at the cost of transparency.

2 Methods

In the following, we test various marking schemes by applying them to a virtual student population. We numerically generate populations of students. Each student $i$ is assigned a number $m^*_i$, which represents their "ideal mark", i.e. the mark that they would receive under an optimal marking scheme. We draw the ideal marks for the virtual student population from a Gaussian distribution with a mean mark of 60 and a standard deviation of 12, which is consistent with assumptions of the UK academic system.

After the virtual population has been created, we assume that each student undertakes $p$ projects, each carried out in a group of size $n$, over the course of the unit. The unit mark for each student is then found as some function of the marks of the groups the student participated in, and potentially some other information, such as peer assessment.

Without much analysis, it is intuitive that having larger groups (greater $n$) makes it harder to determine accurate individualised marks, whereas having more group marks for each student (greater $p$) makes it easier. We verified that this is indeed the case for all schemes considered. For all but the last scheme studied in this paper the effect of participation in more groups is very predictable, as it just leads to an averaging. For the analysis presented here, we vary group size and projects per student together and consider specifically the case $n = p$, which allows us to present results more concisely.

In our virtual population simulation, we assume that the group mark $M_g$ for group $g$ is the mean of the ideal marks of the group's participants, which we can mathematically express as

$M_g = \frac{1}{n_g} \sum_i P_{gi} m^*_i$   (1)

where $n_g = \sum_i P_{gi}$ is the number of participants of project $g$, and $P$ is the participation matrix, which is defined by

$P_{gi} = \begin{cases} 1 & \text{if student } i \text{ participates in project } g \\ 0 & \text{otherwise} \end{cases}$   (2)

For illustration consider the matrix

$P = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}$   (3)

This matrix describes the partitioning of 4 students into 4 groups, such that students 1 and 2 participate in group 1 (first row), students 3 and 4 participate in group 2 (second row), students 1 and 3 participate in group 3 (third row), and students 2 and 4 participate in group 4 (fourth row).
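To make this setup concrete, the following minimal Python sketch generates a virtual population and computes group marks according to Eqs. (1)-(3). It is an illustration under the assumptions stated above; the variable names (e.g. ideal_marks, P) are our own and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Virtual population: ideal marks drawn from a Gaussian with mean 60 and
# standard deviation 12, as assumed in the Methods section.
num_students = 4
ideal_marks = rng.normal(loc=60.0, scale=12.0, size=num_students)

# Participation matrix of Eq. (3): rows are groups, columns are students.
P = np.array([
    [1, 1, 0, 0],   # group 1: students 1 and 2
    [0, 0, 1, 1],   # group 2: students 3 and 4
    [1, 0, 1, 0],   # group 3: students 1 and 3
    [0, 1, 0, 1],   # group 4: students 2 and 4
], dtype=float)

# Group marks, Eq. (1): mean of the ideal marks of each group's participants.
group_sizes = P.sum(axis=1)                      # n_g
group_marks = (P @ ideal_marks) / group_sizes    # M_g

print(np.round(ideal_marks, 1))
print(np.round(group_marks, 1))
```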

Once the group marks have been determined, we apply a set of different marking schemes (explained below) to assign individualised marks $m_i$ to the students. We analyse the accuracy of the marking schemes first by considering their performance in a baseline scenario with fixed group size and number of projects per student. For this scenario, we can draw scatter plots showing the individualised mark $m_i$ of a student as a function of their ideal mark $m^*_i$. We furthermore study the accuracy as a function of the group size $n$ and the number of projects per student $p$. For this comparison, we quantify the accuracy in terms of the maximal absolute error

$E_{\max} = \max_i \left| m_i - m^*_i \right|$   (4)

the mean absolute error

$E_{\rm abs} = \frac{1}{N} \sum_i \left| m_i - m^*_i \right|$   (5)

and the root mean square error

$E_{\rm rms} = \sqrt{\frac{1}{N} \sum_i \left( m_i - m^*_i \right)^2}$   (6)

where $N$ is the number of students in the population. Among these, the mean absolute error provides a measure of the overall accuracy, whereas the maximum error provides an estimate of "unfairness" by focussing on the most extreme deviation across the cohort. The root mean square error is intermediate between these two extremes, averaging over the population but assigning a higher weight to individual marks that deviate strongly.
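For concreteness, a small helper that computes the three error measures of Eqs. (4)-(6); the function name is our own.

```python
import numpy as np

def error_measures(assigned, ideal):
    """Return the maximum absolute error, mean absolute error and RMSE
    of Eqs. (4)-(6) for assigned marks m_i and ideal marks m*_i."""
    diff = np.abs(np.asarray(assigned, float) - np.asarray(ideal, float))
    e_max = diff.max()
    e_abs = diff.mean()
    e_rms = np.sqrt((diff ** 2).mean())
    return e_max, e_abs, e_rms
```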

3 Results

We now compare six different schemes for generating individualised marks, starting from the simplest and building up to the most mathematically complex. The first is simply to assign the group mark to all the group members (Sec. 3.1), which provides a baseline or null model against which other methods can be judged. We then consider using additional information such as reflexive accounts (Sec. 3.2), before proposing an improvement on this scheme that we call mark-adjusted reflexive accounts (Sec. 3.3). Subsequently, we consider two methods for peer marking: normalised peer assessment (Sec. 3.4) and peer ranking (Sec. 3.5), a scheme we propose based on an approach developed for sports rankings [17, 15]. Finally, we propose a mathematical method that aggregates results from different projects to infer individualised marks using the Moore-Penrose pseudoinverse (Sec. 3.6).

3.1 Self-organised peer pressure (SOPP)

In our simplest scheme, the final mark $m_i$ for a student is the average of the marks of the groups that the student participated in,

$m_i = \frac{1}{p_i} \sum_g P_{gi} M_g$   (7)

where $p_i = \sum_g P_{gi}$ is the number of projects in which student $i$ participated. We call this scheme the self-organised peer pressure (SOPP) method because every student has a direct interest in the success of their groups, whereas other marking schemes might create secondary objectives, such as improving one's standing within the group (possibly at a cost to other group members or group success) to optimise outcomes from peer assessment.

We note that even the very simple SOPP method leads to individualised marks unless multiple students participate in exactly the same groups.
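A minimal sketch of the SOPP rule of Eq. (7), reusing the P and group_marks arrays defined in the earlier sketch:

```python
import numpy as np

def sopp_marks(P, group_marks):
    """Self-organised peer pressure, Eq. (7): each student receives the
    average of the marks of the groups they participated in."""
    projects_per_student = P.sum(axis=0)           # p_i
    return (P.T @ group_marks) / projects_per_student
```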

The disadvantage of this method is that it leads to a "regression to the mean" type of effect. Plotting the assigned mark against the ideal mark for a population of students (Fig. 1) shows that the best students receive on average less than their ideal mark, whereas weaker students receive more than their ideal mark. This result is intuitive: under this scheme, good students suffer from being grouped with students who are on average weaker than themselves, whereas weak students benefit from being grouped with students who are on average stronger than themselves.

Considering the effect of students participating in more, larger projects (Fig. 2) shows that average errors saturate. However, this saturation occurs at a relatively high level where the root mean square error reaches almost 10 percentage points, which corresponds to a whole degree classification in the British academic system. Perhaps more worrying is that the maximum error is high and keeps increasing, indicating that this marking scheme is increasingly unfair to individual students.

In summary, the advantage of the SOPP scheme is that it is very easy and intuitive. This makes the scheme easy to apply and marks are found by a highly transparent procedure. Furthermore, it creates incentives that are well aligned with the spirit of project work. For each student, the only way to improve their marks is to improve the mark of the groups they participate in. This mimics a typical workplace situation where ultimately the success of a project matters rather than the contributions of individual team members.

The good alignment of the assigned mark and project goals is undermined by the large error in the marks, which is of the same order of magnitude as the range of the marks returned by the method. One may be tempted to rescale the marks to counteract the regression to the mean, which results in a narrow distribution of marks. However, such a rescaling would exacerbate the individual errors to levels that could be judged unacceptable.

3.2 Reflexive accounts (RA)

A common approach to individualise marks is to pair group projects with individual assignments such as reflexive accounts of the project. Here we assume that, for every group project a student participates in, they also carry out an individualised assignment. We assume that the assessor does not arrive exactly at the student's ideal mark when marking the individualised component. This represents the student performing better or worse in a reflexive report than in their project report, and possible subjectivity in the assessor's marking. The assessor instead arrives at the student's ideal mark with an error, drawn from a uniform distribution of fixed range.

In the simplest case, considered in this section, the mark that the student receives in a given project is found as a linear combination of the group mark and the mark for the individual assignment. Based on personal experience and some preliminary trials we focus on the case where the group mark enters with weight 0.7 and the individualised component enters with weight 0.3. This means that, taking multiple projects into account, the final mark of a student is computed as

$m_i = \frac{1}{p_i} \sum_g P_{gi} \left( 0.7\, M_g + 0.3\, r_{gi} \right)$   (8)

where $r_{gi}$ represents the mark assigned to student $i$ on project $g$ for the reflexive piece of work, as a function of their ideal mark $m^*_i$. As the final mark is to a significant proportion based on the ideal mark, we expect better overall performance of this scheme. The regression to the mean that we identified as the major problem of the SOPP method is ameliorated but not eliminated (Fig. 1). Considering participation in larger groups also shows a quantitatively better, but qualitatively similar, picture to the SOPP method.
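A sketch of the RA scheme of Eq. (8), assuming the 0.7/0.3 weighting described above; the half-width of the uniform reflexive-marking error (noise) is a placeholder value, not one from the paper.

```python
import numpy as np

def ra_marks(P, group_marks, ideal_marks, noise=10.0, rng=None):
    """Reflexive accounts, Eq. (8): linear combination of the group mark
    (weight 0.7) and an individually marked reflexive account (weight 0.3).
    The reflexive mark is the ideal mark plus a uniform marking error."""
    rng = rng or np.random.default_rng()
    n_groups, n_students = P.shape
    # Reflexive mark r_gi for every student on every project.
    r = ideal_marks[None, :] + rng.uniform(-noise, noise, size=(n_groups, n_students))
    per_project = 0.7 * group_marks[:, None] + 0.3 * r
    projects_per_student = P.sum(axis=0)             # p_i
    return (P * per_project).sum(axis=0) / projects_per_student
```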

In summary, the RA approach offers few surprises. It uses a linear combination of the SOPP method with the ideal result and hence produces results that interpolate between the SOPP outcome and the ideal outcome. The advantage of the RA scheme is that it somewhat ameliorates the poor performance of the SOPP scheme, while largely maintaining transparency and mathematical simplicity. However, the improvement comes at the cost of abandoning the spirit of project work to some extent, as the project outcome is supplemented by a secondary objective in the form of an individual written assignment. We thus lose some of the beneficial alignment of the project with real-life workplace scenarios and introduce significant non-project workload for both the student and the assessor.

Changing the weighting of group and individualised components in the marking gives some control over the balance between the advantages of project work and marking fairness. What makes this scheme somewhat unsatisfactory is that the trade-off remains linear. Introducing a small individualised component only marginally improves mark accuracy, while significantly improving accuracy requires us to sacrifice most of the advantages of project-based assessment.

3.3 Mark-adjusted reflexive accounts (MRA)

We now explore an alternative use of the marks for reflexive accounts, in which the reflexive mark does not enter the final mark linearly but is instead used to judge the individual's contribution to the project. We assume again that the mark $r_{gi}$ received for the reflexive account is based on the student's ideal mark $m^*_i$, and the same uniform distribution is used to represent the reflexive marking error. We then estimate the contribution of student $i$ to project $g$ by the reflexive account mark,

$c_{gi} = r_{gi}$   (9)

The total number of mark points that we can distribute for project $g$ is the project mark $M_g$ times the number of students involved in the project, $n_g$, i.e.

$T_g = n_g M_g$   (10)

Instead of distributing these points equally, we now partition them according to the estimated contribution to the project, such that student $i$ receives for project $g$

$m_{gi} = T_g \frac{c_{gi}}{\sum_j P_{gj} c_{gj}}$   (11)

The final mark for the student after participating in multiple projects is then

$m_i = \frac{1}{p_i} \sum_g P_{gi} m_{gi}$   (12)
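A short sketch of Eqs. (9)-(12); the function and variable names are our own, and the reflexive marks are assumed to be supplied as a groups-by-students array.

```python
import numpy as np

def mra_marks(P, group_marks, reflexive_marks):
    """Mark-adjusted reflexive accounts, Eqs. (9)-(12).
    reflexive_marks[g, i] is the reflexive-account mark of student i on
    project g; only entries with P[g, i] == 1 are used."""
    n_g = P.sum(axis=1)                          # group sizes
    T = n_g * group_marks                        # Eq. (10): mark points per project
    c = P * reflexive_marks                      # Eq. (9): estimated contributions
    share = c / c.sum(axis=1, keepdims=True)     # proportional split within each group
    per_project = T[:, None] * share             # Eq. (11)
    p_i = P.sum(axis=0)                          # projects per student
    return (P * per_project).sum(axis=0) / p_i   # Eq. (12)
```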
Figure 1: Scatter plot of assigned mark against ideal mark for the various marking schemes. Shown are results for self-organised peer pressure (SOPP, top left), reflexive accounts (RA, top right), mark-adjusted reflexive accounts (MRA, middle left), normalised peer assessment (NPA, middle right), peer ranking (PR, bottom left) and pseudoinverse marking (PiM, bottom right). Diagonal lines indicate the ideal assignment of marks. Pseudoinverse marking yields the distribution of marks closest to the ideal.
Figure 2: Plot of the different measures of error against group size for the various marking schemes. Shown are results for self-organised peer pressure (SOPP, top left), reflexive accounts (RA, top right), mark-adjusted reflexive accounts (MRA, middle left), normalised peer assessment (NPA, middle right), peer ranking (PR, bottom left) and pseudoinverse marking (PiM, bottom right). Normalised peer assessment and pseudoinverse marking show interesting trends, as using larger group sizes reduces all three errors.

Considering the outcomes of the computational experiment, we see that good students still receive marks that are systematically lower than their ideal mark (Fig. 1). Compared to the simpler RA scheme, the performance of MRA is almost identical. This is also confirmed when considering different group sizes and project iterations (Fig. 2). A small difference exists for very small group sizes, but this is mostly cosmetic. Consider that, under the MRA scheme, in a group of size one the reflexive account does not affect the marking at all, so any error made in the marking of the reflexive account does not affect the student's mark. This difference therefore appears due to the specific assumptions of the computational experiment and is of little practical relevance.

Perhaps more significant is another advantage of the MRA scheme that is not considered in the computational experiment. This is the case where one student in an otherwise well-performing group does not engage with the project at all.

Consider the (somewhat extreme but not unheard of) example where a student does not do any work for the project or the reflexive account, but the project is still marked at 60. Under the assumptions made in this report the average ideal mark of the other 3 students on the project would have to be 80 to achieve this result while compensating for the work not done by the defecting student. Assuming that the 3 engaged students achieve on average the same level of marks in their reflexive accounts, their mark under the RA scheme would be

$0.7 \times 60 + 0.3 \times 80 = 66$   (13)

while the defecting student receives

$0.7 \times 60 + 0.3 \times 0 = 42$   (14)

which in the British system would mean that the defecting student still passes while the very strong engaged students only receive an upper second class mark. By contrast, in the MRA scheme the engaged students would receive on average their ideal mark of 80, a solid first class result, while the defecting student receives a zero mark.

Even if the defecting student decides to invest effort in the reflexive account, but not the project, the student would need to achieve a mark of 51 in the reflexive account to receive the same mark of 42 that would be assigned under the RA scheme. We note that the MRA scheme can mean that students can fail although both their reflexive account and their overall group mark are sufficient to pass. However, this is unlikely to happen unless a student only participates in groups where all others receive vastly higher marks on the reflexive accounts.
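As a quick consistency check of the figure of 51 quoted above (a sketch under the assumptions of this example: a group of four, project mark 60, and reflexive marks of 80 for the three engaged students), the defecting student's reflexive mark $x$ under the MRA split of Eq. (11) must satisfy

$T_g \frac{x}{x + 3 \times 80} = 42, \qquad T_g = 4 \times 60 = 240 \quad\Rightarrow\quad x = \frac{42 \times 240}{240 - 42} \approx 50.9 \approx 51.$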

In summary, the MRA scheme is slightly more complex than the RA scheme, but the calculations are still simple enough to be carried out very quickly with pen and paper, a calculator or a simple spreadsheet. On average, the improvement of performance over the RA scheme does not seem to justify the extra complexity of the MRA. However, MRA is superior in the extreme case where students do not engage with the project at all. Because the scheme is still quite transparent, this means that MRA can create strong incentives for the students to engage with the projects. We caution however that it is prudent to be conservative in the marking of reflexive accounts if this scheme is used and avoid extreme marks unless they are clearly indicated.

3.4 Normalised peer assessment (NPA)

In peer assessment the students assess the performance of each member of their group. The immediate advantage of this method is that information on the relative contributions to the project is sourced directly from the students involved. The obvious drawback is that the students are given a way to influence their mark other than delivering a good project outcome. There is thus a risk that students use peer assessment strategically to maximise their own marks rather than to provide genuine information about project contributions.

A variety of different peer assessment schemes have been proposed to maximise the advantages and minimise the disadvantages of this approach [7]. To gain insight into the students' ability to manipulate their marks, one can distinguish between strategies that students can implement by themselves and those that require the formation of coalitions with other students. An example of the former would be a student giving themselves a high mark and/or all other group members a low mark to maximise their own outcome. An example of the latter would be 3 members of a group of four conspiring to increase their marks at the expense of the fourth student.

The topic of coalition formation in groups is very complex and the subject of active research. However, a simple argument shows that peer assessment can fail if multiple students conspire to mark others strategically. Consider the (not very far-fetched) situation where three members of a four-person group conspire against the fourth student. In this case the three conspirators could mark each other highly while assigning a zero mark to the fourth student. In the absence of other sources of information, this situation is indistinguishable from a scenario in which one of the students did not engage with the project at all. Thus no marking scheme can deliver a satisfactory outcome in both of these scenarios at the same time.

Here we particularly consider a normalised peer marking scheme [7]. In this scheme every member of a group marks all other members, but not themselves. Not allowing students to mark themselves prevents students from inflating their own mark, the most direct form of mark manipulation.

In the next step the marks assigned by each student are normalised. For example, let $a_{ij}$ be the mark that student $i$ assigned to student $j$. We can then compute the normalised mark

$\hat{a}_{ij} = \frac{a_{ij}}{\sum_{k \neq i} a_{ik}}$   (15)

The normalised marks are in the range [0,1] and reflect student $i$'s opinion of the proportional contribution of the other group members. Taking only these normalised marks into account prevents students from inflating their own contribution by marking all others' contributions lowly.

To determine the mark that student $i$ receives for project $g$, we first compute the total mark points $T_g$ as in the MRA scheme above (Eq. 10) and then distribute these points proportionally to the normalised marks a student has received,

$m_{gi} = \frac{T_g \sum_{j \neq i} \hat{a}_{ji}}{n_g}$   (16)

where $T_g$ is the total number of mark points that can be distributed according to Eq. (10) and the denominator $n_g$ is the number of students in the group.
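A sketch of this allocation for a single group, following Eqs. (15)-(16); the function and variable names are ours.

```python
import numpy as np

def npa_marks_for_group(peer_marks, group_mark):
    """Normalised peer assessment for one group, Eqs. (15)-(16).
    peer_marks[i, j] is the mark student i gives student j; the diagonal
    (self-marks) is ignored, since students do not mark themselves."""
    a = np.array(peer_marks, dtype=float)
    np.fill_diagonal(a, 0.0)                     # no self-marking
    a_hat = a / a.sum(axis=1, keepdims=True)     # Eq. (15): normalise each assessor's marks
    n = a.shape[0]
    T = n * group_mark                           # Eq. (10): total mark points
    return T * a_hat.sum(axis=0) / n             # Eq. (16): distribute by marks received
```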

In the numerical simulation of this method, each student is assumed to mark every other member of their group based on that member's ideal mark, with some random error. This random error represents students not accurately judging the ability of other group members. As for the corresponding errors in other marking schemes, a uniform distribution of fixed range was used. Under these assumptions the method achieves good results; in particular, there is no systematic bias against stronger students (Fig. 1). Unlike all other schemes in this report, the error of this method decreases when group size is increased (Fig. 2). This is intuitive, as the assigned mark is computed from a larger number of observations and thus profits from a "wisdom of the crowd" effect [3].

While the performance of this method in our test is very good, the clear drawback is its vulnerability to manipulation by coalition formation. One could argue that this can be somewhat mitigated by adjusting the mark if there are indications of coalition formation within a group (e.g. from observation of the group work). However, allowing for such adjustments to some extent defeats the purpose of having a transparent procedure. We therefore recommend this method particularly for projects with large group sizes, where the method performs particularly well and the impact of small coalitions within the group is smaller.

3.5 Peer ranking (PR)

We now turn to two unusual methods. For sports rankings, network-based approaches have recently received attention [17, 15]. A similar approach can be taken to peer marking. Instead of marking each group member with a precise mark, students rank the other members of their group. The ranking consists of a pairwise comparison of the perceived contribution of every pair of other members. We can interpret each such ranking as a link in a network leading from the weaker to the stronger student (see Fig. 3 for an example).

Figure 3: Example of peer ranking. In a set of four students (A, B, C, D), each student forms an opinion about the relative contributions of the other three students (bottom right). This information can be represented as a directed graph (left), which can in turn be represented as a matrix (top right). The leading eigenvector of this matrix provides an aggregated measure of the relative contributions.

Network structures can be encoded in matrices. Here we use a modified adjacency matrix $A$, defined by

$A_{ij} = \alpha\, w_{ij} + \beta\, l_{ij}$   (17)

where $w_{ij}$ is the number of times that student $i$ was ranked lower than student $j$, $l_{ij}$ is the number of times that student $i$ was ranked higher than student $j$, and $\alpha$, $\beta$ are constant parameters. An example can be seen in Figure 3.

The advantage of the matrix notation is that we can now generate marks using a spectral approach, similar to the famous PageRank algorithm [16]. For this purpose, we compute the leading eigenvector $v$ of $A$ and normalise it such that $\sum_k v_k = 1$.

After the normalisation, the entries of the eigenvector are in the range [0,1] with a mean of $1/n_g$. The $k$-th entry is a proxy for the relative contribution of student $k$ in the group.

We then use the eigenvector entries to personalise the marks, such that the final mark that student $i$ receives for project $g$ is

$m_{gi} = M_g + \lambda \left( v_{k(i)} - \frac{1}{n_g} \right)$   (18)

where $k(i)$ is the index assigned to student $i$ within group $g$ and $\lambda$ is another scalar parameter used to control the weight attributed to the peer ranking.

Based on preliminary tests we chose fixed values for the parameters $\alpha$, $\beta$ and $\lambda$. We assumed that students rank each other student in order of a perceived mark, which is the ideal mark plus a random error. As for the normalised peer assessment method (Sec. 3.4), this random error is drawn from a uniform distribution of fixed range. These perceived marks are used to rank the other students, and then only the set of ranked lists from each student is used.
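A sketch of the peer ranking scheme for one group, following Eqs. (17)-(18) as written above. The adjacency construction, the eigenvector normalisation and the parameter values (alpha, beta, lam) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def pr_marks_for_group(rankings, group_mark, alpha=1.0, beta=0.25, lam=20.0):
    """Peer ranking for one group, Eqs. (17)-(18).
    rankings[i] is student i's ranked list of the other members, strongest first."""
    n = len(rankings)
    wins = np.zeros((n, n))                       # w_ij: times i was ranked lower than j
    for assessor, order in enumerate(rankings):
        for pos, j in enumerate(order):
            for i in order[pos + 1:]:             # i is ranked below j by this assessor
                wins[i, j] += 1
    A = alpha * wins + beta * wins.T              # Eq. (17), with l_ij = w_ji
    eigvals, eigvecs = np.linalg.eig(A)
    v = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    v = v / v.sum()                               # normalise so entries sum to one
    return group_mark + lam * (v - 1.0 / n)       # Eq. (18)

# Example: four students, each ranking the other three (student 0 judged strongest by all):
# pr_marks_for_group([[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]], group_mark=60.0)
```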

The results of our computational experiment show that this choice of parameters yields a relatively low absolute error, but a slight systematic bias favouring weaker students. This bias could be reduced by adjusting the parameters, albeit at the cost of increasing non-systematic error. Individual errors appear primarily in groups of equal or almost equal strength, where the ranking method amplifies the small differences. One may suspect that this is a lesser problem in reality, where students are not able to rank each other perfectly based on tiny differences. However, a detailed investigation of this point would likely require a study of the ranking behaviour of real students, which is beyond the scope of the present paper.

In summary, the peer ranking scheme performs slightly worse than the NPA method (particularly in the case of large group sizes). It is also considerably less transparent. We therefore judge that the NPA method will be superior in most cases.

3.6 Pseudoinverse Marking (PiM)

All methods considered so far were applied to one project group at a time. However, in a setting where students participate in multiple projects additional information can be gained by taking into account how they perform in groups of different composition.

We continue working on the assumption that the mark of a project report reflects the average ideal mark of the participants in the project, i.e.

$M_g = \frac{1}{n_g} \sum_i P_{gi} m^*_i$   (19)

In vector notation this can be written as

$\mathbf{M} = B\, \mathbf{m}^*$   (20)

where the matrix $B$ is defined by $B_{gi} = P_{gi}/n_g$. In marking, we have determined the project marks $\mathbf{M}$, and we know $B$ as it follows from the partitioning of students into groups. Our aim is to compute the ideal marks $\mathbf{m}^*$ that the students should receive.

Formally we can compute $\mathbf{m}^*$ by multiplying with the inverse of $B$, which yields

$\mathbf{m}^* = B^{-1} \mathbf{M}$   (21)

If students were partitioned into groups such that $B$ is invertible, this relationship would yield the desired marks exactly.

In typical cases another slight complication arises because common ways of dividing students into groups lead to singular matrices such that an inverse does not formally exist. In this case, the method can still be applied if we replace the inverse with the Moore-Penrose pseudoinverse [2]. If each student participates in only a single project, then calculating the pseudoinverse of corresponds to implementing the SOPP method (Sec. 3.1). Clearly, this is undesirable so the number of projects each student participates in must be at least the number of students in the group. In an ideal case, every student completes multiple projects and interacts with a completely different set of people every time. In this case, Eq. (21) can be used to deconvolute the contributions and recover the ideal mark using the pseudoinverse.
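A minimal sketch of Eq. (21) with the Moore-Penrose pseudoinverse, using numpy.linalg.pinv; the function name and example usage are ours.

```python
import numpy as np

def pim_marks(P, group_marks):
    """Pseudoinverse marking, Eq. (21): estimate the ideal marks from the
    group marks and the participation pattern, using the Moore-Penrose
    pseudoinverse when B is singular."""
    n_g = P.sum(axis=1, keepdims=True)   # group sizes
    B = P / n_g                          # B_gi = P_gi / n_g, as in Eq. (20)
    return np.linalg.pinv(B) @ group_marks

# Example with the participation matrix of Eq. (3):
# P = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
# estimated = pim_marks(P, (P @ ideal_marks) / P.sum(axis=1))
```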

In our numerical experiment, we consider such a case, where every student contributes to 4 projects and interacts with distinct other students in the process. The assignment results in a singular $B$. The results (Fig. 1) show that the estimated ideal marks computed with the Moore-Penrose pseudoinverse are in excellent agreement with the true ideal marks; the method produces the smallest errors of all methods considered here.

One could criticise that the accuracy of the method relies on our assumption that the project marks achieved are the mean of the ideal marks of the project participants. However, this is actually not so much an assumption as a definition of what is meant by the ideal mark.

Apart from the high accuracy this method has many other advantages. For example, it does not require any additional information (peer marking, ranking, accounts) from the student and thus eliminates the workload that would otherwise be required to source such information. Moreover, the method is very robust and could be easily implemented, say in a spreadsheet.

Another advantage is that, unlike for any other method, the measured errors all decrease as the population size increases (Fig. 4). Intuitively, as the algorithm can draw on more information, the accuracy of the ideal mark estimates should increase. Additionally, for larger populations more student combinations are possible, allowing a better assignment of students to groups.

A drawback is that the final marks are only available after all projects have been completed, and students may find the method opaque. Perhaps the main disadvantage is that there is a minimum number of projects in which each student needs to participate. This de facto limits the applicability of the method to projects that are carried out in small groups.

4 Conclusion

In this paper we explore several schemes for individualising group project marks in computational experiments using a virtual student population. The most suitable scheme depends on the specific circumstances, including the number of students and the number of projects each student participates in. An overview of the errors for a typical setting is shown in Table 1.

Method                                    Average absolute error   Maximum error
Self-organised peer pressure (SOPP)       5.5                      13.8
Reflexive accounts (RA)                   3.7                      10.6
Mark-adjusted reflexive accounts (MRA)    3.5                      11.3
Normalised peer assessment (NPA)          1.7                      4.5
Peer ranking (PR)                         3.3                      12.3
Pseudoinverse marking (PiM)               1.3                      1.9
Table 1: Overview of the errors of the various methods for a typical scenario, in which a cohort of 52 students each undertakes 4 projects in groups of 4 people.

The simplest scheme (SOPP), where students receive the average of the group marks for the projects that they participated in, systematically favours weaker students, but has the advantage of not generating additional workload and of being very close to a real-life workplace situation. Marking of reflexive accounts (RA) did not fully remove the systematic bias from the marks and has the disadvantage of generating significant additional workload. Of the two reflexive-account schemes, the mark-adjusted reflexive accounts (MRA) scheme proposed here provides a stronger incentive to engage with the project. Normalised peer assessment (NPA) resulted in very accurate marks, particularly for projects with many members, but entails the risk of mark manipulation by coalition formation. A peer ranking scheme (PR) proposed here performed similarly but is significantly less transparent. Finally, pseudoinverse marking (PiM), also proposed here, achieves very accurate results without the risk of mark manipulation or additional workload, but requires that the number of projects to which a student contributes is at least as large as the group size in the projects.

In the absence of factors favouring a certain approach, our analysis highlights normalised peer assessment as the best scheme for projects with large group sizes and pseudoinverse marking as the best scheme for marking a series of projects carried out in small groups.

This paper also illustrates how agent-based computational models can be used to explore the fairness and accuracy of marking schemes. Here we have used only a very simple model, and plenty of opportunities for improvements and refinements remain. For example, one could allow the students to allocate their time investment in the project strategically, or build in social dynamics, but these extensions are beyond the scope of the current paper. We hope that in the future more such analyses will be carried out to yield deeper insights into the mathematical properties of group marking schemes.

References

  • [1] M. A. Abelson and J. A. Babcock. Peer evaluation within group projects: A suggested mechanism and process. Organizational Behavior Teaching Review, 10(4):98–100, 1986.
  • [2] J. Barata and M. Hussein. The Moore-Penrose pseudoinverse: A tutorial review of the theory. Brazilian Journal of Physics, 42(1-2):146–165, 2012.
  • [3] M. Crosscombe and J. Lawry. Exploiting vagueness for multi-agent consensus. In Multi-agent and Complex Systems, pages 67–78. Springer, 2017.
  • [4] W. M. Davies. Groupwork as a form of assessment: common problems and recommended solutions. Higher Education, 58(4):563–584, 2009.
  • [5] J. Dijkstra, M. Latijnhouwers, A. Norbart, and R. A. Tio. Assessing the “I” in group work assessment: State of the art and recommendations for practice. Medical Teacher, 38(7):675–682, 2016.
  • [6] J. E. Dyment and T. S. O’Connell. Assessing the quality of reflection in student journals: A review of the research. Teaching in Higher Education, 16(1):81–97, 2011.
  • [7] C. Spatar et al. A robust approach for mapping group marks to individual marks using peer assessment. Assessment & Evaluation in Higher Education, 40(3):371–389, 2014.
  • [8] M. R. Fellenz. Toward fairness in assessing student groupwork: A protocol for peer evaluation of individual contributions. Journal of Management Education, 30(4):570–591, 2006.
  • [9] M. Freeman and J. McKenzie. Spark, a confidential web–based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology, 33(5):551–569, 2002.
  • [10] J. Goldfinch. Further developments in peer assessment of group projects. Assessment & Evaluation in Higher Education, 19(1):29–35, 1994.
  • [11] C. E. Hmelo-Silver. Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3):235–266, 2004.
  • [12] B. Jackel, J. Pearce, A. Radloff, and D. Edwards. Assessment and feedback in higher education: A review of literature for the higher education academy. Higher Education Academy, 2017.
  • [13] S. Ko. Peer assessment in group projects accounting for assessor reliability by an iterative method. Teaching in Higher Education, 19(3):301–314, 2014.
  • [14] M. Lejk and M. Wyvill. Peer assessment of contributions to a group project: a comparison of holistic and category-based approaches. Assessment & Evaluation in Higher Education, 26(1):61–72, 2001.
  • [15] S. Motegi and N. Masuda. A network-based dynamical ranking system for competitive sports. Scientific Reports, 2:904, 2012.
  • [16] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
  • [17] J. Park and M. E. J. Newman. A network-based ranking system for US college football. Journal of Statistical Mechanics, 10:P10014, 2005.
  • [18] S. Sharp. Deriving individual student marks from a tutor’s assessment of group work. Assessment & Evaluation in Higher Education, 31(3):329–343, 2006.
  • [19] J. J. Suñol, G. Arbat, J. Pujol, L. Feliu, R. M. Fraguell, and A. Planas-Lladó. Peer and self-assessment applied to oral presentations from a multidisciplinary perspective. Assessment & Evaluation in Higher Education, 41(4):622–637, 2016.
  • [20] C. Wu, E. Chanda, and J. Willison. Implementation and outcomes of online self and peer assessment on group based honours research projects. Assessment & Evaluation in Higher Education, 39(1):21–37, 2014.