Therefore, a series of International Workshops in Performance Analysis of Sport commenced in 2007, providing an annual forum for the communication of ideas in addition to the bi-annual World Congresses of Performance Analysis of Sport. The first International Workshop was held in Cardiff, Wales in 2007 and emphasised developments in commercial performance analysis systems. The second International Workshop was hosted by Leeds Carnegie University in 2008 and emphasised performance analysis in coaching, particularly in Olympic sports. The University of Lincoln hosted the 3rd International Workshop in 2009, which was a forum for academic
score being conceded can reveal a path from a root cause to the critical situation and subsequent score. This is very similar to the way safety requirements of software-intensive systems are analysed (Ericson, 2005), except that the analysis is done during system development to help avoid hazardous conditions and possible mishaps. O'Donoghue (2007c) described how fault tree analysis could be used to identify critical states from where the path followed could lead to a vulnerable situation or an alternative path could lead to a safe state. A key difference between the use of fault trees in safety analysis and their use in performance analysis of sport is that in sport, a team will wish to create perturbations that they can take advantage of.
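As a rough illustration of how such a fault tree might be represented and traversed, the sketch below builds a small tree of match states with AND/OR gates and traces which branches contributed to a critical situation. The gate structure, event names and the traversal routine are illustrative assumptions rather than the method described by O'Donoghue (2007c).

```python
# Minimal sketch of a fault tree for tracing how a critical match state can
# arise from basic events. Gate structure and event names are invented.

from dataclasses import dataclass, field
from typing import List, Set, Union


@dataclass
class Gate:
    """AND/OR gate combining basic events or sub-gates."""
    label: str
    kind: str                      # "AND" or "OR"
    children: List[Union["Gate", str]] = field(default_factory=list)

    def occurs(self, observed: Set[str]) -> bool:
        results = [
            child.occurs(observed) if isinstance(child, Gate) else child in observed
            for child in self.children
        ]
        return all(results) if self.kind == "AND" else any(results)

    def trace(self, observed: Set[str], depth: int = 0) -> None:
        """Print the branches that contributed to the top event occurring."""
        print("  " * depth + f"{self.label} [{self.kind}] -> {self.occurs(observed)}")
        for child in self.children:
            if isinstance(child, Gate):
                child.trace(observed, depth + 1)
            else:
                print("  " * (depth + 1) + f"{child} -> {child in observed}")


# Hypothetical tree: a goal is conceded if possession is lost in a dangerous
# area AND the defensive unit is disorganised (either a defender is out of
# position or the midfield fails to track back).
score_conceded = Gate("score conceded", "AND", [
    "possession lost in own third",
    Gate("defensive disorganisation", "OR",
         ["defender out of position", "midfield fails to track back"]),
])

observed_events = {"possession lost in own third", "defender out of position"}
score_conceded.trace(observed_events)
```

Tracing the gates in this way shows the path from basic events to the critical state, which is the aspect of the technique that a team analyst could exploit when looking for perturbations to create or avoid.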
Analysis of coach behaviour
Analysis of coach behaviour has become an established area of research (Gilbert and Trudel, 2004; Kahan, 1999; More, 2008; Potrac et al., 2002; Van der Mars, 1989). There are different quantitative methods of analysing coach behaviour, including the Computer Aided Coaching Analysis Instrument (Franks et al., 2001; Harries-Jenkins and Hughes, 1995; More and Franks, 1996), the Revised Coaching Behaviour Recording Form (Côté et al., 1995; Durand-Bush, 1996) and the Arizona State University Observation Instrument (ASUOI) (Lacy and Darst, 1984, 1985, 1989).
There are many aspects of coaching such as the knowledge of the coach, the decisions made by coaches and the development of training programmes that are best studied using methods other than performance analysis.
However, coaching and teaching style as well as coach behaviour during coaching sessions and competition can be observed and analysed in detail.
The behaviour of high level and successful coaches has been reported (Bloom et al., 1999; Gilbert and Jackson, 2006). The ASUOI is becoming a recognised standard for the analysis of coach behaviour and it has been used to analyse the behaviour of strength and conditioning coaches (Massey et al., 2002), coach behaviour during ice-hockey games (Trudel et al., 1996), age group effects (Cushion and Jones, 2001) and the behaviour of physical education teachers (Paisey and O'Donoghue, 2008).
Developments in computer technology have not only led to more sophisticated match analysis systems but also to more sophisticated systems for analysing coach behaviour. For example, Brown and O'Donoghue (2008a) used a split screen system to allow simultaneous viewing of the coach and the wider training session. The use of microphones worn by the coach, with sound transmitted to a receiver connected to a camcorder, allows the words of the coach to be recorded in detail while the camera remains distant enough not to interfere with the training. Qualitative analysis can complement the quantitative analysis of coach behaviour, providing explanations of the behaviour that is recorded. Paisey and O'Donoghue (2008) analysed physical education teachers' behaviour using the ASUOI as well as in-depth qualitative observational analysis of the video recordings of
the physical education lessons. Donnelly and O'Donoghue (2008) analysed netball coach behaviour at three different levels using the ASUOI, and used a follow-up interview to discuss the similarities and differences in the behaviours of the coaches.
Performance indicators for different sports
Performance analysis allows the complex and dynamic nature of sports performance to be represented in an abstract way, using performance indicators that focus attention on the most relevant characteristics. The term
‘performance indicator’ is not another name for ‘variable’ but is a term for those variables that are demonstrated to be valid measures of important aspects of performance and which possess the metric properties of having an objective measurement procedure, a known scale of measurement and a valid means of interpretation. Aiming for these qualities for performance indicators in performance analysis will help ensure that the term ‘performance indicator’ is used in a similar manner to the way it is used in business and engineering fields.
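To make these metric properties concrete, the sketch below represents a performance indicator as a simple data structure holding its measurement procedure, scale and a population norm for interpretation. The field names and the example indicator are illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch of a performance indicator with the metric properties
# described above: an objective measurement procedure, a known scale and a
# means of interpretation against a population norm.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class PerformanceIndicator:
    name: str
    measurement_procedure: str          # objective operational definition
    scale: Tuple[float, float]          # known scale of measurement (min, max)
    population_norm: float              # reference value for interpretation
    higher_is_better: bool = True

    def interpret(self, value: float) -> str:
        """Relate an observed value to the population norm."""
        low, high = self.scale
        if not (low <= value <= high):
            return f"{value} is outside the known scale {self.scale} - check data entry"
        above = value > self.population_norm
        favourable = above == self.higher_is_better
        direction = "above" if above else "below"
        verdict = "favourable" if favourable else "unfavourable"
        return (f"{self.name} = {value}: {direction} the norm of "
                f"{self.population_norm} ({verdict})")


# Hypothetical example: percentage of points won when the first serve is in.
first_serve_win = PerformanceIndicator(
    name="% points won on first serve",
    measurement_procedure="points won with first serve in / first serves in x 100",
    scale=(0.0, 100.0),
    population_norm=70.0,
)
print(first_serve_win.interpret(74.3))
```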
Different investigations have used differing sets of variables to characterise sports performance. For example, some time-motion analysis investigations look at the distribution of time among different classifications of movement (Bangsbo et al., 1991; Bloomfield et al., 2004; O'Donoghue et al., 2005b). Other time-motion investigations use indicators relating to distances covered and velocity profiles (Di Salvo et al., 2009; Hughes et al., 1989; Reilly and Thomas, 1976; Withers et al., 1982). Similarly, studies of tennis strategy fail to agree on standard sets of variables to use to represent strategy. For example, Hughes and Clarke (1995) used player positioning, ball placement and rally times. O'Donoghue and Ingram (2001) used point types, especially the percentage of points where players attack the net.
Therefore, since the advent of the term ‘performance indicators’ in performance analysis of sport (Hughes and Bartlett, 2002), there has been a research effort into defining the most valid performance indicators in different types of sports. Hughes and Bartlett (2002) classified formal games as invasion games, net games, wall games and striking/fielding games and defined the types of performance indicator of interest when analysing performances in those sports. There are other sports that do not fall within the classification made by Hughes and Bartlett (2002) where performance indicators have been proposed. For example, canoe slalom (Wells et al., 2009), middle distance athletics (Brown, 2005) and martial arts (Shapie et al., 2008) are sports where performance indicators have been proposed based on the broad principles outlined by Hughes and Bartlett (2002).
Varying approaches have been proposed to identify the key performance indicators to characterise different aspects of sports performance. Focus groups have been established to obtain expert opinion as to the key aspects of matches to record and present (McCorry et al., 1996). Artificial neural
networks and regression analysis have been used to identify the factors most associated with outcome indicators in tennis (Choi et al., 2006b). Tennis is an interesting sport to consider, as five-set matches can contain periods where the eventual winning player is losing. Furthermore, in real-time feedback systems that can be used within the match, it is necessary to use performance indicators based on sections of the match. Therefore, Choi et al.
(2008) used individual quarters of basketball matches and individual sets of tennis matches to determine the performance indicators most associated with winning performance. Statistical analysis has also been undertaken to compare the winning and losing teams within matches to identify the performance indicators that distinguish between them (Choi et al., 2006a).
This approach can be criticised because some matches will be played between very successful teams. Therefore, an alternative approach is to classify teams according to finishing positions within tournaments and identify performance indicators that distinguish between successful and unsuccessful teams within tournaments (O'Donoghue et al., 2008).
Investigations comparing winning and losing teams within matches and successful and unsuccessful teams within tournaments can identify many performance indicators as being significantly different between the samples of teams being compared. One of the disadvantages of such investigations is that many of the matches used are between teams or players with a large difference in ability. Such matches, where the outcomes are wins for the higher ranked of the two teams (or players), may not be the most critical ones for coaches to prepare for. The most even matches, where the result could be a win or a loss with almost equal probability, are the most important matches.
Therefore, recent research has attempted to determine the performance indicators that distinguish between winning and losing performances in matches between closely ranked teams (Csataljay et al., 2008).
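As an illustration of this kind of within-match comparison, the sketch below applies a paired non-parametric test to a candidate performance indicator recorded for the winners and losers of a set of closely contested matches. The data are simulated and the indicator name is an assumption; the sketch is not a reproduction of the analyses cited above.

```python
# Minimal sketch: compare winning and losing performances within matches on
# one candidate performance indicator using a paired (within-match) test.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Shooting percentage for the winner and loser of 20 closely contested
# matches (values are simulated, not real match data).
winner_shooting = rng.normal(loc=48.0, scale=5.0, size=20)
loser_shooting = rng.normal(loc=44.0, scale=5.0, size=20)

# Paired comparison: each match contributes one winning and one losing value.
statistic, p_value = wilcoxon(winner_shooting, loser_shooting)
print(f"Wilcoxon T = {statistic:.1f}, p = {p_value:.3f}")
print("Median within-match difference =",
      round(float(np.median(winner_shooting - loser_shooting)), 1), "%")
```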
Very often, the performance indicators identified through peer review or more quantitative methods will contain pairs of performance indicators that are highly correlated with each other. A group of correlated performance indicators represents the same broad aspect of performance, or at least different aspects of performance that are strongly associated. Therefore, O'Donoghue (2008a) proposed the use of principal components analysis to identify independent broad dimensions within sports performance and those performance indicators associated with them. The performance indicator most highly loaded onto a principal component could be selected to represent the factor of interest.
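The sketch below illustrates this broad idea: correlated indicators are standardised, reduced with principal components analysis, and the indicator loading most highly on each component is identified. The indicator names and data are synthetic and are not taken from O'Donoghue (2008a).

```python
# Minimal sketch of using principal components analysis to find independent
# dimensions among correlated performance indicators.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
indicators = ["possession %", "passes completed", "shots", "shots on target"]

# 40 simulated team performances; the two possession-related and the two
# shooting-related indicators are deliberately correlated.
base_possession = rng.normal(50, 5, 40)
base_shooting = rng.normal(12, 3, 40)
data = np.column_stack([
    base_possession + rng.normal(0, 1, 40),
    base_possession * 8 + rng.normal(0, 10, 40),
    base_shooting + rng.normal(0, 1, 40),
    base_shooting * 0.4 + rng.normal(0, 0.5, 40),
])

scaled = StandardScaler().fit_transform(data)
pca = PCA(n_components=2).fit(scaled)

# Report the indicator most highly loaded on each independent component.
for i, component in enumerate(pca.components_):
    best = int(np.argmax(np.abs(component)))
    print(f"Component {i + 1}: highest loading = {indicators[best]} "
          f"({component[best]:+.2f}), variance explained = "
          f"{pca.explained_variance_ratio_[i]:.0%}")
```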
The biggest problem with many of the quantitative techniques used to identify valid performance indicators is that they require a lot of data to already exist. The purpose of identifying valid performance indicators may be to develop a system to allow these to be analysed, which is a ‘chicken and egg’ situation. We need the system first to gather the data to test the validity of the performance indicators, but we need to know the performance indicators first in order to develop the system! However, there may be internet
sources of data that are available for such exercises, or the purpose of the entire study may be to identify valid performance indicators for a given sport that can then be utilised in practice by others.
Work-rate analysis and evaluation of injury risk
Analysis of work-rate has been undertaken using both manual and computerised methods to determine distances covered and the breakdown of match time among different movement classes (O'Donoghue, 2008b). Speed, agility and quickness training programmes are being used to prepare athletes in many sports and yet there are very few studies giving an understanding of the agility requirements of competing in those sports. This type of research is very time-consuming even when computerised systems are used for data entry (Bloomfield et al., 2007a, b; Robinson and O'Donoghue, 2008). However, the results of such research are of clear benefit to those developing the conditioning elements of players' training programmes.
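As a simple illustration of the breakdown of match time among movement classes mentioned above, the sketch below calculates the percentage of observed time spent in each class from a logged sequence of movements and durations. The movement classes and durations are invented for illustration.

```python
# Minimal sketch of a time-motion summary: percentage of match time spent in
# each movement classification, from an observer's event log.

from collections import defaultdict

# Each tuple is (movement class, duration in seconds) as logged by an observer.
movement_log = [
    ("stand", 4.2), ("walk", 10.5), ("jog", 6.1), ("run", 3.0),
    ("sprint", 2.2), ("walk", 8.4), ("jog", 5.5), ("stand", 3.1),
]

totals = defaultdict(float)
for movement, duration in movement_log:
    totals[movement] += duration

match_time = sum(totals.values())
for movement, seconds in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{movement:<7}{seconds:6.1f} s  {100 * seconds / match_time:5.1f} %")
```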
The technique developed by Bloomfield et al. (2004) has not only been used for the analysis of agility requirements of soccer (Bloomfield et al., 2007b) but also for the analysis of injury risk in netball (Darnell et al., 2008; Williams and O'Donoghue, 2005). Other research analysing injury risk has used observational analysis of match events (Hawkins and Fuller, 1998; Robinson and O'Donoghue, 2008), with some work also classifying the risk of injury associated with each event (Rahnama et al., 2002). Further work is needed in many other sports to assess their potential for injury.
Reliability of methods
The reliability techniques used in performance analysis can be challenged for failing to detect poor reliability and also for falsely concluding that reliable methods are unreliable. The main problem with the reliability methods that have been used is that there have been very few attempts to relate the reliability statistics to the analytical goals of the studies for which the systems are used. Very often a level of inter-operator agreement is set as a maximum percentage error of 5 or 10 per cent. However, there is no rationale given for this and the impact of a 5 or 10 per cent error on the investigation being conducted is unknown. Choi et al. (2007) synthetically introduced different severities of error into basketball data to determine the kappa values that would be associated with these. By determining the kappa values (or any other reliability statistic) that we would get for an acceptable level of error, or the point at which errors would lead to the incorrect conclusion being drawn about the performance, we can identify threshold values for the reliability statistics that are more meaningful than the arbitrary values that have been used to date. O'Donoghue (2007b) showed that some reliability statistics did not have construct validity in that values were obtained when comparing completely different performances that exceeded those obtained
when the same performance had been analysed twice by an expert observer.
Therefore, there is a great deal of work that needs to be done urgently to produce reliability assessment procedures that are themselves valid assessments of system reliability in relation to the analytical goals of the studies for which they are to be used.
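The sketch below illustrates the general idea behind the synthetic-error approach: errors of increasing severity are introduced into a simulated sequence of categorised events and the resulting kappa values are computed. The event categories, error model and sample size are assumptions rather than the data used by Choi et al. (2007).

```python
# Minimal sketch: introduce synthetic errors into a categorised event list
# and observe the kappa values they produce.

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
categories = ["pass", "dribble", "shot", "tackle", "clearance"]

# 'True' observation of 500 events by a gold-standard operator (simulated).
reference = rng.choice(categories, size=500)

for error_rate in (0.0, 0.05, 0.10, 0.20):
    # Second operator agrees except for a random proportion of events, which
    # are recoded to a randomly chosen (possibly different) category.
    second = reference.copy()
    flip = rng.random(second.size) < error_rate
    second[flip] = rng.choice(categories, size=int(flip.sum()))
    kappa = cohen_kappa_score(reference, second)
    print(f"error rate {error_rate:4.0%} -> kappa = {kappa:.3f}")
```

Running such a simulation for the error level deemed tolerable for a given study gives a kappa threshold that is tied to the study's analytical goals rather than an arbitrary cut-off.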
Analysis of technique
Detailed analysis of technique remains an important area of research in performance analysis of sport. Lees (2008) described ‘event skills’ as skills that are themselves sports events, such as the long jump. Identifying optimal techniques for maximising the performance outcomes of event skills is an area of research that needs to be extended to different sports, with improving technology allowing more efficient data gathering which, in turn, allows more subjects to be included in studies. Lees (2008) described ‘minor skills’ as skills that are performed repeatedly during competition, for example the service in tennis. There may be different types of serve based on pace, placement and the application of spin. A single optimal technique cannot be applied to repeated skills in games such as tennis, as the opponent would be able to anticipate the serve. There are very few sports, if any, where every variation of each minor skill has been analysed for players at all ability levels. Therefore, there is a need for further research in this area.
‘Major skills’ are performed repeatedly but have a much greater impact on success in a sport than minor skills (Lees, 2008); these include the golf swing. When one considers the golf swing, there are a variety of techniques based on the distance a shot has to be played, among other factors. The variety of applications of major skills in many sports gives rise to many opportunities for original research.
Changing equipment and regulations in sport can render previous research into technique obsolete and so up-to-date research is always needed.
Variability in sports performance is an emerging research area, with biomechanics investigations showing variability in successive strides when running (Heiderscheit et al., 1998) and walking (Dingwell et al., 1999).
Technical effectiveness
Technical analysis is concerned with how well skills are performed in sport.
Positive-to-negative ratios have been used to evaluate the skills of players in soccer (Gerisch and Reichelt, 1993; Olsen and Larsen, 1997; Rowlinson and O'Donoghue, 2009), while winner-to-error ratios have been used in racket sports (Murray and Hughes, 2001). A criticism of the use of positive-to-negative ratios is that all events must be judged as wholly positive or wholly negative. One only has to watch the first ten minutes of any weekend soccer match to see a one-on-one situation where neither player fully achieves their objective. Another issue is that the degree of difficulty of the situation where the skill is performed is often not taken into consideration. Hughes and Probert (2006) used a quality rating of -3 to +3 to give greater precision to technical effectiveness in soccer. Much more work of this nature needs to be applied to many other sports. The effectiveness of players of different standards and genders, when performing in different situations, is still a key research area.
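The sketch below contrasts the two approaches on an invented set of graded pass ratings: a dichotomous positive-to-negative ratio discards the degree of success of each event, whereas a -3 to +3 quality rating retains it. The ratings are illustrative and are not taken from Hughes and Probert (2006).

```python
# Minimal sketch comparing a positive-to-negative ratio with a graded
# quality rating of technical effectiveness.

# Graded ratings for one player's passes in a match (-3 = worst, +3 = best).
pass_ratings = [3, 1, -1, 2, 0, -2, 1, 3, -1, 2, 1, -3, 2, 0, 1]

positives = sum(1 for r in pass_ratings if r > 0)
negatives = sum(1 for r in pass_ratings if r < 0)

# Dichotomous view: every event is treated as wholly positive or wholly
# negative (neutral events are ignored), losing the grading information.
print(f"positive:negative ratio = {positives}:{negatives} "
      f"({positives / negatives:.2f})")

# Graded view: the mean rating retains the degree of success of each event.
print(f"mean quality rating = {sum(pass_ratings) / len(pass_ratings):+.2f}")
```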
Tactical patterns of play
Tactical analysis is one of the main purposes of performance analysis of sport and there are many sports where much more research is needed into patterns of play. Indeed, when one thinks of a particular sport such as tennis, there are a multitude of different tactical aspects such as service tactics (Unierzyski and Wieczorek, 2004), tactics on return of serve, tactics in different game states (O'Donoghue, 2006a, 2007a; Scully and O'Donoghue, 1999), tactics on different court surfaces (O'Donoghue and Ingram, 2001) and opposition effects on the tactics utilised (O'Donoghue, 2009b). Sports are played at a range of levels, from recreational to international and world class performance, and there are no sports where all aspects of tactics have been analysed at all levels of the sport. The analysis of the tactics of winning and losing teams within matches is also a useful area of research, but it does suffer from the disadvantage that both teams in some matches might be very successful teams; for example, World Cup finalists in cricket, rugby and soccer. Therefore, an alternative approach is to analyse the tactics of successful teams and unsuccessful teams within tournaments based on finishing position (Taylor et al., 2008). For example, the quarter-finalists of the soccer World Cup could be compared with those teams that were eliminated after the group stage. Gender (O'Donoghue and Ingram, 2001), venue (Taylor et al., 2008) and rule changes (Williams, 2008) are other factors that may have an influence on tactics.
Performance profiling
Performance indicators in sports performance are not stable variables like anthropometric variables, and there are many sources of variability in sports performance including opposition effects, venue and score line (Taylor et al., 2008). Therefore, teams and individuals need to be represented by performances from multiple matches. A performance profile in performance analysis is a collection of performance indicators that together characterise the typical performance. Hughes et al. (2001b) produced the first technique for analysing multiple performances, but it did not produce a profile that incorporated all of the performance indicators of interest into a single profile. Instead, each performance indicator was dealt with in isolation to determine the number of matches required for the accumulating mean to fall within a tolerable level of error of the overall sample mean. O'Donoghue
(2005a) produced a profiling technique that displayed the various performance indicators of interest on a radar chart profile that related the typical performance to the norms for the relevant population of performers. This technique also showed the spread of performances about the typical performance, allowing consistent and erratic areas of performance to be represented. James et al. (2005) used a form chart to show the 95 per cent confidence limits of performance indicators, allowing different groups of performers to be compared. These techniques can be used to produce profiles of different levels of performance in different sports. The study of variability in sports performance is still a major research topic that can also be investigated through the use of performance profiling techniques.
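As an illustration of these ideas, the sketch below applies a cumulative-mean stability check of the kind described for Hughes et al. (2001b) and computes 95 per cent confidence limits of the kind used in form charts. The match values and the 5 per cent tolerance are simulated assumptions, not data from the cited studies.

```python
# Minimal sketch: how many matches are needed before the accumulating mean of
# one performance indicator settles within a tolerance of the sample mean,
# plus approximate 95% confidence limits for the indicator.

import numpy as np

rng = np.random.default_rng(3)

# Simulated values of one performance indicator over 25 matches for a team.
indicator = rng.normal(loc=55.0, scale=6.0, size=25)

overall_mean = indicator.mean()
tolerance = 0.05 * overall_mean          # tolerable error band (+/- 5%)

cumulative_means = np.cumsum(indicator) / np.arange(1, indicator.size + 1)
within_band = np.abs(cumulative_means - overall_mean) <= tolerance

# First match after which the cumulative mean stays inside the band.
stable_from = next(
    (i + 1 for i in range(indicator.size) if within_band[i:].all()),
    None,
)
print(f"overall mean = {overall_mean:.1f}, tolerance = +/-{tolerance:.1f}")
print(f"cumulative mean remains within tolerance from match {stable_from}")

# Approximate 95% confidence limits of the indicator, as used in form charts.
sem = indicator.std(ddof=1) / np.sqrt(indicator.size)
print(f"95% CI approx. {overall_mean - 1.96 * sem:.1f} "
      f"to {overall_mean + 1.96 * sem:.1f}")
```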
While profiling is important in understanding the performances of an opponent, in scientific research the use of profiling may be a hindrance. Where there are enough players involved in an investigation, it is not necessary to have each player represented by multiple performances. Where a single performance is used for each player, the performances of some players will underestimate the typical values of some performance indicators while others will overestimate them. The additional variability caused by individual match effects actually means that the researcher can be more confident in any significant results found. A programme of research has commenced into the impact of limited sample sizes, individual performances and limited reliability in performance analysis of sport (Ponting and O'Donoghue, 2009). This programme of research will determine the effects of such factors on the conclusions drawn from studies.
This research uses a fictitious population of performances from a fictitious sport called ‘pseudoball’. This allows the conclusions drawn from analyses of samples to be compared with the known truth based on the synthetically created population. Further research is needed in this area to understand where it is necessary to use multiple match profiles for performers and where individual performances will be sufficient.
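The sketch below illustrates the general logic of such a simulation study: a synthetic population with a known true difference is defined, repeated samples of performances are drawn, and the proportion of samples that lead to the correct conclusion is recorded. The population parameters and sample sizes are invented and are not those of the ‘pseudoball’ study.

```python
# Minimal sketch of comparing sample-based conclusions against a known truth
# built into a synthetic population of performances.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)


def one_study(matches_per_group: int) -> bool:
    """Draw one sample and return True if it detects the built-in difference.

    Known truth: successful teams average 52 and unsuccessful teams 48 on
    some indicator, with match-to-match variability of 8 (invented values).
    """
    successful = rng.normal(52.0, 8.0, matches_per_group)
    unsuccessful = rng.normal(48.0, 8.0, matches_per_group)
    _, p = ttest_ind(successful, unsuccessful)
    return p < 0.05


for n in (10, 20, 50, 100):
    detected = sum(one_study(n) for _ in range(1000))
    print(f"n = {n:3d} matches per group: correct conclusion in "
          f"{detected / 10:.1f}% of simulated studies")
```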
The effectiveness of performance analysis support
There is still scepticism about the effectiveness of performance analysis in enhancing sports performance. Studies to analyse the effectiveness of such support are impossible to control but are nonetheless very important. So far, studies in field hockey (Boddington, 2002), squash (Brown and Hughes, 1995; Murray et al., 1998), Gaelic football (Martin et al., 2004) and netball (Jenkins et al., 2007; Mayes et al., 2009) have provided mixed evidence on the effectiveness of performance analysis support. Where teams and players do improve, it is impossible to prove that it is as a direct result of the use of performance analysis. This has deterred many from undertaking such investigations. However, the research is vitally important and hopefully over the next few years many studies of this type will be done so that the balance of