Table 8. Items chosen for the SAE object naming/identification task

| Item | Expressive | Receptive | Gesture |
|---|---|---|---|
| Practice item | Dog | Flower | Push |
| 1 | Ball | Bed | Point |
| 2 | Baby | Balloon | Give |
| 3 | Truck | Car | Throw |
| 4 | Carrot | Tomato | Nods head “yes” |
| 5 | Leaf | Neck | Come |
| 6 | Build | Drink | |
| 7 | Paint | House | |
| 8 | Shoulder | Nappy | |
| 9 | Glasses | Eat | |
| 10 | Gloves | Toothbrush | |
Table 9. Items chosen for the Afrikaans object naming/identification task

| Item | Expressive | Receptive | Gesture |
|---|---|---|---|
| Practice item | Kat | Hond | Kyk |
| 1 | Maag | Voet | Gee |
| 2 | Bal | Ballon | Gooi |
| 3 | Doek | Eet | Skud kop “ja” |
| 4 | Wortel | Trui | Stoot |
| 5 | Trok | Wortel | Kom |
| 6 | Bou | Kryt | |
| 7 | Bord | Vliegtuig | |
| 8 | Klip | Graaf | |
| 9 | Verf | Verf | |
| 10 | Lamp/lig | Ster | |
pertaining to the relevant languages. Furthermore, the protocol was devised specifically to be used as a guideline to create an object naming/identification task for each of the six South African languages. This will allow for the cross-linguistic comparison of information derived from the administration of the task, and has the potential to contribute to research on lexical development in South African languages by delivering information pertaining to the differences and/or similarities across the respective languages (Haman et al., 2017).
Understanding the principles of test design is important for SLPs working within a context like South Africa, but these principles are not routinely included in our training. The introduction of the ITC, the incorporation of these principles, and the further review of the literature on lexical assessment development principles conducted in this research study provide an opportunity for SLPs to learn more about test design. The ITC outlines principles, introduced earlier in chapter 2, that should be considered not only when developing an assessment but also when using an assessment in research. This research study focused on these principles and incorporated those that were relevant in the following ways:
1) Permission was obtained to use the South African MB-CDIs in this study.
2) I am a native speaker of the languages which were the focus of this research study, and a qualified SLP with experience with the content of the test and the process of testing (and now informed about assessment development principles).
3) The constructs measured by the new assessment task and by the MB-CDI were measured in the same linguistic groups.
4) The protocol can be used cross-culturally to create assessments which are still unique to each language but will allow for cross-linguistic comparison.
5) The protocol includes administration guidelines and instructions for the administrator, including an answer sheet, time indications, and clear guidance for participants on the purpose of the test and how it would be scored.
6) A pilot study was conducted with a participant native to the language and culture, which allowed for changes to be made in terms of instructions and other aspects prior to a larger scale administration.
7) Practice items were included and categorised as ‘easy’, and during administration of the practice items, I established rapport and ensured the participant knew what was expected of them.
8) Manual administration was offered to participants who were not familiar with and/or did not have the resources for digital administration/completion of the test.
9) The sample was chosen according to relevant characteristics, and its size was appropriate for a validation study.
Read and Chapelle (2001) discussed the development of vocabulary tests, describing in detail the guidelines used and a framework for vocabulary test development. They suggest that the purpose of a test is directly related to its design, and they clearly outline the questions to be asked and the steps to be followed when designing a test. As mentioned earlier in this chapter, the protocol designed in this research study aligns with the framework described by Read and Chapelle (2001): the object naming/identification task measures vocabulary independently, yet it forms part of the assessment of the validity of the MB-CDI. Furthermore, the items included in the object naming/identification task were carefully chosen according to the guidelines outlined in the protocol.
Read and Chapelle (2001) and Scott et al. (2008) suggest that how test scores are presented, and to whom, plays a vital role in test design. In this study, multiple scores were reported for each participant: a score for each domain as well as a percentage for the overall score. Each component was assessed, and the results were combined, tabulated and correlated during the analysis process. The audience for the object naming/identification task was identified as young children, and the protocol clearly outlines what is included in the task and what it sets out to measure. In terms of validation, the object naming/identification task can be described as a measure of the expressive language, receptive language and gestural abilities of young children, with each score being relevant in the validation process. The assessment can contribute to knowledge of the language acquisition of young children within the South African context (Paradis, Emmerzael & Duncan, 2010; Read & Chapelle, 2001).
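The scoring scheme described above (a score per domain plus an overall percentage per participant) can be sketched in a few lines. This is only an illustrative sketch: the function names, the item totals, and the example participant's raw scores are my own assumptions for illustration, not the study's actual scoring procedure or data.

```python
# Illustrative sketch of per-domain and overall scoring for an
# object naming/identification task. All figures are hypothetical.

# Number of test items per domain (Tables 8 and 9 list ten test items
# each for the expressive and receptive domains and five gestures).
DOMAIN_TOTALS = {"expressive": 10, "receptive": 10, "gesture": 5}

def domain_scores(raw: dict) -> dict:
    """Percentage correct for each domain separately."""
    return {d: 100 * raw[d] / DOMAIN_TOTALS[d] for d in DOMAIN_TOTALS}

def overall_score(raw: dict) -> float:
    """Overall percentage, pooling all items across domains."""
    return 100 * sum(raw.values()) / sum(DOMAIN_TOTALS.values())

# One hypothetical participant's raw correct-item counts.
participant = {"expressive": 7, "receptive": 9, "gesture": 4}
print(domain_scores(participant))
# {'expressive': 70.0, 'receptive': 90.0, 'gesture': 80.0}
print(round(overall_score(participant), 1))  # 80.0
```

Per-domain scores keep the expressive, receptive and gestural components separate for the validation analysis, while the pooled percentage gives the single overall score mentioned above.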
The object naming/identification task, developed with the protocol as a guide, is similar to a vocabulary checklist developed by Prado et al. (2018). Their research study aimed to describe a method for developing the vocabulary checklist in new languages and to assess its validity in measuring early language development. The checklist was based on the MB-CDI: 100 words were selected for inclusion by interviewing mothers about their children's expressive language, specifically probing MB-CDI categories.
Similar to this research study, they then selected a sample of words to represent each category and arranged them according to difficulty. To assess concurrent validity, the checklist was administered to 29 children between the ages of 1;5 and 2;1, and the results were compared to standardised language measures. Statistical analyses indicated strong correlations between the assessments, supporting the validity of this method of developing vocabulary checklists (Prado et al., 2018). This is significant for the current study because of the similarities between the development of the protocol and the development of the vocabulary checklist.
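The concurrent-validity evidence of the kind Prado et al. (2018) report rests on a correlation between two sets of scores. A minimal sketch of how such a Pearson correlation is computed, using made-up scores rather than data from either study:

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical totals: new checklist vs. a standardised measure,
# one pair of scores per child.
checklist = [12, 18, 25, 31, 40, 44]
standard = [15, 20, 22, 35, 38, 47]
print(round(pearson_r(checklist, standard), 2))  # strong positive r
```

A coefficient near 1 indicates that children ranked similarly on both measures, which is the pattern described as a "strong correlation" in the validation literature.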
Few assessments are available for assessing the language of South African children, and these have often been developed for use with only one language, which makes comparison across languages very difficult (Potgieter, 2016). With this protocol, my aim was to put forward a theoretically well-founded ‘blueprint’ that could be used across multiple South African languages. This will contribute, theoretically and practically, to research on language acquisition within the South African context, as it will enable researchers from the Southern African MB-CDI team, who are working in the different languages, to follow the steps set out in the protocol to create an object naming/identification task for each MB-CDI language. Each task would then be unique and appropriate to its language of focus, but based on similar principles to ensure a level of standardisation across languages.
Given the vast differences among the country’s official languages, this is an ambitious goal. As introduced in chapter 1, the official languages of South Africa fall into two groups, the West-Germanic languages and the Bantu languages (Van der Merwe & Le Roux, 2014). These languages have different characteristics, some of which are unique to their group or sub-group (Van der Merwe & Le Roux, 2014; Mdlalo et al., 2019). Mdlalo et al. (2019) conducted a study on language differences that SLPs must take into consideration, both in research and in clinical practice. In addition, authors have written about the pitfalls of simply translating between languages, finding that language differences can easily be misinterpreted as errors (Mdlalo et al., 2019). Despite these challenges, it would be helpful, as well as time- and cost-effective, to be able to make cross-linguistic comparisons without having to ‘reinvent the wheel’ each time. Therefore, the team of researchers who have been granted permission to develop MB-CDI adaptations for all of South Africa’s official languages have set out to do so using the same methodology but different content, with each adaptation tailored to be appropriate for its language of focus (Southwood et al., 2021). Within this research, it is imperative to find a balance between respecting the individuality of each language and being effective and practical with the allocation of resources.
The contributions of this research to the development of South African MB-CDIs are self-evident. Given that no ‘gold standard’ assessment is available, the team of researchers working on the MB-CDI needed a measure which could be used for validation purposes, and the protocol in this research study was developed with that goal in mind. Much like the MB-CDI itself, the protocol provides the same methodology for each task it is used to create, while allowing a unique task to be developed for each language. The object naming/identification task developed by following the protocol would also serve as a useful independent screening tool. The MB-CDI is a very time-consuming measure, as was evident in the current research study: a number of participants were excluded because they did not complete the MB-CDI fully.
This supports existing evidence suggesting that a shorter tool which is valid and reliable would be helpful and important (Mayor & Mani, 2019).