Intelligent Computer Assisted Language Learning (ICALL) for nêhiyawêwin: An In-Depth User-Experience Evaluation

Résumé

Les applications d'apprentissage intelligent des langues assisté par ordinateur (AILAO) sont une voie relativement nouvelle d'apprentissage des langues assisté par ordinateur (ALAO). L'ALAO permet aux apprenants d'une langue d'exécuter un vaste éventail d'exercices grammaticaux et de recevoir des commentaires rétroactifs quant à leurs réponses, en dehors des heures de cours. L'ALAO est essentiel à la production dynamique de ces exercices pour les langues autochtones polysynthétiques dont la morphologie est complexe. Afin de mieux comprendre les perceptions et les comportements des utilisateurs en contexte d'AILAO, des usagers de la nêhiyawêtân (une application d'AILAO de niveau universitaire qui s'adresse aux Cris des Plaines) sont appelés à évaluer en profondeur cette application. Cinq apprenants du cri des Plaines langue seconde sont enregistrés alors qu'ils utilisent la nêhiyawêtân dans l'exécution de divers exercices grammaticaux. Ils sont invités à formuler à voix haute leurs opinions, leurs réflexions et leurs observations. Par la suite, les réactions et les stratégies des usagers qui ont été observées sont enregistrées, ce qui révèle aux auteurs quelles sont les erreurs, les stratégies et les préférences des usagers potentiels et leur permet d'améliorer les commentaires rétroactifs et la conception de même que l'interface des modèles d'exercices proposés. Au surplus, les résultats des sondages menés par les auteurs et leurs observations mettent en lumière des facteurs socioculturels qui n'apparaissent pas dans les applications d'ALAO grand public relatives aux langues majoritaires. Cette évaluation, souhaitent les auteurs, servira de guide dans celle des futurs programmes d'AILAO relatifs aux langues autochtones et autres langues minoritaires.

Abstract

Intelligent computer assisted language learning (ICALL) applications for Indigenous languages are a relatively new avenue for computer assisted language learning (CALL). CALL allows language learners to practise a wide range of grammatical exercises and receive feedback on their answers outside of class time. ICALL is essential for dynamically producing these exercises for polysynthetic Indigenous languages with complex morphology. To better understand user perceptions and behaviours within an ICALL setting, an in-depth user evaluation of nêhiyawêtân (a university-level ICALL application for Plains Cree) was initiated. Five second language learners of Plains Cree were recorded using nêhiyawêtân as they completed various grammatical exercises. They were encouraged to report their opinions, thoughts, and observations aloud. Subsequently, observed user reactions and strategies were recorded. This supplied us with potential user errors, strategies, and preferences that allowed us to improve answer feedback and the design and interface of the exercise templates. Moreover, the results of surveys and observations highlighted sociocultural issues that are not seen in mainstream CALL for majority languages. We hope that this evaluation will serve as a guideline for evaluating future ICALL programs for Indigenous and other minority languages.

Mots clés

AILAO, apprentissage des langues assisté par ordinateur, cri des Plaines, langues autochtones, méthodes qualitatives, pédagogie linguistique

Keywords

computer assisted language learning, ICALL, Indigenous languages, language pedagogy, Plains Cree, qualitative methods


This article presents the results of an in-depth user evaluation of a demonstration implementation of intelligent computer assisted language learning (ICALL) for Plains Cree, known by its speakers as nêhiyawêwin. The ICALL program is called nêhiyawêtân,1 meaning "Let's Speak Cree."2 To introduce nêhiyawêtân,3 we will discuss ICALL methods and how they differ from other forms of computer assisted language learning (CALL). Subsequently, we will briefly consider nêhiyawêwin and what makes this Indigenous language a good candidate for ICALL.

Intelligent computer assisted language learning (ICALL) falls under the umbrella of computer assisted language learning (CALL). CALL refers to any instance in which a computer is used to facilitate language learning (Stickler & Shi, 2016). This includes everything from interactive language-learning software with games and quizzes (Harvey, 2015; Heift & Rimrott, 2012; Junker & Torkornoo, 2012) to the use of preexisting Web platforms such as Wikipedia (Mak & Coniam, 2008) or Twitter (Lomicka & Lord, 2012) for language-learning purposes. What makes ICALL unique is that it makes use of computational models to dynamically analyze or generate word forms, instead of being dependent on static, pre-prepared word lists. There are many interactive CALL programs that quiz users on their knowledge of Indigenous languages; however, the majority of the programs test users by employing predetermined questions and answers with little flexibility (Harvey, 2015; Ho Chunk Nation, 2015; Navajo Language Renaissance, 2008; Rosetta Stone, 2015). ICALL, on the other hand, does not rely on the exhaustive enumeration of set questions and answers but rather is able to generate a profusion of unique questions and answers based on the computational modelling of a grammar and a relatively small set of exercise templates (Bisazza & Federico, 2016).
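To make this contrast concrete, the following sketch (in Python, and purely illustrative: the lexicon, tag strings, template format, and function names are our own inventions for this article, not the actual nêhiyawêtân or Oahpa! code) shows how one small exercise template combined with a morphological generator can yield many distinct question-answer pairs, whereas a static program can only replay a fixed list.

```python
# Purely illustrative sketch: template-driven question generation versus a
# static question bank. The lexicon, tag strings, and function names are
# invented for illustration and do not reflect the actual nêhiyawêtân code.
import random

# A conventional CALL program replays a fixed, hand-written list of items.
STATIC_BANK = [("Translate 'dog' into Cree", "atim")]

# An ICALL program pairs a small exercise template with a morphological
# generator, so any lemma in the lexicon can be drilled in any listed form.
ANIMATE_NOUNS = ["atim", "minôs", "sîsîp"]        # dog, cat, duck

# Stand-in for the generator: (lemma, tag string) -> inflected form.
MOCK_GENERATOR = {
    ("atim", "+N+A+Pl"): "atimwak",
    ("minôs", "+N+A+Pl"): "minôsak",
    ("sîsîp", "+N+A+Pl"): "sîsîpak",
}

TEMPLATE = {"prompt": "Give the plural of '{lemma}'", "tags": "+N+A+Pl"}

def make_question(template: dict) -> tuple:
    """Generate one question/answer pair from the template and a random lemma."""
    lemma = random.choice(ANIMATE_NOUNS)
    answer = MOCK_GENERATOR[(lemma, template["tags"])]
    return template["prompt"].format(lemma=lemma), answer

if __name__ == "__main__":
    prompt, answer = make_question(TEMPLATE)
    print(prompt, "->", answer)
```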

Plains Cree (nêhiyawêwin), spoken throughout the Canadian Prairies, is a member of the Algonquian language family (Wolfart, 1973). Moreover, Plains Cree is the language of the Cree-Montagnais-Naskapi dialect continuum that is spoken the farthest west (Wolfart, 1973). The exact speaker population is unknown and there is great variance among sources; however, it has previously been estimated at around 34,000 speakers within Canada, out of an ethnic population of around 53,000 (Ethnologue, 2016). Many of these speakers are middle-aged or older, and most of the younger generation acquires Plains Cree as a second language, if at all. This near-absence of intergenerational transmission (the severity of which can vary substantially from one community to another) is due to multiple, parallel forces arising from the process of colonization, among which residential schools played a major, devastating role. By the 1930s there were 80 residential schools across Canada, though some of the earliest pilot schools were tested much earlier (Blackburn, 2012). Thousands of children were removed from their families and cultures, subjected to abuse and neglect, and forbidden to speak their languages (Blackburn, 2012; Lomawaima, 1993; Zalcman, 2016). The last residential school closed as recently as 1996 (Blackburn, 2012). In addition, further urbanization and assimilation have also contributed to the loss of language and culture (Struthers & Peden-McAlpine, 2005). In an effort to combat this loss, language revitalization efforts are increasing and diversifying in approach.

Though a Standard Roman Orthography (SRO) exists for Plains Cree, the course materials at various educational institutions (both in university and community settings) do not always follow this standard. Since nêhiyawêtân is meant to complement these materials, for the most part the standardized orthographic conventions are used. However, the course-specific variations on the orthographical standard may also be accepted. As nêhiyawêtân in the first instance is based specifically on the course material of the Introductory Plains Cree course (NS 152) at the University of Alberta, community involvement has currently been limited to audio recordings. Language input was given mostly by native Cree-speaking scholars from various Canadian universities, namely Dorothy Thunder, Cree instructor at the Faculty of Native Studies, University of Alberta, and Jean Okimâsis, author of a prominent textbook on Plains Cree (Okimâsis, 2004) from the First Nations University of Canada, Regina, Saskatchewan, as well as Cree scholar Arok Wolvengrey, author of the most comprehensive Plains Cree/English dictionary (Wolvengrey, 2001), also at First Nations University of Canada. In addition, we received access to an electronic version of the course book for the aforementioned introductory course (Hunter, n.d.), courtesy of the Faculty of Native Studies, University of Alberta. That being said, nêhiyawêtân is linked to other ongoing collaborative projects between the University of Alberta and Cree community organizations such as Miyo Wahkohtowin Education (Ermineskin Cree Nation), Maskwacîs, Alberta. The foremost goal of that collaboration is to record all the word forms in the Maskwacîs Cree Dictionary (approximately 9,000 words) (Maskwachees Cultural College, 1997), as spoken carefully by multiple native speakers of Cree; to elicit and record words to fill potential lexical gaps, along with example sentences; and to integrate these into an updated version of an online dictionary of Cree. As nêhiyawêtân is still in its demonstration stage, it is important that an evaluation reveal the major issues, to highlight its full capabilities when presented to the community. Though Miyo Wahkohtowin Education is informed of its progress, we hope that there will be more community engagement and potential for it to be adapted for use by the Cree communities in Maskwacîs and elsewhere in the future.

nêhiyawêtân relies on finite state transducer (FST) technology (e.g. Beesley & Karttunen, 2003) for computationally modelling Plains Cree morphology (described in Snoek, Thunder, Lõo, Arppe, Lachler, Moshagen, & Trosterud, 2014, and Harrigan, Schmirler, Arppe, Antonsen, Trosterud, & Wolvengrey, 2017). This computational morphological model, central to ICALL programs, is therefore not limited to a set list of common word forms but can dynamically produce any inflected forms needed (Antonsen, Johnson, Trosterud, & Uibo, 2013). Computational modelling is necessary for nêhiyawêtân due to the polysynthetic nature and complex morphology of Plains Cree (Wolfart, 1973; Wolvengrey, 2011). In addition to the structure of the language itself, pre-existing pedagogical materials and university courses make the task of creating an ICALL program less daunting. Limited time, lack of standardized use of the writing system, and social constraints all hinder the development of CALL applications for endangered languages (Ward & Genabith, 2003). These issues have undoubtedly affected the creation of nêhiyawêtân to varying degrees as well.
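As a rough illustration of what such a model provides, the sketch below mocks the two directions in which an FST is typically used: generation (lemma plus grammatical tags to an inflected surface form) and analysis (surface form back to lemma and tags). The handful of word-tag pairs and the tag notation are stand-ins chosen for readability; the actual Plains Cree model is far larger and is built with dedicated FST tooling rather than a Python dictionary.

```python
# Behavioural mock of the two directions of a finite state transducer (FST):
# generation (lemma plus tags -> surface form) and analysis (surface form ->
# lemma plus tags). The real nêhiyawêtân model is built with FST tooling
# (cf. Beesley & Karttunen, 2003); the pairs and tag strings below are a tiny
# hand-picked stand-in, not the actual Plains Cree model.

GENERATION_PAIRS = {
    ("atim", "+N+A+Sg"): "atim",       # 'dog'
    ("atim", "+N+A+Pl"): "atimwak",    # 'dogs'
}

# Analysis is the inverse relation of generation. A real analyzer may return
# several readings for one surface form; this mock keeps exactly one.
ANALYSIS_PAIRS = {surface: analysis for analysis, surface in GENERATION_PAIRS.items()}

def generate(lemma: str, tags: str) -> str:
    """Lemma + tags -> inflected surface form (the 'generation' direction)."""
    return GENERATION_PAIRS[(lemma, tags)]

def analyze(surface: str) -> tuple:
    """Surface form -> (lemma, tags) (the 'analysis' direction)."""
    return ANALYSIS_PAIRS[surface]

if __name__ == "__main__":
    print(generate("atim", "+N+A+Pl"))   # atimwak
    print(analyze("atimwak"))            # ('atim', '+N+A+Pl')
```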

Background

In this section we will give a brief explanation of the origins of nêhiyawêtân in addition to a description of the functions and exercises evaluated within the application itself. The reasoning behind ICALL answer feedback functions will be explained further. Moreover, a general comparison of existing CALL applications for Indigenous languages will be [End Page 340] presented. Finally, we will cover various methods for evaluating ICALL applications and how they apply, or do not apply, to this study.

nêhiyawêtân

nêhiyawêtân is a collaborative project between the Alberta Language Technology Lab (ALTLab)4 at the University of Alberta, and the Giellatekno5 research team at the University of Tromsø (UiT), the Arctic University of Norway. It is based on the Oahpa!6 application, the first version of which was created in 2008 (Antonsen, 2013). Oahpa! is a computer assisted language learning program for North Saami, an endangered language spoken in northern Europe (Antonsen, 2013). This Uralic language has complex morphology, which most learners have not encountered in languages learned previously (Antonsen, 2013). The Oahpa! framework is open source and can be ported to other languages (Uibo, Pruulmann-Vengerfeldt, Rueter, & Iva, 2015). The initial goal of the project was to create an interactive application to replace textbooks in a university course and provide students with immediate feedback on answers outside of class time. Due to the urgency to create usable language-learning tools, the demonstration version of nêhiyawêtân was evaluated. The program, although functional for the purpose of demonstrating basic capabilities of the software, was still incomplete. This meant that not all possible morphological exercises and functionalities were implemented. However, the absence of some functionalities allowed users either to note it or to ignore it, and to explain how this affected their use of nêhiyawêtân. The evaluation is meant to reveal any major issues that users may encounter.

nêhiyawêtân currently has four main sections: Leksa, Morfa-S, Morfa-C, and Numra; only the first three were tested during the evaluation. All sections retained their original Oahpa! names for this evaluation due to a lack of time and the scarcity of metalinguistic terms in Plains Cree. Jean Okimâsis and Arok Wolvengrey have since provided suggestions to improve the program's cultural relevancy, and these are being incorporated into newer versions of nêhiyawêtân. Only functions and exercise types that were evaluated in this study will be illustrated in this section, though a detailed description of nêhiyawêtân can be found in Bontogon (2016). To navigate the interface, a menu to select the desired part of speech was located on the left of the page, while the instructions for each exercise were featured to the right of the questions for all exercises. Additional drop-down menus to select inflection type, course-book chapter, and so on, were located centrally above the question sets.

Leksa (see Figure 1) requires the user to translate a list of words (in green for animate, black for inanimate, in the case of Plains Cree). This exercise can be done from Cree-to-English, English-to-Cree, Cree-to-French, and French-to-Cree. The vocabulary can also be sorted by course unit, or by semantic category. After completing questions, users can submit answers for evaluation using the "test answers" function. Correct answers are shown in green, while incorrect answers are indicated in red, with a red X following the text box. The user then has the option to either correct their answers or reveal the correct answers. Leksa also gives users the option to practise words with audio files; however, the number of these audio prompts is limited for the time being. Since the ICALL program consists of dynamically generated forms, enumerating all equivalent recordings would be difficult and time-consuming. To augment the audio function, we intend to utilize speech synthesis to produce audio output for future versions of the application.

Figure 1. Leksa (vocabulary testing).

Numra, like Leksa, requires the user to provide translations (see Figure 2). This exercise focuses on cardinal numbers, time, dates, and money. Users can choose to practise numerals-to-words or words-to-numerals. Answer feedback is not currently available in Numra but will be added in future versions. Due to similarities with Leksa, the limited time available for the evaluation sessions, a lack of answer feedback, and participant constraints, Numra was excluded from the evaluation.

Figure 2. Numra: The "show answers" function in an exercise on time.

Morfa-S (see Figure 3) allows the user to practise producing inflected forms with limited to no context. These exercises focus on nouns (plural, diminutive, locative, possessive) or verbs (past, present). These exercises have the same functions as the previous ones, but they also feature answer feedback to give the user helpful hints for correcting their answers.

Figure 3. Morfa-S: Using the "help" function in a noun inflection exercise.


In Morfa-C (see Figure 4), the user can again select from nouns (plural, locative, possessive) and verbs (past, present). This exercise type differs from Morfa-S in that the user practises these paradigms in a discourse context within a question-answer setting.

Figure 4. Morfa-C: Using the "help" function in an exercise involving verb inflection in the context of a sentence.

All exercises (excluding Leksa and Numra) contain the option to click on Cree words for the English translation. This function provides the user with the definition of the selected word. Currently, this function is available only for words in what was deemed the "base" form (i.e., non-inflected nouns or third-person singular present independent verbs) for the purposes of this exercise. When a word is selected, its definition appears under the exercise instructions.

Answer feedback in ICALL

One of the most important ICALL functions, though somewhat rare in applications for endangered languages, is effective answer feedback. In language learning overall, it has been found that metalinguistic feedback can be helpful for adult second language learners initially (Carroll & Swain, 1993; Li, 2014; Shintani & Ellis, 2013; Stafford, Bowden, & Sanz, 2012). More specifically, ICALL applications can be used to raise certain aspects of grammatical awareness through metalinguistic feedback (Nagata & Swisher, 1995; Stafford et al., 2012). Most endangered language CALL involves showing only whether the answer is correct or not. This is the typical "wrong, try again" model, which does not give the user any metalinguistic information to illustrate why the answer is incorrect (Nagata, 1993, p. 13). With nêhiyawêtân, an effort has been made to ensure that users have adequate metalinguistic feedback. However, this is difficult to achieve without having personalized feedback, although a simple fill-in-the-blank exercise can provide fairly specific and accurate feedback on mistakes (Heift & Rimrott, 2012). We hope that the application will become more integrated and comprehensive to support semantics, pragmatics, cultural knowledge, and social abilities. This will be achieved by using a combination of the functions previously discussed.

CALL application comparison

To provide information on how nêhiyawêtân generally compares to the methods of other CALL programs for Indigenous languages, we have undertaken a brief examination of other applications. For Indigenous languages in the context of Canada, there are very few fully developed CALL programs available for learners. To obtain a fuller sample of the CALL programs available for Indigenous languages, we consider North America as a whole. Most existing applications for Indigenous languages in North America are interactive phrase/wordlists or dictionaries, usually with either audio or pictures (FirstVoices, 2013; Piikani Societies, 2015; Roberts, 2015; Sovereign Oneida Nation of Wisconsin, 2015). In some cases the entire interface, including the phrase/wordlist, is in English, while the only trace of the Indigenous language is auditory.

Other applications have options for both teaching and testing language knowledge (Harvey, 2015; Ho Chunk Nation, 2015). Some applications even allow users to record their voices and compare their pronunciation to a recording of a speaker. Generally, users are given audio and then must select the correct picture, or vice versa. Another method uses the written orthography, and users must select the correct picture, audio, or other applicable response. These applications are similar to Rosetta Stone, a mainstream CALL application for language learning that has begun to make programs for endangered language communities as well (Navajo Language Renaissance, 2008; Rosetta Stone, 2015). In contrast to the other applications, Rosetta Stone gives the user, after a lesson is finished, an evaluation of the number of correct and incorrect answers, as well as the number of those skipped and not seen. This is the first form of tracking feedback seen in any of these programs. The applications previously covered feature only red crosses and green check marks to indicate correct and incorrect answers, if at all.

Another minority language program is run by the First Nations Language Centre at Simon Fraser University (SFU) (Simon Fraser University, 2015). What makes this project immediately different from the others is that the user must log in to the website. This allows both the user and their course instructor to track their progress. The login screen lets the user view which modules they have completed and their performance for each one. There is also the option to practise sentence construction. Each module thoroughly explains sentence structure, which the user can then practise. In addition, there is a speaking-practice task in which users can record themselves and compare the recording to the proper pronunciation. Finally, there is passage translation practice. Although this program has a lot more focus on literacy than the other applications, programs, and websites previously mentioned, it is still generally heavier on orality than nêhiyawêtân.

nêhiyawêtân is meant to free up class time to practise speaking the language by providing students with the opportunity to practise word structure and grammatical paradigms on their own while they still have access to feedback on their answers. It is also the first of the aforementioned publicly accessible applications to focus on literacy and grammar. This is a significant departure from previous CALL endeavours for endangered languages that focus mainly on orality. The existing language course devotes much of its time to practising literacy. Our hope is that use of nêhiyawêtân outside of class time will allow for more time to practise speaking the language during class, when other students are readily available to practise conversational speech. Finding opportunities to practise oral communication outside of class time can be more difficult. By focusing on literacy, the program helps to create new domains for language use that orality does not. This creates more opportunities for students to incorporate the language into their daily lives. Since much of our daily communication now relies on texts, e-mails, social media, and the like, it is essential that students are given tools to engage in "seamless language learning" (Cru, 2015; Keegan, Mato, & Ruru, 2015; Wong, Sing-Chai, & Poh-Aw, 2017). Moreover, nêhiyawêtân was originally created to cater to a university-course context (as a starting point), while the other programs were created for use within a community context. The contrast between community applications focusing on orality and institutional applications focusing on literacy undoubtedly stems from the oral tradition of Indigenous languages in North America compared to the method of "western schooling" (Burnaby & Philpott, 2007). Currently, the code for nêhiyawêtân is open source and free for language teachers to use. Although the software can be adapted to fit the needs of other languages, creating such an application is not easy for language communities without extensive knowledge of coding, including but not limited to computational modelling, Python scripting, XML, and HTML. Unfortunately, the level of coding knowledge needed to create an application like nêhiyawêtân is much higher than the average language teacher possesses. The goal is to create an easy-to-use format for community language instructors to input course materials and create exercises, without losing the focus on grammatical paradigms and relevant answer feedback.
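As one hypothetical illustration of what such an easy-to-use format might look like, the sketch below (in Python; the specification format, tag strings, and function are invented for this discussion and are not an existing nêhiyawêtân feature) has an instructor list vocabulary and the paradigms to drill, while the morphological model is left to supply the actual forms.

```python
# Hypothetical sketch of a teacher-facing exercise specification: one possible
# "easy-to-use format" in which an instructor lists vocabulary and the
# paradigms to drill, while the morphological model supplies the forms.
# The format and tag strings are placeholders invented for illustration.
import json

SPEC = """
{
  "chapter": 3,
  "vocabulary": ["atim", "minôs"],
  "drills": [
    {"name": "animate plural", "tags": "+N+A+Pl"},
    {"name": "diminutive",     "tags": "+N+A+Dim"}
  ]
}
"""

def load_exercises(spec_text: str) -> list:
    """Turn an instructor's specification into (lemma, tags) drill items."""
    spec = json.loads(spec_text)
    return [(lemma, drill["tags"])
            for drill in spec["drills"]
            for lemma in spec["vocabulary"]]

if __name__ == "__main__":
    for item in load_exercises(SPEC):
        print(item)   # each (lemma, tags) pair would be passed to the generator
```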

Previous evaluation methods

In this study, we attempted to identify user needs and reactions through in-depth evaluations. We first surveyed previous evaluation methods, and then a combination of these methods (both qualitative and quantitative) was used to evaluate nêhiyawêtân. Antonsen (2013) provides a quantitative analysis of Oahpa! question-and-answer drills. This was done using data collected by the application, which provided an elaborate error analysis. This is not yet possible in the case of the nêhiyawêtân demonstration version, as it lacks both the technology and a populated user base. For these reasons, this study takes a more qualitative approach. Previous studies have compiled user-experience surveys using pre- and post-CALL questionnaires, interviews, and observations of both on-task and off-task behaviour (Son, 2007). This study was conducted in a similar manner. Instead of an interview, participants were encouraged to think out loud during their use of the CALL application, to create an open dialogue between the participant and the researcher and to facilitate cooperative evaluation methods (Hagdahl, 2000; Monk, Davenport, Haber, & Wright, 1993).

Overall, conducting these user evaluations helped to determine whether nêhiyawêtân partially or fully meets the needs of learners, and if the students are using it as intended. In addition, this study addressed the problem of social constraints (Ward & Genabith, 2003). Moreover, it shed light on the compatibility of nêhiyawêtân and the course it is meant to complement. Through an evaluation of the program in its demonstration stage, problems can be solved early on. Nesbitt (2013) concluded that mid-design user input could help improve application versatility to meet a greater variety of student needs. This is important for the development of nêhiyawêtân, because although it was created to be used in an academic context, many beginner heritage speakers, and even some fluent speakers, enrol in these university-based courses. Uncovering the strengths and weaknesses of the program at various levels of completion will help to create a more versatile program and a broader spectrum of use for students of different backgrounds and competencies.

Methods and participants

A previous study (Bontogon, 2016) of nêhiyawêtân was conducted to uncover potential linguistic errors made by users. The resulting error analysis aided in the improvement of error feedback that is offered to users after submitting an incorrect answer. Moreover, this self-study provided greater insight into the potential perspectives and behaviour of users. This information was used for effectively prompting users to reveal their opinions and perceptions of various application functions. In addition, these observations became the foundation for the user survey issued in the principal user-evaluation study. The user-evaluation methods and results are the main focus of this section.

The study design originated from a checklist inspired by Hagdahl (2000, p. 33). It highlighted "practical details" to consider when proposing a design. Additional details regarding each checklist item and the motivation for each detail were included. Reasons for experimental design choices, study alterations, and other supplementary notes provided a valuable resource for the researcher to refer back to. For example, "practical details" include which tasks should be completed during each session, the number of participants, how the researcher should respond to questions, and so on. Then, for each "practical detail," a record of what the researcher intends, and a motivation for this intention, is created. An example of this checklist can be found in Bontogon (2016). We hope that this checklist will provide a starting point for future evaluations of CALL programs for endangered and Indigenous languages.
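For readers planning a similar evaluation, the sketch below shows one possible way to record such checklist entries in code (Python). The structure mirrors the description above (each practical detail paired with the researcher's intention and its motivation), but the class and the example entry are our own illustration; the actual checklist appears in Bontogon (2016).

```python
# Illustrative sketch of one way to record the Hagdahl-style design checklist
# described above: each "practical detail" is stored with the researcher's
# intention and the motivation behind it. The example entry is paraphrased
# from this study's description; see Bontogon (2016) for the actual checklist.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    practical_detail: str   # the design question to settle
    intention: str          # what the researcher plans to do
    motivation: str         # why this choice was made

CHECKLIST = [
    ChecklistItem(
        practical_detail="How should the researcher respond to participant questions?",
        intention="Prompt participants to think aloud rather than supply answers.",
        motivation="Keeps the session a cooperative evaluation, not a tutoring session.",
    ),
]

if __name__ == "__main__":
    for item in CHECKLIST:
        print(f"{item.practical_detail}\n  intends: {item.intention}\n  because: {item.motivation}")
```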

Participants

Participants comprised five native English speakers, all of whom were learners of Plains Cree (four female, one male). Four of the five students were L3 learners of Plains Cree, while the fifth was an L2 learner. The four L3 learners had a varying L2 (e.g., French, Spanish, and Latin). All five participants were sequential bilinguals. Three out of the five participants had visual impairments that were fully corrected by glasses. The participants requiring glasses were wearing them at the time of the evaluation. One out of the five participants had a learning disability regulated by medication and therapy. Though this group of participants is small, they should reveal most of the major program issues (Hagdahl, 2000) and demonstrate how users interact with the tool; however, this group is not necessarily representative of the general population of students taking these beginner language courses. Nevertheless, it has been suggested that a small evaluation be conducted in the demonstration phase of a program and then a re-evaluation should take place with more participants after major flaws have been corrected (Hagdahl, 2000; Monk et al., 1993). This smaller evaluation allowed for a deeper analysis of the program, as all user–program interactions over the duration of approximately an hour per participant were recorded. To conduct the evaluation on a larger scale would require a modified study outline (Monk et al., 1993).

Regarding cultural identity, three participants most strongly identified as Cree, while the other two most strongly identified as Canadian (though one of these participants also identified as Métis). The starting age of acquisition ranged from 19 to 27 years old. The education levels were fairly similar among these participants, as all had completed some university studies; however, one of the participants had just recently begun graduate studies (M = 14.9 years of formal education, SD = 2.7, range = 13–19.5 years). Plains Cree exposure was reported by participants as being limited mostly to reading and language labs or self-instruction. The participants communicated that the most important factors contributing to learning Plains Cree were reading and interacting with friends and family. This presents an opportunity to practise the language outside of class time, which can increase engagement with the course materials, thus having a positive effect on the student's ability to learn (Kuh, 2009).

Since nêhiyawêtân is based on course materials specific to an introductory Cree course (NS 152) offered in the Faculty of Native Studies at the University of Alberta, only students who had taken this course were eligible to participate. A recruitment e-mail was forwarded to students by the course instructor, Dorothy Thunder. Interested students then made contact with the researcher directly. By showing interest through contacting the researcher, the students would not feel pressured to participate, as they might have done if they had been required to respond directly to their instructor. Due to the limited participant pool, all participants were offered a small monetary compensation for their time.

Study outline

In addition to the checklist, a study outline (Hagdahl, 2000) was modified to include both the Language Experience and Proficiency Questionnaire (LEAP-Q) (Marian, Blumenfeld, & Kaushanskaya, 2007) and a user-evaluation questionnaire designed specifically to gain insight into user opinions and perceptions of nêhiyawêtân. The user-evaluation questionnaire replaced Hagdahl's interview. The recording was intended to catch the users' initial reactions, while the survey would give the user time to construct a perceptive answer. Moreover, in comparison to an interview, a survey would give the participant more freedom to discuss the strengths and weaknesses of the application without having to confront the researcher outright. Overall, this guideline was a useful tool for ensuring that all the necessary information was given and collected throughout the study. We hope that this will be a helpful guideline for others evaluating CALL programs for endangered and Indigenous languages.

The outline is briefly explained throughout the following paragraphs. Set-up involved running QuickTime Player Version 10.4 (833.7) on a MacBook Pro (13-inch, Mid 2012). QuickTime Player was used for screen-casting and audio-recording. To easily observe user activity, mouse clicks were set to show. A video-camera was also used to record three out of the five sessions. The last two sessions relied only on the QuickTime screen-casting, and the absence of the video-camera did not seem to have any adverse effects on the study. The three participants who were recorded by the video-camera all indicated on the user-evaluation questionnaire that they did not find the recording equipment intrusive; therefore, this should not affect the results.

One small change was made to the initial set-up of the study after the first participant requested scrap paper. This practice then became part of the set-up. Giving the students scrap paper made the environment similar to what they might experience outside of class time. Out of the five users, three opted to use the scrap paper.

When each participant arrived, the researcher explained that they would be working through a series of grammatical exercises using a web-based ICALL application and commenting on their experience. The participants were also encouraged to think out loud and instructed to express any thoughts they had on the application or any problems they encountered.

The LEAP-Q (Marian et al., 2007) was issued, and participants were given the Canadian version in pencil-and-paper format. This questionnaire added quantitative data to the study and provided an estimation of language proficiency.

During the task-completion portion of the evaluation, participants were again reminded to verbally express their thoughts as naturally as possible and given brief instructions to make the tasks clear. The participant was presented with the nêhiyawêtân start page and expected to navigate to the exercise type, part of speech, and type of inflection that would be practised. Once the user reached the correct exercise, they were to follow the instructions given by the application.

Tasks and user evaluation

Each participant went through the same set of exercise types. Morfa-S (plural and diminutive nouns), Morfa-C (plural, locative, and possessive nouns), and Leksa (Cree-to-English and English-to-Cree words) were practised for five minutes per inflection or translation type. Morfa-S (past-tense animate intransitive verbs) and Morfa-C (present-tense animate intransitive verbs) were assigned for 10 minutes each. Numra was excluded from the evaluation in order to include beginner-level students in the participant pool. The inclusion of these students was necessary to properly evaluate nêhiyawêtân, since it is intended to complement the beginner courses. Numra tests students on translating numerals; however, numbers are covered in the winter term of the first-year course, while the evaluation took place in the fall. Greater time was allotted for verb inflection tasks, since they were generally more difficult and users took longer to complete a set. The tasks were announced aloud after each exercise was complete, so that the user would not be worried about the number of exercises they had to complete. Within each exercise, the user was free to use the application as they pleased. This provided the researcher with insight into potential user strategies regarding navigation, character accessibility, potential responses to various question frameworks, and other aspects of design. Although times were designated for each exercise type, the actual time spent by participants on each type varied. Each of these exercises has different feedback given by the "help" function. Testing each exercise was designed to provide video documentation on how the users understood this feedback.

The user-evaluation questionnaire was issued immediately after the CALL session. The questionnaire went over functions that the user had seen during the task session and asked for their opinion. Participants were given the opportunity to express their perceptions of the pros and cons of the application, as well as what they would like to see in the application in the future. This was a useful way of discovering what prospective users found the most rewarding or troubling within the application. Most of the questions were specific to nêhiyawêtân, although some of the more general questions were inspired by Monk et al. (1993, p. 11).

Study results and discussion

Navigation, general interface, and exercise templates

Regarding navigation of the interface, all five participants easily found their way among the various exercise types. Navigating within exercise types, however, was seemingly more difficult. Two out of the five users had trouble finding where to change the part of speech, which is located on the side menu. This could potentially be confusing because the majority of drop-down menus for the exercises are listed above the questions, while the part of speech is the only option listed on the side menu. When designing interfaces for ICALL programs, it is best that the layout remains as navigable as possible. The learner is already burdened with the task of learning the language, so adding additional stress unrelated to language content is not ideal. Unless functions were directly in the users' line of sight, they were often missed or ignored. For example, users rarely read exercise instructions but found a translation function by accidentally hovering the cursor over words. Since instructions are easily ignored, it is best that the layout of the exercises itself guides the user to interact with the page as instructed.

The translation function was easily located, and all five participants made use of it and found it helpful. The participants requested that the translation function be available for all words. This request is reasonable in order to maintain consistency for learners. Although there was a dictionary link accessible to learners on the side menu, this was not used because it was not a focus of the exercise page. In addition, it is burdensome for a user to have to leave the interface for help when learning a new language, especially since the translation function embedded in the interface demonstrates that the same technology can be effectively integrated within the application. This function could be easily improved by using an existing plug-in linked to the dictionary, or a similar functionality that does not require leaving the interface.

As previously stated, all five users found the translation function by accident, which suggests that the instructions were either hidden or unsatisfactory. Although it is encouraging that this function is relatively easy to find without instruction, it is concerning that the instructions were not read. Four out of the five users had difficulty finding the instructions. This is most likely because they are located to the far right of the questions instead of above or to the left. This is inconvenient for native English speakers because they are accustomed to reading from top to bottom and from left to right. Two of the five participants suggested moving the instructions to a more noticeable location that fell directly in the users' line of sight, for example, above the question sets. Two participants also suggested that the instructions should make use of rich formatting such as bold or italic fonts to draw the users' attention. Another user suggested including a sample answer to demonstrate the format that was expected for answer submission. These changes could make a significant difference pedagogically and are relatively easy to implement.

When examining the exercise templates themselves, some were not used as the designer had initially intended. The Morfa-C context-based questions were meant for users to practise grammatical paradigms in the context of a question-and-answer sequence. There were no user complaints, and user performance was relatively high, at around four out of five questions correct on average. However, the users did not actually have to understand the context of the question to obtain the answer. In fact, only one out of five participants completely read the questions. Although this is not ideal, the exercise is still useful for drilling grammatical paradigms. This was not the case with Morfa-S verb exercises. During the evaluation the users reported difficulties with the exercise and did not fully understand what the questions were asking for. This was validated by the exercise scores, which had an average of one out of five questions correct. To facilitate student use of the application, this particular template issue must be resolved.

Content relevance and consistency

The main goal of this application was to provide students of an introductory university-level Cree course with immediate answer feedback on class material outside of class time. Three of the five students reported that the application adequately represented the course material, while the other two encountered unfamiliar material and thought it strayed from the material too much. Although four users observed these differences, two were strongly affected by the distinctions between the course book and the ICALL program. It is important for learners to have consistency when learning a new language, and users can find inconsistencies confusing and frustrating. Updated versions of nêhiyawêtân will attempt to eliminate these variances to streamline learning for new users.

In addition, the audio function could be expanded to allow users to listen to a variety of words and understand that they are the same even though the orthographic conventions may differ. Two out of the three users who were presented with the opportunity to use audio opted to use the function. One of the two users who never encountered the audio function suggested adding audio features. As it would be tedious and time-consuming to have speakers record thousands of possible questions that might be generated, a text-to-speech synthesizer could supply audio for words and even sentences without this expense. Overall, it seemed that this would be useful in both a university and a community setting. When a community or university has limited teaching materials, it can force one tool to be multi-purpose. Junker & Torkornoo (2012) speak to the importance of adapting to the needs of communities when creating online tools with limited resources. Adding more audio will not only increase knowledge of proper pronunciation for learners but also provide fluent speakers with a reference to familiarize themselves with the orthography and increase literacy.

Other suggested exercise types include translation and morphological analysis. These are relevant to the course curriculum, as much class time and many homework assignments are devoted to translation. Having these types of questions would help nêhiyawêtân to better accompany the course material and free up more class time for speaking practice.

Answer feedback and common user errors

The answer feedback function was much more noticeable than the translation function and was employed by all users. This could be because the "help" button appears in a central location after an erroneous answer has been submitted. This new icon undoubtedly draws the attention of users. All five users relied on answer feedback multiple times during each session. Three out of the five users explicitly stated that they enjoyed having the opportunity to self-correct after being given immediate feedback on their answers outside of class time. The answer feedback function was used 57% of the times it was available to users. With reference to user errors, the feedback was actually on topic 82% of the time. When answer feedback was ignored, this was due mostly to the user realizing their own mistake without the help of feedback. Although most students actually use ICALL system feedback when available, they are often overwhelmed by lengthy feedback messages (Heift, 2001). Due to user error variability, there could be multiple errors within a single answer, and it simply is not possible to provide help messages for all the errors simultaneously (Heift, 2003). Heift (2001) suggests a prioritized, sequential revealing of these errors. Consequently, this prompts students to fix only the errors outlined by the system. A possible solution for nêhiyawêtân is to use a tooltip function. With this function, users can prioritize for themselves the messages they receive by hovering the cursor over a word linked to additional hints. This would not only shorten the feedback message but also give the learner more control over the feedback received.
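A minimal sketch of this prioritized, sequential strategy is given below (in Python, illustrative only; the priority order simply follows the error categories reported in this study, and the messages and function are hypothetical rather than the actual nêhiyawêtân implementation). Only the most urgent hint is revealed first, and further hints appear one at a time as the user requests them, for example via a tooltip.

```python
# Illustrative sketch (not the nêhiyawêtân implementation) of prioritized,
# sequential feedback: when an answer contains several errors, only the
# highest-priority hint is shown first, and further hints are revealed one at
# a time (e.g., via a tooltip) as the user asks. The priority order follows
# the error categories reported in this study.

PRIORITY = {"inflection": 0, "vowel_length": 1, "typo": 2}   # lower = shown first

def next_hint(detected_errors: list, already_shown: set) -> str:
    """Return the single most urgent hint that has not yet been shown."""
    pending = [e for e in detected_errors if e["category"] not in already_shown]
    if not pending:
        return "No further hints; try revising your answer."
    top = min(pending, key=lambda e: PRIORITY[e["category"]])
    already_shown.add(top["category"])
    return top["message"]

if __name__ == "__main__":
    errors = [
        {"category": "typo", "message": "Check your spelling near the end of the word."},
        {"category": "inflection", "message": "This noun needs the plural ending."},
    ]
    shown = set()
    print(next_hint(errors, shown))   # inflection hint is revealed first
    print(next_hint(errors, shown))   # then the typo hint, on request
```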

In response to the survey, one student who had found the error feedback messages confusing requested that the feedback give the user the correct answer, instead of a hint. This is not ideal, as Cobb and Stevens (1996) found that learners dependent on the application feedback do not learn as much as those answering questions through their own trial and error. Moreover, not allowing the learner to self-correct by immediately giving them the answer is not an efficient way to generate long-term retention of inflectional knowledge. Bell, Harless, Higa, Bjork, Bjork, Bazargan, and Mangione (2008) discuss "desired difficulties," which occur when learners are exposed to immediate mental taxation that may then result in long-term retention benefits. It is important to note that when looking at such desired difficulties for nêhiyawêtân, we are focused more on difficulties due to content, not question format. The goal of the ICALL application is not for learners to retain knowledge of how to complete various question types. Future versions of nêhiyawêtân will attempt to provide precise and personalized metalinguistic feedback that still challenges the user to correct and formulate their own answer. Furthermore, the desired difficulties should be content-related to facilitate long-term retention of inflectional knowledge of Plains Cree in users.

To improve error feedback messages, an error analysis was performed. Disregarding Leksa (since it lacks answer feedback), we found that inflectional errors were the most common among users. This is encouraging, as the feedback messages contained only help referring to inflectional errors. It is also consistent with the amount of on-topic help discussed in the last paragraph. The second and third most common errors are typos and vowel-length errors respectively. Although these both fit broadly into the category of "typo," vowel-length errors have been separated since they are generally harder for learners to detect. In addition, nêhiyawêtân accepts vowel-length errors as correct, while it marks other typos as incorrect. Once personalized feedback is implemented, it will be important to create help messages for both typos and vowel-length errors. Currently, the system does not draw the users' attention to vowel-length errors, and this could be problematic for learners. Ideally, vowel-length errors would be marked in another colour, such as yellow, to notify the user that their answer is "almost correct." More generally, users should be given the opportunity to correct their errors after developing an understanding of why their answer was incorrect.
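The sketch below illustrates, in Python, one way such a triage could work; it is not the actual nêhiyawêtân code, and the threshold and examples are our own. An answer that matches the target once circumflexes are ignored is treated as a vowel-length error (the "almost correct" case that could be flagged in yellow), a small remaining edit distance suggests a typo, and anything else is assumed to be an inflectional or other error.

```python
# Illustrative sketch of triaging answers into the error categories discussed
# above (not the actual nêhiyawêtân code). Long vowels are written with a
# circumflex in the orthography used here, so an answer that matches the
# target once circumflexes are ignored is treated as a vowel-length error.
import unicodedata

def strip_length(word: str) -> str:
    """Remove combining accents such as the circumflex marking vowel length."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(ch for ch in decomposed if unicodedata.combining(ch) == 0)

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def classify(answer: str, target: str) -> str:
    if answer == target:
        return "correct"
    if strip_length(answer) == strip_length(target):
        return "vowel_length"        # could be flagged yellow: "almost correct"
    if edit_distance(answer, target) <= 2:   # threshold chosen for illustration
        return "typo"
    return "inflection_or_other"

if __name__ == "__main__":
    print(classify("nipâw", "nipâw"))     # correct
    print(classify("nipaw", "nipâw"))     # vowel_length
    print(classify("nipâwq", "nipâw"))    # typo
```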

The qualitative student appreciation for the opportunity to correct their answers was reaffirmed by the quantitative number of times the "test answers" function was used. Students attempted to correct their answers 79% of the time. The 18% of the time when no correction attempts were made and the 4% of questions that were left blank are most likely a result of the translation exercise. This exercise did not have an error feedback function. In Leksa, when learners did not know the translation for particular words, they moved quickly through the exercise, leaving questions blank and not attempting to resolve their errors.

Recommendations and conclusion

This evaluation has provided us with valuable insight into the importance of navigability and layout of various functions and templates within nêhiyawêtân, in addition to common user errors, behaviour, and perceptions. We hope that this study will be used as a resource for future evaluations of ICALL programs for Indigenous languages. The evaluation has identified various problems with the application, which have been considered and sorted into very-high-, high-, medium-, and low-priority recommendations. Very-high-priority recommendations should be completed immediately; failure to do so may completely deter program use. For example, some slight inconsistencies regarding the error feedback and computational model must be repaired immediately, as this constitutes the main function of the application and could affect the users' trust in its abilities. High-priority recommendations address inconveniences to the user: it is highly recommended that the instructions be relocated to a more visible place in order to increase usability, as most of the users had trouble finding them. Moreover, the format of Morfa-S questions should be updated to avoid confusion, and Morfa-C questions should force students to read the entire context of the questions. The program should test students on "desired difficulties" regarding course material, not on a particular question format (Bell et al., 2008). Medium-priority recommendations are important but not vital to the functioning and usefulness of the application. This would include adding more cultural information. While this is an incredibly important aspect for future versions of nêhiyawêtân, both the accompanying course and the textbook expose students to cultural information. Low-priority recommendations are relevant to future versions of the program and therefore not immediately necessary. These recommendations include verb paradigm and vocabulary review, and audio incorporation. Currently students have their textbooks and class time to reference paradigms and vocabulary. In addition, the extensive incorporation of audio output is lacking, since the generation of many unique questions makes it difficult to record each question individually. A consideration for future versions would be to make use of speech synthesis; however, this could result in issues with authenticity.

In the most basic sense, ICALL presents Indigenous languages with a learning tool that had not previously existed. This advancement facilitates flexibility among Indigenous language-instruction techniques and provides instructors with an alternative means for teaching their materials. The feedback learners can now access, outside of class time, has the potential to allow instructors to completely reassign in-class time to other aspects of the language that had been limited or excluded previously. Teachers may choose to spend the majority of in-class time practising conversation, since this is harder to achieve outside of the classroom. For the application to gain authenticity as a trusted source, the language instructor and students must also approve of it. To ensure this occurs, it is important for both instructors and students to evaluate the ICALL program in question. By obtaining feedback throughout the design process, many improvements can be made to create an accurate and practical application. Moreover, the application should be re-evaluated with more participants once the changes have been implemented.

Overall, practising morphology outside of class time is not only possible, but useful as well. User evaluations were generally positive, and the idea of ICALL for an Indigenous language was well received. We hope that the experiences of these participants will help others to design their own CALL projects for Indigenous and other minority languages. The feedback given with regard to the interface and design is something that can be perceived independently of the language and will readily apply to the design of other polysynthetic language applications, though the participants may not necessarily represent the general population. In future evaluations, university projects should be compared to those stemming from the communities directly, and collaborative work should be employed to ensure that the programs account for the diversity of user populations. There is still much to be evaluated when creating ICALL programs for the Indigenous languages spoken in Canada, and there are many contexts that must still be analyzed to gain a wider perspective on the matter.

Correspondence should be addressed to Megan Bontogon, Department of Linguistics, University of Alberta, 116 St. and 85 Ave., Edmonton AB T6G 2R3; e-mail: bontogon@ualberta.ca.

Acknowledgements

This work has been funded by a SSHRC Partnership Development Grant (890–2013–0047), a Kule Institute for Advanced Study (KIAS, University of Alberta) Research Cluster Grant, and a Roger S. Smith award (University of Alberta). In addition, we are grateful to the Faculty of Native Studies, University of Alberta, for providing us an electronic version of the Introductory Cree Part I (NS 152) course book, as well as the Giellatekno research unit, UiT the Arctic University of Norway, for having supported the initial development of nêhiyawêtân. Finally, we wish to thank native Cree speakers, in particular Jerry Roasting, and Miyo Wahkohtowin Education in Maskwacîs, Alberta, for having provided us with the careful pronunciations of a sample of Cree words, which we have been able to incorporate into the demonstration version of nêhiyawêtân.

Notes

1. The demonstration version is accessible at http://altlab.ualberta.ca/nehiyawetan/

2. This name was suggested by Jean Okimâsis and Arok Wolvengrey from the First Nations University of Canada, Regina, Saskatchewan.

3. A circumflex on a vowel character, e.g. <â>, indicates a long vowel, i.e. /a:/.

6. The word oahpa is the second-person imperative form of "learn" in North Saami.

References

Antonsen, L. (2013, May). Constraints in free-input question-answering drills. In Proceedings of the second workshop on NLP for computer-assisted language learning at NODALIDA 2013, Oslo, Norway. NEALT Proceedings Series 17 (No. 086, pp. 11–26). Linköping University Electronic Press. Retrieved from http://www.ep.liu.se/ecp/article.asp?issue=086&volume=&article=2
Antonsen, L., Johnson, R., Trosterud, T., & Uibo, H. (2013, May). Generating modular grammar exercises with finite-state transducers. In Proceedings of the second workshop on NLP for computer-assisted language learning at NODALIDA 2013, Oslo, Norway. NEALT Proceedings Series 17 (No. 086, pp. 27–38). Linköping University Electronic Press. Retrieved from http://www.ep.liu.se/ecp/article.asp?issue=086&volume=&article=003
Beesley, K.R., & Karttunen, L. (2003). Finite state morphology. Stanford, CA: Center for the Study of Language and Information (CSLI).
Bell, D.S., Harless, C.E., Higa, J.K., Bjork, E.L., Bjork, R.A., Bazargan, M., & Mangione, C.M. (2008). Knowledge retention after an online tutorial: A randomized educational experiment among resident physicians. Journal of General Internal Medicine, 23(8), 1164–1171.
Bisazza, A., & Federico, M. (2016). A survey of word reordering in statistical machine translation: Computational models and language phenomena. Computational Linguistics, 42(2), 163–205.
Blackburn, C. (2012). Culture loss and crumbling skulls: The problematic of injury in residential school litigation. Political and Legal Anthropology Review, 35(2), 289–307.
Bontogon, M.A. (2016). Evaluating nêhiyawêtân: A computer assisted language learning (CALL) application for Plains Cree. Education and Research Archive.
Burnaby, B., & Philpott, D. (2007). Innu oral dominance meets schooling: New data on outcomes. Journal of Multilingual and Multicultural Development, 28(4), 270–289.
Carroll, S., & Swain, M. (1993). Explicit and implicit negative feedback. Studies in Second Language Acquisition, 15(3), 357–386.
Cobb, T., & Stevens, V. (1996). A principled consideration of computers and reading in a second language. In M.C. Pennington (Ed.), The power of CALL (pp. 115–136). Houston, TX: Athelstan.
Cru, J. (2015). Language revitalisation from the ground up: Promoting Yucatec Maya on Facebook. Journal of Multilingual and Multicultural Development, 36(3), 284–296.
Ethnologue. (2016). Cree, Plains. Retrieved from http://www.ethnologue.com/language/crk
FirstVoices. (2013). Apps. Retrieved from http://www.firstvoices.com/en/apps
Hagdahl, K. (2000). Checking the checker grammatifix: Analysis of the Swedish grammar checker from a user's point of view (Master's thesis, Royal Institute of Technology). Retrieved from http://docplayer.se/11526158-Checking-thechecker-grammatifix.html
Harrigan, A.G., Schmirler, K., Arppe, A., Antonsen, L., Trosterud, T., & Wolvengrey, A. (2017). Learning from the computational modeling of Plains Cree verbs. Morphology, 27(4), 565–598.
Harvey, D. (2015). ACORNS overview. Southern Oregon University, Ashland, OR. Retrieved from http://cs.sou.edu/~harveyd/acorns/index.php
Heift, T. (2001). Error-specific and individualised feedback in a Web-based language tutoring system: Do they read it? ReCALL, 13(1), 99–109.
Heift, T. (2003). Multiple learner errors and meaningful feedback: A challenge for ICALL systems. CALICO Journal, 20(3), 533–548. Retrieved from http://journals.equinoxpub.com/index.php/CALICO/article/view/23248
Heift, T., & Rimrott, A. (2012). Task-related variation in Computer-Assisted Language Learning. Modern Language Journal, 96(4), 525–543.
Ho Chunk Nation. (2015). Ho Chunk [Android application].
Hunter, E. (n.d). Introductory Cree Part I [Course book]. Faculty of Native Studies, University of Alberta.
Junker, M.-O., & Torkornoo, D. (2012). Online language games for endangered languages: jeux.tshakapesh.ca, www.eastcree.org/lessons. Proceedings of EDULEARN 12: International Conference on Education and New Learning Technologies, Barcelona, Spain. Retrieved from http://www.marieodilejunker.ca/pdf/EDULEARN12-JunkerTorkornoo.pdf
Keegan, T.T., Mato, P., & Ruru, S. (2015). Using Twitter in an indigenous language: An analysis of te reo Māori tweets. Alternative: An International Journal of Indigenous Peoples, 11(1), 59–75.
Kuh, G.D. (2009). The national survey of student engagement: Conceptual and empirical foundations. New Directions for Institutional Research, 2009(141), 5–20.
Li, S. (2014). The interface between feedback type, L2 proficiency, and the nature of the linguistic target. Language Teaching Research, 18(3), 373–396.
Lomawaima, K.T. (1993). Domesticity in the federal Indian schools: The power of authority over mind and body. American Ethnologist, 20(2), 227–240.
Lomicka, L., & Lord, G. (2012). A tale of tweets: Analyzing microblogging among language learners. System, 40(1), 48–63.
Mak, B., & Coniam, D. (2008). Using wikis to enhance and develop writing skills among secondary school students in Hong Kong. System, 36(3), 437–455.
Marian, V., Blumenfeld, H.K., & Kaushanskaya, M. (2007). The Language Experience and Proficiency Questionnaire (LEAP-Q): Assessing language profiles in bilinguals and multilinguals. Journal of Speech, Language, and Hearing Research, 50(4), 940–967.
Maskwachees Cultural College. (1997). Nehiyaw Pîkiskwewinisa [Maskwacîs Cree Dictionary]. Maskwacîs, AB: Maskwachees Cultural College.
Monk, A., Davenport, L., Haber, J., & Wright, P. (1993). Improving your human-computer interface: A practical technique. London, England: Prentice Hall.
Nagata, N. (1993). Intelligent computer feedback for second language instruction.
Nagata, N., & Swisher, M. (1995). A study of consciousness-raising by computer: The effect of metalinguistic feedback on second language learning. Foreign Language Annals, 28(3), 337–347.
Navajo Language Renaissance. (2008). Free demo. Retrieved from http://navajorenaissance.org/demo.html
Nesbitt, D. (2013). Student evaluation of CALL tools during the design process.
Okimâsis, J. (2004). Cree: Language of the Plains. Regina, SK: University of Regina Press.
Piikani Societies. (2015). Nitsi Poh Sin [Android application].
Roberts, A. (2015). Southern Tlingit 1 [Android application].
Rosetta Stone. (2015). Endangered languages. Retrieved from http://www.rosettastone.com/endangered/projects
Shintani, N., & Ellis, R. (2013). The comparative effect of direct written corrective feedback and metalinguistic explanation on learners' explicit and implicit knowledge of the English indefinite article. Journal of Second Language Writing, 22(3), 286–306.
Simon Fraser University. (2015). Haida language. Retrieved from http://haida.intelligentlanguagetutor.com/
Snoek, C., Thunder, D., Lõo, K., Arppe, A., Lachler, J., Moshagen, S., & Trosterud, T. (2014, June). Modeling the noun morphology of Plains Cree. Proceedings of the Workshop on the Use of Computational Methods in the Study of Endangered Languages (ComputEL), Baltimore, MD.
Son, J.B. (2007). Learner experiences in web-based language learning. Computer Assisted Language Learning, 20(1), 21–36.
Sovereign Oneida Nation of Wisconsin. (2015). Oneida language application [Android application].
Stafford, C.A., Bowden, H.W., & Sanz, C. (2012). Optimizing language instruction: Matters of explicitness, practice, and cue learning. Language Learning, 62(3), 741–768.
Stickler, U., & Shi, L. (2016). TELL us about CALL: An introduction to the Virtual Special Issue (VSI) on the development of technology enhanced and computer assisted language learning published in the System journal. System, 56, 119–126.
Struthers, R., & Peden-McAlpine, C. (2005). Phenomenological research among Canadian and United States indigenous populations: Oral tradition and quintessence of time. Qualitative Health Research, 15(9), 1264–1276.
Uibo, H., Pruulmann-Vengerfeldt, J., Rueter, J., & Iva, S. (2015). Oahpa! Õpi! Opiq! Developing free online programs for learning Estonian and Võro. Proceedings of the 4th workshop on NLP for Computer Assisted Language Learning at NODALIDA 2015. NEALT Proceedings Series 26 / Linköping Electronic Conference Proceedings 114: 51–64. Retrieved from http://www.ep.liu.se/ecp/article.asp?issue=114&article=007&volume=
Ward, M., & Genabith, J. (2003). CALL for endangered languages: Challenges and rewards. Computer Assisted Language Learning, 16(2–3), 233–258.
Wolfart, H.C. (1973). Plains Cree: A grammatical study. Transactions of the American Philosophical Society, 63(5), 1–90.
Wolvengrey, A. (2001). nêhiyawêwin: itwêwina – Cree: Words (Bilingual ed.). Regina, SK: University of Regina Press.
Wolvengrey, A. (2011). Semantic and pragmatic functions in Plains Cree syntax. Utrecht, Netherlands: LOT.
Wong, L.-H., Sing-Chai, C., & Poh-Aw, G. (2017). Seamless language learning: Second language learning with social media. Comunicar, 25(50), 9–20.
Zalcman, D. (2016). "Kill the Indian, save the man": On the painful legacy of Canada's residential schools. World Policy Journal, 33(3), 72–85.
