On Thursday, February 27, I’ll be talking to the Faculty Learning Community on Language Learning in Online Environments here at Michigan State University. The topic is Concerto, a testing platform that I used to collect the data for my dissertation. It is made available by the Psychometrics Centre at the University of Cambridge. I’m using this post to collect useful links and information.
I didn’t use Concerto’s adaptive functions, although they are undoubtedly powerful. Instead, participants in my study received code numbers that determined which of four conditions they were shown for the experiment. Otherwise, the test was the same for every participant, except that the order of the questions was randomized. In addition to using Concerto to deliver tests and feedback and to collect responses and response times, I used it to collect demographic information and responses to other survey questions. What made Concerto ideal for my study was that I could program exactly when feedback was delivered, as well as the exact content of the feedback.
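The assignment logic can be pictured with a small sketch. This is generic Python for illustration only, not Concerto’s own scripting (Concerto tests are authored on the platform itself); the condition labels, question names, and helper functions here are all hypothetical.

```python
import random

# Hypothetical labels for the four experimental conditions.
CONDITIONS = ["A", "B", "C", "D"]

def condition_for(code_number: int) -> str:
    """Map a participant's code number to one of the four conditions.

    Any fixed, deterministic mapping works; modulo is one simple choice.
    """
    return CONDITIONS[code_number % len(CONDITIONS)]

def ordered_questions(questions, code_number):
    """Return the same fixed question set in a randomized order.

    Seeding with the code number makes each participant's order
    reproducible, which helps when checking the data later.
    """
    items = list(questions)
    random.Random(code_number).shuffle(items)
    return items

# A participant with code number 1042 gets one condition and a
# shuffled (but complete) set of questions.
print(condition_for(1042))
print(ordered_questions(["q1", "q2", "q3", "q4"], 1042))
```

The key property is that the condition varies by code number while the question pool stays identical across participants; only the presentation order changes.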
For my dissertation, Concerto was installed on a server at MSU (thanks, Brian Adams!), but it is also possible to use the free demo that the Psychometrics Centre hosts. I used that for my dissertation pilot, and I never had a problem.
From the Concerto homepage:
Concerto is an open-source testing platform that allows users to create various online assessments, from simple surveys to complex IRT-based adaptive tests.