Evaluation Toolkit
Abstract
Here we present a toolkit for evaluating robot programming concepts. In particular, it addresses the intuitiveness of a programming interface and the robustness of the programs it generates. We hope to help robot programming developers evaluate the acceptability of their approach, identify strengths and weaknesses, and ultimately improve the design of robots.
How to use
In general, a study with the proposed toolkit can proceed as follows:

1. The programming system is demonstrated to the participant while the interviewer administers COM-E.
2. The interviewer explains a task that the participant is to program.
3. The participant programs the task while the interviewer observes and rates them using PAC-U.
4. The participant completes SSEE before watching the robot execute the generated program.
5. The interviewer rates the generated program with PAC-R and SR, while the participant completes SAT.
6. If the study includes several tasks (recommended, to cover different aspects of the programming system, e.g. control structures), the procedure restarts at step 2 with the next task description.
7. Finally, the participant completes COM-C and QUESI.
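The session flow above can be sketched in code. This is a minimal illustrative sketch, not part of the toolkit: the instrument names (COM-E, PAC-U, SSEE, PAC-R, SR, SAT, COM-C, QUESI) come from the toolkit, but the function and its structure are hypothetical.

```python
def run_session(tasks):
    """Return the ordered list of instruments administered in one study session.

    `tasks` is a list of task descriptions; each task triggers one pass
    through the programming/execution cycle.
    """
    log = []
    log.append("COM-E")              # administered while the system is demonstrated
    for task in tasks:
        log.append(f"explain:{task}")  # interviewer explains the task
        log.append("PAC-U")          # interviewer observes and rates the programming
        log.append("SSEE")           # participant, before watching the execution
        log.append("PAC-R")          # interviewer rates the generated program...
        log.append("SR")             # ...together with SR
        log.append("SAT")            # participant rates the execution
    log.append("COM-C")              # closing questionnaires
    log.append("QUESI")
    return log
```

For a single task, `run_session(["pick-and-place"])` yields the full instrument sequence from COM-E through QUESI; adding tasks repeats only the middle block.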
Questionnaires
- COM-E Questionnaire
- SSEE Questionnaire
- PAC-U and PAC-R Observation Sheets
- SAT Questionnaire
- COM-C and QUESI Questionnaires
Cite
Cite the following paper when using the toolkit:
E. M. Orendt, M. Fichtner and D. Henrich, "MINERIC toolkit: Measuring instruments to evaluate robustness and intuitiveness of robot programming concepts," 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, 2017, pp. 1379-1386.