Traditional product development relies on several established methods and tools to organize its development phases and to implement the relevant design aspects. In rehabilitative product design such as prosthetics, it is crucial to consider the needs of the users: they are literally connected to their prosthesis and are directly affected by poor design decisions.
Able-bodied persons complete everyday tasks with the help of their limbs and sensorimotor body functions; the loss of body parts through amputation can therefore cause severe physiological and psychological trauma [1]. In a rehabilitation context, psychological phenomena such as phantom limb pain or the rubber hand/foot illusion [2] show that adapting to a product which replaces a lost body part challenges the body’s sensorimotor integrity. Therefore, approaches to include the user in the development of prosthetics, for example by means of quality function deployment [3] or concept simulators [4], have been proposed [5, 6].
Current state of prosthetic satisfaction research
How do currently available questionnaires deal with the difficulties mentioned above? The most commonly used questionnaires in prosthetics address the impact of impairments on the user’s quality of life and the psychosocial adjustment to life with a prosthesis [7, 8], as well as their influence on product satisfaction. They partially explore the presence of phantom limb pain and processes involving the body image [9], but their scales are not well suited to providing useful information for product development or for a “human in the loop” approach. When user satisfaction with the prosthesis is measured on a single dimension, it shows a positive correlation with activity restrictions [10], which indicates that satisfaction is affected by multiple independent sources. This article proposes a questionnaire for prosthesis users based on the latent trait model described in [11–13]. It shares similarities with other theoretical accounts of amputee needs [9] but differs with regard to the implementation and interaction of technical and psychological factors.
Probabilistic inference
To facilitate understanding of the methodology of this paper, this section provides a short introduction to probabilistic inference. In contrast to frequentist inference, probabilistic inference formulates and tests hypotheses by establishing probability distributions over the values that the variables and parameters in question can take on. This process begins with the formulation of a prior distribution. In a given area of research, the effect a variable A has on a variable B might be common knowledge because it has been found and replicated in a number of studies. Sometimes the observed effect is smaller or larger than in previous work, but the distribution of these previously found effects can and should be integrated into the current analysis. Instead of assuming no knowledge about the effect under investigation when we start gathering data, we can encode this knowledge as a prior distribution, i.e., probabilistic information about the size of the effect parameter. Given what we know up to this point, we then gather and interpret data, arriving at another distribution of values; in probabilistic terms, the parameter distribution implied by the data is called the likelihood. Bayes’ theorem offers the mathematically appropriate way to combine both distributions into a posterior estimate. Given both our prior knowledge about the proposed effect and the new data, the posterior distribution is the best estimate of the effect we are interested in investigating.

Besides its purely data-analytical application, current research provides evidence that some form of probabilistic inference underlies human perception and decision making, to the point where sensorimotor events are best described using Bayesian inference [14]. In a scenario like the one in this paper, where a complex statistical analysis with many parameters has to be carried out, Bayesian inference imposes a different set of requirements on the data than frequentist inference would. In addition, it makes obtaining a credible estimate of the parameter values of a statistical model relatively straightforward. The notion of, and common confusion about, confidence intervals becomes an issue in this study in particular: to design a reliable measurement tool such as the proposed questionnaire, one needs to know with relative certainty how probable it is that an item influences a factor, or to learn about the most likely values of a regression parameter between one factor and another. In this context, a regression parameter between factors describes how much the value of one factor increases or decreases depending on the value of the other. Frequentist analysis does not provide that information in an easily interpretable manner; “highest credibility intervals”, also called “highest density intervals” (HDIs) [15], do.

In this paper, we obtain prior knowledge about the parameters of the model by means of an expert study. But even experts cannot judge with absolute certainty, just as we cannot be completely certain about the true value of a factor loading. This uncertainty is reflected in the width of an HDI, which makes the spectrum of probable values either very wide or relatively narrow. By gaining prior information about the parameter values of the proposed latent trait model, we aim to state hypotheses about the psychometric evaluation of this questionnaire more precisely than before, making the inferences drawn from the data more reliable.
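As a brief illustration of the update described above, the following sketch applies Bayes’ theorem on a grid for a single effect parameter and computes a 95% HDI as the shortest interval containing 95% of the posterior mass. The prior values, the simulated data, and the use of Python with NumPy/SciPy are illustrative assumptions and not part of the study’s analysis.

```python
# Minimal sketch (not the study's analysis code): Bayes' theorem on a grid for one
# effect parameter, plus a 95% HDI. Prior values and data are illustrative assumptions.
import numpy as np
from scipy import stats

theta = np.linspace(-2.0, 2.0, 4001)                 # candidate effect sizes
prior = stats.norm(0.4, 0.3).pdf(theta)              # prior from earlier studies (assumed)

data = np.array([0.55, 0.48, 0.62, 0.37, 0.51])      # hypothetical new observations
likelihood = stats.norm(theta, 0.2).pdf(data[:, None]).prod(axis=0)

posterior = prior * likelihood                       # Bayes' theorem (unnormalized)
posterior /= np.trapz(posterior, theta)              # normalize to a proper density

# 95% HDI: the shortest interval containing 95% of the posterior probability mass
rng = np.random.default_rng(1)
samples = np.sort(rng.choice(theta, size=20_000, p=posterior / posterior.sum()))
n = int(0.95 * len(samples))
i = np.argmin(samples[n:] - samples[:-n])            # index of the narrowest window
print("posterior mean:", np.trapz(theta * posterior, theta))
print("95% HDI:", (samples[i], samples[i + n]))
```

For a roughly symmetric posterior such as this one, the HDI nearly coincides with the central 95% interval; for skewed posteriors the two can differ noticeably.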
Psychometric evaluation via Bayesian inference
In order to psychometrically validate the latent trait model, two methodological steps are followed: first, each item is assigned to one factor and the resulting measurement model is tested by means of confirmatory factor analysis. Second, with each factor formed as a linear combination of its corresponding items, the relationships between the factors are assessed via linear regression.
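The following sketch illustrates how these two steps can be expressed in a single Bayesian model, assuming PyMC as the probabilistic programming framework; the number of items, the placeholder data, and the priors are illustrative and do not reflect the actual model specification of this study.

```python
# Minimal sketch of the two-step idea: a confirmatory measurement part (each item
# loads on one factor) plus a structural regression between the two factors.
import numpy as np
import pymc as pm

n_subj, n_items = 120, 4                           # hypothetical sample and items per factor
y1 = np.random.normal(size=(n_subj, n_items))      # placeholder item responses, factor 1
y2 = np.random.normal(size=(n_subj, n_items))      # placeholder item responses, factor 2

with pm.Model() as cfa_sem:
    # Confirmatory part: each item is assigned to exactly one factor via its loading
    lam1 = pm.Normal("lam1", mu=0.5, sigma=0.5, shape=n_items)   # loadings, factor 1
    lam2 = pm.Normal("lam2", mu=0.5, sigma=0.5, shape=n_items)   # loadings, factor 2
    eps = pm.HalfNormal("eps", sigma=1.0)                        # residual item noise

    eta1 = pm.Normal("eta1", mu=0.0, sigma=1.0, shape=n_subj)    # latent scores, factor 1

    # Structural part: factor 2 is regressed on factor 1
    beta = pm.Normal("beta", mu=0.0, sigma=1.0)                  # regression coefficient
    eta2 = pm.Normal("eta2", mu=beta * eta1, sigma=1.0, shape=n_subj)

    pm.Normal("obs1", mu=eta1[:, None] * lam1, sigma=eps, observed=y1)
    pm.Normal("obs2", mu=eta2[:, None] * lam2, sigma=eps, observed=y2)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

In practice, identification constraints (for example, fixing one loading per factor or the factor variances) would additionally be required; they are omitted here for brevity.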
By applying Bayesian inference, one can assign a prior distribution to each of the parameters described above. Prior knowledge about each item’s factor loading, as well as about the regression coefficients between factors, can be expressed as a prior distribution, which amounts to a hypothesis about the quantitative nature of the parameter. When prior information is lacking, one uses what is known as an uninformative prior distribution, characterized by a centered mean and a large standard deviation. There is an ongoing debate in statistics about whether the use of informed priors is valid in scientific research, but just as in regular hypothesis testing, it is advantageous to include every available source of prior information in a statistical model [16]. To assess the legitimacy of the chosen prior distributions, one can cross-validate the statistical model against a completely uninformed alternative to see whether the choice of prior distributions affected the outcome of the analysis [15].
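As an illustration of such a cross-validation, the sketch below fits the same simple regression twice, once with an informed and once with an uninformed prior, and compares both variants via leave-one-out cross-validation. It assumes PyMC and ArviZ; the data, the prior values, and the expert estimate of 0.5 are illustrative, not values from this study.

```python
# Minimal sketch of the prior-sensitivity check described above; all numbers are assumed.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
x = rng.normal(size=100)                        # e.g., scores on one factor
y = 0.6 * x + rng.normal(scale=0.5, size=100)   # e.g., scores on a second factor

def fit(prior_mu, prior_sd):
    """Fit the regression with a given prior on the coefficient between the factors."""
    with pm.Model():
        beta = pm.Normal("beta", mu=prior_mu, sigma=prior_sd)
        sigma = pm.HalfNormal("sigma", sigma=1.0)
        pm.Normal("y", mu=beta * x, sigma=sigma, observed=y)
        return pm.sample(1000, tune=1000, progressbar=False,
                         idata_kwargs={"log_likelihood": True})

traces = {
    "informed": fit(0.5, 0.2),     # narrow prior around the (assumed) expert estimate
    "uninformed": fit(0.0, 10.0),  # wide prior centered on zero
}

# If both variants are ranked closely and their posteriors for beta agree, the
# informed prior did not unduly drive the outcome of the analysis.
print(az.compare(traces, ic="loo"))
print(az.summary(traces["informed"], var_names=["beta"], hdi_prob=0.95))
```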