User Experience of a Chatbot Questionnaire Versus a Regular Computer Questionnaire: Prospective Comparative Study


JMIR Med Inform. 2020 Dec; 8(12): e21982.

Published online 2020 Dec 7. doi:10.2196/21982

PMCID: PMC7752526

PMID: 33284125

Monitoring Editor: Christian Lovis

Reviewed by Roger Watson, Andrea Mahnke, Jared Shenson, and Tobe Freeman

Mariska E te Pas, MD,1 Werner G M M Rutten, PhD,2 R Arthur Bouwman, MD, PhD,1,3 and Marc P Buise, MD, PhD1

1Anesthesiology Department, Catharina Hospital, Eindhoven, Netherlands

2Game Solutions Lab, Eindhoven, Netherlands

3Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands

Mariska E te Pas, Anesthesiology Department, Catharina Hospital, Michelangelolaan 2, Eindhoven, 5623 EJ, Netherlands, Phone: 31 627624857, Email: mariska.t.pas@catharinaziekenhuis.nl.


Abstract

Background

Respondent engagement with questionnaires in health care is fundamental to ensuring adequate response rates for the evaluation of services and quality of care. Conventional survey designs are often perceived as dull and unengaging, resulting in negative respondent behavior. It is therefore necessary to make completing a questionnaire attractive and motivating.

Objective

The aim of this study is to compare the user experience of a chatbot questionnaire, which mimics intelligent conversation, with a regular computer questionnaire.

Methods

The study took place at the preoperative outpatient clinic. Patients completed both the standard computer questionnaire and the new chatbot questionnaire. Afterward, they rated their experience with both questionnaires using the User Experience Questionnaire, which consists of 26 terms to score.

Results

The mean age of the 40 included patients (25 [63%] women) was 49 (range 18-79) years; 46.73% (486/1040) of all terms were scored positive for the chatbot. Patients preferred the computer for 7.98% (83/1040) of the terms, and for 47.88% (498/1040) of the terms there was no difference. Mean completion time of the computer questionnaire was 9.00 minutes (SD 2.72) for men and 7.72 minutes (SD 2.60) for women (P=.148). For the chatbot questionnaire, mean completion time was 8.33 minutes (SD 2.99) for men and 7.36 minutes (SD 2.61) for women (P=.287).

Conclusions

Patients preferred the chatbot questionnaire over the computer questionnaire. Time to completion did not differ between the two questionnaires, although the chatbot questionnaire on a tablet felt more rapid than the computer questionnaire. This is an important finding because it could lead to higher response rates and qualitatively better responses in future questionnaires.

Keywords: chatbot, user experience, questionnaires, response rates, value-based health care

Introduction

Questionnaires are routinely used in health care to obtain information from patients. Patients complete these questionnaires before and after a treatment, an intervention, or a hospital admission. Questionnaires are an important tool that gives patients the opportunity to voice their experience in a safe fashion. In turn, health care providers gather information that cannot be picked up in a physical examination. Through the use of patient-reported outcome measures (PROMs), the patient’s own perception is recorded, quantified, and compared to normative data in a large variety of domains such as quality of life, daily functioning, symptoms, and other aspects of their health and well-being [1,2]. Respondent engagement is fundamental if the data delivered by PROMs are to be used correctly for the evaluation of services, quality of care, and outcomes in value-based health care [3].

Furthermore, adequate response rates are needed for generalization of results. This implies that maximum response rates are desirable in order to ensure robust data. However, recent literature suggests that response rates of these PROMs are decreasing [4,5].

Previous studies make clear that factors that increase response rates include short questionnaires, incentives, personalization of questionnaires, repeat mailing strategies, and telephone reminders [6-9]. Additionally, the design of the survey appears to affect response rates. Conventional survey designs are often perceived as dull and unengaging, resulting in negative respondent behavior such as speeding, random responding, premature termination, and lack of attention. An alternative to conventional survey designs is a chatbot with implemented elements of gamification, which is defined as the application of game-design elements and game principles in nongame contexts [10].

A chatbot is a software application that can mimic intelligent conversation [11]. The assumption is that bringing more fun and elements of gamification into a questionnaire will cause response rates to rise.
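
For illustration only (this is not the authors' implementation, and the questions shown are hypothetical), a minimal rule-based chatbot questionnaire can be sketched as a loop that presents one question at a time as a chat message and acknowledges each answer before moving on:

```python
# Minimal sketch of a rule-based chatbot questionnaire (illustrative only).
# The questions below are hypothetical examples, not the study's items.

QUESTIONS = [
    "Do you have any allergies?",
    "Do you currently use any medication?",
    "Have you ever had surgery under general anesthesia?",
]

def run_chatbot_questionnaire() -> dict:
    """Present one question at a time, conversation style, and collect answers."""
    answers = {}
    print("Bot: Hello! I will ask you a few questions about your health.")
    for question in QUESTIONS:
        print(f"Bot: {question}")
        answers[question] = input("You: ").strip()
        print("Bot: Thank you, noted.")  # conversational acknowledgement
    print("Bot: That was the last question. Thanks for your time!")
    return answers

if __name__ == "__main__":
    run_chatbot_questionnaire()
```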

In a study comparing a web survey with a chatbot survey, the conclusion was that the chatbot survey resulted in higher-quality data [12]. Patients may also feel that chatbots are safer interaction partners than human physicians and are willing to disclose more medical information and report more symptoms to chatbots [13,14].

In mental health, chatbots are already emerging as useful tools to provide psychological support to young adults undergoing cancer treatment [15]. However, literature investigating the effectiveness and acceptability of chatbot surveys in health care is limited. Because a chatbot is well suited to meeting the aforementioned criteria for improving questionnaire response rates, this prospective preliminary study focuses on the use of a chatbot [13,16]. The aim of this study is to measure the user experience of a chatbot-based questionnaire at the preoperative outpatient clinic of the Anesthesiology Department (Catharina Hospital) in comparison with a regular computer questionnaire.

Methods

Recruitment

All patients scheduled for an operation who visit the outpatient clinic of the Anesthesiology Department (Catharina Hospital) complete a questionnaire about their health status. Afterward there is a preoperative intake consultation with a nurse or a doctor regarding the surgery, anesthesia, and risks related to their health status. The Medical Ethics Committee and the appropriate Institutional Review Board approved this study and the requirement for written informed consent was waived by the Institutional Review Board.

We performed a preliminary prospective cohort study and included 40 patients who visited the outpatient clinic between September 1, 2019, and October 31, 2019. Because previous research on this topic is lacking and this is a preliminary study, the sample size (N=40) was discussed with the hospital's statistician and determined to be clinically sufficient. Almost all patients were eligible to participate. Exclusion criteria were age under 18 years, inability to speak Dutch, and illiteracy.

Patients were asked to participate in the study and, if willing, were provided with information about it. After permission for participation was obtained from the patient, the researcher administered the questionnaires. As mentioned above, informed consent was not required because patients were anonymous and no medical data were analyzed.

The Two Questionnaires

The computer questionnaire is the standard method at the Anesthesiology Outpatient Department (Figure 1). We developed a chatbot questionnaire (Figure 2) with questions identical to those in the computer version. This ensured that the questionnaires were of the same length, avoiding bias due to increased or decreased appreciation per question. The patients completed both the standard and chatbot questionnaires, as the standard computer questionnaire was required as part of the hospital's preoperative system. Patients started alternately with either the chatbot or the computer questionnaire in order to prevent bias in completion time and user experience. The time required to complete each questionnaire was documented.
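
A possible way to organize the alternating start order and the logging of completion times is sketched below. This is an assumption for illustration only; the function names and structure are not taken from the paper.

```python
# Illustrative sketch of alternating the starting questionnaire per patient
# and logging completion times in minutes (not the authors' implementation).
import time
from collections import defaultdict

def assign_order(patient_index: int) -> tuple:
    """Alternate which questionnaire is completed first."""
    return ("chatbot", "computer") if patient_index % 2 == 0 else ("computer", "chatbot")

def timed_minutes(run_questionnaire) -> float:
    """Run one questionnaire and return the completion time in minutes."""
    start = time.monotonic()
    run_questionnaire()
    return (time.monotonic() - start) / 60.0

def collect_times(questionnaires: dict, n_patients: int) -> dict:
    """`questionnaires` maps a format name to a callable that administers it."""
    completion_times = defaultdict(list)
    for i in range(n_patients):
        for fmt in assign_order(i):
            completion_times[fmt].append(timed_minutes(questionnaires[fmt]))
    return completion_times
```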


Figure 1

Computer questionnaire.


Figure 2

Chatbot questionnaire.

The User Experience Questionnaire

After completion of both questionnaires, patients provided feedback about the user experience. Patients were asked to rate their experience by scoring both questionnaires with the User Experience Questionnaire (UEQ; Figure 3). The reliability and validity of the UEQ scales were investigated in 11 usability tests, which showed a sufficiently high reliability of the scales as measured by Cronbach α [17-19]. Twenty-six terms were shown on a tablet, and for each term patients gave their opinion by dragging a button toward the "chatbot side" or the "computer side." They could give 1, 2, 3, or 4 points to either the computer or the chatbot for a specific term. If, according to the patient, there was no difference between the computer and the chatbot, he or she left the button in the middle of the bar.


Figure 3

User Experience Questionnaire.

The UEQ tested the following terms: pleasant, understandable, creative, easy to learn, valuable, annoying, interesting, predictable, rapid, original, obstructing, good, complex, repellent, new, unpleasant, familiar, motivating, as expected, efficient, clear, practical, messy, attractive, kind, and innovative.

Twenty of the 26 items were positive terms, such as "pleasant." The other 6 were negative terms, such as "annoying."
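
As an illustration of this scoring scheme (an assumed encoding; the term subset and scores below are invented, not study data), each term can be stored as an integer from -4 to +4, where positive values favor the chatbot, negative values favor the computer, and 0 means the button was left in the middle:

```python
# Sketch of encoding and summarizing UEQ comparison scores (illustrative only).
# Per term: +1..+4 favors the chatbot, -1..-4 favors the computer, 0 = no preference.
from statistics import mean

def summarize(responses: list) -> None:
    """Print the mean score per term and the overall preference split."""
    terms = sorted({term for response in responses for term in response})
    for term in terms:
        scores = [response[term] for response in responses]
        print(f"{term:12s} mean {mean(scores):+.2f}  (>0 fits the chatbot, <0 fits the computer)")
    all_scores = [score for response in responses for score in response.values()]
    n = len(all_scores)
    print(f"chatbot preferred:  {sum(s > 0 for s in all_scores) / n:.1%}")
    print(f"no preference:      {sum(s == 0 for s in all_scores) / n:.1%}")
    print(f"computer preferred: {sum(s < 0 for s in all_scores) / n:.1%}")

# Two invented respondents, each scoring a subset of the 26 terms:
summarize([
    {"pleasant": 3, "rapid": 4, "annoying": -2, "complex": 0},
    {"pleasant": 1, "rapid": 2, "annoying": -1, "complex": -1},
])
```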

Outcome Measures

The primary outcome measure of this study was the user experience score and the difference in score between the standard computer questionnaire and the chatbot questionnaire. The secondary outcome was the time needed to complete a questionnaire.

Statistical Analysis

Data analysis primarily consisted of descriptive statistics, and outcomes were mainly described as percentages or proportions. Because the data were normally distributed, the unpaired t test was used to test for differences between men and women and for differences in completion time. A P value of .05 or less was considered statistically significant. Data were analyzed with SPSS Statistics version 25 (IBM). Microsoft Excel version 16.1 was used for graphics.
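
To make the reported test concrete, a minimal sketch of an unpaired (independent samples) t test on completion times is shown below, using SciPy rather than SPSS and with invented values, not the study's measurements:

```python
# Illustrative unpaired t test on completion times in minutes; data are invented.
from scipy import stats

men_computer   = [9.5, 8.1, 11.2, 7.6, 9.8]   # hypothetical values
women_computer = [7.9, 6.5, 8.8, 7.2, 8.1]    # hypothetical values

t_stat, p_value = stats.ttest_ind(men_computer, women_computer, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
# With the chosen alpha of .05, P <= .05 would be considered statistically significant.
```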

This manuscript adheres to the applicable TREND guidelines [20].

Results

The mean age of the 40 included patients, of whom 25 (63%) were women, was 49 (range 18-79) years.

The average score per term was calculated and is shown in Figure 4. The UEQ scores showed that patients favored the chatbot over the standard questionnaire. Patients preferred the chatbot for 20 of the 26 terms (77%), all of which are positive terms. The remaining 6 terms (23%), the negative terms, had negative average values, indicating that on average patients associated the standard questionnaire with negative terms.


Figure 4

Average User Experience Questionnaire (UEQ) scores per term and standard deviation. A score above 0 illustrates that the term fits best with the chatbot. A score below 0 illustrates that the term fits best with the computer.

In total, 1040 term scores were recorded: 46.73% (n=486) of the user experience terms were scored positive for the chatbot, 47.88% (n=498) showed no preference for either the chatbot or the computer, and for 7.98% (n=83) of the terms patients preferred the computer.

Average time to completion of the computer questionnaire was 8.20 (SD 2.69) minutes; for the chatbot questionnaire it was 7.72 (SD 2.76) minutes. The questionnaire completed first took longer on average, as the data in Table 1 indicate.

Table 1

Time to completion (minutes).

Criteria                     Computer questionnaire, mean (SD)    Chatbot questionnaire, mean (SD)

Average time to completion of the computer- and chatbot-based questionnaires (n=40)
  All patients               8.20 (2.6)                           7.72 (2.7)

Average time to completion for men (n=15) versus women (n=25)
  Men                        9.00 (2.7)                           8.33 (2.9)
  Women                      7.72 (2.6)                           7.36 (2.6)
  P value                    .148                                 .287

Average time to completion depending on computer first (n=20) or chatbot first (n=20)
  Computer first             9.25 (2.4)                           6.85 (2.1)
  Chatbot first              7.15 (2.6)                           8.60 (3.0)
  P value                    .012                                 .044


Time to completion differed between men and women but did not reach statistical significance. Patients completed the second questionnaire significantly faster than the first one (chatbot P=.044, computer P=.012), irrespective of which questionnaire was completed first (Table 1).

Discussion

Principal Findings

In this prospective observational study, we evaluated the user experience of a chatbot questionnaire and compared it to a standard computer questionnaire in an anesthesiology outpatient setting. Our results demonstrate that patients favored the chatbot questionnaire over the standard computer questionnaire according to the UEQ, which is in line with previous research by Jain et al [21], who showed that users preferred chatbots because they provide a "human-like" natural language conversation.

Another intriguing result, as seen in Figure 4, is that the chatbot received its highest score for "rapid." However, time to completion did not differ between the computer questionnaire and the chatbot questionnaire. This indicates that a questionnaire answered on a tablet may give the perception of being faster than a standard questionnaire answered on a computer. In addition, by using more of a chatbot's capabilities it is possible to shorten the questionnaire, possibly leading to higher response rates, as mentioned by Nakash et al [6].

The second questionnaire took significantly less time to complete than the first one, which is not an unexpected observation, as the contents of the 2 questionnaires are identical. Although time to completion of the first questionnaire differed significantly from that of the second, bias in the results was minimized by alternating the order of the questionnaires.

Comparison With Prior Work

Explanations for low response rates can be disinterest, lack of time, or inability to comprehend the questions. Furthermore, patient characteristics such as age, socioeconomic status, relationship status, and preoperative comorbidities appear to have a negative influence on response rates, and most of these factors are nonmodifiable [22]. However, Ho et al [23] demonstrated that the method employed to invite and inform patients of the PROM collection, and the environment in which it is undertaken, significantly alters the response rate in the completion of PROMs. This means that, as expected in this study, there is a chance that response rates will rise when a chatbot is used instead of a standard questionnaire.

Gamification

As described in the study by Edwards et al [7], response rates rise when incentives are used. Currently, questionnaires often lack elements that motivate the patient to complete them. The introduction of nudging techniques, such as gamification, can help. Nudging is the subtle stimulation of someone to do something in a way that is gentle rather than forceful or direct, based on insights from behavioral psychology [24,25]. In a recent study by Warnock et al [26], which demonstrated a strong positive impact of gamification on survey completion, respondents spent 20% more time on gamified questions than on questions without a gamified aspect, suggesting that they gave thoughtful responses. Gamification has been proposed to make online surveys more pleasant to complete and, consequently, to improve the quality of survey results [27,28].

Limitations

There are some limitations to this research. First, as mentioned in the "Introduction" section, a chatbot can mimic intelligent conversation and is a form of gamification. In our study the questionnaires were identical, so we did not explore how the chatbot could mimic intelligent conversation. However, this research demonstrates that even minor changes in a questionnaire's design can lead to improved user experience. Second, because both the tablet and the chatbot differed from the standard computer questionnaire, it is possible that the user experience was influenced by the use of a tablet rather than solely by the characteristics of the chatbot. Third, although the UEQ shows that patients appreciated the chatbot more than the computer, we did not use qualitative methods to understand which factors drove users to identify the chatbot as the more positive experience. Fourth, although we recommend the use of a chatbot in the health care setting to improve questionnaire response rates, as suggested by previous literature, we did not formally investigate this outcome.

Future Research

Because patients preferred the chatbot questionnaire over the computer questionnaire, we expect that a chatbot questionnaire can result in higher response rates. This research was performed as a first step in the development of a tool with which adequate response rates can be achieved for questionnaires such as the PROMs. Further research is needed, however, to investigate whether response rates of a questionnaire rise as a result of this change in design. In future research it will be interesting to investigate which elements of gamification are needed to achieve beneficial effects such as higher response rates and higher-quality answers.

Conclusions

Patients preferred the chatbot questionnaire over the conventional computer questionnaire. Time to completion of the two questionnaires did not differ, although the chatbot questionnaire on a tablet felt more rapid than the computer questionnaire. A gamified chatbot questionnaire could possibly lead to higher response rates and qualitatively better responses. The latter is important when outcomes are used for the evaluation of services, quality of care, and outcome measurement in value-based health care.

Abbreviations

PROM: patient-reported outcome measure
UEQ: User Experience Questionnaire

Footnotes

Contributed by

Authors' Contributions: All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by MP and WR. The first draft of the manuscript was written by MP and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Conflicts of Interest: None declared.

References

1. Australian Commission on Safety and Quality in Health Care. [2020-11-06]. https://www.safetyandquality.gov.au/our-work/indicators-measurement-and-reporting/patient-reported-outcome-measures. [PubMed]

2. Baumhauer JF, Bozic KJ. Value-based Healthcare: Patient-reported Outcomes in Clinical Decision Making. Clin Orthop Relat Res. 2016 Jun;474(6):1375–8. doi:10.1007/s11999-016-4813-4. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

3. Gibbons E, Black N, Fallowfield L, Newhouse R, Fitzpatrick R. Essay 4: Patient-reported outcome measures and the evaluation of services. In: Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, Bower P, Campbell M, Denis J-L, Devers K, Dixon-Woods M, Fallowfield L, Forder J, Foy R, Freemantle N, Fulop NJ, Gibbons E, Gillies C, Goulding L, Grieve R, Grimshaw J, Howarth E, Lilford RJ, McDonald R, Moore G, Moore L, Newhouse R, O’Cathain A, Or Z, Papoutsi C, Prady S, Rycroft-Malone J, Sekhon J, Turner S, Watson SI, Zwarenstein M, editors. Challenges, Solutions and Future Directions in the Evaluation of Service Innovations in Health Care and Public Health. Southampton, UK: NIHR Journals Library; 2016. May, [PubMed] [Google Scholar]

4. Hazell ML, Morris JA, Linehan MF, Frank PI, Frank TL. Factors influencing the response to postal questionnaire surveys about respiratory symptoms. Prim Care Respir J. 2009 Sep;18(3):165–70. doi:10.3132/pcrj.2009.00001. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

5. Peters M, Crocker H, Jenkinson C, Doll H, Fitzpatrick R. The routine collection of patient-reported outcome measures (PROMs) for long-term conditions in primary care: a cohort survey. BMJ Open. 2014 Feb 21;4(2):e003968. doi:10.1136/bmjopen-2013-003968. https://bmjopen.bmj.com/lookup/pmidlookup?view=long&pmid=24561495. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

6. Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE. Maximising response to postal questionnaires--a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006 Feb 23;6:5. doi:10.1186/1471-2288-6-5. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-6-5. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

7. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I, Cooper R. Methods to increase response rates to postal questionnaires. Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000008. doi:10.1002/14651858.MR000008.pub3. [PubMed] [CrossRef] [Google Scholar]

8. Toepoel V, Lugtig P. Modularization in an Era of Mobile Web. Social Science Computer Review. 2018 Jul;:089443931878488. doi:10.1177/0894439318784882. [CrossRef] [Google Scholar]

9. Sahlqvist S, Song Y, Bull F, Adams E, Preston J, Ogilvie D, iConnect Consortium Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial. BMC Med Res Methodol. 2011 May 06;11:62. doi:10.1186/1471-2288-11-62. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-11-62. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

10. Robson K, Plangger K, Kietzmann JH, McCarthy I, Pitt L. Is it all a game? Understanding the principles of gamification. Business Horizons. 2015 Jul;58(4):411–420. doi:10.1016/j.bushor.2015.03.006. [CrossRef] [Google Scholar]

11. Abdul-Kader SA, Woods J. Survey on Chatbot Design Techniques in Speech Conversation Systems. Int J Adv Comput Sci Appl. 2015;6(7). doi:10.14569/ijacsa.2015.060712. [CrossRef] [Google Scholar]

12. Kim S, Lee J, Gweon G. CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press; 2019. Sep 04, Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality; pp. 1–12. [Google Scholar]

13. Palanica A, Flaschner P, Thommandram A, Li M, Fossat Y. Physicians' Perceptions of Chatbots in Health Care: Cross-Sectional Web-Based Survey. J Med Internet Res. 2019 Apr 05;21(4):e12887. doi:10.2196/12887. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

14. Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit Health. 2019;5:2055207619871808. doi:10.1177/2055207619871808. https://journals.sagepub.com/doi/10.1177/2055207619871808?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

15. Greer S, Ramo D, Chang Y, Fu M, Moskowitz J, Haritatos J. Use of the Chatbot "Vivibot" to Deliver Positive Psychology Skills and Promote Well-Being Among Young People After Cancer Treatment: Randomized Controlled Feasibility Trial. JMIR Mhealth Uhealth. 2019 Oct 31;7(10):e15018. doi:10.2196/15018. https://mhealth.jmir.org/2019/10/e15018/ [PMC free article] [PubMed] [CrossRef] [Google Scholar]

16. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng Y, Atun R. Conversational Agents in Health Care: Scoping Review and Conceptual Analysis. J Med Internet Res. 2020 Aug 07;22(8):e17158. doi:10.2196/17158. https://www.jmir.org/2020/8/e17158/ [PMC free article] [PubMed] [CrossRef] [Google Scholar]

17. Schrepp M, Hinderks A, Thomaschewski J. Applying the User Experience Questionnaire (UEQ) in Different Evaluation Scenarios. International Conference of Design, User Experience, and Usability; 2014; Heraklion, Crete, Greece. 2014. Jun, pp. 383–392. [CrossRef] [Google Scholar]

18. Laugwitz B, Held T, Schrepp M. Construction and Evaluation of a User Experience Questionnaire. In: Holzinger A, editor. USAB 2008: HCI and Usability for Education and Work. Berlin, Germany: Springer; 2008. pp. 63–76. [Google Scholar]

19. Baumhauer JF, Bozic KJ. Value-based Healthcare: Patient-reported Outcomes in Clinical Decision Making. Clin Orthop Relat Res. 2016 Jun;474(6):1375–8. doi:10.1007/s11999-016-4813-4. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

20. Des Jarlais CC, Lyles C, Crepaz N, TREND Group Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004 Mar;94(3):361–6. doi:10.2105/ajph.94.3.361. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

21. Jain M, Kumar P, Kota R, Patel SN. Evaluating and Informing the Design of Chatbots. DIS '18: Proceedings of the 2018 Designing Interactive Systems Conference; Designing Interactive Systems (DIS) Conference; June 11-13, 2018; Hong Kong. New York, NY: ACM; 2018. pp. 895–906. [CrossRef] [Google Scholar]

22. Schamber EM, Takemoto SK, Chenok KE, Bozic KJ. Barriers to completion of Patient Reported Outcome Measures. J Arthroplasty. 2013 Oct;28(9):1449–53. doi:10.1016/j.arth.2013.06.025. [PubMed] [CrossRef] [Google Scholar]

23. Ho A, Purdie C, Tirosh O, Tran P. Improving the response rate of patient-reported outcome measures in an Australian tertiary metropolitan hospital. Patient Relat Outcome Meas. 2019;10:217–226. doi:10.2147/PROM.S162476. [PMC free article] [PubMed] [CrossRef] [Google Scholar]

24. Nagtegaal R. [A nudge in the right direction? Recognition and use of nudging in the medical profession] Ned Tijdschr Geneeskd. 2020 Aug 20;164 [PubMed] [Google Scholar]

25. Cambridge Dictionary. [2020-06-30]. https://dictionary.cambridge.org/dictionary/english/nudging.

26. Warnock S, Gantz JS. Gaming for respondents: a test of the impact of gamification on completion rates. Int J Market Res. 2017;59(1):117. doi:10.2501/ijmr-2017-005. [CrossRef] [Google Scholar]

27. Harms J, Biegler S, Wimmer C, Kappel K, Grechenig T. Human-Computer Interaction – INTERACT 2015. Lecture Notes in Computer Science. Cham, Switzerland: Springer; 2015. Gamification of Online Surveys: Design Process, Case Study, and Evaluation; pp. 219–236. [Google Scholar]

28. Guin TD, Baker R, Mechling J, Ruyle E. Myths and realities of respondent engagement in online surveys. Int J Mark Res. 2012 Sep;54(5):613–633. doi:10.2501/ijmr-54-5-613-633. [CrossRef] [Google Scholar]

