
Uncovering the relation between clinical reasoning and diagnostic accuracy – An analysis of learners' clinical reasoning processes in virtual patients

  • Inga Hege,
  • Andrzej A. Kononowicz,
  • Jan Kiesewetter,
  • Lynn Foster-Johnson

PLOS ONE

  • Published: October 4, 2018
  • https://doi.org/10.1371/journal.pone.0204900

Abstract

Background

Clinical reasoning is an important topic in healthcare training, assessment, and research. Virtual patients (VPs) are a safe environment to teach, assess and perform research on clinical reasoning and diagnostic accuracy. Our aim was to explore the details of the clinical reasoning process and diagnostic accuracy of undergraduate medical students when working with VPs using a concept mapping tool.

Methods

Over seven months we provided access to 67 German and 30 English VPs combined with a concept mapping tool to visualize and measure the clinical reasoning process of identifying problems, differential diagnoses, recommended tests and treatment options, and composing a summary statement about a VP. A final diagnosis had to be submitted by the learners in order to conclude the VP scenario. Learners were allowed multiple attempts or could request the correct diagnosis from the system.

Results

We analyzed 1,393 completed concept maps from 317 learners. We found significant differences between maps with a correct final diagnosis on one or multiple attempts and maps in which learners gave up and requested the solution from the system. These maps had lower scores, fewer summary statements, and fewer problems, differential diagnoses, tests, and treatments.

Conclusions

The different use patterns and scores between learners who had the correct final diagnosis on one or multiple attempts and those who gave up indicate that diagnostic accuracy in the form of a correct final diagnosis on the first attempt has to be reconsidered as a sole indicator for clinical reasoning competency. For the training, assessment, and research of clinical reasoning we suggest focusing more on the details of the process to reach a correct diagnosis, rather than whether it was made on the first attempt.

Introduction

Clinical reasoning education and assessment is a major aspect of both healthcare education and research. Healthcare students have to learn this important skill during their education and continue to further develop it in the workplace. The complex clinical reasoning process includes the application of knowledge to synthesize and prioritize information from various sources and to develop a diagnosis and management plan for a patient. Various models and theoretical frameworks for clinical reasoning have been developed, including a complex model by Charlin et al. [1] or a more teacher-oriented model by Eva [2]. However, despite being a heavily researched topic, it is still not clear how clinical reasoning is learned and how it can be effectively taught or assessed [3]. Thus, a typical indicator to measure clinical reasoning skills is diagnostic accuracy, which is often defined and assessed as reaching the correct final diagnosis on a first attempt [4].

Web-based virtual patients (VPs) are widely used to train students and healthcare professionals in clinical reasoning [5,6]. VPs provide a safe environment in which learners can develop their clinical reasoning skills at their own pace and learn from diagnostic errors without harming a patient [7]. VPs are typically designed to unfold in a step-by-step manner, revealing the information about a patient in a "serial-cue" format. However, evidence about the effectiveness of such an approach for learning clinical reasoning is lacking [8], and which design features of VPs optimally support the training of clinical reasoning is still not fully understood [9,10].

To address this unresolved issue, we developed a concept mapping tool, which specifically captures the clinical reasoning process while learning with virtual patients and allows a detailed analysis of learners' reasoning processes [11]. The tool was conceptualized and designed based on a grounded theory exploration of the process of learning clinical reasoning and supports its specific steps [12]. We chose concept mapping as it accounts for the non-linearity of the complex clinical reasoning process and supports relating concepts to each other [13].

With this tool, our aim was to analyze use patterns in a real-world educational setting to find out more about learners' clinical reasoning with virtual patients. Our hypothesis was that there are differences in the clinical reasoning processes between correctly and incorrectly diagnosed VPs. Specifically, we wanted to explore the differences in the processes of learners who provided a correct diagnosis on their first attempt and those who required several attempts to reach a correct diagnosis.

Methods

Virtual patients and concept mapping tool

We created 67 VPs in German and 30 in English in the VP system CASUS; for a list of VPs see S1 Table.

The VPs were combined with a concept mapping tool, which was designed to support the steps of the clinical reasoning process. Learners document their clinical reasoning process by adding elements (also known as "nodes") in four different categories—problems/findings, differential diagnoses they want to consider, tests they would like to perform, such as a physical examination, laboratory tests or medical imaging, and treatment options. Nodes can be connected to indicate relationships, for example a finding confirming a differential diagnosis (Fig 1).
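
To make this structure concrete, the sketch below models the four node categories and their connections in Python. It is an illustrative data model only; the class and field names are our own assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class Category(Enum):
    PROBLEM = "problem/finding"
    DIFFERENTIAL = "differential diagnosis"
    TEST = "test"
    TREATMENT = "treatment"

@dataclass
class Node:
    node_id: int
    label: str          # e.g. "chest pain" or "myocardial infarction"
    category: Category

@dataclass
class ConceptMap:
    nodes: List[Node] = field(default_factory=list)
    # a connection relates two nodes, e.g. a finding supporting a differential
    connections: List[Tuple[int, int]] = field(default_factory=list)
    summary_statement: str = ""

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)

    def connect(self, source_id: int, target_id: int) -> None:
        self.connections.append((source_id, target_id))
```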

Additionally, learners compose a short summary statement to summarize and prioritize the problems of the patient. Throughout the process, learners may make a final diagnosis, and if the diagnosis is incorrect, they may request the correct solution from the system. However, to conclude the scenario, learners must submit a final diagnosis. Errors, such as premature closure, are automatically detected by the system and reported back to the learner.
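
The paper does not spell out the detection rule for premature closure; building on the data model sketched above, one plausible heuristic is to flag a final diagnosis submitted while few or no competing differentials are on the map. This is our assumption for illustration, not the tool's documented algorithm.

```python
def flags_premature_closure(concept_map: ConceptMap,
                            min_differentials: int = 2) -> bool:
    """Illustrative heuristic only: report premature closure if a final
    diagnosis is submitted while fewer than `min_differentials`
    differential diagnoses are on the map, i.e. alternatives were
    likely never weighed. The threshold is an assumption."""
    differentials = [n for n in concept_map.nodes
                     if n.category is Category.DIFFERENTIAL]
    return len(differentials) < min_differentials
```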

A physician created the VPs, including the expert concept maps, covering diseases relevant for medical students from a variety of specialties, such as internal medicine, neurology, or pediatrics. The VPs and maps were reviewed by experts for content accuracy.

At any time during the scenario learners can access an expert map for comparison. Based on this expert map the system automatically scores added nodes and final diagnoses, accounting for synonyms based on a Medical Subject Headings (MeSH) list. The summary statements are scored based on the use of semantic qualifiers [14]. All learners' interactions with the tool are stored in a relational database. The detailed functionalities, scoring algorithms, and the development process of the tool have been described elsewhere [11]; Table 1 provides an overview of the variables. The selected variables are based on previous work of developing the concept mapping tool [11,12].
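
As a rough illustration of the synonym handling, a node label can be normalized against a MeSH-style synonym list before matching it to the expert map. The dictionary layout and the simple overlap metric below are assumptions for the sketch; the tool's actual scoring algorithm is described in [11].

```python
def score_nodes(learner_labels, expert_labels, synonyms):
    """Toy scoring of learner nodes against an expert map.

    `synonyms` maps a canonical (e.g. MeSH preferred) term to a set of
    labels treated as equivalent. Returns the fraction of expert nodes
    the learner matched; the real tool's scoring is richer (see [11]).
    """
    def canonical(label):
        label = label.strip().lower()
        for preferred, variants in synonyms.items():
            if label == preferred or label in variants:
                return preferred
        return label

    learner = {canonical(l) for l in learner_labels}
    expert = {canonical(e) for e in expert_labels}
    return len(learner & expert) / len(expert) if expert else 0.0

# "MI" counts as a match for the expert's "myocardial infarction"
syn = {"myocardial infarction": {"mi", "heart attack"}}
print(score_nodes(["MI", "angina"], ["myocardial infarction"], syn))  # 1.0
```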

Participants and data collection

We provided access to two VP courses in German and English to undergraduate medical students. From January 1st until July 31st, 2017, access to the courses was free, but registration or login via single sign-on (Shibboleth, eduGAIN) was required [16].

Information about the courses was sent to medical schools in Europe, announced at medical education conferences, and posted on the project's website. Additionally, all registered CASUS users were provided with the link to the new courses in their dashboard. An overview of the study design is shown in Fig 2.

Data collected by the concept mapping tool were anonymous. No personal data, except for an encrypted unique identifier for each user, were transferred from the VP system to the concept mapping tool. If a learner completed a VP multiple times, we only included the first session in our analysis. Anonymized data are published in the Open Science Framework.

Data assay

We exported all collected data from the concept mapping tool into Statistical Analysis Software (SAS, SAS Institute Inc. 2013. SAS/STAT 13.1.) for further analysis. Since the focus of this study is the cognitive actions of learners, the unit of analysis was the completed maps (i.e., having a final diagnosis) created by the learners for a VP, rather than the individual learner.

Most of the concept map data reflect the state of the map at the time of the first submission of a final diagnosis; the number and scores of treatments, time on task, and feedback clicks were analyzed at the end of a scenario.

We examined average differences in scores and use patterns using linear mixed modeling (LMM) and multinomial logistic regression using generalized estimating equations (GEE) [17] to account for the correlated errors associated with the nested structure of the data. We used correlations (Pearson product-moment and point-biserial) to examine the patterns of associations between the number of nodes and scores and present these results as a heat map to focus on the broad patterns. Basic information on the learners is recorded in the VP system CASUS upon registration. However, there is no transfer of any personal data to the concept mapping tool.
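
The analyses were run in SAS; purely as an illustration of the modeling approach, the sketch below fits a comparable GEE in Python with statsmodels, clustering maps within learners under an exchangeable working correlation. The file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per completed map; maps are nested within learners, so we fit
# a GEE with learner clusters and an exchangeable working correlation.
df = pd.read_csv("concept_maps.csv")  # hypothetical export of the tool's database

model = smf.gee(
    "total_score ~ C(group)",            # group coded C, W, or S
    groups="learner_id",                 # cluster: maps per learner
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())

# A multinomial model with group membership as the outcome could be
# fitted analogously with smf.nominal_gee.
```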

Ethical approval

We obtained ethical approval from the ethics committee at Ludwig-Maximilians-Universität Munich, Germany (reference number: 260–15).

Results

Learner demographics

Overall, 858 undergraduate medical students enrolled in the two courses during the study period (139 in English, 718 in German); 317 users (36.5%) completed at least one virtual patient with a final diagnosis. Of these 317 users, 87 were male (27.4%) and 168 female (53.0%); 62 values were missing.

Completed maps

Overall, we recorded 1,393 completed concept maps during the study period, created by 317 different users, of whom 47.6% (n = 151) completed one map, 13.9% (n = 44) completed two maps, and 38.5% (n = 122) completed three or more concept maps. We found that in 59.0% (n = 822) of the maps the correct final diagnosis was provided on the first attempt (group C). For the maps that were not solved correctly on the first attempt, the correct final diagnosis was made after multiple attempts in 13.1% (n = 183) of the maps (group W), and in 27.9% (n = 388) of the maps learners gave up and requested the correct solution from the system (group S).

In group S, in 59.5% (n = 231) of the maps learners gave up after the first attempt and another 25.3% (n = 98) after the second attempt; the maximum number of attempts was 17. In group W, in 66.7% (n = 122) of the maps learners submitted the correct final diagnosis on the second attempt, and 15.9% (n = 29) on the third attempt. The maximum number of attempts was seven.

38% (n = 122) of the learners submitted three or more maps, which could belong to more than one group. Of these learners, we found that only 7.4% (n = 9) created maps that belonged solely in one of the three groups (e.g., all maps in C, W, or S). Most created maps that belonged in two or three groups (45.9%, n = 56 and 46.7%, n = 57, respectively).

Use patterns and scores

For the three groups of maps, we saw differences in the use patterns (i.e., number of nodes and connections) and the scores earned for the specific clinical reasoning activities. In group S, the maps contained fewer problems, differential diagnoses, tests, treatment options, and connections than in groups C and W. Differences between groups C and W were not significant. For all three groups, the average number of connections was low compared to the expert maps (Fig 3).


Fig 3. Average number of elements—added nodes in each category and number of added connections—for the three groups and the expert maps.

*significant difference between group C (correct diagnosis was made on first attempt) and S (correct diagnosis provided by the system) (p<0.05), **significant difference between group S and groups C and W (correct final diagnosis was submitted after first attempt).

https://doi.org/10.1371/journal.pone.0204900.g003

When looking into the details of the map development, maps in group S had significantly fewer summary statements, were scored lower in all categories, and learners in this group were less confident in their final diagnosis decision. Also, the expert map was consulted less frequently and learners spent less time on creating the maps (Table 2). The only significant difference between groups C and W was a lower score for the differential diagnoses in group W.


Table 2. Average scores, confidence with final diagnosis, time on task, and feedback requests by groups of concept maps—group C (correct diagnosis was made on first attempt), group W (correct final diagnosis was submitted after first attempt) and group S (correct diagnosis provided by the system).

https://doi.org/10.1371/journal.pone.0204900.t002

Correlations

The correlations between the number of added nodes and scores in the four categories (problems, differential diagnoses, tests, treatments) were higher in group S than in groups C and W (Fig 4, S2 Table). For example, the correlation between the number of recommended tests and the quality of the tests (measured by scores) was quite high in group S (r = .97), and much lower in groups C and W (r = .50 and .48, respectively). Also, compared to groups W and C, the presence of a summary statement was related to higher scores in group S for the differential diagnoses (r = .75) and tests (r = .89), and had a moderate correlation with the numbers of problems, tests, treatment options, and differentials. We also detected a large difference in correlations between the groups for the number of clicks on the expert map as feedback.
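
As an illustration of the two correlation types reported here, the sketch below computes a Pearson product-moment correlation between two continuous measures and a point-biserial correlation between a binary indicator (summary statement present) and a score, using scipy. The data values are made up for the example.

```python
import numpy as np
from scipy.stats import pearsonr, pointbiserialr

# made-up per-map values for one group of maps
n_tests = np.array([3, 5, 2, 7, 4])               # number of recommended tests
test_score = np.array([1.5, 2.5, 1.0, 3.5, 2.0])  # score earned for tests
has_summary = np.array([1, 0, 1, 1, 0])           # summary statement present?

r_nodes, _ = pearsonr(n_tests, test_score)              # continuous vs. continuous
r_summary, _ = pointbiserialr(has_summary, test_score)  # binary vs. continuous
print(f"tests vs. test score: r = {r_nodes:.2f}")
print(f"summary present vs. test score: r = {r_summary:.2f}")
```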


Fig 4. Correlations between variables in the three groups—group C (correct diagnosis was made on first attempt), group W (correct final diagnosis was submitted after first attempt) and group S (correct final diagnosis provided by the system).

https://doi.org/10.1371/journal.pone.0204900.g004

Multinomial logistic regression

We used a multinomial logistic regression to understand the combined differences in use patterns and scores between the three groups (S1 Table). Groups W and S were compared to the reference group C. Controlling for other variables in the model, a premature closure was more likely to occur in group W than group S. Recommending more tests was significantly more likely in group W, compared to group C. Lower numbers of feedback clicks, suggesting fewer treatment options, and a lower confidence in their final diagnoses were more prominent in group S than in group C. Lower scores on the differential diagnoses and problem lists were more likely for groups W and S. Compared to group C, higher scores on tests occurred more with group W and lower summary statement scores were more evident in group S.

Discussion

The results of our study partially confirm our hypothesis that there is a significant difference in the clinical reasoning processes for learners. However, the relevant determinant is not the correct solution on the first attempt or subsequent attempts, but whether the correct final diagnosis was made by the learners themselves (groups C or W) or whether the solution was requested from the system (group S). In the following we will discuss the results in more detail.

Overall, the differences between the maps in groups C and W were small and non-significant, whereas the maps in group S contained significantly fewer nodes and lower scores in all four categories (problems, differential diagnoses, tests, and treatment options) as well as fewer connections compared to groups C and W. A potential explanation could be that for some learners these VPs were more difficult, leading them to give up on finding the correct final diagnosis. However, learners spent less time on these VPs and requested feedback from the expert less often than we would have expected with more difficult VP scenarios. Another explanation could be that for sessions in group S, learners might have been less motivated and engaged. However, the results show that the maps of most learners were associated with at least two groups, suggesting that learners were mostly motivated to work with the VPs. Further research is needed to investigate the VP characteristics in the three groups and to better understand the reasons and the role of feedback in the clinical reasoning process.

Compared to the number of connections drawn by the expert, the maps in all three groups included a very low number of connections (Fig 3). We can only hypothesize about the reasons, which might be a usability issue, a need for more instruction about the importance of connections in concept maps, or challenges faced by the learners in reflecting on why and how the nodes of their map are connected.

Overall, the scores in all categories were quite low, because nodes added after the learner has already seen the expert map are scored as zero.

If learners gave up on providing a final diagnosis (group S), a summary statement was composed significantly less often, and if it was composed, it was scored significantly lower based on the use of semantic qualifiers than the summary statements in groups C and W. Research has shown that composing a summary statement in both face-to-face teaching and virtual scenarios allows learners to organize and present relevant aspects, and to practice using semantic qualifiers [18,19], which are related to diagnostic accuracy [20,21]. Our study extends these findings by showing that for group S, composing a summary statement or a summary statement with adequate use of semantic qualifiers is related to more nodes in all four categories and higher scores on differential diagnoses and tests.

Interestingly, for group C the relationship between summary statement composition and the score for treatments is lower, and for group W we observe a high correlation between the quality of the summary statement and the score for the problem list and the number of added treatments. Thus, we can assume that the careful composition of a summary statement might be more beneficial for learners when they are struggling with structuring their thoughts and determining the correct final diagnosis.

A premature closure error occurred significantly more often in group W than in group S. At the same time, group W was slightly less confident than group C and significantly more confident than group S. This finding adds quantitative information to a recent mixed-methods study, indicating that a variety of errors are made by medical students during their reasoning process [22]. Friedman et al. showed that final-year medical students were less accurate and less confident in their diagnostic decisions compared to attending physicians [23]. Our study further indicates that within the group of medical students there are significant differences in the level of confidence for VPs. This finding warrants further exploration of the reasons for overconfidence, including attitudinal and cognitive factors [24]. Additionally, we have an excellent opportunity to provide detailed feedback to learners to help them learn from errors and overconfidence in a safe environment, and to address the lack of a formal cognitive error and patient safety curriculum [25].

We are aware that our study has some limitations. First, due to the anonymous data collection we do not have any information on the learners who completed the VP scenarios. Thus, we cannot take into account any contextual and person-specific factors, such as motivation, level of expertise, or demographic information.

Second, the data collection was intentionally not conducted in a controlled setting, but used an approach comparable to big data studies. The focus of big data studies is on studying user behavior and usage patterns; thus we believe it is an appropriate method for avoiding biases often involved in artificial controlled study settings, such as motivation or selection. Third, we carefully tracked all user actions with timestamps and did not detect any signs of technical problems that could cause a learner to spend exceptionally more time on a VP. We also did not receive any support requests or complaints regarding technical problems. Nevertheless, we cannot rule out that on rare occasions the time on task might have been prolonged due to technical issues.

Conclusions

Overall, our results indicate that diagnostic accuracy in the form of correctness of the final diagnosis on the first attempt should be reconsidered as a sole indicator of clinical reasoning competence. In our study, the greatest difference in the clinical reasoning process was between those learners who were able to identify a correct final diagnosis, no matter how many attempts it took, and those who gave up and requested the solution from the system.

"One shot" approaches focusing on the first attempt to provide a final diagnosis are not patient-centered or realistic, even if they are widely used in VPs, clinical reasoning research studies, and training in general. In reality, a healthcare professional would not stop diagnostics if their first diagnosis turned out to be wrong. Thus, for the training, assessment, and research of clinical reasoning we suggest focusing more on the details of the process to reach a correct diagnosis, rather than whether it was made on the first attempt. In VP scenarios, learners often have to make a decision about the final diagnosis without having the opportunity to retry or request the solution from the system. Consequently, it has not been possible to make the important distinction between the learners giving up and those reaching the correct final diagnosis by revising their diagnoses.

Outlook

Our study successfully measures and visualizes the clinical reasoning process and the development of a final diagnosis. Furthermore, the use of concept mapping is an innovative approach to measuring the iterative and non-linear thought processes inherent in clinical reasoning [13].

Based on the results of this study we will continue to develop the concept mapping tool, including more dynamic scaffolding and feedback elements to specifically support learners who have problems composing a summary statement and struggle to submit the correct final diagnosis. We concur with Berman et al. that VPs can be used for research that will improve medical curricula [26]. To this end, our approach of combining VPs with a structured clinical reasoning tool raises some important questions about clinical reasoning education, which should be investigated further.

To date, the VP courses have not been formally integrated into a curriculum. Thus, we intend to expand the courses and integrate them into healthcare curricula, especially longitudinal courses dedicated to clinical reasoning training and adopting a "mixed practice" of topics and specialties [2]. However, this may be challenging since often there is no structured clinical reasoning curriculum. This gap in instructional practice [8] may be a place where VPs and the concept mapping tool could be a valuable component.

Supporting data

Acknowledgments

We would like to thank all educators promoting access to these courses and all students who used the virtual patients and created concept maps. We also would like to thank all clinicians who critically reviewed the virtual patients and maps and gave input for improvement. Finally, we would like to thank Martin Adler for supporting the implementation of the study and implementing the single sign-on mechanism to access the courses.

References

  1. Charlin B, Lubarsky S, Millette B, Crevier F, Audétat MC, Charbonneau A, et al. Clinical reasoning processes: unravelling complexity through graphical representation. Med Educ. 2012;46(5):454–63. pmid:22515753
  2. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39(1):98–106. pmid:15612906
  3. Durning SJ, Artino AR, Schuwirth L, van der Vleuten C. Clarifying Assumptions to Enhance Our Understanding and Assessment of Clinical Reasoning. Acad Med. 2013;88(4):442–8.
  4. Linsen A, Elshout G, Pols D, Zwaan L, Mamede S. Education in Clinical Reasoning: An Experimental Study on Strategies to Foster Novice Medical Students' Engagement in Learning Activities. Health Professions Education. 2017; http://dx.doi.org/10.1016/j.hpe.2017.03.003
  5. Berman NB, Fall LH, Smith S, Levine D, Maloney C, Potts K, et al. Integration Strategies for Using Virtual Patients in Clinical Clerkships. Acad Med. 2009;84(7):942–9.
  6. Hege I, Kopp V, Adler M, Radon K, Mäsch G, Lyon H, et al. Experiences with different integration strategies of case-based e-learning. Med Teach. 2007;29(8):791–7.
  7. Ellaway RH, Poulton T, Smothers V, Greene P. Virtual patients come of age. Med Teach. 2009;31(8):683–4. pmid:19811203
  8. Schmidt HG, Mamede S. How to improve the teaching of clinical reasoning: a narrative review and a proposal. Med Educ. 2015;49(10):961–73.
  9. Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ. 2009;43(4):303–11. pmid:19335571
  10. Cook DA, Erwin PJ, Triola MM. Computerized Virtual Patients in Health Professions Education: A Systematic Review and Meta-Analysis. Acad Med. 2010;85(10):1589–602.
  11. Hege I, Kononowicz AA, Adler M. A Clinical Reasoning Tool for Virtual Patients: Design-Based Research Study. JMIR Med Educ. 2017;3(2):e21.
  12. Hege I, Kononowicz AA, Berman NB, Lenzer B, Kiesewetter J. Advancing clinical reasoning in virtual patients—development and application of a conceptual framework. GMS J Med Educ. 2018;35(1):Doc12.
  13. Durning SJ, Lubarsky S, Torre D, Dory V, Holmboe E. Considering "Nonlinearity" Across the Continuum in Medical Education Assessment: Supporting Theory, Practice, and Future Research Directions. Journal of Continuing Education in the Health Professions. 2015;35(3):232–43. pmid:26378429
  14. Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The Development and Preliminary Validation of a Rubric to Assess Medical Students' Written Summary Statements in Virtual Patient Cases. Acad Med. 2016;91(1):94–100. pmid:26726864
  15. Connell KJ, Bordage G, Chang RW. Assessing Clinicians' Quality of Thinking and Semantic Competence: A Training Manual. Chicago: University of Illinois at Chicago, Northwestern University Medical School; 1998.
  16. Virtual Patient course access in CASUS. http://crt.casus.net. Accessed December 10, 2017.
  17. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73:13–22.
  18. Posel N, Mcgee JB, Fleiszer DM. Twelve tips to support the development of clinical reasoning skills using virtual patient cases. Med Teach. 2015;37(9):813–8. pmid:25523009
  19. Braun LT, Zottmann JM, Adolf C, Lottspeich C, Then C, Wirth S, et al. Representation scaffolds improve diagnostic efficiency in medical students. Med Educ. 2017;51(11):1118–26. pmid:28585351
  20. Bordage G, Connell KJ, Chang RW, Gecht M, Sinacore J. Assessing the semantic content of clinical case presentations: Studies of reliability and concurrent validity. Acad Med. 1997;72(10 Suppl 1):S37–9.
  21. Bordage G. Prototypes and semantic qualifiers: from past to present. Med Educ. 2007;41(12):1117–21.
  22. Braun LT, Zwaan L, Kiesewetter J, Fischer MR, Schmidmaier R. Diagnostic errors by medical students: results of a prospective qualitative study. BMC Med Educ. 2017;17:191. pmid:29121903
  23. Friedman CP, Gatti GG, Franz TM, Murphy GC, Wolf FM, Heckerling PS, et al. Do physicians know when their diagnoses are right? Implications for decision support and error reduction. J Gen Intern Med. 2005;20:334–9. pmid:15857490
  24. Berner ES, Graber ML. Overconfidence as a Cause of Diagnostic Error in Medicine. The American Journal of Medicine. 2008;121(5):S2–23.
  25. Kiesewetter J, Kager G, Lux R, Zwissler B, Fischer MR, Dietz I. German undergraduate medical students' attitudes and needs regarding medical errors and patient safety–A national survey in Germany. Med Teach. 2014;36(6):505–10. pmid:24597660
  26. Berman NB, Durning SJ, Fischer MR, Huwendiek S, Triola MM. The Role for Virtual Patients in the Future of Medical Education. Acad Med. 2016;91(9):1217–22.


Source: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0204900
