Ultrasound Student Assessment: Understanding the Approach

Research on How to Best Select Students for Ultrasound Programs (Part 2)

This is the second in a series of blogs related to methodology for selecting ultrasound students. In the first installment, we discussed the impetus for starting a research project on how to select ultrasound students. In this installment, I will describe the approach taken to assess the survey and to draw conclusions.

Now for some bad news and some good news. The bad news: to really understand the analysis, a little bit of statistics is required. If you are not too woozy from your head hitting the floor, I can now give you some good news. The good news: I have posted a really basic (and hopefully very easy to understand) mini lecture on the statistics needed at PegasusLectures.com. If you are not up to snuff on standard deviation and correlation, I strongly suggest that you view this tutorial sometime before reading the fourth installment. By the way, that same statistic review will probably be helpful for better understanding this installment and the third installment as well.

As discussed in the statistics tutorial, every statistical analysis is open for discussion, interpretation, and potentially even argument. With that said, I believe that an honest statistical analysis should tell you up front the methods being employed, the assumptions made, the metrics used, and any potential weaknesses in the methodology. To many people, the idea of pointing out shortcomings before making your arguments and drawing conclusions is like trying to get a date with the person of your dreams by first drawing attention to all of your faults, foibles, imperfections, and flaws. But, since I believe in the expression “caveat emptor” (let the buyer beware!), here are all the warts and blemishes!

There were two facets of the analysis for the first phase of the research. The first facet was based on the opinions of the instructors surveyed. This approach starts with the premise that after years of teaching, instructors should have some insight into which practices tend to work and which characteristics make for the best students. Of course, opinions can be wrong, so while there is a high probability that some of the perceptions reported are correct, or at least point toward the “truth,” there are clearly no guarantees that this approach is meaningful.

The second facet of the analysis was statistically based, comparing the survey responses with a specific metric – the percentage of students who graduated per program. In other words, I was looking for positive correlation between admission behaviors of programs with higher graduation rates, and negative correlation between admission behaviors and programs with lower graduation rates. Continuing to follow the maxim caveat emptor, let’s discuss the pros and cons of the statistical approach.
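To make the correlation idea concrete, here is a minimal sketch of the kind of calculation involved. All program names and numbers below are hypothetical, invented purely for illustration; they are not values from the actual survey. The sketch computes the Pearson correlation coefficient between a single admission behavior (coded 0/1 per program) and each program's graduation rate:

```python
# Hypothetical sketch: correlating one admission behavior per surveyed program
# (e.g., "uses an entrance exam", coded 0/1) with that program's graduation
# rate, via the Pearson correlation coefficient. The data below is made up.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Numerator: sum of co-deviations from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Denominator: product of the (unnormalized) standard deviations
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# One entry per (hypothetical) program in the survey:
uses_entrance_exam = [1, 1, 0, 1, 0, 0, 1, 0]                       # behavior
graduation_rate    = [0.92, 0.88, 0.71, 0.95, 0.65, 0.70, 0.85, 0.60]

r = pearson_r(uses_entrance_exam, graduation_rate)
print(f"r = {r:.2f}")
# r near +1 suggests the behavior is associated with higher graduation
# rates; r near -1 suggests it is associated with lower ones.
```

A value of r close to +1 would be the “positive correlation” described above, and a value close to -1 the “negative correlation” — with the usual caveat that correlation alone does not establish cause.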

The first drawback to this approach is that graduation rate is not likely a perfect metric for whether the students selected into education programs were ideally chosen. The first and most obvious issue is that reaching graduation by itself does not ensure that the student will perform well with patients in the clinical world. To illustrate this point, imagine two schools with very different standards. Assume School X has very rigid standards, such that student A is dropped from the program, unable to meet the high requirements. Now imagine that the same student attended School Y, with lower standards. This same student, with the same skills, would potentially graduate. Extrapolating from this model, we can see that by using graduation rates as a metric, School Y would potentially look as if it were more successful at admitting appropriate students than School X, primarily because of a difference in standards, not because of “reality.”

So given this potential drawback, you should now be saying “good point” and asking, “So why would you use this potentially flawed metric?” The answer (lest you think I am psychic, realize that I planted those questions in your head) is paradoxically both simple and complex. First, although perhaps not the best metric possible, it is at least a “reasonable” metric. Presumably, a school would like the entrance exam to eliminate those students who would not make it through the entire program and accept those students who would. Therefore, in the ideal world, every student admitted would graduate, and this metric, at least, makes sense. Second, this metric is very easily generated through a survey and is not remotely subjective.

So what is the bottom line? For the statistical analysis, you must start with the premise that having a high graduation rate is desirable. If you do not accept this premise, then most of the analysis will, by extension, not necessarily follow logically.

In the next installment, I will discuss some of the results of the survey and the analysis. If you did not review the tutorial on statistics, you might want to review it before the third blog in this series. (Now for an analogy: statistics is to this lecture as water is to a thirsty horse.) Of course, if statistics are considered child’s play to you, then please feel free to ignore this advice.

By the way, when I presented the results of Phase 1 at the educators’ tutorial at the annual conference held in Memphis, I allowed the audience to vote to predict some of the survey outcomes. The questions and choices will be viewable in the next blog article.

This entry was posted in Ultrasound Student Assessment.