Exam Scoring

Castle Worldwide, MGMA's professional testing partner, scores the exams. One point is granted for each correct answer; there is no penalty for an incorrect answer, as points are earned only for correct answers. The number of questions answered correctly (or total points) is a candidate's "raw score." A candidate's raw score is converted to a scaled score, similar to the scores reported for college entrance exams. The Total Scaled Score determines whether a candidate has passed the exam. This scaled score is statistically derived from the candidate's raw score and can range from 200 to 800. The passing scaled score for the exam is 500. The passing score reflects the minimum level of knowledge a committee of experts has determined to be appropriate for certification, according to established standard-setting practices.

MGMA, like many testing programs, uses multiple test forms to limit exposure of test questions. As new forms of the exams are introduced, a certain number of questions in each content area are replaced by new questions. While the new questions measure the same content as the questions they replace, it is very rare that they are of exactly the same level of difficulty. These changes may cause one form of the exam to be slightly more or less difficult than another. To adjust for these differences in difficulty, a statistical procedure called "equating" is used. The goal of equating is to ensure fairness to all candidates. In the equating process, the minimum raw score (number of correctly answered questions) required to equal the passing raw score on the base form is statistically determined. For instance, if a form is determined to be more difficult than the base form of the exam, then the minimum raw score required to pass will be slightly lower than the passing raw score on the base form. If a form is a bit easier, then its passing raw score will be slightly higher than the passing raw score on the base form.
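The adjustment described above can be sketched in a few lines of Python. All numbers here are invented for illustration; MGMA's actual equating procedure, passing raw scores, and adjustment values are not published.

```python
# Hypothetical sketch of how equating shifts the passing raw score.
# Every value below is made up for illustration only.

base_passing_raw = 105   # hypothetical raw score required to pass the base form

# Suppose the equating analysis finds the new form is slightly harder:
# candidates of equal ability answer about 3 fewer questions correctly.
new_form_adjustment = -3

# A lower raw score is therefore required to pass the harder form.
new_passing_raw = base_passing_raw + new_form_adjustment
print(new_passing_raw)  # 102
```

Both passing points, despite differing raw scores, represent the same level of knowledge once mapped onto the common scale.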
Equating helps ensure that the passing scaled score represents the same level of knowledge, regardless of which form of the exam a candidate takes. Finally, the raw scores are translated onto the common scale so that the same scaled score represents the passing standard on both forms. Because the test is criterion-referenced, meaning it compares performance against an established criterion as described above, your passing status is not determined by the performance of other candidates.

Note: The passing point set for the exam cannot be appealed. A score one point below the passing point is unsuccessful; a score at or above the passing point is a pass. A score higher than the passing point does not indicate higher proficiency in the subject matter. Due to exam confidentiality, it is MGMA policy not to disclose the values of the score band widths or which questions were answered incorrectly.

Scale Score and Score Reporting FAQ

Why are scaled scores used?
Scaled scores provide a consistent scale of measurement, so that from one test administration to the next, the same scaled score represents the same level of knowledge. MGMA uses multiple exam forms, and some forms may be more difficult than others. A scaled score ensures comparability in reporting across test forms.

How is the scaled score computed?
To calculate a scaled score, the raw score required to pass is first set equal to 500. Temperature offers an analogy: 0 degrees and 32 degrees both represent freezing on different temperature scales. Raw scores below the passing point are converted in linear fashion to scaled scores below 500; those above the passing point are similarly converted to scaled scores above 500.

Is a scaled score the same as a percentage score?
No. Calculating a "percent correct" is one way to convert a raw score to another scale, but a scaled score is not the same as a percent correct.
You can calculate a percentage score by dividing the number correct by the total number of items; for example, a raw score of 9 in a category with 15 total items corresponds to 60% correct. A scaled score of 600, however, does not correspond to 60% correct.

Why not just use the number of items answered correctly?
MGMA, like many testing programs, uses multiple test forms to limit exposure of test questions. While the different forms are built to the same test specifications and designed to be similar in difficulty, they can rarely be built to exactly the same level of difficulty. Therefore, if two candidates take forms of different difficulty but answer the same number of questions correctly, the candidate with the more difficult form has actually demonstrated a higher level of knowledge.

Why don't I get a percentage score on my report?
Like a raw score, a percentage score would not take the difficulty of the form into account.

Why do I need a 500 to pass the test?
The raw score required to pass corresponds to the number of correct answers that a minimally competent (borderline) candidate would be expected to provide. This is determined through the standard-setting process by a panel of experts in medical practice management. That raw score is then set equal to 500 scaled score units. Different raw scores may be required on different test forms, because not all examinations are equally difficult. The scaled score of 500 required to pass indicates that, while a different number of correct answers may be required from one test form to the next, the passing point for all exams represents the same level of knowledge.

Why won't specific questions on the exam and/or answers to exam questions be discussed or released?
To protect the security of the item bank, and because exam questions can be used on various exam forms, exam questions will not be discussed with candidates, and candidates may not have access to the exam or their answers.
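The linear conversion described above can be sketched as a piecewise-linear function in Python. This is an illustrative reconstruction of the general idea, not MGMA's published formula; the passing raw scores (100 and 110) and test length (175 items) used below are hypothetical.

```python
def scale_score(raw, passing_raw, max_raw,
                scale_min=200, scale_pass=500, scale_max=800):
    """Map a raw score onto a 200-800 reporting scale.

    The passing raw score maps to 500. Raw scores below it are
    converted linearly onto [200, 500]; raw scores above it are
    converted linearly onto [500, 800].
    """
    if raw <= passing_raw:
        return scale_min + (raw / passing_raw) * (scale_pass - scale_min)
    return scale_pass + ((raw - passing_raw) / (max_raw - passing_raw)) * (scale_max - scale_pass)

# Two hypothetical forms: the harder form requires 100 correct to pass,
# the easier form 110, yet both passing points map to the same 500.
print(scale_score(100, passing_raw=100, max_raw=175))  # 500.0
print(scale_score(110, passing_raw=110, max_raw=175))  # 500.0

# And a scaled score is not a percentage: 9 of 15 correct is 60%,
# but a scaled score of 600 does not mean 60% correct.
print(100 * 9 / 15)  # 60.0
```

The two-segment mapping is what lets different raw passing points on different forms land on the same scaled value of 500 while keeping the 200 and 800 endpoints fixed.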
How should I interpret my category scores if I need to retake the exam?
Review all areas in the Body of Knowledge when preparing to retake the exam, devoting additional study and review to the areas marked "considerably lower." However, note that domain-level information is less reliable than the total score because it is based on fewer questions. It is important to review information in all of the domains as you continue your study. Neglecting domains in which your performance was strong could result in a lower domain score on a subsequent attempt.