Managing the Competencies in Ophthalmology
Acknowledgement: This work was supported in part by an unrestricted grant from Research to Prevent Blindness, Inc., New York, NY. More detailed published versions of the materials contained on this website can be found in the following articles:
- Lee AG, Carter KC. Managing the new mandate in resident education: a blueprint for translating a national mandate into local compliance. Ophthalmology 111:1807-12, 2004.
- Lee AG. The impact of the new competencies in ophthalmology. Surv Ophthalmol 48:651-62, 2003.
Frequently asked questions
In the late 1990s, the ACGME developed the six competencies based upon an extensive review process (16-17) that included published curricula, reports, educational literature, surveys, and interviews; consultations with a representative cross section of key external stakeholders (e.g., government, public and private foundations, corporate chief executives, university presidents, residency review committees, and focus groups of residents, program directors, practicing physicians, allied health personnel, and the public); and consensus building over time with the internal stakeholders (e.g., physicians, educators). The ACGME and the American Board of Medical Specialties (ABMS) adopted the same six competencies below (Table 3):*
- Patient care
- Medical knowledge
- Practice-based learning and improvement
- Interpersonal and communication skills
- Professionalism
- Systems-based practice
The ACGME proposed that the competencies be implemented in three phases:
- Phase 1: Identification of general competencies applicable to doctors across all of the specialties in medicine
- Phase 2: Development of the measurement tools (the toolbox) to assess the six competencies
- Phase 3: Construction of a support network for graduate medical education using these measurements to improve the process.
Although written and oral examinations are the mainstays of knowledge assessment, ophthalmologists should be aware of alternative tools for evaluating residents. I have briefly summarized the contents of the toolbox available to educators, with their associated strengths and weaknesses (Table 4).*
The easiest and currently most utilized measurement tool is the written examination. Written exams can cover a broad fund of knowledge from a core curriculum. Residents currently take the ophthalmology in-service exam, the Ophthalmic Knowledge Assessment Program (OKAP), and graduates sitting for board certification take the written qualifying examination of the American Board of Ophthalmology (ABO). The results of the ABO written exam and the OKAP may be used as one benchmark of an ophthalmology program's effectiveness in one competency, medical knowledge.
The advantages of written testing are familiarity for tester and testee, the ability to test large numbers of residents simultaneously, standardization of the testing format, ease of grading, objectivity, reproducibility, and psychometric validity. Multiple choice exams yield quantitative scores that can be benchmarked internally and externally.
Short- and long-answer tests, including the essay format, are more difficult to score than multiple choice tests and are more time intensive. Their advantages over multiple choice tests are that more information is obtained in the examinee’s own words, more in-depth probing of decision making and fund of knowledge is possible, and there is less "rewarding" of guessing. In the multiple choice format, the answer is by necessity contained in the test question; short-answer or essay formats force the examinee to create an answer rather than simply select one.
All written test formats have difficulty measuring qualitative aspects of the patient-physician interaction, communication and interpersonal skills, and professionalism. In addition, medical knowledge about surgery may not correlate with actual surgical skill: Scott et al. found that the American Board of Surgery In-Training Examination measured knowledge but did not correlate with technical skill or operative performance. (21)
Oral examinations have intrinsic advantages over written tests. (17) Currently many programs employ intramural oral examinations at the conclusion of specific rotations. The American Board of Ophthalmology uses standardized case-based or patient management problem vignettes in its oral examination. The advantages of the oral format include "face to face" interaction, with the ability to probe the learner’s thought process, problem-solving ability, and fund of knowledge, to gauge active participation in decision making, and to integrate visual aids and tester-provided information in real time. Case vignettes including history, photos, and test results can be used as props for the testing. Checklists are often used in conjunction with oral testing to ensure completion of specific responses.
The disadvantages of oral tests, however, are that they are time consuming, personnel intensive to administer, and difficult to standardize and grade. Case specificity, i.e., that performance on one case may or may not predict performance on different cases, is a limiting factor in case-based oral testing. Oral examiners require formal training in test procedure to deter the inappropriate use of body language such as "cueing" or intra-test positive or negative feedback. Although an oral examination may allow assessment of communication skills and deeper probing of an examinee’s fund of knowledge and application of information, one potential disadvantage is that it draws from the same fundamental core of information (i.e., medical knowledge) as the written exam.
Standardized and simulated examinations
Simulated examinations can use cadavers, anatomical models or mannequins, computer-based or virtual reality patient management problems, role playing, or other simulations to create a testing situation. The advantages of simulated exercises are the ease and convenience of models or computers. The "real life" advantages of a "hands on" standardized patient and the interaction with a live patient, however, are lost in simulations. (17, 22-33)
Simulations can also test the decision-making process itself, outside of medical content or context. Strategic Management Simulation is one such tool that has been employed to measure different components of decision making and function (e.g., speed of response, task orientation, initiative, breadth of approach to problems, planning capacity, and strategy) in response to complex task assignments. Satish et al. tested the simulator and found that its results were similar to traditional faculty ratings of residents. (23) The disadvantages of simulations are that they are time consuming to develop and grade.
Standardized clinical exams provide reproducible testing and assessment scenarios that can be organized as multi-station encounters. Standardized examinations can assess history-taking and physical examination technique; communication and interpersonal skills; and the ability to summarize, interpret, and perform tests or procedures and to formulate a differential diagnosis and plan. Standardized patients, whether real or healthy but simulated, can test for technique, appropriateness of interaction, completeness of specific tasks, and expected behaviors. The standardized patient method is particularly useful for assessing specific manual skills or techniques and has high accuracy (up to 90%), reproducibility, and scoring validity. (26)
Objective structured clinical examination (OSCE)
Several forms of an Objective Structured Clinical Examination (OSCE) exist and have been tested and validated in many specialties. (17, 27-31) Checklists can be employed to document and score specific behaviors and tasks. The use of checklists in this setting reduces grader variability and can produce better inter-rater reliability.
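Inter-rater reliability of paired checklist scores can be quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative only: the OSCE literature cited here does not prescribe a particular statistic, and the example ratings are invented.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same
    checklist items (e.g., 1 = behavior observed, 0 = not observed)."""
    n = len(rater_a)
    # observed proportion of items on which the raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's marginal rates
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical examiners scoring the same 10-item OSCE checklist
examiner_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
kappa = cohens_kappa(examiner_1, examiner_2)  # roughly 0.52, moderate agreement
```

Tracking such a statistic across examiner pairs is one way a program could document, rather than merely assert, that its checklists improve inter-rater reliability.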
Up to 14 to 18 test items (and up to 6 hours of testing) may be required, however, to achieve reliable and reproducible results. (19) The disadvantages of standardized clinical examinations, including the OSCE, are development and implementation costs, preparation and testing time, physical space and logistical issues, and personnel and evaluator training requirements. Training and maintaining adequate numbers of standardized or simulated patients can be difficult and costly. (26) Colliver et al. measured the reliability and efficiency of components of clinical competence using standardized patients and estimated that over 100 cases (range 45 to 170) would be needed to assess certain competencies with sufficient reliability (0.80) and validity. (24) This high case requirement could be a significant limiting factor in generating valid indicators of resident performance.
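The relationship between case count and overall reliability can be estimated with the Spearman-Brown prophecy formula. The single-case reliabilities below are invented for illustration (the per-case values underlying Colliver et al.'s estimates are not given in this text), but they show why case requirements of this magnitude arise.

```python
def cases_for_target_reliability(single_case_r, target_r=0.80):
    """Spearman-Brown prophecy formula: number of cases needed so the
    aggregate score reaches the target reliability, given the
    reliability contributed by a single case."""
    n = (target_r * (1 - single_case_r)) / (single_case_r * (1 - target_r))
    return round(n)  # rounded to the nearest whole case

# With a hypothetical single-case reliability of 0.05, about 76 cases
# are needed to reach 0.80 -- consistent in magnitude with the 45-170
# case range Colliver et al. reported.
low = cases_for_target_reliability(0.05)
high = cases_for_target_reliability(0.08)
```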
Chart review, chart stimulated recall, chart audit
Chart review and chart stimulated recall are auditing techniques that can be used to assess "real world" decision-making. (17) Retrospective chart audits can compare abstracted information to pre-existing protocols in an explicit review process covering differential diagnoses, diagnostic testing, interpretation of results, and treatment. Chart reviews can measure "real world" medical documentation skills for completeness, problem solving, and cost effectiveness. The American Board of Ophthalmology currently employs a self-selected chart review process for specific diagnoses as part of its re-certification process. Sample sizes of 8 to 10 records are probably sufficient for reliability testing of a diagnosis or procedure, and one study reported that 6-8 randomly selected office charts might be a sufficient sample to evaluate patient care. (17)
In chart stimulated recall, the charts are selected to be representative for content. The physician examiner can use the chart data to probe the learner’s knowledge base, decision making, application and interpretation of "real patient" test results, and the rationale for testing and treatment options. Reliability scores between 0.65 and 0.88 can be obtained with this technique. (17) The disadvantages include patient confidentiality concerns, the time the process consumes, the extensive training required of the evaluator, and scoring that is difficult to quantify or standardize across patients. In addition, the examinee’s record has no external benchmark or "gold standard" for the accuracy of the physical examination findings or the true outcome of the patient.
Case presentations, case-based learning
Case presentations (e.g., grand rounds) provide information about organizational and communication skills, synthesis and presentation of data, and problem solving. (17, 35) Most programs require residents to present at intramural conferences and grand rounds, but few assess their performance or provide feedback to presenters. Case presentations can be time consuming, are difficult to standardize and grade, and may not be representative of "real time" patient care abilities. Case-based or problem-based learning and interactive grand rounds can provide information about individual resident communication and interpersonal skills, self-directed learning, and problem-solving ability. (34) Case specificity, however, is a limiting factor in using grand rounds as a gauge of resident competence.
Evidence-based medicine and literature reviews
"Journal clubs" can provide information about evidence-based medicine, literature review skills, knowledge of epidemiology and statistics, and application of practice-based and systems-based learning. On-line or virtual "journal clubs" can allow multiple residents to review selected articles as well as provide interactive feedback and built-in assessment. Most residency programs have some form of journal club, but few record performance results or provide feedback on them. Measurement of the application of these skills should be incorporated across other tools.
Qualitative reviews and global ratings
Qualitative reviews and global ratings are the most commonly used tools in residency programs. Their advantages include widespread acceptance and familiarity, and the ability to assess interpersonal and communication skills, professionalism, and aspects of patient care and systems-based care. (17, 35-40) Global ratings, however, judge general abilities rather than the specific behaviors captured by a checklist format. Supervisor ratings are generally retrospective over a period of time (e.g., one rotation). Global ratings are easy to use, can be used to provide feedback, and can be semi-quantitative, but they are highly subjective. The disadvantages of rating forms are significant. Graders often avoid the high and low ends of the scale, leading to grade inflation, and evaluators tend to be more accurate and less lenient when the evaluation provides formative rather than summative feedback. (41) This problem becomes more apparent when a "face to face" discussion of the evaluation score is required and when the score is used for a summative, "high-stakes", "go-no go" promotion decision. There is often dissociation between written comments, especially negative comments, and actual scores. Inter-rater and intra-rater subjectivity and bias, especially among untrained graders, can lead to "easy graders" and "hard graders". Variability of resident clinical experience, limited numbers of direct observations, "luck of the draw", and difficulty with reproducibility and reliability significantly limit the utility of qualitative evaluations in measuring the competencies.
Global evaluation forms can be used to assess a number of competencies simultaneously. The caveat, however, is overreliance upon this single tool. Onerous, time-consuming, or overly lengthy global evaluation forms that simply tack the new competencies onto an existing rotation evaluation form as a "laundry list" probably will not be sufficient to meet the need. This type of "super-form" may place an additional and probably unrealistic burden on rotation supervisors, in such a high-stakes encounter, to subjectively assess "too much in too little time". In addition, rotations are often designed to meet clinical service needs rather than educational needs, and measurement of service content may not have construct validity. (43-44) Documenting the reliability and validity of these forms is critical from a credentialing, and possibly legal, standpoint if they are to be used in a summative, high-stakes decision like promotion. The decision not to graduate a resident based on the results of these "super-forms", especially if they are collected or completed in a cursory or haphazard fashion, is not likely to be accepted as adequate. Conversely, when attendings have little opportunity for, or documentation of, direct observation of a resident, residents with substantial deficiencies may graduate. It would be difficult to develop a checklist that could reliably assess intangible competencies such as ethical standards or interactions with colleagues and support staff.
One variation on the qualitative review method is known as the 360-degree evaluation. (17) The "360" includes multiple evaluations from multiple individuals (e.g. nurses, allied health professionals, peers, supervisors, patients, patient families) within the sphere of operation of the person being evaluated. This is in distinction to the typical supervisor or preceptor qualitative review method. Usually a survey or questionnaire is used to collect data on performance in areas that are not amenable to written testing (e.g. teamwork, communication, management style, decision-making). Reliability for a "360" may be as high as 0.90. (17) Patient surveys may assist in the evaluation of general qualitative aspects of personal interaction, communications skills, courtesy, interest, and empathy. Reliability may be as high as 0.90 for a properly performed institutional patient survey. These patient surveys may however require multiple responses (20 to 40) to be reliable and are time intensive to collect and report. (17)
Direct observation in an apprenticeship model is the basis for most global evaluations in ophthalmology residency (e.g., rotations). (42) Unless faculty members are willing and able to supervise residents through all stages of patient encounters, it may be difficult to judge a resident’s skills in these areas accurately. Hauer described one method of direct observation, the clinical evaluation exercise, that documents in a standardized manner a single observed student history and examination during a real patient encounter from start to finish. (42) Video (e.g., closed circuit television) or audiotaped review of surgical cases or patient encounters provides the advantages of direct observation without the time constraints and artificial environment created by a supervised exam. Specific skills can be examined and reviewed in a non-judgmental, non-time-pressured setting. The disadvantages include cost, training of personnel, the artificial environment created by recording, equipment needs, and confidentiality issues.
A portfolio can collate and document evidence of resident learning and achievement. (17) Attendance records of resident participation in educational activities can easily be incorporated into the portfolio. Attendance records however do not measure ability, motivation, or participation. The portfolio might include any of the following:
- A written learning plan
- Lists of critical or sentinel patient care events (positive or negative)
- Written and oral test scores
- Records of continuing medical education
- Logs of number and type of procedures, surgical operations, or individual patient encounters
- Documentation of completion of independent learning
- Global ratings results
- Video or audio-tape recordings
- Patient letters of advocacy or commendation
- Examples of ethical dilemmas solved
- Counseling sessions with patients.
Surgical logs are required by many programs and the Residency Review Committee. They may be electronic, and computerized systems can aid in the collection and maintenance of databases. Logs, however, can be inaccurate, incomplete, or even falsified. Surgical logs, like attendance records, only provide quantitative information on "how many procedures" rather than qualitative assessment data on "how well the procedures were done". (46) Regular review of the surgical and medical logs and the learner portfolio can ensure appropriate content, amount, and timing of learning. (46-48) The portfolio can include formative and summative data and is one of the most useful tools for collecting, in a quantitative manner, data on competencies that are difficult to measure through other means, such as practice-based learning and improvement and professionalism.
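A computerized log can carry both kinds of evidence if each entry pairs the procedure count with an optional supervisor rating. The sketch below is a minimal illustration; the field names and 1-5 rating scale are assumptions, not a standard log schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurgicalLogEntry:
    # Field names are illustrative; no standard log schema is implied.
    procedure: str
    role: str                       # e.g., "primary surgeon", "assistant"
    rating: Optional[float] = None  # optional supervisor rating, 1-5

def summarize(entries):
    """Report the quantitative count ("how many procedures") alongside a
    qualitative mean rating ("how well"), where ratings were recorded."""
    counts, rated = {}, {}
    for e in entries:
        counts[e.procedure] = counts.get(e.procedure, 0) + 1
        if e.rating is not None:
            rated.setdefault(e.procedure, []).append(e.rating)
    mean_ratings = {p: sum(r) / len(r) for p, r in rated.items()}
    return counts, mean_ratings
```

Reviewing such a summary each rotation would let a program director see at a glance where case volume is adequate but qualitative data are missing.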
The specific steps in the implementation plan include the following (16-17):
Step 1: Programs will be asked to develop specific curricula and written goals and objectives for teaching the competencies.
Step 2: Programs should develop and use an assortment of evaluation tools ("the tool box") to assess resident competence in the various areas.
Step 3: After using the tools for a period of time (testing phase), programs will need to develop means to document the reliability and validity of the assessment tools. There will be a need to use progressively better tools (and discard poorer tools) and to provide evidence of the use of evaluation data in assessing the competencies.
Step 4: Programs will be required to link educational outcomes with overall program improvement, including assessments that improve aspects of the program that apply to external stakeholders other than the residents (e.g., patient surveys, employers, graduates). (16,17)
Step 1: Blueprint for curriculum development, learning objectives, and organizational goals ("Define what needs to be tested") (33)
The curriculum, objectives, and assessment strategy should be defined at the onset. We have to move away from the traditional curriculum that is based upon "learning what teachers choose to teach" to the new competency-based curriculum based upon "learning what they need to know to take care of patients". (52) The purpose of the process, specific learning objectives and goals, and a timeline have to be documented in an advance blueprint before implementation and "construction" of the tools begins.
This blueprint should include a test matrix of the "best tools" for content and possibly even benchmarks for individual programs to follow. (53) The new curriculum will have to work within the existing time and scheduling constraints. The competencies should be integrated and "piggy-backed" onto the existing structure rather than added as a new curriculum in an already over-crowded and time-limited 36-month schedule. Existing curriculum materials and rotation schedules should be adapted to meet the needs of the competencies rather than creating overlapping and duplicative material. Lee et al. previously described guidelines for designing an optimum resident rotation schedule in ophthalmology, and the competencies need to be incorporated into the rotation schedule design. (43) Lee and Koch surveyed residents at one training program (Baylor College of Medicine), identified clinically important skills rated by residents, and correlated them with the level of training received. (44) There were significant discrepancies among the curriculum requirements, the level of training, and the subjective rating of clinical importance for several items, including managed care and practice management. Thus, curriculum design should take into account the clinical value and prioritization of content, a formal needs assessment, and the requirements of the learners in order to maximize learning within a fixed training period. (46)
A novel solution to the time limit imposed by the 36-month training period is to develop flexibility within the current "rotation-based" format. Although the 36-month training period would remain the same, residents might rotate more or less quickly through the core rotations based upon performance. (54)
There needs to be explicit "buy in" of the concept and defined accountability before proceeding. Participants including both educators and learners need to know the background, historical perspective, driving forces, rationale, and motivation for the reinvention process. There needs to be a clear mandate for change and "buy in" for the concept of reform from the top of the academic leadership hierarchy from deans and chairmen down to individual faculty and learners. Complete acceptance of the blueprint must be established before proceeding. Departments and their faculties must assume "ownership" and accountability for the process on an individual resident level. Teaching and learning should become the responsibility of the individual learner, the faculty, the program director, and the department. Most importantly, the burden should not fall completely on the program director.
Step 2: Implementing the blueprint, developing and using the tools
Don’t reinvent the wheel. Many organizations have already devoted considerable time and effort to the development of non-subspecialty specific curricula that could be modified for use by ophthalmology training programs. There is no need for duplication of effort and many of the competencies are generalizable across disciplines. Many medical schools and some residencies have already adapted some tools (e.g. OSCE) for assessment for medical students and residents. (28,29,38) In addition, program requirements for some of the competencies cross subspecialty areas and can be met on an institutional basis.
Existing curricula should be modified to reflect the competencies. Lee and Koch developed one curriculum for ophthalmology (the Resident Progress Notebook) that was limited to a "laundry list" format. (44) A separate surgical curriculum was based upon more specific learning objectives (the Surgical Resident Progress Notebook). The Association of University Professors of Ophthalmology also developed a broad-based ophthalmology curriculum. These curricula in ophthalmology require modification to meet the specific requirements of the competencies. A clearinghouse should be established to promote sharing of information, expert advice, and collaborative discussion among educators. The ACGME has already suggested "best methods" for evaluation of the competencies and selected specific tools from the toolbox, ranking the desirability of the various testing methods for each competency. (17)
Match the right tool to the right job.
A program matrix should match the individual preferred testing tools to specific competencies. Using one tool for every competency such as a super global rating form, over-reliance upon a single observer like the program director, using a single "high stakes" encounter such as the written qualifying exam, or using unreliable, invalid, or "wrong tools" for the job are not valid means of meeting the mandate.
Using models from other specialties will save time and energy. One sample matrix for meeting the six competencies, plus surgical skill, is below:
- Patient care: OSCE and patient surveys
- Medical knowledge: written and oral exams
- Practice-based learning: record review, chart audit, portfolios
- Interpersonal skills: OSCE, direct observation, patient surveys
- Professionalism: OSCE and 360-degree global ratings
- Systems-based practice: 360-degree global ratings
- Surgery: surgical OSCE, video review, and portfolio
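A matrix like this can be encoded and audited mechanically, so a program can verify that every competency has at least one assigned tool and that no single tool is being leaned on for everything. The encoding below is a hypothetical sketch of the sample matrix above, not an ACGME-prescribed structure.

```python
# Hypothetical encoding of the sample assessment matrix; the tool and
# competency names mirror the sample list in the text.
ASSESSMENT_MATRIX = {
    "Patient care": ["OSCE", "patient surveys"],
    "Medical knowledge": ["written exam", "oral exam"],
    "Practice-based learning": ["record review", "chart audit", "portfolio"],
    "Interpersonal skills": ["OSCE", "direct observation", "patient surveys"],
    "Professionalism": ["OSCE", "360-degree global ratings"],
    "Systems-based practice": ["360-degree global ratings"],
    "Surgery": ["surgical OSCE", "video review", "portfolio"],
}

def audit_matrix(matrix):
    """Flag competencies with no assigned tool, and any tool assigned to
    every competency (the over-reliance the text warns against)."""
    uncovered = [c for c, tools in matrix.items() if not tools]
    all_tools = set().union(*matrix.values())
    overused = [t for t in sorted(all_tools)
                if all(t in tools for tools in matrix.values())]
    return uncovered, overused
```

Running the audit on the sample matrix should find no gaps and no universally applied tool, which is exactly the balance the mandate asks programs to document.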
Use high tech tools ("enabling technologies").
The use of computer technology to assist in the delivery, documentation, and analysis of information will save time and money. (48-51) Ophthalmology programs are already having difficulty meeting the challenges of an expanding curriculum within a fixed 36-month training period. The body of knowledge in medical and surgical ophthalmology continues to increase, but the training period remains 36 months and the lecture time available within it is precious. The use of technologies such as the internet, which residents can access at any time or on their own time (weekends, nights), could free programs of the burden of forcing more information into an overcrowded weekly lecture schedule. One such site (www.gme-core.org) provides excellent Web-based educational modules on the GME core curriculum.
Educate the educators (Develop a core teaching faculty).
Subspecialty faculty members are currently well trained in the "science of medicine" but not the "science of education", including testing and evaluation. Training focused on education will be essential for the residency directors and faculty members who will be expected to implement the competencies. These individuals need to organize, meet, share information, and "share the tools". There will be a need to train faculty in how to teach ("teach the teacher") and how to evaluate ("evaluate the evaluators"). McAvoy et al. described the training program for assessors used in the United Kingdom by the General Medical Council. (47) The program includes learning the basic principles and tools of assessment, supervised practice in assessment methods during a peer review visit, use of actors simulating third-party interviewees and trained doctors role-playing the assessee, and specific training in report writing and legal issues. Developing and implementing the competencies for residents implies a parallel process of developing faculty competencies in testing and assessment, and faculty have to be taught how to perform the testing.
Dissemination and sharing of information is critical and stimulates collaboration among groups working to solve similar problems. (55-60) Although individual program needs will require modification of the specific tools in the toolbox, it will be important that tools that work be shared among all program directors. A cadre of dedicated educators will be necessary to carry the process over from year to year. The turnover rate for program directors, including in ophthalmology, is quite high, with a relatively short tenure of only 2-3 years. The typical program director is often the most junior member of a faculty, usually begins their tenure without formal training in education or evaluation, is often uncompensated for the extra time commitment, is being asked to do "more and more with less and less", and is often given "requirements without resources". These trends will have to be reversed if the competency effort is to succeed.
Step 3: Testing the tools
Keep the tools sharp and working.
Modification, testing of reliability and validity, and improvement in the tools themselves will be an ongoing process. Part of the ACGME mandate is not just the development of tools but proof that they actually work by linking results with evidence that they improve the educational process. (16,17) We will, in a sense, need tools to measure the efficacy of the tools. The tools will have to be evaluated for reliability, validity, and feasibility.
Step 4: Linking the tools to improvements in the educational process
Development of new tools and discarding old tools
The tools that do or do not work should be evaluated in the context of the original blueprint. The traditional model has relied too heavily on two tools with significant disadvantages: the written multiple choice exam and the qualitative global preceptor review. These are familiar but insufficient tools, and new tools will have to be developed. In the new model, "paper and pencil" testing and evaluation forms will have to give way to innovative assessment methods that infer competence by more closely resembling "real world" patient encounters. These new tools should replace outdated, inefficient, or unwieldy tools. The cycle of assessment and measurement, with re-assessment and re-measurement, should be repeated over time, leading to a self-sustaining process of improvement. (61-62)
- O’Neil EH and the Pew Health Professions Commission. Recreating Health Professional Practice for a New Century. The fourth report of the Pew Professions Commission. San Francisco , CA : Pew Health Professions Commission, 1998.
- Jacobs MO, Mott PD. Physician characteristics and training emphasis considered desirable by leaders of HMOs. J Med Educ 1987;62:725-731.
- Leach D. The ACGME competencies. Substance or form? J Am Coll Surg 2001;192:396-398.
- Council on Graduate Medical Education. Summary of Resource Paper. Preparing Learners for Practice in a Managed Care Environment. Rockville , MD. Health Resources and Services Administration, 1997. (www.cogme.gov/resource.htm):(Last viewed 6/5/02 )
- American Academy of Family Physicians. Recommended Curriculum Guidelines for Family Practice Residents. Care Management. Leawood , KS , AAFP, 1996. http://www.aafp.org/edu/guide (Last viewed 6/5/02)
- Association of American Medical Colleges (AAMC). Contemporary Issues in Medicine-Medical Infomatics and Population Health: Report II of the Medical School Objectives Project. Acad Med 1999;74:130-141.
- Federated Council for Internal Medicine. Task Force on the Internal Medicine Residency Curriculum. Graduate Medical Education in Internal Medicine. A resource guide to curriculum development. 1997, Philadelphia , PA. FCIM, 1997.
- Informatics Panel and the Population Health Perspective Panel. Contemporary issues in medicine. Medical informatics and population health: Report II of the medical school objectives project. Acad Med 1999;74:130-141.
- Klessig JM, Wolfsthal SD , Levine MA, et al. A pilot survey study to define quality in residency education. Acad Med 2000;75:71-73.
- Parboosingh J. Credentialing physicians: Challenges for continuing medical education. J Cont Med Educat Health Profess 2000;20:188-190.
- Reisdorff EJ, Hayes, OW, Carlson DJ, Walker GL. Assessing the new general competencies for resident education. A model from an emergency medicine program. Academic Med 2001;76:753-757.
- Trunkey D, Botney R. Assessing competency: A tale of two professions. J Am Coll Surg 2001;192:385-395.
- Undergraduate Medical Education for the 21st Century. A national demonstration of curriculum innovations to keep pace with a changing health care environment. Health Resources and Services Administration and American Association of Colleges of Osteopathic Medicine. www.aaocom.org
- Yedidia MJ, Gillespie CC, Moore GT. Specific clinical competencies for managing care. JAMA 2000;284:1093-1098.
- Halpern R, Lee MY, Boulter PR, Phillips RR. A synthesis of nine major reports on physicians’ competencies for the emerging practice environment. Acad Med 2001;76:606-615.
- Accreditation Council for Graduate Medical Education (ACGME). General competencies. www.acgme.org, 2002. (Last viewed 6/5/02)
- Accreditation Council for Graduate Medical Education (ACGME). Outcome project: Toolbox of assessment methods. A product of the joint initiative of the ACGME outcome project of the ACGME and ABMS. www.acgme.org, Version 1.1, 2000. (Last viewed 6/5/02)
- Reinhardt UE. Academic medicine’s financial accountability and responsibility. JAMA 2000;284:1136-1138.
- Newble DI, Swanson DB. Psychometric characteristics of the objective structured clinical examination. Med Educ 1988;22:325-334.
- Newble DI, Hoare J, Elmslie RG. The validity and reliability of a new examination of the clinical competence of medical students. Med Educ 1981;15:46-52.
- Scott DJ, Valentine RJ, Bergen PC, et al. Evaluating surgical competence with the American Board of Surgery In-Training Examination, skill testing, and intraoperative assessment. Surgery 2000;128:613-622.
- Vu NV, Marcy MM, Colliver JA, et al. Standardized (simulated) patients’ accuracy in recording clinical performance check-list items. Med Educ 1992;26:99-104.
- Satish U, Streufert S, Marshall R, et al. Strategic management simulations is a novel way to measure resident competencies. Am J Surg 2001;181:557-561.
- Colliver JA, Vu NV, Markwell SJ, et al. Reliability and efficiency of components of clinical competence assessed: five performance-based examinations using standardized patients. Med Educ 1991;25:303-310.
- Stillman P, Swanson D. Ensuring the clinical competence of medical school graduates through standardized patients. Arch Intern Med 1987;147:1049-1052.
- Tamblyn RM, Klass DJ, Schnabl GK, Kopelow ML. The accuracy of standardized patient presentation. Med Educ 1991;25:100-109.
- Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J 1975;1:447-451.
- Jain SS, DeLisa JA, Campagnolo DI. Methods used in the evaluation of clinical competency of physical medicine and rehabilitation residents. Am J Phys Med Rehabil 1994;73:234-239.
- Jain SS, Nadler S, Eyles M, et al. Development of an objective structured clinical examination (OSCE) for physical medicine and rehabilitation residents. Am J Phys Med Rehabil 1997;76:102-106.
- Matsell DG, Hus NMW. Reliability and validity of the objective structured clinical examination in paediatrics. Med Educ 1991;25:293-299.
- Petrusa ER, Blackwell TA, Ainsworth MA. Reliability and validity of an objective structured clinical examination for assessing the clinical performance of residents. Arch Intern Med 1990;150:573-577.
- Winkel CP, Reznick RK, Cohen R, Taylor B. Reliability and construct validity of a structured technical skills assessment form. Am J Surg 1994;167:423-427.
- Newble D, Dawson B, Dauphinee D, et al. Guidelines for assessing clinical competence. Teach Learn Med 1994;3:213-220.
- Schmidt HG, van der Molen HT. Self-reported competency ratings of graduates of a problem-based curriculum. Acad Med 2001;76:466-468.
- Gray J. Global rating scales in residency education. Acad Med 1996;71:S55-63.
- Headrick LA, Neuhauser D, Schwab P, et al. Continuous quality improvement and the education of the generalist physician. Acad Med 1995;70:S104-109.
- Matthews DA, Feinstein AR. A new instrument for patients’ ratings of physician performance in the hospital setting. J Gen Intern Med 1989;4:14-22.
- Mavis BE, Cole BL, Hoppe RB. A survey of student assessment in U.S. medical schools: The balance of breadth versus fidelity. Teach Learn Med 2001;13:74-79.
- Noel G, Herbers JE, Caplow M, et al. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med 1992;117:757-765.
- Stritter FT, Bland CJ, Youngblood PL. Determining essential faculty competencies. Teach Learn Med 1991;3:232-238.
- Quattlebaum TG. Techniques for evaluating residents and residency programs. Pediatrics 1996;98:1277-1283.
- Hauer KE. Enhancing feedback to students using the mini-CEX. Acad Med 2000;75:524.
- Lee AG, Pelton RW, Koch DD. Designing an optimum rotation schedule for an ophthalmology residency. Acad Med 1997;72:564.
- Lee AG, Koch DD. The Baylor Survey of ophthalmology residency graduates: Rationale for reform in residency training. Ophthalm Surg Lasers 1998;29:649-657.
- Adler LM, Ware JE, Enelow AJ. Changes in medical interviewing style after instruction with two closed circuit television techniques. J Med Educ 1970;45:21-28.
- Lee AG, Koch DD. A standardized curriculum and evaluation system for an ophthalmology residency. Acad Med 1998;73:447-448.
- McAvoy PA, McCrorie P, Jolly B, et al. Training the assessors for the General Medical Council’s performance procedures. Med Educ 2001;35(Suppl 1):29-35.
- Folberg R, Antonioli DA, Alexander CB. Competency-based residency training in pathology. Challenges and opportunities. Hum Pathol 2002;33:3-6.
- Gilbert S, Davidson JS. Using the world-wide web to obtain feedback on the quality of surgical residency training. Am J Surg 2000;179:74-75.
- Steckler A, Farel A, Bontempi JB, et al. Can health professionals learn qualitative evaluation methods on the World Wide Web? A case example. Health Educ Research 2001;16:735-745.
- Civetta JM, Morejon OV, Krton OC, et al. Beyond requirements. Residency management through the internet. Arch Surg 2001;136:412-417.
- Yip H-K, Smales RJ, Newsome PRH, et al. Competency-based education in a clinical course in conservative dentistry. Br Dental J 2001;191:517-522.
- Lee AG. Benchmarks for ranking and improving ophthalmology residency programs. Arch Ophthalmol 2001;119:1395-6.
- Long DM. Competency-based residency training: the next advance in graduate medical education. Acad Med 2000;75:1178-1183.
- Murray E, Gruppen L, Catton P, et al. The accountability of clinical education: its definition and assessment. Med Educ 2000;34:871-879.
- Mulvey HJ, Ogle-Jewett EA, Cheng TL, Johnson RL. Future of Pediatric Education II (FOPE II) Project, American Academy of Pediatrics. Pediatric Residency Education. Pediatrics 2000;106:323-329.
- Cantor JC, Baker LC, Hughes RG. Preparedness for practice. Young physicians views of their professional education. JAMA 1993;270:1035-1040.
- Ilott I, Bunch G. Competencies of basic surgical trainees. Ann Roy Coll Surg Engl 1998;80:14-16.
- Martin M, Vashisht B, Frezza F, et al. Competency-based instruction in critical invasive skills improves both resident performance and patient safety. Surgery 1998;124:313-317.
- Fowell SL, Southgate LJ, Blight JG. Evaluating assessment: the missing link? Med Educ 1999;33:276-281.
- Waddell DL. Measurement issues in promoting continued competence. J Cont Educ Nurs 2001;32:102-138.
- Lee AG. New mandate in resident education: reply. Ophthalmology 2005;112:1324.
- Golnik KC, Lee AG, Carter K. Assessment of ophthalmology resident on-call performance. Ophthalmology 2005;112:1242-1246.
Method of literature search
A Medline (PubMed) literature search was performed using the search terms "competency-based education" and "competencies" (2296 articles). The search was limited to the years 1996-2002 (1096 articles) and further limited to "human", "abstract", and "English-language" articles (774 articles). Letters to the editor were included only if they added significant new information. Pre-1996 articles were included if they provided historical context or other valuable information. Abstracts were reviewed, and selected papers (62 articles) were obtained from the initial literature search based upon their applicability to residency education in ophthalmology. Web sites and non-Medline citations were obtained from selected bibliographies.
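A search strategy like the one above can be expressed as a single PubMed query string. The sketch below (in Python) is an illustrative helper, not part of the original methodology: the function name is hypothetical, and the limit clauses shown ("humans[MeSH Terms]", "hasabstract", "english[Language]") are only approximations of the "human", "abstract", and "English-language" restrictions described in the text; exact spellings should be checked against PubMed's own documentation.

```python
def build_pubmed_query(terms, first_year=None, last_year=None, limits=()):
    """Compose a PubMed search expression from phrase terms, a
    publication-year range, and additional limit clauses."""
    # OR the quoted phrase terms together, e.g. "a" OR "b"
    term_clause = " OR ".join(f'"{t}"' for t in terms)
    parts = [f"({term_clause})"]
    if first_year is not None and last_year is not None:
        # [PDAT] is PubMed's publication-date field tag
        parts.append(f'("{first_year}"[PDAT] : "{last_year}"[PDAT])')
    # Append any further limits verbatim (assumed filter spellings)
    parts.extend(limits)
    return " AND ".join(parts)


query = build_pubmed_query(
    ["competency-based education", "competencies"],
    first_year=1996,
    last_year=2002,
    limits=("humans[MeSH Terms]", "hasabstract", "english[Language]"),
)
print(query)
# ("competency-based education" OR "competencies") AND
# ("1996"[PDAT] : "2002"[PDAT]) AND humans[MeSH Terms] AND
# hasabstract AND english[Language]
```

Building the full expression up front makes the combined term-and-limit strategy reproducible and easy to paste into the PubMed search box or an E-utilities request.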