Thursday, December 9, 2010

Congratulations!

Several OPE research associates passed their comprehensive exams last month and will begin their dissertation research. Min Zhu's topic is the impact of rater effects on inter-rater reliability estimates, and Grant Morgan is studying the performance of latent class cluster models in real-world conditions.

Monday, December 6, 2010

Leigh's AEA Reflections

The AEA conference offered an opportunity to see a new city (I was born only 250 miles away in Dallas but had never been to San Antonio), reunite with colleagues from across the country, spend quality time with my officemates, and hear about a variety of exciting evaluation techniques. Although I was only in San Antonio for two days, I made the most of the time.

I attended one session featuring Michael Quinn Patton, who discussed a systems/strategy approach to evaluation based on the work of Henry Mintzberg, author of Tracking Strategies: Toward a General Theory of Strategy Formation. Rather than evaluating programs or activities, this is a method for examining strategies and ideas. One example highlighted evaluating end-of-life care for terminally ill patients: the evaluators used a historical approach to review the type of care received in an effort to determine the organization's systems approach. The upcoming issue of New Directions for Evaluation will highlight this type of evaluation.

In another session, we learned about attempts in New Zealand to develop a set of evaluator competencies. The competencies center on cultural awareness and respect, as emphasized in the Treaty of Waitangi (I learned some history, too). Two discussants furthered the conversation by highlighting issues with centering evaluations on cultural competencies (the potential of becoming one-dimensional or merely including rhetoric about inclusion) and the importance of usable evaluation reports that provide recommendations for improvement.

Min, Tara, Ching Ching, and I also met unexpectedly at a quite entertaining session on evaluating higher education coursework. The presenter highlighted real, often unflattering, evaluation attempts and improvement efforts from within his institution (if you give too many As, give fewer As). Grant and I presented during the final session of the conference on Saturday afternoon. While we had a small group, we had a spirited discussion, and we have even had requests for more information about our work. An enjoyable two days!

Grant's AEA Experience

At AEA’s Evaluation 2010 conference in San Antonio, TX, I presented or co-presented three studies: 1) Coding Open-Ended Survey Items: A Discussion of Codebook Development and Coding Procedures, 2) Estimating Rater Consistency: Which Method Is Appropriate?, and 3) Understanding Student Mastery of Higher Education Curriculum Standards. Each presentation prompted a series of thought-provoking questions that resulted in rich dialogue between the audience and me (and my co-presenters, where applicable).

In addition, I attended sessions on the use of multiple comparisons, Rasch modeling, and latent class analysis. The multiple comparison session was presented by Roger Kirk, author of Experimental Design: Procedures for the Behavioral Sciences (http://www.amazon.com/Experimental-Design-Procedures-Behavioral-Psychology/dp/0534250920/ref=sr_1_6?s=books&ie=UTF8&qid=1291062847&sr=1-6), and the Rasch session was presented by Christine Fox, co-author of Applying the Rasch Model: Fundamental Measurement in the Human Sciences (http://www.amazon.com/Applying-Rasch-Model-Fundamental-Measurement/dp/0805854622/ref=sr_1_1?s=books&ie=UTF8&qid=1291063163&sr=1-1). I have used both texts and was therefore pleased to have the opportunity to meet them. My interactions with other audience members following these sessions generated ideas for future studies of reliability estimation and latent class clustering. In fact, Dr. Robert Johnson, Min Zhu, and I are currently preparing a manuscript based on the rater reliability study presented at AEA. I was also able to touch base with my editor to give her a brief update on the progress of the book I am currently co-authoring. Overall, I feel the Evaluation 2010 conference was a success.

Friday, December 3, 2010

Posters!

Several posters now hang in the hallway outside the OPE office here on the Garden Level of Wardlaw College.  Come down and take a look!


Thursday, December 2, 2010

Ashlee's AEA Impressions

I always find AEA to be a conference that both nourishes and energizes me professionally. This year, I kicked off my conference experience in San Antonio with a one-day professional development workshop on Utilization-Focused Evaluation with the revered evaluation thinker Michael Quinn Patton. Over the course of the day, he provided much direction on designing evaluations whose results get used rather than producing reports that “gather dust on a shelf.” In the workshop, Dr. Patton provided convincing support for the utilization practices he suggests by giving examples from his own evaluation work. I left the workshop with a developing utilization toolbox and with the feeling that such evaluations “can be done” with thoughtful planning and diligent work.

During the conference, I had the fabulous opportunity to engage in conversations with Dr. Rodney Hopson and Dr. Henry Frierson, two leaders in the field of culturally responsive evaluation. Each of them gave me many things to ponder, and they also provided highly valuable advice for developing a career in evaluation. I also attended many sessions aligned with my interests in qualitative methodologies, many of which addressed the conference theme of quality as it pertained to qualitative methods. I was thrilled with the methods of data visualization used to demonstrate group collaboration over time in a presentation by a group of design students. Their work revealed ways in which data might be visualized more creatively to communicate evaluation findings to clients more effectively and even to generate better discussion of those findings.

Finally, Min Zhu and I presented our study of how providing rubrics to teachers affects performance assessment scores. We received positive feedback about our work from the audience, as well as some thoughtful questions and suggestions that provided direction for further examining the relationships among rubric use, instruction, and student performance assessment scores.