Improving Healthcare Quality in Europe Characteristics, Effectiveness and Implementation of Different Strategies


Book Description

This volume, developed by the Observatory together with the OECD, provides an overall conceptual framework for understanding and applying strategies aimed at improving quality of care. Crucially, it summarizes the available evidence on different quality strategies and provides recommendations for their implementation. The book is intended to help policy-makers understand concepts of quality and to support them in evaluating single strategies and combinations of strategies.




Higher Education Personnel System


Book Description







Government Auditing Standards - 2018 Revision


Book Description

Audits provide essential accountability and transparency over government programs. Given the current challenges facing governments and their programs, the oversight provided through auditing is more critical than ever. Government auditing provides the objective analysis and information needed to make the decisions necessary to help create a better future. The professional standards presented in this 2018 revision of Government Auditing Standards (known as the Yellow Book) provide a framework for performing high-quality audit work with competence, integrity, objectivity, and independence to provide accountability and to help improve government operations and services. These standards, commonly referred to as generally accepted government auditing standards (GAGAS), provide the foundation for government auditors to lead by example in the areas of independence, transparency, accountability, and quality through the audit process. This revision contains major changes from, and supersedes, the 2011 revision.




Assessing Organizational Performance in Higher Education


Book Description

The book provides a full complement of assessment technologies that enable leaders to measure and evaluate performance using qualitative and quantitative performance indicators and reference points in each of seven areas of organizational performance. While these technologies are not new, applying them in a comprehensive assessment of the performance of both academic and administrative organizations in higher education is a true innovation. Assessing Organizational Performance in Higher Education defines four types of assessment user groups, each of which has a unique interest in organizational performance. This offers a new perspective on who uses performance results and why they use them. The variety of these groups underscores that assessment results must be tailored to the needs of specific groups; “one-size-fits-all” does not apply in assessment. An assessment process must be robust and capable of delivering the right information at the right time to the right user group.







Ensuring Quality and Productivity in Higher Education


Book Description

A detailed review of the quality assurance and productivity oversight processes applied today by agencies tasked with assessing and evaluating education and professional development activities, this book identifies what is working well and what could be improved. Drawing on the results of a RAND research study, the authors present four successful approaches, key factors to consider, and critical lessons learned about the assessment process. Using documentation from organizations engaged in assessment, interviews with experts, conferences, and site visits, the authors examine both the main task of assessment, focusing on the quality and productivity of specific providers, and the overall purpose of such studies, providing a higher-level assessment of the system as a whole. They analyze these two main purposes of assessment as they affect stakeholder and system-level needs and create opportunities for program-wide improvements. The book also discusses the emerging trend of corporate learning organizations and demonstrates how such organizations have become indispensable tools for promoting communication among stakeholders and developing strong links between professional development programs and the system's basic mission. The authors analyze key similarities and differences among the approaches studied and present four basic models of assessment and evaluation. Each model's strengths and applicable characteristics are classified, along with the six factors most crucial to consider when deciding which model might serve a system best. Three key steps in the assessment process, regardless of the model selected or the system assessed, are detailed together with lessons learned in the field about their successful application.
Finally, for providers of professional development courses facing the challenge of a lack of preexisting evaluation tools, guidelines for developing measures of learning outcomes are presented with their specific needs in mind.







Measuring Up


Book Description

In 2011, non-instructional employees comprised approximately 60% of the workforce at four-year, post-secondary institutions in the United States, according to the U.S. Department of Education (2011). While the performance of instructional staff at post-secondary institutions has been the subject of much empirical study, little is known about the performance measures used with non-instructional staff. This quantitative case study of one public higher education institution’s performance management process fills a critical void by describing the staff workplace culture of that institution through its performance management practices. The study evaluated the staff performance appraisal, a management tool that is typically a corporate process adapted for higher education. Such management tools and corporate terminology, such as customer service, have increasingly been incorporated into higher education culture, and little is understood about their effects on this environment (Birnbaum, 2000; Szekeres, 2006). The study used employee performance appraisal and demographic data for 2,401 non-instructional staff at a large, urban research institution located in the Southwestern United States. The staff performance appraisal was divided into four components: (a) job goals; (b) job responsibilities; (c) customer focus; and (d) competencies (Human Resources, n.d.a). The study addressed three research questions: (1) Within a university setting, how are employee competencies valued by job title within colleges and divisions? (2) How are competencies of individual university staff valued in comparison with job responsibilities, manager responsibilities, job goals and customer service? (3) How is university staff customer service valued in higher education, and are there individual and college/division differences in customer service?
Multiple correspondence analysis was used to answer research question 1. Among non-manager employees (N = 1,836), the first dimension accounted for 65.11% of adjusted inertia (explained variance), while the second dimension accounted for 23.89%. For manager employees (N = 565), the first dimension accounted for 86.57% of adjusted inertia and the second for 8.26%. Symmetric plots illustrated similarities and differences across departments in the competencies valued at this institution and identified competencies that were outliers or could be considered for elimination.

Principal components analysis was used to answer research question 2. For non-manager employees, one factor had an eigenvalue greater than 1.00, accounting for 75.74% of the total variance, and all loadings were greater than .800. For managerial employees, one factor had an eigenvalue greater than 1.00, accounting for 74.17% of the total variance, and all loadings were greater than .731.

To answer research question 3, a multiple linear regression was conducted to identify variables that predicted an employee’s customer focus score. The prediction model was statistically significant for non-supervisory employees (N = 1,836), F(16, 1826) = 24.27, p
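The factor-retention logic the study describes for its principal components analysis (keeping only factors with eigenvalues greater than 1.00, the Kaiser criterion) can be sketched on synthetic data. The appraisal scores below are entirely illustrative, not the study's data; they simulate four correlated appraisal components of the kind the blurb lists (job goals, job responsibilities, customer focus, competencies):

```python
import numpy as np

# Hypothetical appraisal scores: rows = employees, columns = the four
# appraisal components. A shared "overall performance" signal makes the
# columns correlate, mimicking the single dominant factor the study found.
rng = np.random.default_rng(0)
overall = rng.normal(3.5, 0.5, size=(200, 1))        # shared performance level
scores = overall + rng.normal(0, 0.2, size=(200, 4)) # component-specific noise

# Principal components analysis via the correlation matrix: standardize,
# then take eigenvalues (eigvalsh returns ascending order, so reverse).
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]

# Kaiser criterion: retain factors with eigenvalues greater than 1.00.
retained = eigenvalues[eigenvalues > 1.0]
explained = retained.sum() / eigenvalues.sum()
print("eigenvalues:", np.round(eigenvalues, 2))
print(f"factors retained: {len(retained)}, variance explained: {explained:.1%}")
```

Because one latent signal drives all four columns, a single component passes the Kaiser cutoff and explains most of the variance, which is the pattern reported in the study (one retained factor explaining roughly 74–76% of total variance).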