Performance Evaluation: Metrics, Models and Benchmarks


Book Description

This book constitutes the refereed proceedings of the SPEC International Performance Evaluation Workshop, SIPEW 2008, held in Darmstadt, Germany, in June 2008. The 17 revised full papers presented together with 3 keynote talks were carefully reviewed and selected from 39 submissions for inclusion in the book. The papers are organized in topical sections on models for software performance engineering; benchmarks and workload characterization; Web services and service-oriented architectures; power and performance; and profiling, monitoring and optimization.





Performance Evaluation and Benchmarking


Book Description

Computer and microprocessor architectures are advancing at an astounding pace. However, increasing demands on performance, coupled with a wide variety of specialized operating environments, complicate the performance evaluation process and threaten to slow this pace. Carefully balancing efficiency and accuracy is key to avoiding such slowdowns, and that balance requires an in-depth understanding of the available evaluation methodologies. Performance Evaluation and Benchmarking outlines a variety of evaluation methods and benchmark suites, considering their strengths and weaknesses and when each is appropriate to use. Following a general overview of important performance analysis techniques, the book surveys contemporary benchmark suites for specific areas, such as Java, embedded systems, CPUs, and Web servers. Subsequent chapters explain how to choose appropriate averages for reporting metrics and provide a detailed treatment of statistical methods, including a summary of statistics, how to apply statistical sampling for simulation, how to apply SimPoint, and a comprehensive overview of statistical simulation. The discussion then turns to benchmark subsetting methodologies and the fundamentals of analytical modeling, including queuing models and Petri nets. Three chapters devoted to hardware performance counters conclude the book. Supplying abundant illustrations, examples, and case studies, Performance Evaluation and Benchmarking offers a firm foundation in evaluation methods along with the up-to-date techniques needed to develop next-generation architectures.
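
As a small taste of the averaging question the book addresses, the following Python sketch shows why the geometric mean, rather than the arithmetic mean, is the standard average for normalized metrics such as speedups; the benchmark names and figures here are invented for illustration, not taken from the book.

    # Why normalized ratios (speedups) should be averaged with the
    # geometric mean: the arithmetic mean of ratios depends on which
    # machine is the baseline, while the geometric mean does not.
    import math

    # Hypothetical speedups of machine A over machine B on three benchmarks.
    speedups = {"bench1": 2.0, "bench2": 0.5, "bench3": 1.0}

    ratios = list(speedups.values())
    arithmetic = sum(ratios) / len(ratios)
    geometric = math.prod(ratios) ** (1 / len(ratios))

    # The arithmetic mean (about 1.17) suggests A is faster overall, yet
    # inverting every ratio (B over A) yields the same arithmetic mean,
    # a contradiction.  The geometric mean is consistent: 1.0 both ways.
    print(f"arithmetic mean: {arithmetic:.2f}")
    print(f"geometric mean:  {geometric:.2f}")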







Quantitative Models for Performance Evaluation and Benchmarking


Book Description

The author is one of the prominent researchers in the field of Data Envelopment Analysis (DEA), a powerful data analysis tool for performance evaluation and benchmarking, and this book is based on his years of research and teaching experience. Evaluating an organization's performance is difficult when multiple performance metrics are present, and the difficulty is compounded when the relationships among the metrics are complex and involve unknown tradeoffs. This book introduces DEA as a multiple-measure performance evaluation and benchmarking tool, shifting the focus from characterizing performance in terms of single measures to evaluating performance from a multidimensional systems perspective. Conventional and new DEA approaches are presented and discussed using Excel spreadsheets, one of the most effective ways to analyze and evaluate decision alternatives, and the user can easily develop and customize new DEA models based upon these spreadsheets. DEA models and approaches are presented to deal with performance evaluation problems in a variety of contexts. For example, context-dependent DEA measures the relative attractiveness of similar operations, processes, and products; sensitivity analysis techniques can be applied to identify critical performance measures; two-stage network efficiency models can be used to study the performance of supply chains; DEA benchmarking models extend DEA's ability in performance evaluation; and various cross-efficiency approaches provide peer evaluation scores. The book also provides easy-to-use DEA software, DEAFrontier, an Add-In for Microsoft® Excel that provides a custom menu of DEA approaches. This version of DEAFrontier is for use with Excel 97-2013 under Windows and can solve up to 50 DMUs, subject to the capacity of Excel Solver. It is an extremely powerful tool that can assist decision-makers in benchmarking and analyzing complex operational performance issues in manufacturing organizations, as well as in evaluating processes in banking, retail, franchising, health care, public services, and many other industries.
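
To make the kind of model such spreadsheets implement concrete, here is a minimal Python sketch of the classic input-oriented CCR envelopment model, solved as a linear program with SciPy instead of Excel Solver. The DMU data, function names, and figures are illustrative assumptions, not material from the book or from DEAFrontier.

    # Input-oriented CCR DEA: for each DMU o, find the smallest theta such
    # that a convex combination of all DMUs uses at most theta times DMU o's
    # inputs while producing at least DMU o's outputs.
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 5 DMUs, 2 inputs (rows of X), 1 output (row of Y).
    X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],    # input 1 per DMU
                  [3.0, 3.0, 1.0, 2.0, 4.0]])   # input 2 per DMU
    Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])   # single output per DMU

    n_dmus = X.shape[1]

    def ccr_efficiency(o):
        """Input-oriented CCR efficiency score of DMU o (0 < theta <= 1)."""
        # Decision variables: [theta, lambda_1, ..., lambda_n].
        c = np.zeros(n_dmus + 1)
        c[0] = 1.0                               # minimize theta
        # Inputs: sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        b_in = np.zeros(X.shape[0])
        # Outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
        b_out = -Y[:, o]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(None, None)] + [(0, None)] * n_dmus,
                      method="highs")
        return res.fun

    for o in range(n_dmus):
        print(f"DMU {o + 1}: efficiency = {ccr_efficiency(o):.3f}")

A score of 1.0 marks a DMU on the efficient frontier; scores below 1.0 indicate how much all inputs could be proportionally reduced while still producing the same outputs.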




Measuring Performance and Benchmarking Project Management at the Department of Energy


Book Description

In 1997, Congress, in the conference report H.R. 105-271 to the FY1998 Energy and Water Development Appropriation Bill, directed the National Research Council (NRC) to carry out a series of assessments of project management at the Department of Energy (DOE). The final report in that series noted that DOE lacked an objective set of measures for assessing project management quality. The department set up a committee to develop performance measures and benchmarking procedures and asked the NRC for assistance in this effort. This report presents information and guidance for use as a first step toward development of a viable methodology to suit DOE's needs. It provides a number of possible performance measures, an analysis of the benchmarking process, and a description of ways to implement the measures and benchmarking process.




Measuring Computer Performance


Book Description

Sets out the fundamental techniques used in analyzing and understanding the performance of computer systems.







Computer Architecture Performance Evaluation Methods


Book Description

The goal of this book is to present an overview of the current state of the art in computer architecture performance evaluation. The book covers various aspects of performance evaluation, ranging from performance metrics, to workload selection, to various modeling approaches such as analytical modeling and simulation. Because simulation is by far the most prevalent modeling technique in computer architecture evaluation, the book devotes more than half its content to simulation, first surveying the various simulation techniques in the computer designer's toolbox and then covering simulation acceleration techniques such as sampled simulation, statistical simulation, and parallel and hardware-accelerated simulation. The evaluation methods described in this book focus primarily on performance. Although performance remains a key design target, it is no longer the sole design target: power consumption and reliability have quickly become primary design concerns, and today they are probably as important as performance, while other important design constraints relate to cost, thermal issues, yield, etc. The book's focus on performance evaluation methods alone does not compromise the importance or general applicability of the techniques described, because power and reliability models are typically integrated into existing performance models, and these integrated models pose challenges similar to the ones handled in this book. The book also concentrates on fundamental concepts and ideas, and does not provide much quantitative data. Although quantitative data is crucial to performance evaluation, it is not needed to understand the fundamentals of performance evaluation methods. Moreover, quantitative data from different sources may be hard to compare, and may even be misleading, because the contexts in which the results were obtained may be very different; a comparison based on these numbers alone can therefore be unfair.
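
To illustrate the statistical sampling idea behind the sampled simulation the book covers, the following Python sketch estimates a program's overall CPI from a random sample of execution intervals and attaches a confidence interval to the estimate. The per-interval CPI values are synthetic stand-ins for what detailed simulation would produce, and all names are illustrative.

    # Sampled simulation in miniature: instead of simulating every interval
    # in detail, simulate a random sample and bound the estimation error.
    import random
    import statistics

    random.seed(42)

    # Hypothetical per-interval CPI values for a long program execution;
    # in practice these would come from detailed cycle-level simulation.
    population = [random.gauss(1.5, 0.3) for _ in range(100_000)]

    def sampled_cpi(sample_size):
        sample = random.sample(population, sample_size)
        mean = statistics.fmean(sample)
        stdev = statistics.stdev(sample)
        # 95% confidence half-interval under the normal approximation.
        half = 1.96 * stdev / sample_size ** 0.5
        return mean, half

    true_cpi = statistics.fmean(population)
    for n in (50, 500, 5000):
        mean, half = sampled_cpi(n)
        print(f"n={n:5d}: CPI = {mean:.3f} +/- {half:.3f} (true {true_cpi:.3f})")

The confidence interval shrinks with the square root of the sample size, which is precisely the tradeoff between simulation time and accuracy that sampled simulation exploits.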




Performance Benchmarking of Application Monitoring Frameworks


Book Description

Application-level monitoring of continuously operating software systems provides insights into their dynamic behavior, helping to maintain their performance and availability at runtime. Such monitoring can impose significant runtime overhead on the monitored system, depending on the number and location of the instrumentation probes used. In order to improve a system's instrumentation and to reduce the monitoring overhead it causes, it is necessary to know the performance impact of each probe. While many monitoring frameworks claim to have minimal impact on performance, these claims are often not backed up by a detailed performance evaluation determining the actual cost of monitoring. Benchmarks can serve as an effective and affordable way to perform these evaluations. However, no benchmark specifically targeting the overhead of monitoring itself exists, and no established benchmark engineering methodology provides guidelines for the design, execution, and analysis of benchmarks.

This thesis introduces a benchmark approach to measure the performance overhead of application-level monitoring frameworks. The core contributions of this approach are 1) a definition of common causes of monitoring overhead, 2) a general benchmark engineering methodology, 3) the MooBench micro-benchmark to measure and quantify causes of monitoring overhead, and 4) detailed performance evaluations of three different application-level monitoring frameworks. Extensive experiments demonstrate the feasibility and practicality of the approach and validate the benchmark results. The developed benchmark is available as open-source software, and the results of all experiments are available for download to facilitate further validation and replication of the results.
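
MooBench itself targets Java-based monitoring frameworks, so the following Python sketch only illustrates the underlying measurement idea: time the same workload with and without an instrumentation probe and attribute the difference to monitoring overhead. All names and numbers are illustrative assumptions, not part of MooBench.

    # Micro-benchmark sketch: quantify per-call probe overhead by comparing
    # an instrumented function against its uninstrumented baseline.
    import time
    import statistics

    def probe(fn):
        """A toy monitoring probe: records one duration per call."""
        records = []
        def wrapper(*args, **kwargs):
            start = time.perf_counter_ns()
            result = fn(*args, **kwargs)
            records.append(time.perf_counter_ns() - start)
            return result
        wrapper.records = records
        return wrapper

    def workload(n=1000):
        total = 0
        for i in range(n):
            total += i * i
        return total

    monitored = probe(workload)

    def time_per_call(fn, calls=2000, repeats=10):
        # Median over several repetitions to damp warm-up and noise.
        runs = []
        for _ in range(repeats):
            start = time.perf_counter_ns()
            for _ in range(calls):
                fn()
            runs.append((time.perf_counter_ns() - start) / calls)
        return statistics.median(runs)

    base = time_per_call(workload)
    instr = time_per_call(monitored)
    print(f"baseline:  {base:10.0f} ns/call")
    print(f"monitored: {instr:10.0f} ns/call")
    print(f"overhead:  {instr - base:10.0f} ns/call")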