Benchmark (computing)

In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.[1] The term benchmark is also commonly used to refer to the elaborately designed benchmarking programs themselves.

Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating point operation performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems (DBMS).

Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures.

Purpose

As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth.

Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this using specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device.
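
As a minimal illustration of the synthetic approach, the C sketch below imposes an artificial floating-point workload and reports a throughput figure. The loop body, iteration count, and output format are invented for this example and are not taken from any standard benchmark suite.

    /* Sketch of a tiny synthetic CPU benchmark: time an artificial
     * floating-point workload and report throughput. Illustrative only.
     * Compile e.g. with: cc -O2 synth.c -o synth */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long iterations = 100000000L;  /* size of the artificial workload */
        volatile double x = 1.0;             /* volatile keeps the compiler from
                                                optimizing the loop away */
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iterations; i++)
            x = x * 1.0000001 + 0.0000001;   /* 2 floating-point ops per iteration */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double seconds = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%.2f s, %.1f MFLOP/s (final x = %f)\n",
               seconds, 2.0 * iterations / seconds / 1e6, x);
        return 0;
    }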

Benchmarks are particularly important in CPU design, giving processor architects the ability to measure and make tradeoffs in microarchitectural decisions. For example, if a benchmark extracts the key algorithms of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller snippet on a cycle-accurate simulator can give clues on how to improve performance.

Prior to 2000, computer and microprocessor architects used SPEC to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.

Computer manufacturers are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when RISC and VLIW architectures emphasized the importance of compiler technology as it related to performance. Benchmarks are now regularly used by compiler companies to improve not only their own benchmark scores, but real application performance.

CPUs that have many execution units — such as a superscalar CPU, a VLIW CPU, or a reconfigurable computing CPU — typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU.

Given the large number of benchmarks available, a manufacturer can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark.

Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They have also been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing.

Ideally, benchmarks should substitute for real applications only if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.

Challenges

Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions. Interpretation of benchmarking data is also extraordinarily difficult. Here is a partial list of common challenges:

  • Vendors tend to tune their products specifically for industry-standard benchmarks. Norton SysInfo (SI) is particularly easy to tune for, since it is mainly biased toward the speed of multiple operations. Use extreme caution in interpreting such results.
  • Some vendors have been accused of "cheating" at benchmarks — doing things that give much higher benchmark numbers, but make things worse on the actual likely workload.[2]
  • Many benchmarks focus entirely on the speed of computational performance, neglecting other important features of a computer system, such as:
    • Qualities of service, aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs between and among these qualities of service, and all are important in business computing. Transaction Processing Performance Council Benchmark specifications partially address these concerns by specifying ACID property tests, database scalability rules, and service level requirements.
    • In general, benchmarks do not measure total cost of ownership (TCO). Transaction Processing Performance Council Benchmark specifications partially address this concern by specifying that a price/performance metric must be reported in addition to a raw performance metric, using a simplified TCO formula. However, the costs are necessarily only partial, and vendors have been known to price specifically (and only) for the benchmark, designing a highly specific "benchmark special" configuration with an artificially low price. Even a tiny deviation from the benchmark package results in a much higher price in real-world experience.
    • Facilities burden (space, power, and cooling). When more power is used, a portable system will have a shorter battery life and require recharging more often. A server that consumes more power and/or space may not be able to fit within existing data center resource constraints, including cooling limitations. There are real trade-offs as most semiconductors require more power to switch faster. See also performance per watt.
    • In some embedded systems, where memory is a significant cost, better code density can significantly reduce costs.
  • Vendor benchmarks tend to ignore requirements for development, test, and disaster recovery computing capacity. Vendors like to report only what might be narrowly required for production capacity, in order to make their initial acquisition price seem as low as possible.
  • Benchmarks are having trouble adapting to widely distributed servers, particularly those with extra sensitivity to network topologies. The emergence of grid computing, in particular, complicates benchmarking since some workloads are "grid friendly", while others are not.
  • Users can have very different perceptions of performance than benchmarks may suggest. In particular, users appreciate predictability — servers that always meet or exceed service level agreements. Benchmarks tend to emphasize mean scores (IT perspective), rather than maximum worst-case response times (real-time computing perspective), or low standard deviations (user perspective); the sketch following this list contrasts these three summaries on the same set of samples.
  • Many server architectures degrade dramatically at high (near 100%) levels of usage — "fall off a cliff" — and benchmarks should (but often do not) take that factor into account. Vendors, in particular, tend to publish server benchmarks at continuous loads of about 80% usage — an unrealistic situation — and do not document what happens to the overall system when demand spikes beyond that level.
  • Many benchmarks focus on one application, or even one application tier, to the exclusion of other applications. Most data centers are now implementing virtualization extensively for a variety of reasons, and benchmarking is still catching up to that reality where multiple applications and application tiers are concurrently running on consolidated servers.
  • There are few (if any) high quality benchmarks that help measure the performance of batch computing, especially high volume concurrent batch and online computing. Batch computing tends to be much more focused on the predictability of completing long-running tasks correctly before deadlines, such as end of month or end of fiscal year. Many important core business processes are batch-oriented and probably always will be, such as billing.
  • Benchmarking institutions often disregard or do not follow basic scientific method. This includes, but is not limited to: small sample size, lack of variable control, and the limited repeatability of results.[3]
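
The C sketch below, referenced in the list above, summarizes one hypothetical set of response-time samples in the three ways just mentioned: the mean (the usual IT-style score), the worst case (the real-time view), and the standard deviation (a rough proxy for the predictability users care about). The sample values are invented for illustration.

    /* Sketch: three summaries of the same response-time samples.
     * A low mean can coexist with an occasional very slow response,
     * which the worst case and the standard deviation expose.
     * Compile e.g. with: cc -O2 stats.c -o stats -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double ms[] = { 12.0, 11.5, 12.2, 11.8, 95.0, 12.1, 11.9, 12.3 }; /* hypothetical */
        int n = sizeof ms / sizeof ms[0];

        double sum = 0.0, worst = 0.0;
        for (int i = 0; i < n; i++) {
            sum += ms[i];
            if (ms[i] > worst) worst = ms[i];
        }
        double mean = sum / n;

        double var = 0.0;
        for (int i = 0; i < n; i++)
            var += (ms[i] - mean) * (ms[i] - mean);
        double stddev = sqrt(var / n);

        printf("mean = %.1f ms, worst = %.1f ms, stddev = %.1f ms\n",
               mean, worst, stddev);
        return 0;
    }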

Benchmarking Principles

There are seven vital characteristics for benchmarks.[4] These key properties are:

  1. Relevance: Benchmarks should measure relatively vital features.
  2. Representativeness: Benchmark performance metrics should be broadly accepted by industry and academia.
  3. Equity: All systems should be fairly compared.
  4. Repeatability: Benchmark results can be verified.
  5. Cost-effectiveness: Benchmark tests are economical.
  6. Scalability: Benchmark tests should work across systems possessing a range of resources from low to high.
  7. Transparency: Benchmark metrics should be easy to understand.

Types of benchmark

  1. Real program
    • word processing software
    • CAD software
    • user's application software (e.g., MIS)
  2. Component Benchmark / Microbenchmark
    • a core routine consisting of a relatively small and specific piece of code
    • measures the performance of a computer's basic components[5]
    • may be used for automatic detection of a computer's hardware parameters, such as the number of registers, cache size, and memory latency (see the pointer-chasing sketch following this list)
  3. Kernel
    • contains key code
    • normally abstracted from an actual program
    • popular kernel: the Livermore loops
    • the LINPACK benchmark (contains basic linear algebra subroutines, written in Fortran)
    • results are reported in MFLOP/s
  4. Synthetic Benchmark
    • Procedure for programming a synthetic benchmark (see the operation-mix sketch following this list):
      • gather statistics on the types of operations used by many application programs
      • determine the proportion of each type of operation
      • write a program that reproduces those proportions
    • Types of Synthetic Benchmark are:
      • Whetstone
      • Dhrystone
    • These were the first general purpose industry standard computer benchmarks. They do not necessarily obtain high scores on modern pipelined computers.
  5. I/O benchmarks
  6. Database benchmarks
    • measure the throughput and response times of database management systems (DBMS)
  7. Parallel benchmarks
    • used on machines with multiple cores and/or processors, or systems consisting of multiple machines
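
The pointer-chasing sketch referenced under item 2 above is shown here. It is a minimal C microbenchmark, with invented sizes and iteration counts, that walks a randomly permuted index array so each load depends on the previous one; the reported figure approximates average memory latency for the chosen working set, and sweeping the working-set size (not shown) exposes cache-size steps.

    /* Sketch of a pointer-chasing memory-latency microbenchmark.
     * Compile e.g. with: cc -O2 chase.c -o chase */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const size_t n = 1 << 22;        /* working set: 4M entries (~32 MB) */
        const long steps = 20000000L;
        size_t *next = malloc(n * sizeof *next);
        if (!next) return 1;

        /* Build a random single-cycle permutation (Sattolo's algorithm)
         * so hardware prefetchers cannot predict the access pattern. */
        for (size_t i = 0; i < n; i++) next[i] = i;
        srand(1);
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        size_t p = 0;
        for (long i = 0; i < steps; i++)
            p = next[p];                 /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("average load latency: %.1f ns (p = %zu)\n", ns / steps, p);
        free(next);
        return 0;
    }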
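
The operation-mix sketch referenced under item 4 above follows the synthetic-benchmark procedure described there, assuming (for illustration only) that profiling showed a mix of roughly 50% integer, 30% floating-point, and 20% memory operations; the timed loop reproduces that mix.

    /* Sketch of a synthetic benchmark built from an assumed operation mix:
     * per round, 5 integer ops, 3 floating-point ops, and 2 memory ops.
     * The mix and the operations themselves are invented for illustration.
     * Compile e.g. with: cc -O2 mix.c -o mix */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long rounds = 50000000L;
        static double mem[1024];         /* small buffer for the memory ops */
        volatile long acc = 0;           /* volatile keeps the artificial work alive */
        volatile double sink = 0.0;
        double f = 1.0;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long r = 0; r < rounds; r++) {
            acc += r; acc ^= r; acc += 3; acc -= r >> 1; acc |= 1;  /* 5 integer ops */
            f = f * 1.000001; f = f + 0.5; f = f - 0.25;            /* 3 FP ops */
            mem[r & 1023] = f;                                      /* 1 store */
            sink = mem[(r + 512) & 1023];                           /* 1 load */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%ld rounds in %.2f s (acc = %ld, f = %g, sink = %g)\n",
               rounds, s, (long)acc, f, (double)sink);
        return 0;
    }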

Common benchmarks

Industry standard (audited and verifiable)

  • Business Applications Performance Corporation (BAPCo)
  • Embedded Microprocessor Benchmark Consortium (EEMBC)
  • Linked Data Benchmark Council (LDBC)
    • Semantic Publishing Benchmark (SPB): an LDBC benchmark inspired by the Media/Publishing industry for testing the performance of RDF engines[6]
    • Social Network Benchmark (SNB): an LDBC benchmark for testing the performance of RDF engines consisting of three distinct benchmarks (Interactive Workload, Business Intelligence Workload, Graph Analytics Workload) on a common dataset[7]
  • Standard Performance Evaluation Corporation (SPEC), in particular their SPECint and SPECfp
  • Transaction Processing Performance Council (TPC): DBMS benchmarks[8]
    • TPC-A: measures performance in update-intensive database environments typical in on-line transaction processing (OLTP) applications[9]
    • TPC-C: an on-line transaction processing (OLTP) benchmark[10]
    • TPC-H: a decision support benchmark[11]

Open source benchmarks

  • AIM Multiuser Benchmark – composed of a list of tests that could be mixed to create a ‘load mix’ that would simulate a specific computer function on any UNIX-type OS.
  • Bonnie++ – filesystem and hard drive benchmark
  • BRL-CAD – cross-platform, architecture-agnostic benchmark suite based on multithreaded ray tracing performance, baselined against a VAX-11/780 and used since 1984 for evaluating relative CPU performance, compiler differences, optimization levels, coherency, architecture differences, and operating system differences.
  • Collective Knowledge – customizable, cross-platform framework to crowdsource benchmarking and optimization of user workloads (such as deep learning) across hardware provided by volunteers
  • Coremark – Embedded computing benchmark
  • Data Storage Benchmark – an RDF continuation of the LDBC Social Network Benchmark, from the Hobbit Project[12]
  • DEISA Benchmark Suite – scientific HPC applications benchmark
  • Dhrystone – integer arithmetic performance, often reported in DMIPS (Dhrystone millions of instructions per second)
  • DiskSpd – Command-line tool for storage benchmarking that generates a variety of requests against computer files, partitions or storage devices
  • Embench™ – portable, open-source benchmarks for benchmarking deeply embedded systems; they assume no OS, minimal C library support and, in particular, no output stream. Embench is a project of the Free and Open Source Silicon Foundation.
  • Faceted Browsing Benchmark – benchmarks systems that support browsing through linked data by iterative transitions performed by an intelligent user, from the Hobbit Project[13]
  • Fhourstones – an integer benchmark
  • HINT – designed to measure overall CPU and memory performance
  • Iometer – I/O subsystem measurement and characterization tool for single and clustered systems.
  • IOzone – Filesystem benchmark
  • Kubestone – Benchmarking Operator for Kubernetes and OpenShift
  • LINPACK benchmarks – traditionally used to measure FLOPS
  • Livermore loops
  • NAS parallel benchmarks
  • NBench – synthetic benchmark suite measuring performance of integer arithmetic, memory operations, and floating-point arithmetic
  • PAL – a benchmark for realtime physics engines
  • PerfKitBenchmarker – A set of benchmarks to measure and compare cloud offerings.
  • Phoronix Test Suite – open-source cross-platform benchmarking suite for Linux, OpenSolaris, FreeBSD, OSX and Windows. It includes a number of other benchmarks included on this page to simplify execution.
  • POV-Ray – 3D render
  • Tak (function) – a simple benchmark used to test recursion performance
  • TATP Benchmark – Telecommunication Application Transaction Processing Benchmark
  • TPoX – An XML transaction processing benchmark for XML databases
  • VUP (VAX unit of performance) – also called VAX MIPS
  • Whetstone – floating-point arithmetic performance, often reported in millions of Whetstone instructions per second (MWIPS)

Microsoft Windows benchmarks

  • BAPCo: MobileMark, SYSmark, WebMark
  • CrystalDiskMark
  • Futuremark: 3DMark, PCMark
  • PiFast
  • SuperPrime
  • Super PI
  • Whetstone
  • Windows System Assessment Tool, included with Windows Vista and later releases, providing an index for consumers to rate their systems easily
  • Worldbench (discontinued)

Others

  • AnTuTu – commonly used on phones and ARM-based devices.
  • Berlin SPARQL Benchmark (BSBM) – defines a suite of benchmarks for comparing the performance of storage systems that expose SPARQL endpoints via the SPARQL protocol across architectures[14]
  • Geekbench – A cross-platform benchmark for Windows, Linux, macOS, iOS and Android.
  • iCOMP – the Intel Comparative Microprocessor Performance index, published by Intel
  • Khornerstone
  • Lehigh University Benchmark (LUBM) – facilitates the evaluation of Semantic Web repositories via extensional queries over a large data set that commits to a single realistic ontology[15]
  • Performance Rating – modeling scheme used by AMD and Cyrix to reflect relative performance, usually in comparison with competing products.
  • SunSpider – a browser speed test
  • VMmark – a virtualization benchmark suite.[16]
  • RenderStats – a 3D rendering benchmark database.[17]

See also

  • Benchmarking (business perspective)
  • Figure of merit
  • Lossless compression benchmarks
  • Performance Counter Monitor
  • Test suite – a collection of test cases intended to show that a software program has some specified set of behaviors

References

  1. ^ Fleming, Philip J.; Wallace, John J. (1986-03-01). "How not to lie with statistics: the correct way to summarize benchmark results". Communications of the ACM. 29 (3): 218–221. doi:10.1145/5666.5673. ISSN 0001-0782. S2CID 1047380. Retrieved 2017-06-09.
  2. ^ Krazit, Tom (2003). "NVidia's Benchmark Tactics Reassessed". IDG News. Archived from the original on 2011-06-06. Retrieved 2009-08-08.
  3. ^ Castor, Kevin (2006). "Hardware Testing and Benchmarking Methodology". Archived from the original on 2008-02-05. Retrieved 2008-02-24.
  4. ^ Dai, Wei; Berleant, Daniel (December 12–14, 2019). "Benchmarking Contemporary Deep Learning Hardware and Frameworks: a Survey of Qualitative Metrics" (PDF). 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI). Los Angeles, CA, USA: IEEE. pp. 148–155. doi:10.1109/CogMI48466.2019.00029.
  5. ^ Ehliar, Andreas; Liu, Dake. "Benchmarking network processors" (PDF).
  6. ^ LDBC. "LDBC Semantic Publishing Benchmark". LDBC SPB. LDBC. Retrieved 2018-07-02.
  7. ^ LDBC. "LDBC Social Network Benchmark". LDBC SNB. LDBC. Retrieved 2018-07-02.
  8. ^ Transaction Processing Performance Council (February 1998). "History and Overview of the TPC". TPC. Transaction Processing Performance Council. Retrieved 2018-07-02.
  9. ^ Transaction Processing Performance Council. "TPC-A". Transaction Processing Performance Council. Retrieved 2018-07-02.
  10. ^ Transaction Processing Performance Council. "TPC-C". Transaction Processing Performance Council. Retrieved 2018-07-02.
  11. ^ Transaction Processing Performance Council. "TPC-H". Transaction Processing Performance Council. Retrieved 2018-07-02.
  12. ^ "Data Storage Benchmark". 2017-07-28. Retrieved 2018-07-02.
  13. ^ "Faceted Browsing Benchmark". 2017-07-27. Retrieved 2018-07-02.
  14. ^ "Berlin SPARQL Benchmark (BSBM)". Retrieved 2018-07-02.
  15. ^ "SWAT Projects - the Lehigh University Benchmark (LUBM)". Lehigh University Benchmark (LUBM). Retrieved 2018-07-02.
  16. ^ "VMmark GPU Benchmark Software Rules 1.1.1". VMware. 2008. date=January 2018}}
  17. ^ "3D rendering benchmark database". Retrieved 2019-09-29. Cite journal requires |journal= (help)

Further reading

  • Gray, Jim, ed. (1993). The Benchmark Handbook for Database and Transaction Systems. Morgan Kaufmann Series in Data Management Systems (2nd ed.). Morgan Kaufmann Publishers, Inc. ISBN 1-55860-292-5.
  • Scalzo, Bert; Kline, Kevin; Fernandez, Claudia; Burleson, Donald K.; Ault, Mike (2007). Database Benchmarking: Practical Methods for Oracle & SQL Server. ISBN 978-0-9776715-3-3.
  • Nambiar, Raghunath; Poess, Meikel, eds. (2009). Performance Evaluation and Benchmarking. Springer. ISBN 978-3-642-10423-7.

