HEP experiments analyze millions of independent events. This is a typical High Throughput Computing problem that can be solved by a farm of millions of independent computing cores. The typical worker node is a server with two sockets populated with commodity x86 processors.
The goal of the HEPMARK experiment is to measure the performance of the worker node with a single number. The resulting benchmark must have the simplicity of the previous, now obsolete, SPECint 2000 benchmark; it must be easy to measure for computing technicians in HEP laboratories as well as for hardware vendors. Of course, it must be validated against the typical HEP workloads: event generation, simulation, digitization and reconstruction.
The second part is to understand the interactions of these programs, the HEP applications and the benchmark, with the complicated architectures of the worker nodes. First of all, scalability: in a few years we have moved from two jobs per worker node to 8 or 12 (even 16 using the logical processors of Hyper-Threading enabled CPUs). Then we want to spot the bottlenecks in the memory hierarchy, with Level 1, Level 2 and, recently, Level 3 caches, each with different latencies, bandwidths and associativities.