Garpur is a joint project between the University of Iceland and Reykjavík University, with funding from the Icelandic Centre for Research (Rannís).
Research topics of computations performed on Garpur range from transport in quantum wires to ice sheet modeling of glaciers.
Garpur was opened for users in late April 2016.
Here is the original press release.

In late 2017 Garpur received an upgrade which more than doubled the performance of the cluster.

Hardware Configuration

Garpur at a glance

Queue: normal
nodes: 36
cpu: 2x Intel Xeon E5-2680v3 (2.5GHz, 12 core)
memory: 128GB (8x16GB DDR4 2133MHz)
interconnect: 56 Gb/s (FDR) InfiniBand
disk: 1TB /scratch

Queue: himem
nodes: 8
cpu: 2x Intel Xeon E5-2698v3 (2.3GHz, 16 core)
memory: 256GB (16x16GB DDR4 2133MHz)
interconnect: 10 Gb/s Ethernet
disk: 1TB /scratch

Queue: himem-bigdisk
Same as himem, except three of the compute nodes have a 4TB /scratch disk.

Queue: omnip
nodes: 46
cpu: 2x Intel Xeon Gold 6130 (2.10GHz, 16 core)
memory: 192GB DDR4
interconnect: 50 Gb/s Omni-Path
disk: 140GB /scratch

Queue: gpu
nodes: 3
cpu: 2x Intel Xeon E5-2648L (1.8GHz, 8 core)
memory: 64GB (1x) & 80GB (2x)
gpu: 2x Tesla M2090
disk: no local disk, only NFS mounts

Queue: vgpu
nodes: 2
cpu: 2x Intel Xeon Silver 4110 (2.10GHz)
memory: 128GB (8x16GB DDR4 2666MHz)
gpu: Tesla V100 (https://www.nvidia.com/en-us/data-center/tesla-v100/)
disk: 2TB /scratch
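Jobs are directed to one of the queues above at submission time. As a minimal sketch, assuming Garpur uses the Slurm scheduler (only the queue names are taken from the tables above; the time limit, job name, and program name are illustrative placeholders), a batch script for the normal queue might look like:

```shell
#!/bin/bash
# Hedged example batch script, assuming a Slurm-based scheduler.
#SBATCH --partition=normal      # one of: normal, himem, himem-bigdisk, omnip, gpu, vgpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24    # normal nodes have 2x12 cores (see table above)
#SBATCH --time=01:00:00         # wall-clock limit (hypothetical value)
#SBATCH --job-name=example

# /scratch is a node-local disk on most queues; stage I/O-heavy work
# there rather than in a network-mounted home directory.
cd /scratch/$USER || exit 1

srun ./my_program               # my_program is a placeholder for your executable
```

On the gpu and vgpu queues a GPU would additionally be requested (e.g. with a `--gres` option, if the scheduler is configured for it); check the site's own submission documentation for the exact form.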

Garpur's current usage (only viewable within the university network).