Virtually all present-day computer systems, from personal computers to the largest supercomputers, implement the IEEE 64-bit floating-point arithmetic standard, which provides 53 bits of precision, or approximately 15 decimal digits of accuracy. For most scientific applications this is more than sufficient. However, for a rapidly expanding body of applications, 64-bit IEEE arithmetic is no longer sufficient. These range from interesting new mathematical investigations to large-scale physical simulations performed on highly parallel supercomputers. Moreover, in these applications portions of the code typically involve numerically sensitive calculations, which produce results of questionable accuracy using conventional arithmetic. These inaccurate results may in turn induce other errors, such as taking the wrong path in a conditional branch. Such blocks of code benefit enormously from a combination of reliable numeric techniques and the use of high-precision arithmetic. Indeed, the aim of reliable numeric techniques is to deliver, together with the computed result, a guaranteed upper bound on the total error or, equivalently, to compute an enclosure for the exact result.
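As a concrete illustration, the following sketch reproduces Rump's well-known example, for which straightforward 64-bit IEEE evaluation yields no correct digits, and contrasts it with a software high-precision evaluation and with the kind of guaranteed enclosure that interval arithmetic returns. Python with the mpmath library is used here purely as an illustrative vehicle, not as a tool prescribed by the project.

```python
from mpmath import mp, mpf, iv

def rump(a, b):
    # Rump's expression; the exact value at (77617, 33096) is about -0.8274.
    return (333.75 * b**6 + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + 5.5 * b**8 + a / (2 * b))

# 64-bit hardware arithmetic: catastrophic cancellation leaves a meaningless
# value on the order of 1e21, so any branch conditioned on it can go wrong.
print(rump(77617.0, 33096.0))

# The same formula in 50-digit software arithmetic recovers the true value.
mp.dps = 50
print(rump(mpf(77617), mpf(33096)))   # -0.8273960599...

# Reliable (interval) techniques instead return a guaranteed enclosure:
iv.dps = 15
print(iv.sqrt(iv.mpf(2)))             # an interval certain to contain sqrt(2)
```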
It is perhaps no coincidence that interest in high-precision computation has arisen in the same period in which many scientific computations have moved to highly parallel and distributed, often heterogeneous, computer systems. Such systems have made possible much larger-scale runs than before, greatly magnifying numerical difficulties. Switching from hardware arithmetic to software-based high-precision arithmetic to tackle these difficulties has a benefit in its own right: since high-precision arithmetic is implemented in software, the result is independent of the specific hardware in the heterogeneous system on which it is computed.
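To make the software character of such arithmetic concrete, the sketch below shows Knuth's error-free two-sum transformation, the elementary building block from which double-double and higher-precision formats are assembled in software. The helper names two_sum and dd_add are ours, and the addition shown omits the extra renormalization step that production libraries perform; because only ordinary IEEE operations are used, every compliant processor produces bit-for-bit the same result.

```python
def two_sum(a: float, b: float):
    """Knuth's error-free transformation: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly, using only ordinary hardware additions."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    """Add two 'double-double' numbers, each an unevaluated pair (hi, lo) of
    doubles giving roughly 32 decimal digits. Simplified sketch: production
    libraries (e.g. QD) apply a further renormalization step."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

print(dd_add((1.0, 1e-20), (1.0, -2.5e-21)))   # -> (2.0, ~7.5e-21)
```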
In  the successful solution of several problems in scientific computing using high-precision arithmetic is described. It is worth noting that all of these successful applications of high-precision arithmetic have arisen in the past ten years. This may be indicative of the birth of a new era of scientific computing, in which the numerical precision required for a computation is as important to the program design as are the algorithms and data structures.
Aim of the project
It is the aim of the project team to contribute to the solution of a number of open problems in computational physics, in particular nanotechnology, which require the use of high-precision and reliable computations. The nanoscopic domain is a scale of length situated between the microscopic (atomic and molecular) scale and the macroscopic scale. Characteristic of nanotechnology research is that a finite number of particles (e.g. atoms, molecules, electrons), on the order of 10 to 10,000, is involved, and hence that surface effects are of crucial importance. This number of particles is nevertheless too large for analytic treatment, so one must focus on computational methods.
As will become clear from the project description, the key to the solution of the open problems in nanotechnology is the high-precision, reliable evaluation of certain special functions. To date, even environments such as Maple, Mathematica, and MATLAB, and libraries such as IMSL, the CERN Program Library, and NAG, offer no routines for the reliable evaluation of special functions.
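In the absence of such routines, a pragmatic work-around is common practice: re-evaluate the function at successively higher software precision and trust the digits that stabilize. The sketch below (again Python with mpmath, chosen only for illustration) applies this to the Bessel function J_0. Agreement across precisions builds confidence but, unlike the validated routines this project aims at, it proves nothing.

```python
from mpmath import mp, besselj, nstr

# Evaluate J_0(1000) at increasing working precision and compare the digits.
for dps in (15, 30, 60):
    mp.dps = dps
    print(dps, nstr(besselj(0, 1000), 20))

# Digits that agree across the three runs are very likely correct; a truly
# reliable routine would instead return the value with rigorous error bounds.
```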