In numerical analysis and scientific computing, truncation error is the error made by truncating an infinite sum and approximating it by a finite sum. For instance, if we approximate the sine function by the first two non-zero terms of its Taylor series, as in sin(x) ≈ x − (1/6)x³ for small x, the resulting error is a truncation error. It is present even with infinite-precision arithmetic, because it is caused by truncating the infinite Taylor series to form the algorithm.
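A short Python sketch can make this concrete (the helper name `sin_taylor` is illustrative, not from the article). Truncating the Taylor series after the x³ term leaves a residual whose leading part is the first omitted term, x⁵/120, so for x = 0.1 the truncation error is roughly 8.3 × 10⁻⁸ even though the floating-point arithmetic itself is accurate to about 16 decimal digits:

```python
import math

def sin_taylor(x):
    """Approximate sin(x) by the first two non-zero Taylor terms: x - x**3/6."""
    return x - x**3 / 6

x = 0.1
approx = sin_taylor(x)          # truncated-series value
exact = math.sin(x)             # reference value from the math library
truncation_error = exact - approx

print(f"sin({x}) ~ {approx}")
print(f"exact      {exact}")
print(f"truncation error ~ {truncation_error:.3e}")  # close to x**5/120
```

Because the error stems from dropping terms of the series, not from rounding, shrinking x or keeping more terms reduces it; using higher-precision arithmetic alone would not.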