"A little learning is a dangerous thing; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again." (Alexander Pope)
The words of this famous English poet never rang truer in my ears than when I read, for the N-th time, some poor soul pontificating on the Internet about how double-precision numbers are imprecise, how BigDecimals are so much better, and how you should always do financial calculations with BigDecimal (or some other language's equivalent of this Java arbitrary-precision number class).
Wrong, wrong, wrong.
IEEE 754 floating-point numbers, among which we find our doubles and floats, are, for all their flaws, an amazing thing. There is a reason why they are used so much more often than arbitrary-precision numbers: they offer an excellent compromise between speed and accuracy. Almost all scientific software uses floating-point numbers. Contrary to popular misconception, they are also used a lot in financial calculations, the exception being the accounting of actual cashflows, where special rounding rules apply and you need to represent numbers like £2.11 exactly, without any possibility of error. However, nobody who is sane will bother to use Java's BigDecimal to price an option; it just doesn't make sense.
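To illustrate the cashflow exception, here is a minimal Java sketch (the class name is mine). The `BigDecimal(double)` constructor exposes the binary value a double actually holds, while the `BigDecimal(String)` constructor keeps a decimal amount like £2.11 exact:

```java
import java.math.BigDecimal;

public class ExactCash {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the binary value actually stored:
        // 2.11 has no finite binary expansion, so the double is only close.
        System.out.println(new BigDecimal(2.11));   // prints 2.10999999...

        // For cashflows, build BigDecimal from a String: 2.11 stays exact,
        // and decimal arithmetic on it is exact too.
        BigDecimal price = new BigDecimal("2.11");
        System.out.println(price.multiply(new BigDecimal("3")));  // prints 6.33
    }
}
```

This is precisely the accounting case: exact decimal amounts with well-defined rounding rules, not general numerical work.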
Also, the exactness of arbitrary-precision numbers is lost whenever you perform any non-trivial calculation, like computing a sine or a cosine. In general, there is no way to do it exactly on a computer, and any precision you gained by avoiding floating-point numbers is lost. The fact that people recommend BigDecimal or C#'s System.Decimal to compute a square root using Newton's method (which does not compute the square root digit by digit, but as a converging sequence of approximations) shatters my belief in human intelligence. I would rather use IEEE doubles, knowing that I can trust only so many digits of the final result, than live with an illusion of arbitrary precision that just isn't there. To say nothing of the performance hit.
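To make the Newton's method point concrete, here is a minimal sketch in plain doubles (class and method names are mine). Each step x' = (x + a/x) / 2 is itself an approximation, and the iteration converges to within about one ulp of the true root, which is the best any finite representation can deliver anyway:

```java
public class NewtonSqrt {
    // Newton's iteration for sqrt(a): x' = (x + a/x) / 2.
    static double sqrtNewton(double a) {
        double x = a;  // crude initial guess; convergence is quadratic anyway
        for (int i = 0; i < 60; i++) {
            double next = 0.5 * (x + a / x);
            if (next == x) break;  // converged to machine precision
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // Agrees with Math.sqrt to within a unit or two in the last place.
        System.out.println(sqrtNewton(2.0));
        System.out.println(Math.sqrt(2.0));
    }
}
```

Running the same iteration over BigDecimal does not change its nature: you still stop after finitely many approximating steps, you have merely paid more for each of them.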
Arbitrary-precision arithmetic is very useful in some cases, but please don't answer every "why is 1.0001 - 0.0001 == 1 not true in Java/C/C++?" question on the Internet with "because floating-point numbers are broken; use an arbitrary-precision library".
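For the record, here is the canonical Java demonstration of the phenomenon behind such questions (constants chosen because their behavior is well known), together with the answer that should be given instead, a tolerance comparison:

```java
public class FloatCompare {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        // All three constants are rounded to the nearest binary double,
        // so the comparison below is false.
        System.out.println(sum == 0.3);  // prints false
        System.out.println(sum);         // prints 0.30000000000000004

        // The sensible fix: compare with a tolerance suited to the problem,
        // not an arbitrary-precision library.
        double eps = 1e-9;
        System.out.println(Math.abs(sum - 0.3) < eps);  // prints true
    }
}
```

The right answer is "because 0.1, 0.2 and their kin have no finite binary representation, so each is rounded; know your representation and compare accordingly", not "floating point is broken".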