Is anyone aware of any implementations that use other than 64-bit floating-point?
If I understand correctly, you're asking about Python implementations: sure, the gmpy package supports arbitrary-precision floating point.
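As a sketch of the idea (using the standard-library `decimal` module rather than gmpy itself, so it runs without third-party installs; gmpy/gmpy2 offer the same configurable-precision model via MPFR):

```python
from decimal import Decimal, getcontext

# Set the working precision to 50 significant digits, well past
# the ~15-16 decimal digits a 64-bit IEEE double can represent.
getcontext().prec = 50

root2 = Decimal(2).sqrt()
print(root2)

# Squaring the result recovers 2 to within the working precision.
err = abs(root2 * root2 - 2)
print(err < Decimal("1e-48"))
```

With gmpy2 the equivalent knob is the context's `precision` attribute, measured in bits rather than decimal digits.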
I'd be particularly interested in any that use greater precision than the usual 56-bit mantissa.
Nit-pickingly: the usual mantissa is 53-bit, not 56 (an IEEE 754 double stores 52 bits plus an implicit leading 1).
Do modern 64-bit systems implement anything wider than the normal double?
As Mark said: sure. x86 systems have supported 80-bit "extended" precision for ages, and some architectures define 128-bit floats (e.g. Itanium, SPARC v9); it's not clear to me whether those actually implement the long double operations in hardware, or whether they trap and fall back to software emulation.