It is arguably much simpler than Unicode and character encodings. Besides, for most applications Decimal's default precision will be good enough: changing the current context (which, I believe, is thread-local, not process-global) is only necessary in special cases.
The problem is that the context cannot be depended upon, so the result of even trivial calculations, or of a negative decimal literal, will depend on something that can be changed anywhere else in the code. The expression -1.123456789d will evaluate differently depending on the context, and I think many people will be surprised by that. I expect this would lead to naive code that breaks when the context is changed. Great care needs to be taken to get this right; otherwise, changing the context should simply be considered unsafe in general.
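For concreteness, this is exactly how Python's decimal module behaves today, since its context is also implicit and thread-local. A small sketch (the specific precision of 4 is just an illustrative choice):

```python
from decimal import Decimal, getcontext

# Some library code, anywhere on this thread, lowers the precision...
getcontext().prec = 4

# ...and unrelated calculations elsewhere now silently round to 4 digits
# instead of the default 28.
result = Decimal(1) / Decimal(3)
print(result)  # Decimal('0.3333')

# Even unary minus is a context operation in the decimal spec, so a
# "negative literal" gets rounded too; this is the -1.123456789d surprise.
negated = -Decimal("1.123456789")
print(negated)  # Decimal('-1.123')
```

The second case is the one most likely to bite: the programmer wrote down nine digits and the context quietly threw five of them away.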
This is a good point, but I still don't think it outweighs the cognitive baggage (and the maintenance overhead, for library authors as well as ourselves) of having two different types that do mostly the same things.
It is also not clear that a decimal type would really benefit non-expert users. Most people get by fine with floats.
But let's take something similar: BigDecimal(1).divide(BigDecimal(3)). This expression cannot be represented exactly in decimal format, so it throws ArithmeticException. If you want it to round to some precision, you can supply a context: BigDecimal(1).divide(BigDecimal(3), mymathcontext).
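Python's decimal module can already express both behaviors; here is a sketch of the analogy, using the standard Inexact trap (this is just an illustration, not a proposal):

```python
from decimal import Decimal, Context, Inexact

# A context that traps Inexact raises whenever a result cannot be
# represented exactly, much like BigDecimal's divide without a context.
strict = Context(traps=[Inexact])
try:
    strict.divide(Decimal(1), Decimal(3))
except Inexact:
    print("1/3 is not exactly representable in decimal")

# Supplying a context with a chosen precision rounds instead,
# the analogue of passing mymathcontext to divide().
six_digits = Context(prec=6)
print(six_digits.divide(Decimal(1), Decimal(3)))  # Decimal('0.333333')
```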
But then it becomes, ironically, more cumbersome to spell than Decimal(1) / Decimal(3). (I'm talking about the explicit context, of course, not the .divide() method, which needn't exist in Python.)
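In Python terms the comparison looks like this, with localcontext being the standard-library way to scope an explicit context:

```python
from decimal import Decimal, localcontext

# Implicit context: terse, but the result depends on whatever the
# thread's context happens to be at this point.
print(Decimal(1) / Decimal(3))

# Explicit context: predictable, but noticeably more to spell.
with localcontext() as ctx:
    ctx.prec = 6
    print(Decimal(1) / Decimal(3))  # Decimal('0.333333')
```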