
Greg Ewing wrote:
> Even if you don't mention it explicitly, its existence shows through in the fact that there is an arbitrary limit on the amount you can peek ahead, and that limit needs to be documented so that people can write correct programs.
> This is true of both kinds of peeking, so I concede that they both break the abstraction.
> However, I think the non-blocking peek breaks it more than the blocking one, because it also brings non-deterministic behaviour.

It depends on the point of view. For example, suppose someone is writing a program that must read from any kind of file descriptor and build the derivation tree of the text it reads according to some context-free grammar. The parsing method he has chosen needs two symbols (bytes) of lookahead, and he uses peek() to grab those two bytes. The program will appear to work correctly most of the time, but on the 4095th byte read, peek() will hand him at most one byte, even though, say, 10k more bytes are available in the underlying stream. The blocking definition of peek() creates exactly this kind of hard-to-spot bug, so from his point of view the behaviour still looks non-deterministic. On the other hand, someone who does care about the number of raw reads will certainly be willing to look in the documentation for a parameter that matches his special needs. That's why the non-blocking behaviour should be the default, while the blocking behaviour should be available by choice.
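
For illustration, here is a minimal sketch of that buffer-boundary scenario with io.BufferedReader; the 4096-byte buffer and 10000-byte stream are just assumptions for the demo, and the exact number of bytes peek() returns here is implementation-dependent (the docs only promise that at most one raw read is done and that fewer bytes than requested may come back):

    import io

    # A raw stream with plenty of data "ahead" (10000 bytes, chosen for the demo).
    raw = io.BytesIO(b"x" * 10000)
    buf = io.BufferedReader(raw, buffer_size=4096)

    buf.read(4095)            # consume almost the whole first buffer fill
    lookahead = buf.peek(2)   # the parser wants 2 bytes of lookahead

    # Near the end of the buffer this can easily be a single byte, even though
    # thousands of bytes are still available in the raw stream.
    print(len(lookahead))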