On 8 December 2016 at 03:36, Alexander Belopolsky <alexander.belopolsky@gmail.com> wrote:
On Wed, Dec 7, 2016 at 9:07 PM, Mikhail V <mikhailwas@gmail.com> wrote:
it somehow settled in people's minds that hex references should be preferred, for no solid reason IMO.
I may be showing my age, but all the facts that I remember about ASCII codes are in hex:
1. SPACE is 0x20, followed by the punctuation symbols.
2. Decimal digits start at 0x30: '0' = 0x30, '1' = 0x31, ...
3. @ is 0x40, followed by the upper-case letters: 'A' = 0x41, 'B' = 0x42, ...
4. Lower-case letters are offset by 0x20 from the upper-case ones: 'a' = 0x61, 'b' = 0x62, ...
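The four facts above are easy to check interactively with ord() and hex(); a quick sketch:

```python
# SPACE sits at 0x20, just before the punctuation block.
assert ord(' ') == 0x20

# Decimal digits start at 0x30.
assert ord('0') == 0x30 and ord('1') == 0x31 and ord('9') == 0x39

# '@' is 0x40, immediately followed by the upper-case letters.
assert ord('@') == 0x40 and ord('A') == 0x41 and ord('B') == 0x42

# Lower-case letters are exactly 0x20 above their upper-case counterparts.
assert ord('a') - ord('A') == 0x20
assert ord('a') == 0x61 and ord('b') == 0x62

print(hex(ord('A')), hex(ord('a')))  # 0x41 0x61
```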
Unicode is also organized around hexadecimal codes, with the various scripts positioned in sections that start at round hexadecimal numbers. For example, Cyrillic occupies 0x0400 through 0x04FF <http://unicode.org/charts/PDF/U0400.pdf>.
The only decimal fact I remember about Unicode is that the largest code-point is 1114111 - a palindrome!
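Both Unicode facts can be verified from a Python prompt; a small sketch using the standard sys and unicodedata modules:

```python
import sys
import unicodedata

# The largest code point: 1114111 decimal == 0x10FFFF hex.
assert sys.maxunicode == 1114111 == 0x10FFFF

# ... and 1114111 really is a palindrome.
assert str(sys.maxunicode) == str(sys.maxunicode)[::-1]

# The Cyrillic block starts at the round hex number 0x0400;
# the letter 'А' sits at 0x0410 within it.
assert unicodedata.name(chr(0x0410)) == 'CYRILLIC CAPITAL LETTER A'
```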
As an aside, I've just noticed that in my example:

s = "first cyrillic letters: \{1040}\{1041}\{1042}"
s = "first cyrillic letters: \u0410\u0411\u0412"

the hex and decimal codes are made up of the same digits; such a peculiar coincidence... So you were caught up with hex from the beginning, as I see ;) I, on the contrary, in my dark times of learning programming (that was C), always oriented myself by decimal codes, and I don't regret it now.
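The coincidence is easy to confirm: the \{1040} form is only the proposed decimal syntax from earlier in the thread, but chr() accepts decimal code points today, and the digit sets do match. A quick sketch:

```python
# Today's Python spells the escapes in hex; chr() takes plain decimal.
s = "first cyrillic letters: \u0410\u0411\u0412"
assert s == "first cyrillic letters: " + chr(1040) + chr(1041) + chr(1042)

# The coincidence: 1040 decimal and 0410 hex are the same digits, rearranged.
for cp in (1040, 1041, 1042):
    assert sorted(str(cp)) == sorted(format(cp, '04x'))
    print(cp, format(cp, '04x'))  # 1040 0410, 1041 0411, 1042 0412
```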