History of Character Codes
ASCII defined numeric codes for various characters, with the numeric values running from 0 to 127. For example, the lowercase letter ‘a’ is assigned 97 as its code value. It only defined unaccented characters. There was an ‘e’, but no ‘é’ or ‘í’. This meant that languages which required accented characters couldn’t be faithfully represented in ASCII. (Actually the missing accents matter for English, too, which contains words such as ‘naïve’ and ‘café’, and some publications have house styles which require spellings such as ‘coöperate’.)
In the 1980s, almost all personal computers were 8-bit, meaning that bytes could hold values ranging from 0 to 255. ASCII codes only went up to 127, so some machines assigned values between 128 and 255 to accented characters.
255 characters aren’t very many. For example, you can’t fit both the accented characters used in Western Europe and the Cyrillic alphabet used for Russian into the 128-255 range, because together there are more than 128 such characters.
You could write files using different codes (all your Russian files in a coding system called KOI8, all your French files in a different coding system called Latin1), but what if you wanted to write a French document that quotes some Russian text? In the 1980s people began to want to solve this problem, and the Unicode standardization effort began.
Unicode started out using 16-bit characters instead of 8-bit characters. 16 bits means you have 2^16 = 65,536 distinct values available, making it possible to represent many different characters from many different alphabets; an initial goal was to have Unicode contain the alphabets for every single human language. It turns out that even 16 bits isn’t enough to meet that goal, and the modern Unicode specification uses a wider range of codes, 0-1,114,111 (0x10ffff in base-16).
Definitions
The Unicode standard describes how characters are represented by code points. A code point is an integer value, usually denoted in base 16. In the standard, a code point is written using the notation U+12ca to mean the character with value 0x12ca (4810 decimal). The Unicode standard contains a lot of tables listing characters and their corresponding code points:
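For example, a few entries from those tables look something like this (the code point in hexadecimal, followed by the character and its official name):

    0061    'a'; LATIN SMALL LETTER A
    0062    'b'; LATIN SMALL LETTER B
    0063    'c'; LATIN SMALL LETTER C
    ...
    007B    '{'; LEFT CURLY BRACKET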
Strictly, these definitions imply that it’s meaningless to say ‘this is character U+12ca’. U+12ca is a code point, which represents some particular character; in this case, it represents the character ‘ETHIOPIC SYLLABLE WI’. In informal contexts, this distinction between code points and characters will sometimes be forgotten.
A character is represented on a screen or on paper by a set of graphical elements that’s called a glyph. The glyph for an uppercase A, for example, is two diagonal strokes and a horizontal stroke, though the exact details will depend on the font being used. Most Python code doesn’t need to worry about glyphs; figuring out the correct glyph to display is generally the job of a GUI toolkit or a terminal’s font renderer.
Encodings
To summarize the previous section: a Unicode string is a sequence of code points, which are numbers from 0 to 0x10ffff. This sequence needs to be represented as a set of bytes (meaning, values from 0-255) in memory. The rules for translating a Unicode string into a sequence of bytes are called an encoding.
The first encoding you might think of is an array of 32-bit integers. In this representation, the string “Python” would look like this:
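As a sketch, each character would take four bytes, most of them zero (one possible byte layout, with byte values shown in hexadecimal):

       P           y           t           h           o           n
    0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00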
This representation is straightforward but using it presents a number of problems.
- It’s not portable; different processors order the bytes differently.
- It’s very wasteful of space. In most texts, the majority of the code points are less than 127, or less than 255, so a lot of space is occupied by zero bytes. The above string takes 24 bytes compared to the 6 bytes needed for an ASCII representation. Increased RAM usage doesn’t matter too much (desktop computers have megabytes of RAM, and strings aren’t usually that large), but expanding our usage of disk and network bandwidth by a factor of 4 is intolerable.
- It’s not compatible with existing C functions such as strlen(), so a new family of wide string functions would need to be used.
- Many Internet standards are defined in terms of textual data, and can’t handle content with embedded zero bytes.
Encodings don’t have to handle every possible Unicode character, and most encodings don’t. For example, Python’s default encoding is the ‘ascii’ encoding. The rules for converting a Unicode string into the ASCII encoding are simple; for each code point:
- If the code point is < 128, each byte is the same as the value of the code point.
- If the code point is 128 or greater, the Unicode string can’t be represented in this encoding. (Python raises a UnicodeEncodeError exception in this case.)
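For example, applying these rules in a Python 2 interpreter, using the string .encode() method (described in more detail below; the traceback is abbreviated):

    >>> u'abc'.encode('ascii')       # every code point is < 128
    'abc'
    >>> u'\u00e9'.encode('ascii')    # U+00E9 can't be represented in ASCII
    Traceback (most recent call last):
      ...
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 0: ordinal not in range(128)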
UTF-8 is one of the most commonly used encodings. UTF stands for “Unicode Transformation Format”, and the ‘8’ means that 8-bit numbers are used in the encoding. (There’s also a UTF-16 encoding, but it’s less frequently used than UTF-8.) UTF-8 uses the following rules:
- If the code point is <128, it’s represented by the corresponding byte value.
- If the code point is between 128 and 0x7ff, it’s turned into two byte values between 128 and 255.
- Code points >0x7ff are turned into three- or four-byte sequences, where each byte of the sequence is between 128 and 255.
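For example, encoding a few single characters with UTF-8 in Python 2 shows the one-, two-, and three-byte cases (the .encode() method is covered below):

    >>> u'a'.encode('utf-8')         # code point < 128: one byte
    'a'
    >>> u'\u00e9'.encode('utf-8')    # U+00E9: two bytes
    '\xc3\xa9'
    >>> u'\u20ac'.encode('utf-8')    # U+20AC (EURO SIGN): three bytes
    '\xe2\x82\xac'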
UTF-8 has several convenient properties:
- It can handle any Unicode code point.
- A Unicode string is turned into a string of bytes containing no embedded zero bytes. This avoids byte-ordering issues, and means UTF-8 strings can be processed by C functions such as strcpy() and sent through protocols that can’t handle zero bytes.
- A string of ASCII text is also valid UTF-8 text.
- UTF-8 is fairly compact; the majority of code points are turned into two bytes, and values less than 128 occupy only a single byte.
- If bytes are corrupted or lost, it’s possible to determine the start of the next UTF-8-encoded code point and resynchronize. It’s also unlikely that random 8-bit data will look like valid UTF-8.
Unicode strings are expressed as instances of the unicode type, one of Python’s repertoire of built-in types. It derives from an abstract type called basestring, which is also an ancestor of the str type; you can therefore check if a value is a string type with isinstance(value, basestring). Under the hood, Python represents Unicode strings as either 16- or 32-bit integers, depending on how the Python interpreter was compiled.
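A quick check in the Python 2 interactive interpreter:

    >>> isinstance(u'abc', unicode), isinstance('abc', unicode)
    (True, False)
    >>> isinstance(u'abc', basestring), isinstance('abc', basestring)
    (True, True)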
The unicode() constructor has the signature unicode(string[, encoding, errors]). All of its arguments should be 8-bit strings. The first argument is converted to Unicode using the specified encoding; if you leave off theencoding argument, the ASCII encoding is used for the conversion, so characters greater than 127 will be treated as errors:
The errors argument specifies the response when the input string can’t be converted according to the encoding’s rules. Legal values for this argument are ‘strict’ (raise a UnicodeDecodeError exception), ‘replace’ (add U+FFFD, ‘REPLACEMENT CHARACTER’), or ‘ignore’ (just leave the character out of the Unicode result). The following examples show the differences:
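(A representative Python 2.7 session, using the default ASCII encoding; the traceback is abbreviated.)

    >>> unicode('\x80abc', errors='strict')
    Traceback (most recent call last):
      ...
    UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
    >>> unicode('\x80abc', errors='replace')
    u'\ufffdabc'
    >>> unicode('\x80abc', errors='ignore')
    u'abc'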
Encodings are specified as strings containing the encoding’s name. Python 2.7 comes with roughly 100 different encodings; see the Python Library Reference at Standard Encodings for a list. Some encodings have multiple names; for example, ‘latin-1’, ‘iso_8859_1’ and ‘8859’ are all synonyms for the same encoding.
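For example, encoding the same string under two of those synonymous names produces identical bytes (a quick Python 2 check):

    >>> u'caf\u00e9'.encode('latin-1')
    'caf\xe9'
    >>> u'caf\u00e9'.encode('iso_8859_1')
    'caf\xe9'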
One-character Unicode strings can also be created with the unichr() built-in function, which takes integers and returns a Unicode string of length 1 that contains the corresponding code point. The reverse operation is the built-in ord() function that takes a one-character Unicode string and returns the code point value:
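(For example, in Python 2:)

    >>> unichr(40960)
    u'\ua000'
    >>> ord(u'\ua000')
    40960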
Instances of the unicode type have many of the same methods as the 8-bit string type for operations such as searching and formatting:
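(An illustrative Python 2 session, using a made-up example string.)

    >>> s = u'Hello, Unicode world'
    >>> s.count(u'o')
    3
    >>> s.find(u'world')
    15
    >>> s.find(u'bird')
    -1
    >>> s.replace(u'world', u'text')
    u'Hello, Unicode text'
    >>> s.upper()
    u'HELLO, UNICODE WORLD'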
Note that the arguments to these methods can be Unicode strings or 8-bit strings. 8-bit strings will be converted to Unicode before carrying out the operation; Python’s default ASCII encoding will be used, so characters greater than 127 will cause an exception:
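(Continuing the session above; the traceback is abbreviated.)

    >>> s.find('Unicode')             # 8-bit string argument is decoded as ASCII
    7
    >>> s.find('Unicode' + chr(128))  # a byte >= 128 can't be decoded
    Traceback (most recent call last):
      ...
    UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 7: ordinal not in range(128)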
Much Python code that operates on strings will therefore work with Unicode strings without requiring any changes to the code. (Input and output code needs more updating for Unicode; more on this later.)
Another important method is .encode([encoding], [errors='strict']), which returns an 8-bit string version of the Unicode string, encoded in the requested encoding. The errors parameter is the same as the parameter of the unicode() constructor, with one additional possibility; as well as ‘strict’, ‘ignore’, and ‘replace’, you can also pass ‘xmlcharrefreplace’ which uses XML’s character references. The following example shows the different results:
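(A representative Python 2.7 session; the traceback is abbreviated.)

    >>> u = unichr(40960) + u'abcd' + unichr(1972)
    >>> u.encode('utf-8')
    '\xea\x80\x80abcd\xde\xb4'
    >>> u.encode('ascii')
    Traceback (most recent call last):
      ...
    UnicodeEncodeError: 'ascii' codec can't encode character u'\ua000' in position 0: ordinal not in range(128)
    >>> u.encode('ascii', 'ignore')
    'abcd'
    >>> u.encode('ascii', 'replace')
    '?abcd?'
    >>> u.encode('ascii', 'xmlcharrefreplace')
    '&#40960;abcd&#1972;'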
Python’s 8-bit strings have a .decode([encoding], [errors]) method that interprets the string using the given encoding:
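(Continuing the previous example.)

    >>> u = unichr(40960) + u'abcd' + unichr(1972)   # assemble a Unicode string
    >>> utf8_version = u.encode('utf-8')             # encode it as UTF-8
    >>> type(utf8_version), utf8_version
    (<type 'str'>, '\xea\x80\x80abcd\xde\xb4')
    >>> u2 = utf8_version.decode('utf-8')            # decode it again
    >>> u == u2                                      # the round trip preserves the string
    True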
Unicode Literals in Python Source Code
In Python source code, Unicode literals are written as strings prefixed with the ‘u’ or ‘U’ character: u'abcdefghijk'. Specific code points can be written using the \u escape sequence, which is followed by four hex digits giving the code point. The \U escape sequence is similar, but expects 8 hex digits, not 4.
Unicode literals can also use the same escape sequences as 8-bit strings, including \x, but \x only takes two hex digits so it can’t express an arbitrary code point. Octal escapes can go up to U+01ff, which is octal 777.
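For instance, one literal can combine the various escapes; printing the ord() value of each resulting character shows the code points (Python 2 session):

    >>> # a two-digit \x escape, two four-digit \u escapes, one eight-digit \U escape
    >>> s = u"a\xac\u1234\u20ac\U00008000"
    >>> for c in s: print ord(c),
    ...
    97 172 4660 8364 32768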
Using escape sequences for code points greater than 127 is fine in small doses, but becomes an annoyance if you’re using many accented characters, as you would in a program with messages in French or some other accent-using language. You can also assemble strings using the unichr() built-in function, but this is even more tedious.
Ideally, you’d want to be able to write literals in your language’s natural encoding. You could then edit Python source code with your favorite editor which would display the accented characters naturally, and have the right characters used at runtime.
Python supports writing Unicode literals in any encoding, but you have to declare the encoding being used. This is done by including a special comment as either the first or second line of the source file:
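(For example, for a source file saved in Latin-1:)

    #!/usr/bin/env python
    # -*- coding: latin-1 -*-

    u = u'abcdé'
    print ord(u[-1])        # prints 233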
The syntax is inspired by Emacs’s notation for specifying variables local to a file. Emacs supports many different variables, but Python only supports ‘coding’. The -*- symbols indicate to Emacs that the comment is special; they have no significance to Python but are a convention. Python looks for coding: name or coding=name in the comment.
Reading and Writing Unicode Data
Once you’ve written some code that works with Unicode data, the next problem is input/output. How do you get Unicode strings into your program, and how do you convert Unicode into a form suitable for storage or transmission?
It’s possible that you may not need to do anything depending on your input sources and output destinations; you should check whether the libraries used in your application support Unicode natively. XML parsers often return Unicode data, for example. Many relational databases also support Unicode-valued columns and can return Unicode values from an SQL query.
Unicode data is usually converted to a particular encoding before it gets written to disk or sent over a socket. It’s possible to do all the work yourself: open a file, read an 8-bit string from it, and convert the string with unicode(str, encoding). However, the manual approach is not recommended.
One problem is the multi-byte nature of encodings; one Unicode character can be represented by several bytes. If you want to read the file in arbitrary-sized chunks (say, 1K or 4K), you need to write error-handling code to catch the case where only part of the bytes encoding a single Unicode character are read at the end of a chunk. One solution would be to read the entire file into memory and then perform the decoding, but that prevents you from working with files that are extremely large; if you need to read a 2Gb file, you need 2Gb of RAM. (More, really, since for at least a moment you’d need to have both the encoded string and its Unicode version in memory.)
The solution would be to use the low-level decoding interface to catch the case of partial coding sequences. The work of implementing this has already been done for you: the codecs module includes a version of the open() function that returns a file-like object that assumes the file’s contents are in a specified encoding and accepts Unicode parameters for methods such as .read() and .write().
The function’s parameters are open(filename, mode='rb', encoding=None, errors='strict', buffering=1). mode can be 'r', 'w', or 'a', just like the corresponding parameter to the regular built-in open() function; add a '+' to update the file. buffering is similarly parallel to the standard function’s parameter. encoding is a string giving the encoding to use; if it’s left as None, a regular Python file object that accepts 8-bit strings is returned. Otherwise, a wrapper object is returned, and data written to or read from the wrapper object will be converted as needed. errors specifies the action for encoding errors and can be one of the usual values of ‘strict’, ‘ignore’, and ‘replace’.
Reading Unicode from a file is therefore simple:
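(A sketch, assuming a UTF-8 file named unicode.txt exists; the file name is arbitrary.)

    import codecs

    # codecs.open() returns a file-like object that decodes the bytes as they are read
    f = codecs.open('unicode.txt', encoding='utf-8')
    for line in f:
        print repr(line)
    f.close()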
It’s also possible to open files in update mode, allowing both reading and writing:
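(Another sketch; again, the file name is arbitrary.)

    import codecs

    f = codecs.open('test.txt', encoding='utf-8', mode='w+')
    f.write(u'\u4500 blah blah blah\n')   # Unicode data is encoded on write
    f.seek(0)
    print repr(f.readline()[:1])          # reads back the first character, u'\u4500'
    f.close()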
Unicode character U+FEFF is used as a byte-order mark (BOM), and is often written as the first character of a file in order to assist with autodetection of the file’s byte ordering. Some encodings, such as UTF-16, expect a BOM to be present at the start of a file; when such an encoding is used, the BOM will be automatically written as the first character and will be silently dropped when the file is read. There are variants of these encodings, such as ‘utf-16-le’ and ‘utf-16-be’ for little-endian and big-endian encodings, that specify one particular byte ordering and don’t skip the BOM.