where `string_character` is any character except the double quote (or back quote) character
and escape characters.
YAP supports four different textual elements:
+ Atoms, mentioned above, are textual representations of symbols that are interned in the
database. They are stored either in ISO-LATIN-1 (first 256 code points) or as UTF-32.
+ Strings are atomic representations of text. The back-quote character is used to identify these objects in the program. Strings exist as stack objects, in the same way as other Prolog terms. As Prolog unification cannot be used to manipulate strings, YAP includes built-ins such as string_arg/3, sub_string/5, or string_concat/3 to manipulate them efficiently. Strings are stored as opaque objects containing a sequence of character codes.
+ Lists of codes represent text as a list of numbers, where each number is a character code. A string of _N_ characters requires _N_ pairs, that is _2N_ cells, leading to a total of 16 bytes per character on 64-bit machines. Thus, they are a very expensive, but very flexible, representation, as one can use unification to construct and access string elements.
+ Lists of atoms represent text as a list of atoms, where each atom stands for a single character. A string of _N_ characters also requires _N_ pairs, that is _2N_ cells. They have similar properties to lists of codes (see the conversion sketch after this list).
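The built-in conversion predicates make the differences between these representations concrete. The following is a minimal sketch: atom_codes/2 and atom_chars/2 are standard built-ins, while atom_string/2 is assumed to be available as in SWI-Prolog, and the way answers are printed depends on the active flags.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
?- atom_codes(hello, Codes),
   atom_chars(hello, Chars),
   atom_string(hello, Str).
Codes = [104,101,108,108,111],
Chars = [h,e,l,l,o],
Str = "hello".
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~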
The flags `double_quotes` and `backquoted_string` change the interpretation of quoted text; they can take the
values `atom`, `string`, `codes`, and `chars`.
Examples:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"" "a string" "a double-quote:"""
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first string is an empty string; the last shows how a double quote is included in a
string by doubling it.
Escape sequences can be used to include non-printable characters in text.
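The `double_quotes` flag determines what the reader builds for a double-quoted token. A small sketch, with answers shown as they would typically be printed under default settings:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
?- set_prolog_flag(double_quotes, codes).

?- X = "abc".
X = [97,98,99].

?- set_prolog_flag(double_quotes, chars).

?- X = "abc".
X = [a,b,c].

?- set_prolog_flag(double_quotes, atom).

?- X = "abc".
X = abc.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~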
The UCS standard describes all possible characters (or code points, as they include
ideograms, ligatures, and other symbols). The current version, Unicode 8.0, defines
code points up to 0x10FFFF, that is, 1,114,112 possible code points. See the [Unicode Charts](http://unicode.org/charts/) for the supported languages.
Notice that most symbols are rarely used. Encodings represent Unicode characters in a way
that is better suited for storage and communication. The most popular encoding, especially on the web and in the Unix/Linux/BSD/Mac communities, is
UTF-8. UTF-8 is compact and, because it is byte-oriented, has no endianness issues.
Bytes 0...127 represent simply the corresponding US-ASCII
character, while bytes 128...255 are used for multi-byte
encoding of characters placed higher in the UCS space.
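The multi-byte scheme can be summarised in a few clauses. The predicate below is only an illustrative sketch of the encoding rules; `utf8_bytes/2` is a hypothetical helper, not a YAP built-in.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% utf8_bytes(+CodePoint, -Bytes): encode one code point as UTF-8.
utf8_bytes(C, [C]) :-                    % 0x00-0x7F: one byte, plain US-ASCII
    C =< 0x7F.
utf8_bytes(C, [B1,B2]) :-                % 0x80-0x7FF: two bytes
    C >= 0x80, C =< 0x7FF,
    B1 is 0xC0 \/ (C >> 6),
    B2 is 0x80 \/ (C /\ 0x3F).
utf8_bytes(C, [B1,B2,B3]) :-             % 0x800-0xFFFF: three bytes
    C >= 0x800, C =< 0xFFFF,
    B1 is 0xE0 \/ (C >> 12),
    B2 is 0x80 \/ ((C >> 6) /\ 0x3F),
    B3 is 0x80 \/ (C /\ 0x3F).
utf8_bytes(C, [B1,B2,B3,B4]) :-          % 0x10000-0x10FFFF: four bytes
    C >= 0x10000, C =< 0x10FFFF,
    B1 is 0xF0 \/ (C >> 18),
    B2 is 0x80 \/ ((C >> 12) /\ 0x3F),
    B3 is 0x80 \/ ((C >> 6) /\ 0x3F),
    B4 is 0x80 \/ (C /\ 0x3F).

% Example: ?- utf8_bytes(0xE9, Bs).  gives  Bs = [195,169]  (é is 0xC3 0xA9).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~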
The 16-bit Unicode representation, stored as pairs of bytes, is also popular, especially on
MS-Windows and in Java. Originally, Microsoft supported UCS-2, a 16-bit encoding that
could represent only up to 64K characters. This was later extended to support the full
Unicode range; we will call the extended version UTF-16. The extension uses a hole in the first 64K code points: characters above 0xFFFF are divided into two 2-byte words, each falling in that hole. There are two versions of UTF-16: big endian and little
endian. By default, UTF-16 is big endian; in practice it is most often used on Intel
hardware, which is naturally little endian.
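The split into two 2-byte words (surrogate pairs) works as sketched below; `utf16_units/2` is a hypothetical helper, not a YAP built-in.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% Code points up to 0xFFFF fit in a single 16-bit unit
% (a real encoder would reject the surrogate hole 0xD800-0xDFFF itself).
utf16_units(C, [C]) :-
    C =< 0xFFFF.
% Higher code points are split into a high and a low surrogate,
% both of which fall in the 0xD800-0xDFFF hole.
utf16_units(C, [Hi,Lo]) :-
    C > 0xFFFF, C =< 0x10FFFF,
    V is C - 0x10000,
    Hi is 0xD800 \/ (V >> 10),
    Lo is 0xDC00 \/ (V /\ 0x3FF).

% Example: ?- utf16_units(0x1F600, Us).  gives  Us = [55357,56832]
% (that is, 0xD83D 0xDE00).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~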
UTF-32, often called UCS-4, provides a natural interface where a code point is coded as
four octets. Unfortunately, it is also more expensive, so it is not as widely used.
Last, other encodings are also commonly used. One such legacy encoding is ISO-LATIN-1, which
supports Latin-based languages in Western Europe. YAP currently uses either ISO-LATIN-1 or UTF-32
internally.
Prolog supports the default encoding used by the operating system:
YAP checks the environment variables LANG, LC_ALL and LC_CTYPE. For example, if at boot YAP detects that the
environment variable `LANG` ends in "UTF-8", this encoding is
assumed. Otherwise, the default is `text` and the translation is
left to the wide-character functions of the C library (note that the
Prolog native UTF-8 mode is considerably faster than the generic
`mbrtowc()` one).
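To check which encoding was selected for the standard streams, one possibility is the sketch below; it assumes the SWI-Prolog-compatible `encoding/1` stream property is supported.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
?- stream_property(user_input, encoding(Enc)).
Enc = utf8.          % e.g. when LANG ends in "UTF-8"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~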
Prolog allows the encoding to be specified explicitly: in
load_files/2 for loading Prolog source with an alternative
encoding, in `open/4` when opening files, or using `set_stream/2` on
any open stream (not yet implemented). For Prolog source files we also
provide the `encoding/1` directive, which can be used to switch
between encodings that are compatible with US-ASCII (`ascii`,
`iso_latin_1`, `utf8` and many locales).
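For instance, a source file can declare its own encoding, and a data file can be opened with an explicit one. A minimal sketch (the file name is hypothetical):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% At the top of a Prolog source file:
:- encoding(utf8).

% Opening a data file with an explicit encoding:
?- open('data.txt', read, Stream, [encoding(utf8)]),
   read_term(Stream, Term, []),
   close(Stream).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~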
For additional information and Unicode resources, please visit the
[Unicode Consortium](http://www.unicode.org/) web page.
YAP currently defines and supports the following encodings:
+ `octet`
Default encoding for <em>binary</em> streams. This causes
the stream to be read and written fully untranslated.
+ `ascii` or `US_ASCII`
7-bit encoding in 8-bit bytes. Equivalent to `iso_latin_1`,
but generates errors and warnings on encountering values above
127.
+ `iso_latin_1` or `ISO-8859-1`
8-bit encoding supporting many western languages. This causes
the stream to be read and written fully untranslated.
+ `text`
C-library default locale encoding for text files. Files are read and
written using the C-library functions `mbrtowc()` and
`wcrtomb()`. This may be the same as one of the other locales,
notably it may be the same as `iso_latin_1` for western
languages and `utf8` in a UTF-8 context.
+ `utf8`, `iso_utf8`, or `UTF-8`
Multi-byte encoding of the full Unicode range, compatible with `ascii`.
See above.
+ `unicode_be` or `UCS-2BE`
Unicode Big Endian. Reads input in pairs of bytes, most
significant byte first. Can only represent 16-bit characters.
+ `unicode_le` or `UCS-2LE`
Unicode Little Endian. Reads input in pairs of bytes, least
significant byte first. Can only represent 16-bit characters.
+ `utf16_le` or `UTF-16LE` (experimental)
UTF-16 Little Endian. Reads input in pairs of bytes, least
significant byte first. Can represent the full Unicode.
+ `utf16_be` or `UTF-16BE` (experimental)
UTF-16 Big Endian. Reads input in pairs of bytes, most
significant byte first. Can represent the full Unicode.
+ `utf32_le` or `UTF-32LE` (experimental)
UTF-32 Little Endian. Reads input in groups of four bytes, least
significant byte first. Can represent the full Unicode.
+ `utf32_be` or `UTF-32BE` (experimental)
UTF-32 Big Endian. Reads input in groups of four bytes, most
significant byte first. Can represent the full Unicode.
Note that not all encodings can represent all characters. This implies
that writing text to a stream may cause errors because the stream
cannot represent these characters. The behaviour of a stream on these
errors can be controlled using `open/4` or `set_stream/2` (not
implemented). Initially, the terminal stream writes such characters using
Prolog escape sequences, while other streams generate an I/O exception.
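As a sketch, writing characters that ISO-LATIN-1 cannot represent to a regular file stream is expected to raise such an exception (the file name is hypothetical, and the exact error term and the point at which it is raised may vary):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
?- open('out.txt', write, S, [encoding(iso_latin_1)]),
   write(S, 'αβγ'),
   close(S).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~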
From Stream Encoding, you may have got the impression that
text files are complicated. This section deals with a related topic,
one that often makes life easier for the user, but provides another worry for
the programmer. A *BOM*, or <em>Byte Order Marker</em>, is a technique
for identifying Unicode text files as well as the encoding they
use. Please read the [W3C](https://www.w3.org/International/questions/qa-byte-order-mark.en.php)
page for a detailed explanation of byte-order marks.
BOMs are necessary for multi-byte encodings such as UTF-16 and UTF-32. There is a BOM for UTF-8, but it is rarely used.
The BOM is handled by the open/4 predicate. By default, text files are
probed for a BOM when opened for reading. If a BOM is found, the
encoding is set accordingly and the property `bom(true)` is made
available through stream_property/2. When opening a file for
writing, a BOM can be requested using the option
`bom(true)` with `open/4`. YAP will probe a UTF-8 file for a BOM only if explicitly required to do so. Do notice that YAP writes a BOM by default on UTF-16 (including UCS-2) and
UTF-32; otherwise the default is not to write a BOM. BOMs are not available for ASCII and
ISO-LATIN-1.
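A minimal sketch of both directions; the file name is hypothetical, and it assumes the encoding names listed above are accepted by `open/4` and that the `encoding/1` stream property is reported as in SWI-Prolog.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% Write a UTF-16 little-endian file; a BOM is written by default.
?- open('utf16.txt', write, S, [encoding(utf16_le)]),
   write(S, hello), nl(S),
   close(S).

% Re-open it for reading: the BOM is detected and reported.
?- open('utf16.txt', read, S),
   stream_property(S, bom(BOM)),
   stream_property(S, encoding(Enc)),
   close(S).
BOM = true,
Enc = utf16_le.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~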