Your source code is sent to the interpreter encoded in UTF-8,
and is expected to write output encoded in UTF-8 to STDOUT.
For languages where it matters, your code is run in the en_US locale with a UTF-8 output encoding.
In Unicode-aware languages like Python, this means print("🙂") and print(chr(0x1f642)) both produce the emoji U+1F642 "Slightly Smiling Face" 🙂, which is encoded as f0 9f 99 82 in UTF-8.
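A quick sketch in Python to confirm those bytes:

```python
# U+1F642 and its UTF-8 encoding, as described above.
s = chr(0x1F642)           # "🙂"
encoded = s.encode("utf-8")
print(encoded.hex(" "))    # f0 9f 99 82
```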
In less Unicode-aware languages where strings are byte strings, you might still get away with UTF-8 in string literals. For example, OCaml treats "🙂" as a string of length 4 (four bytes), but Char.chr 0x1f642 is an error.
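Python's bytes type gives a rough analogue of that byte-string view; a small sketch:

```python
# A UTF-8 encoded byte string: 4 bytes, just as OCaml counts them.
b = "🙂".encode("utf-8")
print(len(b))    # 4
print(b[0])      # 240 (0xf0) -- indexing yields raw bytes, not characters
```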
In yet other languages, like brainfuck, you have to print the individual bytes f0 9f 99 82 one by one.
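The same one-byte-at-a-time approach, sketched in Python for clarity (brainfuck itself would set a cell to each value and output it with `.`):

```python
import sys

# Emit the four UTF-8 bytes of U+1F642 individually, as a byte-oriented
# language would: write each raw byte to the binary stdout buffer.
for byte in (0xF0, 0x9F, 0x99, 0x82):
    sys.stdout.buffer.write(bytes([byte]))
sys.stdout.buffer.flush()  # prints 🙂
```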