Python startup fails with a fatal error if a command line argument contains an invalid Unicode character #80064
When an invalid Unicode character is given in argv (the CLI arguments), Python abort()s with a fatal error about a character not in range (ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]). I am wondering if this behaviour should change to replace such characters with U+FFFD REPLACEMENT CHARACTER (like .decode(..., 'replace')), or with something similar or better (see https://docs.python.org/3/library/codecs.html#error-handlers).

The reason is that other applications can handle the invalid character, since it is just data (GDB, for instance, accepts it as an argument for the program being debugged), whereas in Python it becomes a limitation: the script (if specified) never runs. My main motivation is the Debian command-not-found package, which receives the mistyped command as a command-line argument. If that argument contains an invalid Unicode character, the package simply fails rather than saying it couldn't find the (or a similar) command. If this doesn't get changed, it either has to accept this limitation, use another way of passing the command, or be rewritten in something other than Python.

Reproduction script:

```shell
# Requires bash 4.2+
```

The fatal error's traceback starts with:

```
Current thread 0x00007fd212eaf740 (most recent call first):
```

System information:

```shell
$ python3.6 --version
Python 3.6.7
$ uname -a
Linux nopea 4.15.0-39-generic #42-Ubuntu SMP Tue Oct 23 15:48:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
```

GDB backtrace just before throwing the error (note that it's argc=2 since the first argument is a script):

Similar issues:
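The behaviour the report asks about can be sketched with Python's own codec machinery: with the surrogateescape error handler, undecodable argv bytes are preserved losslessly instead of aborting. This is only an illustrative sketch, not the interpreter's actual startup path; the byte values are chosen to match the U+7fffbeba example discussed later in the thread.

```python
# The invalid 6-byte sequence that glibc decodes to the bogus U+7fffbeba.
raw = b"\xfd\xbf\xbf\xbb\xba\xba"

# surrogateescape maps each undecodable byte 0xNN to the lone
# surrogate U+DCNN, so no information is lost...
text = raw.decode("utf-8", "surrogateescape")

# ...and encoding with the same handler recovers the original bytes.
print(text.encode("utf-8", "surrogateescape") == raw)  # round-trips
```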
I'm on 4.15.0-44-generic and I cannot reproduce the crash. I get "python3: can't open file '������': [Errno 2] No such file or directory". Could you try this on a different machine / installation?
Hm, this seems to be due to how the terminal emulator handles those special characters, actually. I can reproduce in another terminal.
I'd say that the terminal is not really relevant here, but rather the locale settings, because it uses wide string functions. Prefixing it with LC_ALL=C produces the same output on my Ubuntu machine as you had. I also get that output when running it in Cygwin (and MSYS2), although setting LC_ALL seems to have no effect there.
In Unix, Python 3.6 decodes the char * command line arguments via mbstowcs. In Linux, I see the following misbehavior of mbstowcs when decoding an overlong UTF-8 sequence:

```python
>>> import ctypes
>>> mbstowcs = ctypes.CDLL(None, use_errno=True).mbstowcs
>>> arg = bytes(x + 128 for x in [1 + 124, 63, 63, 59, 58, 58])
>>> mbstowcs(None, arg, 0)
1
>>> buf = (ctypes.c_int * 2)()
>>> mbstowcs(buf, arg, 2)
1
>>> hex(buf[0])
'0x7fffbeba'
```

This shouldn't be an issue in 3.7, at least not with the default UTF-8 mode configuration. With this mode, Py_DecodeLocale calls _Py_DecodeUTF8Ex using the surrogateescape error handler.
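For contrast, Python's own UTF-8 decoder rejects that same byte sequence outright, which is why the UTF-8 Mode avoids the problem (a small sketch, using the same bytes the ctypes snippet constructs):

```python
# Same construction as in the mbstowcs demo: b'\xfd\xbf\xbf\xbb\xba\xba',
# a 6-byte sequence that would decode to 0x7fffbeba, outside Unicode.
raw = bytes(x + 128 for x in [1 + 124, 63, 63, 59, 58, 58])

# Python's strict UTF-8 codec refuses it, unlike glibc's mbstowcs().
try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print("rejected:", exc.reason)
```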
Pretty sure this is still an issue; I see it on current git master. This seems to work around it: https://p.sipsolutions.net/603927f1537226b3.txt Basically, it seems that mbstowcs() and mbrtowc() on glibc with UTF-8 just blindly decode even invalid UTF-8 to a too-large wchar_t, rather than failing.
A simple test case is something like:

```shell
./python -c 'import sys; print(sys.argv[1].encode(sys.getfilesystemencoding(), "surrogateescape"))' "$(echo -ne '\xfa\xbd\x83\x96\x80')"
```

You'd probably expect it to print b'\xfa\xbd\x83\x96\x80', i.e. the same bytes that were passed in, but currently that fails.
In fact that python one-liner works with just about everything else that you can throw at it, just not something that "looks like UTF-8 but isn't". And of course adding LC_CTYPE=ascii or something like that fixes it, as you'd expect. Then the "surrogateescape" works fine, since mbstowcs() won't try to decode it as UTF-8.
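That LC_CTYPE=ascii observation can be sketched at the Python level (only a sketch: during startup the actual decoding happens in C, but the codec behaviour is the same):

```python
# The test-case bytes from the one-liner above.
raw = b"\xfa\xbd\x83\x96\x80"

# Under an ASCII locale every byte >= 0x80 is undecodable, so
# surrogateescape escapes all of them instead of misdecoding anything...
s = raw.decode("ascii", "surrogateescape")

# ...and the original bytes survive the round trip.
print(s.encode("ascii", "surrogateescape") == raw)
```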
And wrt. _Py_DecodeUTF8Ex(): it doesn't seem to help. But that's probably because I'm not on __ANDROID__ nor __APPLE__, and then, regardless of whether current_locale is non-zero, we end up in decode_current_locale(), where the impedance mismatch happens. Setting PYTHONUTF8=1 in the environment works too; in that case we do get into _Py_DecodeUTF8Ex().
Like I said above, it could be argued that the bug is in glibc, and then https://p.sipsolutions.net/6a4e9fce82dbbfa0.txt could be used as a simple LD_PRELOAD wrapper to work around this, just to illustrate the problem from that side. Arguably, that makes glibc in violation of RFC 3629, since it says:

> [...] In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16 [...]
> [...] Implementations of the decoding algorithm above MUST protect against [...]

Here's a simple test program:
I've also filed https://sourceware.org/bugzilla/show_bug.cgi?id=26034 for glibc, because that's where the issue really seems to be. But perhaps Python should be forgiving of glibc errors here.
I wrote PR 24843 to fix this issue. With this fix, os.fsencode(sys.argv[1]) returns the original byte sequence as expected.

I dislike the replace error handler since it loses information. The PEP 383 surrogateescape error handler exists to prevent losing information.

The root issue is that Py_DecodeLocale() creates wide characters outside Python's valid Unicode range, [U+0000; U+10ffff]. On Linux, Py_DecodeLocale() usually calls mbstowcs() of the C library. The problem is that the glibc UTF-8 decoder doesn't respect RFC 3629: it doesn't reject characters outside the [U+0000; U+10ffff] range. The following issue requests changing the glibc UTF-8 codec to respect RFC 3629, but it has been open since 2006:

Even if glibc changes, Python should behave the same on old glibc versions. My PR modifies Py_DecodeLocale() to check whether there are characters outside the [U+0000; U+10ffff] range, and to use the surrogateescape error handler in that case.
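A rough Python model of that strategy (an illustrative sketch only: the real fix lives in C inside Py_DecodeLocale(), and Python's str cannot even hold code points above U+10FFFF, so here the range check is implicit in the strict decode):

```python
def decode_locale_checked(raw: bytes, encoding: str = "utf-8") -> str:
    # Sketch: try a strict decode first (Python's codec already rejects
    # byte sequences that would yield code points outside
    # U+0000..U+10FFFF, unlike glibc's mbstowcs()); on failure, fall
    # back to surrogateescape so the original bytes stay recoverable.
    try:
        return raw.decode(encoding)
    except UnicodeDecodeError:
        return raw.decode(encoding, "surrogateescape")

# The bogus 6-byte sequence from earlier in the thread survives a round trip:
arg = decode_locale_checked(b"\xfd\xbf\xbf\xbb\xba\xba")
print(arg.encode("utf-8", "surrogateescape") == b"\xfd\xbf\xbf\xbb\xba\xba")
```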
That issue is different: it is about the Py_Main() function, called explicitly when Python is embedded in an application; there, Python fails if the command line contains a *wide character* outside the [U+0000; U+10ffff] range. This issue is about Python on Linux, in which case Py_BytesMain() is used to decode *bytes* from the command line.
Right, explicitly enabling the Python UTF-8 Mode works around the issue:

```
$ python3.10 -c 'import sys; print(ascii(sys.argv))' $'\U7fffbeba'
Fatal Python error: init_interp_main: failed to update the Python config
Python runtime state: core initialized
ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]
Current thread 0x00007effa1891740 (most recent call first):

$ python3.10 -X utf8 -c 'import sys; print(ascii(sys.argv))' $'\U7fffbeba'
['-c', '\udcfd\udcbf\udcbf\udcbb\udcba\udcba']
```
When the Python UTF-8 Mode is used, or on macOS or Android, Python uses its own UTF-8 decoder, which respects RFC 3629: it rejects characters outside [U+0000; U+10ffff]. Otherwise, Python relies on the libc mbstowcs() decoder, which may or may not create characters outside the [U+0000; U+10ffff] range. I understand that this issue is mostly about the UTF-8 encoding; I don't think other encodings can produce characters greater than the U+10ffff code point.
Return a classical int, rather than size_t. The size_t type was kept from copied/pasted code related to mbstowcs().
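For illustration, the validity check behind a helper with that name can be modeled in Python (an assumption about its logic, inferred from the error message in this thread; the actual implementation is C, where the check runs on raw wchar_t values):

```python
def is_valid_wide_char(ch: int) -> bool:
    # A code point produced by the locale decoder must lie inside
    # Unicode's range; glibc's mbstowcs() can violate this for
    # invalid UTF-8 input, as seen earlier in the thread.
    return 0 <= ch <= 0x10FFFF

print(is_valid_wide_char(0x7FFFBEBA))  # the bogus mbstowcs() output
```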