The Question :
311 people think this question is useful
I have a socket server that is supposed to receive valid UTF-8 characters from clients.
The problem is that some clients (mainly hackers) send all the wrong kinds of data over it.
I can easily distinguish the genuine clients, but I am logging all the data sent to files so I can analyze it later.
Sometimes I get characters like œ that cause a UnicodeDecodeError.
I need to be able to make the string UTF-8 with or without those characters.
For my particular case the socket service was an MTA and thus I only expect to receive ASCII commands such as:
MAIL FROM: <firstname.lastname@example.org>
I was logging all of this in JSON.
Then some folks out there without good intentions decided to send all kinds of junk.
That is why, for my specific case, it is perfectly OK to strip the non-ASCII characters.
The Answer 1
354 people think this answer is useful
str = unicode(str, errors='replace')
or
str = unicode(str, errors='ignore')
Note: the 'ignore' variant strips out the characters in question, returning the string without them.
For me this is ideal case since I’m using it as protection against non-ASCII input which is not allowed by my application.
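The answer above uses Python 2's unicode(); in Python 3 the same idea is spelled bytes.decode. A minimal sketch (the sample bytes are made up):

```python
# Raw bytes from a client, containing 0x9c, which is not valid UTF-8.
raw = b"MAIL FROM: <bob>\x9c"

# 'replace' substitutes U+FFFD (the replacement character) for bad bytes.
replaced = raw.decode("utf-8", errors="replace")

# 'ignore' drops the bad bytes entirely.
ignored = raw.decode("utf-8", errors="ignore")

print(repr(replaced))  # 'MAIL FROM: <bob>\ufffd'
print(repr(ignored))   # 'MAIL FROM: <bob>'
```

'replace' keeps evidence that something was stripped, which can matter when logging hostile input; 'ignore' silently discards it.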
Alternatively: use the open function from the codecs module to read in the file:

import codecs

with codecs.open(file_name, 'r', encoding='utf-8',
                 errors='ignore') as fdata:
    data = fdata.read()
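On Python 3 the built-in open() accepts the same encoding and errors arguments directly, so codecs.open is not needed. A sketch (it writes a throwaway temp file just to have something to read):

```python
import os
import tempfile

# Create a temp file containing a byte (0x9c) that is not valid UTF-8.
fd, file_name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello \x9c world\n")

# Built-in open() takes encoding and errors, same as codecs.open.
with open(file_name, "r", encoding="utf-8", errors="ignore") as fdata:
    text = fdata.read()

print(repr(text))  # 'hello  world\n' -- the bad byte was dropped
os.remove(file_name)
```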
The Answer 2
105 people think this answer is useful
Changing the engine from C to Python did the trick for me.
Engine is C:
pd.read_csv(gdp_path, sep='\t', engine='c')
Error: 'utf-8' codec can't decode byte 0x92 in position 18: invalid start byte
Engine is Python:
pd.read_csv(gdp_path, sep='\t', engine='python')
No errors for me.
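A note on that particular byte: 0x92 is the right single quotation mark in Windows-1252, a common stray in CSV files exported from Windows tools. If the file really is Windows-1252, naming the encoding is an alternative to switching parser engines (gdp_path is the answer's placeholder, so it is commented out here):

```python
# 0x92 decodes cleanly as Windows-1252: it is U+2019, the right single quote.
ch = b"\x92".decode("cp1252")
print(repr(ch))  # '\u2019'

# If the file is actually Windows-1252, this avoids the error too:
# pd.read_csv(gdp_path, sep='\t', encoding='cp1252')
```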
The Answer 3
66 people think this answer is useful
This type of issue crops up for me now that I’ve moved to Python 3. I had no idea Python 2 was simply steamrolling over any issues with file encoding.
I found this nice explanation of the differences and how to find a solution after none of the above worked for me.
In short, to make Python 3 behave as similarly as possible to Python 2 use:
with open(filename, encoding="latin-1") as datafile:
    data = datafile.read()  # work on datafile here
However, read the article, there is no one size fits all solution.
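One reason latin-1 "works" here: it assigns a character to every one of the 256 byte values, so decoding with it can never raise UnicodeDecodeError. That does not mean the result is correct if the file is in some other encoding; a small sketch of both the guarantee and the round trip:

```python
# latin-1 maps every byte 0-255 to a code point, so decoding never fails.
every_byte = bytes(range(256))
text = every_byte.decode("latin-1")
assert len(text) == 256  # one character per byte, no errors raised

# The round trip is lossless, so no data is destroyed...
assert text.encode("latin-1") == every_byte
# ...but if the file was really UTF-8 or cp1252, the characters may be wrong.
```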
The Answer 4
32 people think this answer is useful
>>> print '\x9c'.decode('cp1252')
œ
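That snippet is Python 2 syntax; the Python 3 equivalent decodes a bytes literal. 0x9c is the ligature œ (the very character from the question) in Windows-1252:

```python
# Python 3: decode the byte as Windows-1252 rather than UTF-8.
ch = b"\x9c".decode("cp1252")
print(ch)  # œ
```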
The Answer 5
27 people think this answer is useful
I had the same problem with
UnicodeDecodeError and I solved it with this line.
I don’t know if it is the best way, but it worked for me.
str = str.decode('unicode_escape').encode('utf-8')
The Answer 6
20 people think this answer is useful
First, use get_encoding_type to detect the file’s encoding:

from chardet import detect

# get file encoding type
def get_encoding_type(file):
    with open(file, 'rb') as f:
        rawdata = f.read()
    return detect(rawdata)['encoding']

Second, open the file with that encoding:

with open(current_file, 'r', encoding=get_encoding_type(current_file),
          errors='ignore') as f:
    data = f.read()
The Answer 7
5 people think this answer is useful
I solved this problem just by adding an explicit encoding:
df = pd.read_csv(fileName,encoding='latin1')
The Answer 8
4 people think this answer is useful
Just in case someone has the same problem: I am using vim with YouCompleteMe, and ycmd failed to start with this error message. What I did was run
export LC_CTYPE="en_US.UTF-8", and the problem was gone.
The Answer 9
3 people think this answer is useful
What can you do if you need to make a change to a file, but don’t know the file’s encoding? If you know the encoding is ASCII-compatible and only want to examine or modify the ASCII parts, you can open the file with the surrogateescape error handler:
with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
data = f.read()
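The point of surrogateescape is that undecodable bytes are smuggled through as lone surrogate code points, so writing the data back with the same error handler reproduces the original bytes exactly. A minimal round-trip sketch:

```python
# 0x92 is not valid ASCII; surrogateescape maps it to the lone
# surrogate U+DC92 instead of raising UnicodeDecodeError.
raw = b"ok \x92 bytes"
text = raw.decode("ascii", errors="surrogateescape")
print(repr(text))  # 'ok \udc92 bytes'

# Re-encoding with the same handler restores the original bytes.
assert text.encode("ascii", errors="surrogateescape") == raw
```

Note that such a string cannot be safely printed or encoded as strict UTF-8; it is meant for pass-through, not display.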
The Answer 10
2 people think this answer is useful
I resolved this problem using this code:
df = pd.read_csv(gdp_path, engine='python')