
I've asked this in a round-about manner before here on Stack Overflow, and want to get it right this time. How do I convert ANSI (Codepage 1252) to UTF-8, while preserving the special characters? (I am aware that UTF-8 supports a larger character set than ANSI, but it is okay if I can preserve all the characters that ANSI supports and substitute the rest with a ? or something.)

Why I Want To Convert ANSI → UTF-8

I am basically writing a program that splits vCard files (VCF) into individual files, each containing a single contact. I've noticed that Nokia and Sony Ericsson phones save the backup VCF file in UTF-8 (without BOM), but Android saves it in ANSI (1252). And God knows what formats the other phones save them in!

So my questions are

  1. Isn't there an industry standard for vCard files' character encoding?
  2. Which is easier for solving my problem? Converting ANSI to UTF-8 (and/or the other way round), or trying to detect which encoding the input file has and notifying the user about it?

tl;dr: I need to know how to convert the character encoding from ANSI to UTF-8 (or vice versa) while preserving all special characters.

GPX

7 Answers

14

You shouldn't convert from one encoding to the other. You have to read each file using the encoding that it was created with, or you will lose information.

Once you read the file using the correct encoding you have the content as a Unicode string, from there you can save it using any encoding you like.

If you need to detect the encoding, you can read the file as bytes and then look for character codes that are specific for either encoding. If the file contains no special characters, either encoding will work as the characters 32..127 are the same for both encodings.
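A minimal sketch of that idea (the helper name is my own; a strict UTF-8 decode stands in for scanning byte values by hand, since text that is valid Windows-1252 but not valid UTF-8 will fail a strict decode, while plain ASCII passes either way):

```csharp
using System.IO;
using System.Text;

static class EncodingGuesser
{
    // Hypothetical helper: try a strict UTF-8 decode first; if the bytes are
    // not valid UTF-8, fall back to Windows-1252, which .NET decodes without
    // error for every byte value. (On .NET Core, GetEncoding(1252) requires
    // the code-pages encoding provider to be registered.)
    public static string ReadFileGuessingEncoding(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);
        var strictUtf8 = new UTF8Encoding(
            encoderShouldEmitUTF8Identifier: false,
            throwOnInvalidBytes: true);
        try
        {
            return strictUtf8.GetString(bytes);
        }
        catch (DecoderFallbackException)
        {
            return Encoding.GetEncoding(1252).GetString(bytes);
        }
    }
}
```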

Guffa
9

VCF is encoded in utf-8 as demanded by the spec in chapter 3.4. You need to take this seriously, the format would be utterly useless if that wasn't cast in stone. If you are seeing some Android app mangling accented characters then work from the assumption that this is a bug in that app. Or more likely, that it got bad info from somewhere else. Your attempt to correct the encoding would then cause more problems because your version of the card will never match the original.

You convert from 1252 to utf-8 with Encoding.GetEncoding(1252).GetString(), passing in a byte[]. Do not ever try to write code that reads a string and whacks it into a byte[] so you can use the conversion method, that just makes the encoding problems a lot worse. In other words, you'd need to read the file with FileStream, not StreamReader. But again, avoid fixing other people's problems.
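A minimal sketch of that recipe (the class, method, and file names here are my own): read the raw bytes with no StreamReader in sight, decode them as Windows-1252 into a Unicode string, then write that string back out as UTF-8.

```csharp
using System.IO;
using System.Text;

static class AnsiToUtf8
{
    // Bytes in, Windows-1252 decode, UTF-8 encode out. No string is ever
    // mangled into a byte[] along the way.
    public static void Convert(string inputPath, string outputPath)
    {
        byte[] raw = File.ReadAllBytes(inputPath);
        string text = Encoding.GetEncoding(1252).GetString(raw);
        File.WriteAllBytes(outputPath, Encoding.UTF8.GetBytes(text));
    }
}
```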

Hans Passant
  • Thanks for pointing out the standard. And when I said Android saves the contacts in ANSI, I didn't mean any 3rd-party app. Android's own 'Contacts' feature exports VCFs in ANSI! What do we do now? – GPX Dec 09 '10 at 10:37
  • In .NET Core, encoding 1252 doesn't exist by default; it needs to be installed. Ref: https://stackoverflow.com/questions/37870084/net-core-doesnt-know-about-windows-1252-how-to-fix – Wagner Pereira Mar 12 '19 at 14:23
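Following up on that comment: on .NET Core / .NET 5+, the legacy code-page encodings live in the System.Text.Encoding.CodePages NuGet package and must be registered once, e.g. at application startup, before Encoding.GetEncoding(1252) will resolve:

```csharp
using System.Text;

// One-time setup; requires the System.Text.Encoding.CodePages package.
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

Encoding ansi = Encoding.GetEncoding(1252); // now succeeds instead of throwing
```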
9

This is what I use in C# (I've been using it to convert from Windows-1252 to UTF8)

    public static String readFileAsUtf8(string fileName)
    {
        Encoding encoding = Encoding.Default;
        String original = String.Empty;

        // Read with the system default (ANSI) encoding; CurrentEncoding reflects
        // what the reader actually used (e.g. UTF-8 if a BOM was detected).
        using (StreamReader sr = new StreamReader(fileName, Encoding.Default))
        {
            original = sr.ReadToEnd();
            encoding = sr.CurrentEncoding;
        }

        if (encoding == Encoding.UTF8)
            return original;

        // Round-trip the text through its original encoding into UTF-8.
        byte[] encBytes = encoding.GetBytes(original);
        byte[] utf8Bytes = Encoding.Convert(encoding, Encoding.UTF8, encBytes);
        return Encoding.UTF8.GetString(utf8Bytes);
    }
djunod
  • Thanks! MS Word's Save as Filtered Html was saving using Windows-1252 and that was causing problems with smart quotes in a tool I was writing. Your code solved that issue. – Rhyous Dec 30 '19 at 22:44
6

I do it this way:

    private static void ConvertAnsiToUTF8(string inputFilePath, string outputFilePath)
    {
        // Encoding.Default is the system ANSI code page (e.g. Windows-1252 on
        // Western-locale Windows); read with it, then write back out as UTF-8.
        string fileContent = File.ReadAllText(inputFilePath, Encoding.Default);
        File.WriteAllText(outputFilePath, fileContent, Encoding.UTF8);
    }
3

I found this question while working to process a large collection of ancient text files into well-formatted PDFs. None of the files have a BOM, and the oldest of the files contain Codepage 1252 code points that cause incorrect decoding to UTF-8. This happens only some of the time; UTF-8 works the majority of the time. Also, the latest of the text data DOES contain UTF-8 code points, so it's a mixed bag.

So, I also set out "to detect which encoding the input file has" and after reading How to detect the character encoding of a text file? and How to determine the encoding of text? arrived at the conclusion that this would be difficult at best.

BUT, I found The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets in the comments, read it, and found this gem:

UTF-8 has the neat side effect that English text looks exactly the same in UTF-8 as it did in ASCII, so Americans don’t even notice anything wrong. Only the rest of the world has to jump through hoops. Specifically, Hello, which was U+0048 U+0065 U+006C U+006C U+006F, will be stored as 48 65 6C 6C 6F, which, behold! is the same as it was stored in ASCII, and ANSI, and every OEM character set on the planet.

The entire article is short and well worth the read.
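That claim is easy to check: ASCII text encodes to identical bytes under ASCII and UTF-8.

```csharp
using System;
using System.Linq;
using System.Text;

byte[] ascii = Encoding.ASCII.GetBytes("Hello");
byte[] utf8 = Encoding.UTF8.GetBytes("Hello");

// Both are 48 65 6C 6C 6F, exactly as the quote says.
Console.WriteLine(BitConverter.ToString(utf8)); // 48-65-6C-6C-6F
Console.WriteLine(ascii.SequenceEqual(utf8));   // True
```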

So, I solved my problem with the following code. Since only a small amount of my text data contains difficult character code points, I don't mind the performance overhead of the exception handling, especially since this only had to run once. Perhaps there are more clever ways of avoiding the try/catch but I did not bother with devising one.

    public static string ReadAllTextFromFile(string file)
    {
        const int WindowsCodepage1252 = 1252;

        string text;

        try
        {
            var utf8Encoding = Encoding.GetEncoding("UTF-8", EncoderFallback.ExceptionFallback, DecoderFallback.ExceptionFallback); 
            text = File.ReadAllText(file, utf8Encoding);
        }
        catch (DecoderFallbackException)
        {
            // The text is not entirely valid UTF-8: it contains Codepage 1252
            // bytes that can't be decoded as UTF-8, so re-read the file as 1252.
            var codepage1252Encoding = Encoding.GetEncoding(WindowsCodepage1252, EncoderFallback.ExceptionFallback, DecoderFallback.ExceptionFallback);
            text = File.ReadAllText(file, codepage1252Encoding);
        }

        return text;
    }

It's also worth noting that the StreamReader class has constructors that take a specific Encoding object, and as I have shown you can adjust the EncoderFallback/DecoderFallback behavior to suit your needs. So if you need a StreamReader or StreamWriter for finer-grained work, this approach can still be used.
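For example, a StreamReader built around the same strict fallbacks fails fast on the first invalid byte instead of silently substituting U+FFFD (the temp-file demo below is my own):

```csharp
using System;
using System.IO;
using System.Text;

// Strict UTF-8: throw on invalid bytes rather than substitute U+FFFD.
var strictUtf8 = Encoding.GetEncoding(
    "UTF-8",
    EncoderFallback.ExceptionFallback,
    DecoderFallback.ExceptionFallback);

// 0xFF can never occur in valid UTF-8, so this file is not valid UTF-8.
string path = Path.Combine(Path.GetTempPath(), "bad-utf8-demo.txt");
File.WriteAllBytes(path, new byte[] { 0x48, 0x69, 0xFF });

try
{
    using (var reader = new StreamReader(path, strictUtf8))
    {
        reader.ReadToEnd();
    }
}
catch (DecoderFallbackException)
{
    Console.WriteLine("not valid UTF-8"); // this branch runs
}
```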

Sam Mackrill
MJB
0

I use this to convert the file encoding to UTF-8:

    public static void ConvertFileEncoding(String sourcePath, String destPath)
    {
        // If the destination's parent doesn't exist, create it.
        String parent = Path.GetDirectoryName(Path.GetFullPath(destPath));
        if (!Directory.Exists(parent))
        {
            Directory.CreateDirectory(parent);
        }

        // Convert via a temp file so a failure can't corrupt the destination.
        String tempName = null;
        try
        {
            tempName = Path.GetTempFileName();
            // Note: this StreamReader autodetects BOMs and otherwise assumes
            // UTF-8; pass an explicit Encoding (e.g. Encoding.Default for ANSI
            // sources) if your input is in another encoding.
            using (StreamReader sr = new StreamReader(sourcePath))
            {
                using (StreamWriter sw = new StreamWriter(tempName, false, Encoding.UTF8))
                {
                    int charsRead;
                    char[] buffer = new char[128 * 1024];
                    while ((charsRead = sr.ReadBlock(buffer, 0, buffer.Length)) > 0)
                    {
                        sw.Write(buffer, 0, charsRead);
                    }
                }
            }
            File.Delete(destPath);
            File.Move(tempName, destPath);
        }
        finally
        {
            if (tempName != null)
            {
                File.Delete(tempName);
            }
        }
    }
Prateek Gupta
-2
  1. Isn't there an industry standard for vCard files' character encoding?
  2. Which is easier for solving my problem? Converting ANSI to UTF8 (and/or the other way round) or trying to detect which encoding the input file has and notifying the user about it?

How I solved this: I had a vCard file (*.vcf) with 200 contacts in one file, in Russian. I opened it with the vCardOrganizer 2.1 program and used Split to divide it into 200 files, and what I saw was contacts with messy symbols; the only thing I could read was the numbers.

Steps (be patient when you do this; it sometimes takes time): Open the vCard file (my file size was 3 MB) with Notepad. Then go to the menu, File → Save As... In the dialog that opens, choose a file name (don't forget to add .vcf) and an encoding, ANSI or UTF-8, and finally click Save. I converted filename.vcf (UTF-8) to filename.vcf (ANSI), nothing was lost, and the Russian text is perfectly readable. If you have questions, write: yoshidakatana@gmail.com

Good Luck !!!

  • The owner of the question is developing (programming) an application. He is not planning to use a 3rd-party application, and most probably is not going to do this just a single time. The question is about doing this in C#. Please read the question properly – AaA May 02 '17 at 09:55