
I am trying to write a program in C# that will split a vCard (VCF) file with multiple contacts into individual files for each contact. I understand that the vCard needs to be saved as ANSI (1252) for most mobile phones to read them.

However, if I open a VCF file using StreamReader and then write it back with StreamWriter (setting 1252 as the Encoding format), all special characters like å, æ and ø are getting written as ?. Surely ANSI (1252) would support these characters. How do I fix this?

Edit: Here's the piece of code I use to read and write the file.

private string fullFileContents;

private void ReadFile()
{
   StreamReader sreader = new StreamReader(sourceVCFFile);
   fullFileContents = sreader.ReadToEnd();
}

private void WriteFile()
{
   StreamWriter swriter = new StreamWriter(sourceVCFFile, false, Encoding.GetEncoding(1252));
   swriter.Write(fullFileContents);
}

1 Answer


You are correct in assuming that Windows-1252 supports the special characters you listed above (for a full list see the Wikipedia entry).

using (var writer = new StreamWriter(destination, true, Encoding.GetEncoding(1252)))
{
    writer.WriteLine(source);
}

In my test app, the code above produced this result:

Look at the cool letters I can make: å, æ, and ø!

No question marks to be found. Are you setting the encoding when you're reading the file in with StreamReader?
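For comparison, here is a minimal, self-contained round trip that sets the encoding explicitly on both ends. The file names are placeholders, and I'm assuming the input really is UTF-8; this is a sketch, not a drop-in implementation:

```csharp
using System.IO;
using System.Text;

class VcfRecode
{
    public static void Main()
    {
        // Hypothetical paths -- substitute your own.
        string sourcePath = "contacts.vcf";
        string destPath = "contacts-1252.vcf";

        // Read with the encoding the file was actually saved in (UTF-8 here).
        string contents;
        using (var reader = new StreamReader(sourcePath, Encoding.UTF8))
        {
            contents = reader.ReadToEnd();
        }

        // Write it back out as Windows-1252 so the phone can read it.
        using (var writer = new StreamWriter(destPath, false, Encoding.GetEncoding(1252)))
        {
            writer.Write(contents);
        }
    }
}
```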

EDIT: You should just be able to use Encoding.Convert to convert the UTF-8 VCF file into Windows-1252. No need for Regex.Replace. Here is how I would do it:

// You might want to think of a better method name.
public string ConvertUTF8ToWin1252(string source)
{
    Encoding utf8 = new UTF8Encoding();
    Encoding win1252 = Encoding.GetEncoding(1252);

    byte[] input = source.ToUTF8ByteArray();  // Note the use of my extension method
    byte[] output = Encoding.Convert(utf8, win1252, input);

    return win1252.GetString(output);
}

And here is how my extension method looks:

public static class StringHelper
{
    // It should be noted that this method is expecting UTF-8 input only,
    // so you probably should give it a more fitting name.
    public static byte[] ToUTF8ByteArray(this string str)
    {
        Encoding encoding = new UTF8Encoding();
        return encoding.GetBytes(str);
    }
}

Also, you'll probably want to wrap the reader and writer in `using` statements in your ReadFile and WriteFile methods so the streams are closed properly.
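To see Encoding.Convert end to end, here is a hedged sketch that works on the raw bytes directly, skipping the intermediate string. The file names are made up; note that any character with no Windows-1252 equivalent is replaced with '?' by the default encoder fallback:

```csharp
using System.IO;
using System.Text;

class RecodeBytes
{
    public static void Main()
    {
        // Hypothetical file names; adjust to your own paths.
        byte[] utf8Bytes = File.ReadAllBytes("contacts.vcf");

        // Re-encode the bytes from UTF-8 to Windows-1252.
        byte[] win1252Bytes = Encoding.Convert(
            Encoding.UTF8, Encoding.GetEncoding(1252), utf8Bytes);

        File.WriteAllBytes("contacts-1252.vcf", win1252Bytes);
    }
}
```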

  • I think the key to the OP's problem is your last question: make sure that the `StreamReader` that reads the VCF has the 1252 encoding set. – Jim Mischel Dec 04 '10 at 05:41
  • I am not setting the encoding when reading the file using `StreamReader`. And I am pretty much using the same piece of code as your sample. But the input VCF file is in UTF-8. For some reason, Sony Ericsson's "Backup to MS" feature saves the VCF file in UTF-8! – GPX Dec 04 '10 at 06:28
  • @GPX: See my updated answer, I think it should solve your problem. – Kredns Dec 04 '10 at 07:31
  • @Lucas: Thanks for the reply! I've added the code that I'm using. Now to use yours, do I do a `Regex.Replace()`? And also, should I hardcode the byte array for each special character? – GPX Dec 04 '10 at 07:54
  • @Lucas: What would I do if the input VCF file is not UTF-8!? – GPX Dec 04 '10 at 08:43
  • @Lucas: Works like a charm. Thanks a ton! – GPX Dec 04 '10 at 09:21
  • @Lucas: **UPDATE**: How do I handle VCF files that are in ANSI, then? Looks like there's no proper way to detect ANSI encoding! – GPX Dec 04 '10 at 14:34
  • @Lucas: Also, your suggested method properly recodes a UTF-8 stream with special characters into ANSI. But if it is a UTF-8 stream WITHOUT any special characters, then the result is also a UTF-8 stream! – GPX Dec 04 '10 at 14:49
  • @GPX: I'm not 100% sure what you mean by your last comment, but if the VCF file is in ANSI, why should there be a problem? – Kredns Dec 05 '10 at 03:16
  • @GPX: It should also be noted that you should only call my function once you know the input is in UTF-8 format. So you will need to put the proper checks in place before you call my method. – Kredns Dec 05 '10 at 03:18
  • @Lucas: I've got everything so wrong. I used the inbuilt functions to back up contacts on both SE and Nokia phones, and guess what, both are being saved in UTF-8! I feel so terrible I missed it, after all these questions! Now if I just open a VCF file using StreamReader in UTF-8 mode and then save it again using StreamWriter in UTF-8 mode, the file is saved with special characters preserved. However, if I open the file using Notepad2, it shows "UTF-8 with Signature" as the encoding. Am I doing something wrong? – GPX Dec 05 '10 at 03:24
  • @GPX: Wikipedia states that the BOM ["may cause interoperability problems with existing software that could otherwise handle UTF-8"](http://en.wikipedia.org/wiki/UTF-8#Byte_order_mark). It then goes on to give several examples of the problems it could cause. So basically **UTF-8 with signature** just means **with BOM added**. – Kredns Dec 05 '10 at 03:36
  • @GPX: Also don't feel bad, character sets are a complex subject. It just takes time and practice. – Kredns Dec 05 '10 at 03:38
  • @Lucas: So how do I save the file without adding BOM? – GPX Dec 05 '10 at 03:39
  • @GPX: Notepad2 may be adding it just by opening it. If you have a HEX editor/viewer handy you might want to look at the text file right after running your program. If the BOM is in fact being added by .NET then you could always write code that checks to see if the first three bytes are `0xEF, 0xBB, 0xBF` and if so remove them. – Kredns Dec 05 '10 at 03:43
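Following up on the BOM discussion above: rather than stripping the three signature bytes after the fact, StreamWriter can be told not to emit them in the first place by constructing the UTF8Encoding yourself. A minimal sketch, with a placeholder file name and made-up vCard content:

```csharp
using System.IO;
using System.Text;

class WriteUtf8NoBom
{
    public static void Main()
    {
        // Hypothetical sample content.
        string contents = "BEGIN:VCARD\r\nFN:Åse Ørn\r\nEND:VCARD\r\n";

        // UTF8Encoding(false) means "do not emit the UTF-8 identifier (BOM)",
        // unlike Encoding.UTF8, which writes the signature through StreamWriter.
        using (var writer = new StreamWriter("contacts.vcf", false, new UTF8Encoding(false)))
        {
            writer.Write(contents);
        }
    }
}
```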