Fri Feb 8 08:55:55 EST 2008


The Microsoft.VisualBasic assembly grew by 7. The problem is that the
source files for these tests are encoded in UTF-8 and contain Arabic,
Japanese, and other international characters.

<rant>
As mcs no longer defaults to the encoding set in the LANG environment
variable (mine says LANG=en_US.UTF-8), one edits sources with, say,
gedit, as I do, sees every accented letter or other international
character correctly represented in the source, and then mcs compiles
them all wrong.

I regret that decision and could not find a good rationale, beyond
easing source portability for VS.NET users.

<trivia>
Even though one can change the default encoding used to save source
files to UTF-8 (or any other encoding) in VS.NET, most users don't
even know about that, and those who do are afraid to change it,
because it has to be done before creating any file in the project:
changing encodings after creation confuses SourceSafe (their CVS/SVN
counterpart).
</trivia>
</rant>
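
To make that concrete, here is a rough sketch (the file name and the
literal are invented just for illustration) of the kind of source that
trips this up when it is saved as UTF-8:

$ cat > Ola.cs <<'EOF'
// Saved as UTF-8; the string literal contains accented characters
class Ola {
    static void Main() {
        System.Console.WriteLine("Olá, coração");
    }
}
EOF
$ mcs Ola.cs && mono Ola.exe
# without a codepage hint, the literal may come out mangled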

Well, the remedy is to use the new -codepage:x command-line option for mcs.
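
With the sketch above, that amounts to (assuming the file really is
saved as UTF-8):

$ mcs -codepage:utf8 Ola.cs && mono Ola.exe
# the literal now survives compilation intact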

After digging my way through the build system, I added this line to the Makefile:

TEST_MCS_FLAGS = -codepage:utf8

and also adjusted this one:

LIB_MCS_FLAGS = /r:$(corlib) /r:System.dll /r:System.Windows.Forms.dll @Microsoft.VisualBasic.dll.resources -codepage:utf8

This email is just an alert to the many developers who commit UTF-8
encoded sources and may slip some troubling non-ASCII character into
them, only to find that the results of compiling and executing that
code aren't the expected ones. Please review your makefiles.
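
If you want a quick-and-dirty check for class library Makefiles that
still don't pass the flag, something like this should do (just a
sketch; the path assumes the usual mcs/class layout of the tree):

$ grep -L -- '-codepage' mcs/class/*/Makefile
# lists the Makefiles that never mention -codepage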

Sorry for the ranting,
Have fun,

-- 
Rafael "Monoman" Teixeira
---------------------------------------
Just the 'crazy' me in a sane world, or would it be the reverse? I dunno...


