[Mono-dev] Keep-alive connection with Remoting?

Ympostor Ympostor at clix.pt
Thu Aug 10 17:30:27 EDT 2006

I have misplaced my last message on the thread, so I will re-send it
with the correct quotes (sorry) and include some more info:

Rafael Teixeira wrote:
> inline
>> - If I had to deal with scalability issues, how could I use a
>> non-long-living service without losing the connection with the clients?
> It all depends on the frequency with which your clients make requests
> to the server; if they space them by tenths of a second, you probably
> can easily pay the price of reconnecting each time.
>> I am thinking of ordering them to disconnect and reconnect again, but
>> wouldn't this consume too many resources and processor cycles (and time
>> in which the server is not available for incoming connections) if we are
>> dealing with ~1000 clients...?
> From my experience, disconnecting and reconnecting means that ~1000
> clients can be served by a Pentium 3 recycling just some 100-200
> connections (TCP/IP open sockets), with acceptable performance, except
> if all of them want to download/upload megabytes of data at the same
> time.
> I've experienced (on Windows 2003 Server running IIS 6, to be clear)
> good performance (in truth the bottleneck was the database) recycling
> 380 connections on dual Xeons for some 30000 simultaneous clients.
> Trying to keep 30000 open TCP connections is something you simply
> can't do with affordable hardware, no matter the framework/language
> you use.

Very interesting numbers; but I think I am considering a "clustered"
solution similar to the one proposed by Brian.

Anyway, I think I will give up on the Reachability library; it seems
very unstable, and I have found that it's much slower than the original
solution proposed by Ingo Rammer. So I am testing the latter: I have
specified the same clientProviders/serverProviders parameters so as to
make it work with FW2.0, and now I get a weird behaviour: if I try the
demo with client and server on the local machine, it works perfectly,
but if I place the client on another machine (changing the URL of the
server in the config file, of course), I get:

Called main thread 'MainThread'
---- Testing sync calls / SAO ----
Registered connection #0 as Count: 1
Registered connection #0 as bc305056-1a3c-4a4f-9d9a-5889713d8325. Count: 2
Got sync result: Testing
---- Testing sync calls / CAO ----
Closing connection #0 to
Unregistered connection #0 as bc305056-1a3c-4a4f-9d9a-5889713d8325. Count: 1
Unregistered connection #0 as . Count: 0

Unhandled Exception:
System.Runtime.Serialization.SerializationException: Binary
  stream '0' does not contain a valid BinaryHeader. Possible causes are
invalid stream or object version change between serialization and
deserialization.

Server stack trace:
    at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
    at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
    at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
    at System.Runtime.Remoting.Channels.CoreChannel.DeserializeBinaryResponseMessage(Stream inputStream, IMethodCallMessage reqMsg, Boolean bStrictBinding)
    at System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(IMessage msg)

Exception rethrown at [0]:
    at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    at System.Runtime.Remoting.Activation.AppDomainLevelActivator.Activate(IConstructionCallMessage ctorMsg)
    at System.Runtime.Remoting.Activation.ActivationListener.Activate(IConstructionCallMessage ctorMsg)
    at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage reqMsg)
    at System.Runtime.Remoting.Activation.ActivationServices.Activate
(RemotingProxy remProxy, IConstructionCallMessage ctorMsg)
    at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(IMessage reqMsg)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    at Service.SomeCAO..ctor()
    at Client.Client.Main(String[] args) in C:\Documents and
t.cs:line 70

This seems a very strange problem, because both machines are running
the same runtime.
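For reference, this is roughly the shape of the clientProviders/serverProviders
configuration I mean (a sketch only: the channel type, port, and the
typeFilterLevel value are illustrative, not my actual file; typeFilterLevel
is the FW2.0-specific knob):

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="tcp" port="8085">
          <serverProviders>
            <!-- both sides must agree on the formatter -->
            <formatter ref="binary" typeFilterLevel="Full" />
          </serverProviders>
          <clientProviders>
            <formatter ref="binary" />
          </clientProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```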

I have found yet another solution for bidirectional remoting:


The thing I am afraid of with this one (I haven't tried it out) is that
it also lacks 2.0 support (which could lead me to more exceptions, as is
happening with the one from Ingo) and that it seems to have some bugs:


(Although they seem to have been corrected, as stated in the "Updates"
paragraph of the first link.)

>> However, my main concern now is to make it work and only later solve
>> any scalability issues. And now, an update on my progress, if
>> anyone is interested:
>> As I said, I already changed the sources to the 2.0 API and to work with
>> long-living connections, but now I am stuck with a very stupid problem:
>> I just want to implement a method similar to "SendMessage" but one that
>> sends a file to the client, and I don't know why it works only sometimes:
>> - On the same host, one client and one server: it works (the file is
>> received).
>> - Same host, two clients and one server: one client receives the file,
>> and the other client receives the first notification but doesn't finish
>> the method to save the file. :?
> Looks like some timeout in the request processing for the second
> thread; some I/O contention or locks in your code may prevent
> processing from occurring in parallel, at least with good performance.
> Avoid synchronization locks, and if I/O is really time-consuming,
> adjust the timeouts for the conclusion of each request's processing.
>> - Host A with server, host B with client. If I send a normal message the
>> communication works, but if I try to send the file, there is no
>> communication. :?
> Having no code to look at, I can't verify what may be happening. How
> are you returning the file? As a byte array (byte[])? That may mean
> you have to read it entirely into memory and send it as a whole over
> the remoting channel; performance would be terrible for big files
> (> 8 K), and so the request timeouts would stomp it.
> For large file transfers, the best solution is to send them on a
> dedicated (separate) socket, using a buffered stream to read the file
> and another to write to the socket (that is what MSN Messenger and
> similar programs do).
> But if you really don't want to escape out of remoting, at least use a
> blocked approach:
> class MyMarshalByRefObject : MarshalByRefObject {
> ...
> public int StartDownload (string filename) // returns a transfer ID
> public byte[] ReadBlock (int transferID) // returns fixed-size or limited blocks; null at end
> }
> and in the client:
> MyMarshalByRefObject x = CreateIt(...);
> int myTransfer = x.StartDownload("somefile");
> while (true) {
>     byte[] buffer = x.ReadBlock(myTransfer);
>     if (buffer == null)
>         break;
>     processBlock(buffer); // may write to some file
>          // the time spent processing each buffer received
>          // may give other clients time to be serviced
> }
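To make that idea concrete, here is a minimal self-contained sketch of the
server side of the blocked approach (my own illustration, not code from any
of the libraries mentioned; the class name FileServer, the 8 KB block size,
and the local Main are assumptions, and the MarshalByRefObject/remoting
plumbing is omitted):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// StartDownload opens the file and hands back a transfer ID; ReadBlock
// returns up to BlockSize bytes per call and null when the file is done.
// In a real service this class would inherit MarshalByRefObject and be
// exposed through the remoting channel.
class FileServer
{
    const int BlockSize = 8 * 1024; // keep each remoting call small
    readonly Dictionary<int, FileStream> transfers = new Dictionary<int, FileStream>();
    int nextId;

    public int StartDownload(string filename)
    {
        int id = nextId++;
        transfers[id] = File.OpenRead(filename);
        return id;
    }

    public byte[] ReadBlock(int transferId)
    {
        FileStream fs = transfers[transferId];
        byte[] buffer = new byte[BlockSize];
        int read = fs.Read(buffer, 0, buffer.Length);
        if (read == 0) // end of file: close and signal the client to stop
        {
            fs.Close();
            transfers.Remove(transferId);
            return null;
        }
        if (read < buffer.Length)
            Array.Resize(ref buffer, read); // last, shorter block
        return buffer;
    }

    static void Main()
    {
        // Local smoke test: write a 20000-byte file and pull it back in blocks.
        string path = Path.GetTempFileName();
        File.WriteAllBytes(path, new byte[20000]);

        FileServer server = new FileServer();
        int id = server.StartDownload(path);
        int blocks = 0, total = 0;
        byte[] block;
        while ((block = server.ReadBlock(id)) != null)
        {
            blocks++;
            total += block.Length;
        }
        Console.WriteLine("blocks=" + blocks + " bytes=" + total);
        File.Delete(path);
    }
}
```

The client loop is exactly the while (true) / ReadBlock pattern quoted above.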

Many thanks for this advice about sending the file. I know that
sending the whole file in a single call would be terrible for large
files, but I just wanted to do a proof of concept (in fact, locally,
it worked perfectly for 8 MB files!; remotely, it wouldn't receive the
call at all, and no time-outs were involved, because I was making the
call just after launching the server and the client).
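For completeness, the dedicated-socket transfer Rafael describes could look
roughly like this (a sketch under my own assumptions: loopback, an ephemeral
port, and a fixed 8 KB buffer; a real server would hand the port to the
client out-of-band through the remoting channel):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

// The sender streams the file through a small buffer instead of
// materializing it in memory, so large files never blow up the heap.
class SocketTransfer
{
    static void SendFile(string path, Stream socketStream)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                socketStream.Write(buffer, 0, read); // never holds the whole file
        }
    }

    static void Main()
    {
        // Local smoke test: send a 30000-byte file over a loopback socket.
        string src = Path.GetTempFileName();
        File.WriteAllBytes(src, new byte[30000]);

        TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        Thread sender = new Thread(() =>
        {
            using (TcpClient c = listener.AcceptTcpClient())
                SendFile(src, c.GetStream());
        });
        sender.Start();

        long received = 0;
        using (TcpClient client = new TcpClient("127.0.0.1", port))
        using (Stream s = client.GetStream())
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = s.Read(buffer, 0, buffer.Length)) > 0)
                received += read;
        }
        sender.Join();
        listener.Stop();
        Console.WriteLine("received=" + received);
        File.Delete(src);
    }
}
```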


