[Mono-dev] Keep-alive connection with Remoting?

Rafael Teixeira monoman at gmail.com
Thu Aug 10 10:18:08 EDT 2006


On 8/10/06, Ympostor <ympostor at clix.pt> wrote:
> Rafael Teixeira wrote:
> > You have to choose the server state-model and configure the Lifetime
> > policy in the server. The default even for the singleton state-model
> > is to kill the server object after a few minutes and recreate on the
> > next request from the client.
> Thanks for the guidance. I said that I had overriden the LifeTimeService
> method in the clients. I made it on the server with no luck, but after
> that I realised that I had to do it also on the MarshalByRefObject
> object used in the Reachability sources, then all time-out problems
> disappeared.

Yes, that is what I intended to say: all MBROs have to be set not to
use the default lifetime policy.
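As a concrete sketch of that override (ReachabilityService is a made-up
name here; the same override applies to every MBRO you expose remotely):

```csharp
using System;

// Sketch: give an MBRO an infinite lease so the remoting layer never
// expires the server-side instance. ReachabilityService is a made-up
// name; apply the same override to each MBRO in your application.
public class ReachabilityService : MarshalByRefObject
{
    public override object InitializeLifetimeService ()
    {
        return null; // null lease = the object lives until the AppDomain dies
    }
}
```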

> Really interesting thoughts but I have some questions:
> - If a long-living service is used, are you saying that its dead objects
> will never be collected until the service dies for any reason?

The remoting server holds a reference to the objects, and as its
LifeTimeService never says when to release them, it keeps them alive
and the GC cannot collect them.

You can overcome this by having a remote finalization method that the
client calls; the MBRO then signals a specialized LifeTimeService that
the server instance can be released.
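A minimal sketch of that idea (SessionObject and Release are made-up
names; this leans on RemotingServices.Disconnect rather than a custom
lease, which is one way to implement the release):

```csharp
using System;
using System.Runtime.Remoting;

// Sketch: a long-lived MBRO with an explicit, client-callable release.
// SessionObject and Release are made-up names.
public class SessionObject : MarshalByRefObject
{
    public override object InitializeLifetimeService ()
    {
        return null; // no lease: we release explicitly instead
    }

    // The client calls this as a "remote finalizer" when it is done;
    // Disconnect drops the remoting reference so the GC can collect us.
    public void Release ()
    {
        RemotingServices.Disconnect (this);
    }
}
```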

> - If I had to deal with scalability issues, how could I use a not
> long-living service without losing the connection with the clients?

It all depends on the frequency with which your clients make requests
to the server. If requests are spaced tens of seconds apart, you can
probably afford the price of reconnecting each time.

> I am thinking of ordering them to disconnect and reconnect again but,
> wouldn't this consume so many resources and processor cycles (and time
> in which the server is not available for incoming connections) if we are
> treating with ~1000 clients...?

From my experience, disconnecting and reconnecting means that ~1000
clients can be served by a Pentium 3 recycling just some 100-200
connections (open TCP/IP sockets), with acceptable performance, except
if all of them want to download/upload megabytes of data at the same
time.

I've experienced (on Windows 2003 Server running IIS 6, to be clear)
good performance (in truth the bottleneck was the database) recycling
380 connections on dual Xeons for some 30000 simultaneous clients.

Trying to keep 30000 open TCP connections is something you simply
can't do with affordable hardware, no matter the framework/language
you use.

> However, my main concern now is to make it work and only later solve
> any scalability issues. And then, my update on the progress, if
> anyone is interested:
> As I said, I already changed the sources to 2.0 API and to work with
> long-living connections, but now I am stuck with a very stupid problem:
> I just want to implement a similar method to "SendMessage" but that
> sends a file to the client, but I don't know why it works only sometimes:
> - In the same host, one client and one server, it works (the file is
> received).
> - Same host, two clients and one server, one client receives the file
> and the other client receives the first notification but doesn't end the
> method to save the file. :?

It looks like a timeout in the request processing for the second
thread: I/O contention or locks in your code may prevent the requests
from being processed in parallel, at least with good performance.

Avoid synchronization locks, and if the I/O is really time-consuming,
adjust the timeouts allowed for each request to complete.
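For instance, the client-side HTTP channel documents a "timeout"
property (milliseconds per request); the 2.0 TCP channel has a similar
knob, but check your runtime's channel property table. A sketch, with
an arbitrary 120-second value:

```csharp
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// Sketch: register a client channel with a longer per-request timeout
// so slow, I/O-bound calls are not aborted mid-transfer.
class ChannelSetup
{
    static void Register ()
    {
        IDictionary props = new Hashtable ();
        props ["timeout"] = 120000; // milliseconds (arbitrary example value)
        ChannelServices.RegisterChannel (new HttpChannel (props, null, null), false);
    }
}
```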

> - Host A with server, host B with client. If I send a normal message the
> communication works, but if I try to send the file, there is no
> communication. :?

Having no code to look at, I can't verify what may be happening. How
are you returning the file? As a byte array (byte[])? That would mean
you have to read it entirely into memory and send it as a whole over
the remoting channel; performance would be terrible for big files
(> 8 K), and the request timeouts would stomp it.

For large file transfers, the best solution is to send them over a
dedicated (separate) socket, using a buffered stream to read the file
and another to write to the socket (that is what MSN Messenger and
similar programs do).
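A minimal sketch of that dedicated-socket transfer (FileSender and
SendFile are made-up names; the port and buffer size are arbitrary, and
error handling is omitted):

```csharp
using System.IO;
using System.Net;
using System.Net.Sockets;

// Sketch: stream a file over its own TCP socket in fixed-size chunks,
// keeping memory use constant regardless of file size.
class FileSender
{
    public static void SendFile (string path, int port)
    {
        TcpListener listener = new TcpListener (IPAddress.Loopback, port);
        listener.Start ();
        using (TcpClient client = listener.AcceptTcpClient ())
        using (Stream net = client.GetStream ())
        using (FileStream file = File.OpenRead (path))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = file.Read (buffer, 0, buffer.Length)) > 0)
                net.Write (buffer, 0, read); // push each chunk as it is read
        }
        listener.Stop ();
    }
}
```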

But if you really don't want to step outside of remoting, at least use
a block-based approach:

class MyMarshalByRefObject : MarshalByRefObject {

    public int StartDownload (string filename); // returns a transfer ID

    public byte[] ReadBlock (int transferID); // returns fixed-size (or smaller) blocks; null at end of file
}

in the client:

MyMarshalByRefObject x = CreateIt(...);
int myTransfer = x.StartDownload("somefile");
while (true) {
    byte[] buffer = x.ReadBlock(myTransfer);
    if (buffer == null)
        break; // end of file reached
    processBlock(buffer); // may write to some file
    // the time spent processing each buffer received
    // may give time for other clients to be serviced
}
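For completeness, the server side of this sketch could keep one open
stream per transfer ID (DownloadServer is a hypothetical
implementation; locking, error handling, and the MarshalByRefObject
base are omitted for brevity):

```csharp
using System.Collections.Generic;
using System.IO;

// Sketch of the server side: one open FileStream per transfer ID,
// handing out fixed-size blocks until end of file.
class DownloadServer // would derive from MarshalByRefObject in the real service
{
    const int BlockSize = 8192;
    int nextId;
    readonly Dictionary<int, FileStream> transfers = new Dictionary<int, FileStream> ();

    public int StartDownload (string filename)
    {
        int id = ++nextId;
        transfers [id] = File.OpenRead (filename);
        return id;
    }

    public byte[] ReadBlock (int transferID)
    {
        FileStream file = transfers [transferID];
        byte[] buffer = new byte[BlockSize];
        int read = file.Read (buffer, 0, BlockSize);
        if (read == 0) {              // end of file: clean up and signal done
            file.Close ();
            transfers.Remove (transferID);
            return null;
        }
        if (read < BlockSize) {       // last, short block: trim to actual size
            byte[] tail = new byte[read];
            System.Array.Copy (buffer, tail, read);
            return tail;
        }
        return buffer;
    }
}
```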


> I am becoming crazy... Perhaps in the end this is not a very stable
> solution.
> BTW I have found two more solutions:
> a) PeerChannel [
> http://www.mailframe.net/Products/PeerChannel/default.aspx ]
> It seems to be free but:
> - I haven't managed to make it work yet (NullReferenceException thrown
> at the middleware channels, not in the client or server...? perhaps
> again due to 1.1 vs 2.0 problems...).
> - Not all source is attached in the ZIP. There is a
> MailFrameDataStructures.dll that doesn't come with the source!
> b) DotNetRemoting SDK [ http://www.dotnetremoting.com/ ]
> Again, commercial, and seems not to be very popular (I have read some
> criticism about this library...).
> Regards.

Hope it helps,

Rafael "Monoman" Teixeira
"The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man." George Bernard Shaw
