
Saturday, September 22, 2007

.NET remoting – speed / security

.NET remoting is a reliable mechanism for objects living in different appdomains to talk to each other. What is an appdomain? If you know the difference between the managed world and the unmanaged world, then you should have come across appdomains. When the CLR starts (it starts when the application is run), it occupies a section of unmanaged memory (we call it unmanaged because the developer is responsible for allocation and deletion) and carves out a managed section within it. From then on, every object the application creates lives inside that managed segment of memory, and the whole boundary is termed an appdomain, or application domain.
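Just as a minimal sketch (the names here are mine, not from any real app), this is how a second appdomain can live inside the same process, with its own managed boundary:

using System;

class AppDomainDemo
{
    static void SayWhere()
    {
        Console.WriteLine("Running in: " + AppDomain.CurrentDomain.FriendlyName);
    }

    static void Main()
    {
        Console.WriteLine("Default domain: " + AppDomain.CurrentDomain.FriendlyName);

        // Carve out a second appdomain in the same process; objects created
        // inside it live behind their own managed boundary.
        AppDomain worker = AppDomain.CreateDomain("WorkerDomain");
        worker.DoCallBack(SayWhere);   // this delegate executes in the worker domain
        AppDomain.Unload(worker);
    }
}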

Using remoting, the various appdomains can interact and exchange data; this could be within the same machine, or machine to machine across firewalls, etc.
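For flavour, here is roughly what the provider-side wiring looks like. The TuningProvider type, the port and the object URI are made-up names for this sketch, not the actual code from my app:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical provider; it must inherit MarshalByRefObject so that
// consumers in other appdomains can reach it through a proxy.
public class TuningProvider : MarshalByRefObject
{
    public int ReadSensor() { return 42; } // placeholder data
}

class ServerHost
{
    static void Main()
    {
        // Listen on a TCP channel; an IPC or HTTP channel could be swapped
        // in for same-machine or across-firewall scenarios.
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(TuningProvider), "Provider", WellKnownObjectMode.Singleton);

        Console.WriteLine("Provider listening at tcp://localhost:8085/Provider");
        Console.ReadLine(); // keep the host alive
    }
}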

The design of a remoting system can follow a client/server model. Say you design the provider with interfaces whose implementations inherit MarshalByRefObject; the clients/consumers (which sit in other appdomains) can then access the data using the Activator methods. There can be potential security issues if you expose the concrete types directly. In my case, the consumer and the provider sit on the same machine, and the provider uses certain DLLs to work with the data. The consumer should be restricted to only the data access the provider defines; if the provider hands out the implementation types themselves, the consumer could re-cast them and reach beyond that scope.
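To make that concrete, a sketch of interface-only exposure, again with hypothetical names. The consumer only ever references the contract assembly and activates the remote object through it:

using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Shared contract assembly, referenced by both provider and consumer.
public interface ITuningData
{
    int CpuTemperature { get; }
}

// Provider side only: this concrete type never ships to the consumer,
// so there is no implementation type for a client to re-cast to.
public class TuningData : MarshalByRefObject, ITuningData
{
    public int CpuTemperature { get { return 42; } } // placeholder
}

class Consumer
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(), false);

        // The consumer gets a proxy typed as the interface; the URL must
        // match whatever the provider registered (hypothetical here).
        ITuningData data = (ITuningData)Activator.GetObject(
            typeof(ITuningData), "tcp://localhost:8085/Provider");

        Console.WriteLine("CPU temperature: " + data.CpuTemperature);
    }
}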

Also, when the interface implementation directly inherits MarshalByRefObject, the consumer gets a reference to the actual object, which means every piece of data you pull is a separate request over the connection, and that is really slow. Can I overcome it? Yes, ask for more data in one shot. Though, well, I do not always know when I have to ask for data: the server/provider produces the data, events are raised, you monitor the events, and the data flows back and forth. So in this case, what to do?
Yep, you have to think of a DataSet (ADO.NET) kind of model: pass a duplicate copy each time, and let it be a structure (at least in my case), so that all the data that has changed and needs to be shown in the GUI or front end arrives in one shot. If the server/provider's data is changing constantly, queue the changes, wait a little while, and then re-send the structure, so the consumer gets a fresh duplicate copy each time. What does that mean? Well, when someone is holding the actual object, you are asking the main provider to wait until the consumer is done with the object/data. You essentially end up in a synchronous scenario, and can any design be considered a good one if it is synchronous-only in the normal case? We know the answer is NO, so we do it the above way to make it asynchronous: whoever has the data will not trouble the other side. And by doing this we also avoid multiple client requests piling up, waiting to be answered by the provider.
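Here is a minimal sketch of that snapshot idea, reusing the hypothetical TuningProvider from earlier. The key point is the [Serializable] struct: it crosses the appdomain boundary by value, so the consumer gets its own copy in a single round trip:

using System;

// Crosses the remoting boundary BY VALUE: each consumer gets a private copy.
[Serializable]
public struct TuningSnapshot   // hypothetical field layout
{
    public DateTime TakenAt;
    public int CpuTemperature;
    public int FanSpeed;
}

// Revised provider: queue up changes as the hardware reports them, then
// hand out the whole structure in one call instead of one remote call
// per field.
public class TuningProvider : MarshalByRefObject
{
    private readonly object sync = new object();
    private TuningSnapshot latest;

    public void Update(int temperature, int fanRpm)   // provider-side, as data changes
    {
        lock (sync)
        {
            latest.TakenAt = DateTime.Now;
            latest.CpuTemperature = temperature;
            latest.FanSpeed = fanRpm;
        }
    }

    public TuningSnapshot GetSnapshot()
    {
        lock (sync) { return latest; }   // the struct is returned as a copy
    }
}

The consumer calls GetSnapshot() once per refresh and never holds the provider's live object, so neither side blocks the other.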

How did we end up with the above design? What happened was that the original consumer/provider access model made the whole application slog just to get a single piece of data, and this got worse as more data was passed. The pipes were heavily used and the operation was slow. The advantage was that we had thread-safe code, but the total impact was slowness. And the irony was that my application is called Extreme Tuning Utility, which is all about getting faster performance out of the hardware based on the algorithms we run; can the utility itself ruin the machine's speed? No way. So we needed to think of the faster mechanism described above, while also not allowing hackers to interpret the interfaces the way they wanted; remember the re-casting stuff I mentioned in the beginning.
