Virtual vs Override vs New keyword

Virtual Keyword

The virtual keyword marks a base class method as overridable, allowing derived classes to supply their own implementation through method overriding. The virtual keyword is used together with the override keyword. It is used as:

// Base Class
class A
{
    public virtual void show()
    {
        Console.WriteLine("Hello: Base Class!");
        Console.ReadLine();
    }
}

Override Keyword

The override keyword is used in a derived class to override a virtual method of its base class. It is paired with the virtual keyword, as in:

// Base Class
class A
{
    public virtual void show()
    {
        Console.WriteLine("Hello: Base Class!");
        Console.ReadLine();
    }
}

// Derived Class
class B : A
{
    public override void show()
    {
        Console.WriteLine("Hello: Derived Class!");
        Console.ReadLine();
    }
}

New Keyword

The new keyword is also used for polymorphism, but for method hiding rather than method overriding. So what does hiding mean? In simple words, the derived class introduces its own version of the method that hides, rather than replaces, the base class implementation; a base-class reference will still call the base class method.

It is implemented as:

class A
{
    public void show()
    {
        Console.WriteLine("Hello: Base Class!");
        Console.ReadLine();
    }
}

class B : A
{
    public new void show()
    {
        Console.WriteLine("Hello: Derived Class!");
        Console.ReadLine();
    }
}
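To see how override and new differ at runtime, here is a minimal sketch (the BaseA/DerivedB names and string-returning methods are hypothetical, chosen so the dispatch result is easy to inspect): calling through a base-class reference, an override method runs the derived version, while a new method leaves the base version in effect.

```csharp
using System;

class BaseA
{
    public virtual string ShowVirtual() { return "Base.ShowVirtual"; }
    public string ShowPlain() { return "Base.ShowPlain"; }
}

class DerivedB : BaseA
{
    // override replaces the base implementation for virtual dispatch
    public override string ShowVirtual() { return "Derived.ShowVirtual"; }

    // new only hides the base method; a base-class reference still sees the base version
    public new string ShowPlain() { return "Derived.ShowPlain"; }
}

class Demo
{
    static void Main()
    {
        BaseA obj = new DerivedB();
        Console.WriteLine(obj.ShowVirtual()); // Derived.ShowVirtual (override wins)
        Console.WriteLine(obj.ShowPlain());   // Base.ShowPlain (new only hides)
    }
}
```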


WCF – Q & A’s

What is the difference between WCF and ASMX Web Services?

First of all, it is important to understand that a WCF service provides all the capabilities of .NET web services and further extends them.

The simple, basic difference is that an ASMX web service is designed to send and receive messages using SOAP over HTTP only, while a WCF service can exchange messages in any format (SOAP is the default) over any transport protocol (HTTP, TCP/IP, MSMQ, named pipes, etc.).

ASMX is simple but limited in many ways as compared to WCF.

ASMX web services can be hosted only in IIS, while a WCF service has all of the following hosting options:

a. IIS

b. WAS (Windows Process Activation Services)

c. Console Application

d. Windows NT Services

e. WCF provided Host

ASMX web services support is limited to HTTP while WCF supports HTTP, TCP, MSMQ, NamedPipes.

ASMX security is limited. Normally, authentication and authorization are done using IIS and ASP.NET security configuration along with transport-layer security. For message-layer security, WSE can be used.

WCF provides a consistent security programming model for any protocol, and it supports many of the same capabilities as IIS and the WS-* security protocols. Additionally, it supports claims-based authorization, which provides finer-grained control over resources than role-based security. WCF security is consistent regardless of the host used to run the WCF service.

Another major difference is that ASMX web services use XmlSerializer for serialization, while WCF uses DataContractSerializer, which performs far better than XmlSerializer.

Key issues with XmlSerializer when serializing .NET types to XML are:

a. Only public fields or properties of .NET types can be translated to XML.

b. Only classes that implement IEnumerable can be translated.

c. Classes that implement IDictionary, such as Hashtable, cannot be serialized.

What are WCF Service Endpoints? Explain.

For Windows Communication Foundation services to be consumed, they must be exposed; clients need information about a service in order to communicate with it. This is where service endpoints come into play.

A WCF service endpoint has three basic elements i.e. Address, Binding and Contract.

Address: It defines “WHERE”. Address is the URL that identifies the location of the service.

Binding:  It defines “HOW”. Binding defines how the service can be accessed.

Contract: It defines “WHAT”. Contract identifies what is exposed by the service.
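Putting the three elements together, a hedged configuration sketch (the service name, address and contract name are hypothetical, not from the original) might look like:

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.MyCalculatorService">
      <!-- Address: WHERE the service lives;
           Binding: HOW it is accessed (wsHttpBinding here);
           Contract: WHAT is exposed (IMyCalculator) -->
      <endpoint address="http://localhost:8080/MyCalculatorService"
                binding="wsHttpBinding"
                contract="MyCompany.IMyCalculator" />
    </service>
  </services>
</system.serviceModel>
```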

What are the possible ways of hosting a WCF service? Explain.

To host a Windows Communication Foundation service, we need at least a managed process, a ServiceHost instance and a configured endpoint. Possible approaches for hosting a service are:

1. Hosting in a Managed Application/ Self Hosting

a. Console Application

b. Windows Application

c. Windows Service

2. Hosting on Web Server

a. IIS 6.0 (ASP.NET Application supports only HTTP)

b. Windows Process Activation Service (WAS), i.e. IIS 7.0, which supports HTTP, TCP, named pipes and MSMQ.

How we can achieve Operation Overloading while exposing WCF Services?

By default, WSDL doesn’t support operation overloading. Overloading behavior can be achieved by using the "Name" property of the OperationContract attribute.

[ServiceContract]
interface IMyCalculator
{
    [OperationContract(Name = "SumInt")]
    int Sum(int arg1, int arg2);

    [OperationContract(Name = "SumDouble")]
    double Sum(double arg1, double arg2);
}

When the proxy is generated for these operations, it will have two methods with different names, i.e. SumInt and SumDouble.

What Message Exchange Patterns (MEPs) are supported by WCF? Explain each of them briefly.

1. Request/Response

2. One Way

3. Duplex

Request/Response

It’s the default pattern. In this pattern, a response message is always generated and sent back to the consumer when the operation is called, even for operations with a void return type. In that case, the response has an empty SOAP body.

One Way

In some cases, we want to send a message to the service in order to execute certain business functionality, but we are not interested in receiving anything back. The OneWay MEP works in such scenarios.

If we want queued message delivery, OneWay is the only available option.

Duplex

The Duplex MEP is basically a two-way message channel. In some cases, we want to send a message to the service to initiate some longer-running processing, and then require a notification back from the service confirming that the requested process has been completed.

What is DataContractSerializer and how is it different from XmlSerializer?

Serialization is the process of converting an object instance to a portable and transferable format. So, whenever we are talking about web services, serialization is very important.

Windows Communication Foundation uses DataContractSerializer, new in .NET 3.0, which takes an opt-in approach, as compared to XmlSerializer, which is opt-out. Opt-in means we explicitly mark whatever we want serialized; opt-out means everything is serialized unless we explicitly mark what we don’t want serialized.

DataContractSerializer is about 10% faster than XmlSerializer, but it offers almost no control over how the object is serialized. If we want more control over how an object should be serialized, then XmlSerializer is the better choice.
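The opt-in model can be illustrated with a minimal sketch (the Customer type and its members are hypothetical): only members marked [DataMember] end up in the serialized XML.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

[DataContract]
public class Customer
{
    [DataMember]                        // opt-in: this property is serialized
    public string Name { get; set; }

    public string Secret { get; set; }  // no [DataMember]: ignored by DataContractSerializer
}

class Demo
{
    static void Main()
    {
        var customer = new Customer { Name = "Alice", Secret = "hidden" };
        var serializer = new DataContractSerializer(typeof(Customer));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, customer);
            string xml = Encoding.UTF8.GetString(stream.ToArray());
            Console.WriteLine(xml.Contains("Alice"));   // True  - opted-in member
            Console.WriteLine(xml.Contains("hidden"));  // False - unmarked member skipped
        }
    }
}
```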

How we can use MessageContract partially with DataContract for a service operation in WCF?

MessageContract must be used all or none. If we use a MessageContract in an operation signature, then we must use a MessageContract as the only parameter type and as the return type of the operation.

Which standard binding could be used for a service that was designed to replace an existing ASMX web service?

The basicHttpBinding standard binding is designed to expose a service as if it were an ASMX/ASP.NET web service. This enables us to keep supporting existing clients as applications are upgraded to WCF.

Please explain briefly different Instance Modes in WCF?

WCF binds an incoming message request to a particular service instance. The available modes are:

Per Call: A new instance is created for each call; most efficient in terms of memory, but no session state is maintained by the service.

Per Session: One instance is created for a client’s entire session; session state is maintained.

Single: Only one instance is created and shared among all clients/users; least efficient in terms of memory.

Please explain different modes of security in WCF? Or Explain the difference between Transport and Message Level Security.

In Windows Communication Foundation, security can be configured at different levels:

a.    Transport-level security means providing security at the transport layer itself. When dealing with security at the transport level, we are concerned about the integrity, privacy and authentication of the message as it travels along the physical wire. How WCF secures the transport depends on the binding being used, because most of the bindings have built-in security.

<netTcpBinding>
  <binding name="netTcpTransportBinding">
    <security mode="Transport">
      <transport clientCredentialType="Windows" />
    </security>
  </binding>
</netTcpBinding>

b.    Message Level Security

For transport-level security, we ensure that the transport being used is secure; with message-level security, we secure the message itself: the message is encrypted before being transported.

<wsHttpBinding>
  <binding name="wsHttpMessageBinding">
    <security mode="Message">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</wsHttpBinding>

It totally depends upon the requirements but we can use a mixed security mode also as follows:

<basicHttpBinding>
  <binding name="basicHttp">
    <security mode="TransportWithMessageCredential">
      <transport />
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</basicHttpBinding>

SQL Server Recovery Models

Full Recovery Model

The Full Recovery Model is the most resistant to data loss of all the recovery models. It makes full use of the transaction log – all database operations are written to the transaction log. This includes all DML statements, as well as bulk operations such as BCP and BULK INSERT.

For heavy OLTP databases, there is overhead associated with logging all of the transactions, and the transaction log must be continually backed up to prevent it from getting too large.

Benefits:

Most resistant to data loss
Most flexible recovery options – including point in time recovery

Disadvantages:

Can take up a lot of disk space
Requires database administrator time and patience to be used properly

Bulk-Logged Recovery Model

The Bulk-Logged Recovery Model differs from the Full Recovery Model in that rows that are inserted during bulk operations aren’t logged – yet a full restore is still possible because the extents that have been changed are tracked.

Benefits:

Bulk operations are minimally logged, so the transaction log stays smaller during bulk loads
Bulk operations run faster than under the Full Recovery Model

Disadvantages:

Point in time recovery is not possible if a log backup contains bulk-logged operations
Log backups can be large, since the changed extents are copied into the backup

Simple Recovery Model

The Simple Recovery Model is the most open to data loss. The transaction log can’t be backed up and is automatically truncated at checkpoints. This potential loss of data makes the Simple Recovery Model a poor choice for production databases. This option can take up less disk space since the transaction log is constantly truncated.

Benefits:

Transaction log stays small
Easier from an administration standpoint (don’t have to worry about transaction logs)

Disadvantages:

Not for production systems
Point in time recovery not possible
Least data resistant recovery model

Difference between ref and out parameters in .NET

The out and ref parameters are used to return values through the same variables that you pass as arguments to a method. Both parameters are very useful when a method needs to return more than one value.

You must assign a value to an out parameter in the callee method body; otherwise, the method won’t compile.

Ref Parameter: It has to be initialized before being passed to the method. The ref keyword on a method parameter causes the method to refer to the same variable that was passed in as an argument. Any changes the method makes to the parameter are reflected in the caller’s variable.

int sampleData = 0;
sampleMethod(ref sampleData);

Example of Ref Parameter:

public static void Main()
{
    int i = 3; // variable must be initialized
    sampleMethod(ref i);
}

public static void sampleMethod(ref int sampleData)
{
    sampleData++;
}

Out Parameter: It does not need to be initialized before being passed to the method. The out parameter can be used to return values through the same variables passed as arguments of the method. Any changes made to the parameter are reflected in the caller’s variable.

int sampleData;
sampleMethod(out sampleData);

Example of Out Parameter:

public static void Main()
{
    int i, j; // variables need not be initialized
    sampleMethod(out i, out j);
}

public static int sampleMethod(out int sampleData1, out int sampleData2)
{
    sampleData1 = 10;
    sampleData2 = 20;
    return 0;
}
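The out pattern shown above is the same one the framework’s own Try-methods use; for example, int.TryParse returns a success flag and delivers the parsed value through an out parameter:

```csharp
using System;

class TryParseDemo
{
    static void Main()
    {
        int result; // need not be initialized: it is an out argument
        if (int.TryParse("123", out result))
            Console.WriteLine(result); // 123
        else
            Console.WriteLine("not a number");

        // On failure, the out parameter is set to its default value (0 for int)
        bool ok = int.TryParse("abc", out result);
        Console.WriteLine(ok);     // False
        Console.WriteLine(result); // 0
    }
}
```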

Reference Type And Value Type in C#

In simple words, value types hold their data directly, typically on the stack, while reference types are allocated on the heap. A value type contains the actual value. A reference type contains a reference to the value. When we assign one value type to another, a field-by-field copy is made. When we copy one reference type to another, only the memory address is copied.

By stack, we mean values are kept one on top of the other, and we keep track of the value at the top. By heap, we mean objects are stored in no particular order, and we keep track of each object by its address, which is referenced by a pointer to it.

All value types are implicitly derived from System.ValueType. This class actually overrides the implementation in System.Object, the base class for all objects which is a reference type itself.

Data types like integers, floating point numbers, character data, Boolean values, Enumerations and Structures are examples of Value Types.

Classes, Strings, Arrays are examples of Reference Types.

A value type may not contain NULL values. Reference types may contain NULL values.

It is not possible to derive new types from Value Types. This is possible in Reference types. However, Value Types like Structures can implement interfaces.
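A small sketch (the PointValue struct and PointRef class are hypothetical) makes the copy semantics concrete: assigning a value type copies the data, while assigning a reference type copies only the reference.

```csharp
using System;

struct PointValue { public int X; }   // value type
class PointRef { public int X; }      // reference type

class CopyDemo
{
    static void Main()
    {
        PointValue v1 = new PointValue { X = 1 };
        PointValue v2 = v1;   // field-by-field copy
        v2.X = 99;
        Console.WriteLine(v1.X); // 1 -- the original is untouched

        PointRef r1 = new PointRef { X = 1 };
        PointRef r2 = r1;     // only the reference is copied
        r2.X = 99;
        Console.WriteLine(r1.X); // 99 -- both names refer to the same object
    }
}
```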

using System;
using System.Collections;

class Program
{
    static void Main(string[] args)
    {
        // Pass a reference type by value
        ArrayList arrayList = new ArrayList() { 0, 1, 2, 3 };
        Console.WriteLine("Pass by Value");

        PassByValue(arrayList);

        // What should be the output of the line below?
        Console.WriteLine(arrayList[1]);

        arrayList = new ArrayList() { 0, 1, 2, 3 };
        Console.WriteLine("Pass by Reference");
        PassByReference(ref arrayList);

        // What should be the output of the line below?
        Console.WriteLine(arrayList[1]);
        Console.Read();
    }

    private static void PassByValue(ArrayList arrayList)
    {
        Console.WriteLine(arrayList[1]);
        // Now change the value at index 1
        arrayList[1] = 90;
        arrayList = new ArrayList() { 101, 102, 103, 104 };
        Console.WriteLine(arrayList[1]);
    }

    private static void PassByReference(ref ArrayList arrayList)
    {
        Console.WriteLine(arrayList[1]);
        // Now change the value at index 1
        arrayList[1] = 90;
        arrayList = new ArrayList() { 101, 102, 103, 104 };
        Console.WriteLine(arrayList[1]);
    }
}

Interpretation:

First, we’ll take the case of passing a reference type by value.

Let’s have a look at the PassbyValue function:

The first line of code looks up the value at index 1 in the arrayList and prints 1. After that, we change the value at index 1 to 90. In the third line, since we passed the reference type by value, the method received a copy of the reference pointing to the original memory location. As soon as we re-create the object inside the method, that copy loses the reference to the original memory and acts as a different arrayList object from then on. However, the changes made to the arrayList before the object was re-created still persist. That’s why, when we access the value at index 1 after the PassByValue call, we still get 90.

Now, let’s have a look at the Pass by Reference function:

Here too, the first line of code prints the same output as in PassByValue, and the second line again changes the value at index 1 to 90. In the third line, since we passed the reference type by reference, re-assigning the parameter replaces the caller’s reference as well (the variable now points to the new array), so the value of arrayList[1] inside the function is 102, and the newly assigned array is what is referenced everywhere afterwards, even outside the function.

Output:

Pass by Value
1
102
90
Pass by Reference
1
102
102

Conclusion:

Passing a reference type by value creates a copy of the reference, so it is possible to change the contents of the original object inside the function, but only until the parameter is re-assigned to a new object. Passing a reference type by ref doesn’t create any copy of the reference; re-assignments inside the function affect the original reference variable as well.

Normalization

Normalization is a method for organizing data elements in a database into tables.

Normalization Avoids

  • Duplication of Data  – The same data is listed in multiple lines of the database
  • Insert Anomaly  – A record about an entity cannot be inserted into the table without first inserting information about another entity – Cannot enter a customer without a sales order
  • Delete Anomaly – A record cannot be deleted without deleting a record about a related entity.  Cannot delete a sales order without deleting all of the customer’s information.
  • Update Anomaly – Cannot update information without changing information in many places.  To update customer information, it must be updated for each sales order the customer has placed

Normalization is a three-stage process. After the first stage, the data is said to be in first normal form; after the second, in second normal form; and after the third, in third normal form.

Before Normalization

  1. Begin with a list of all of the fields that must appear in the database.  Think of this as one big table.
  2. Do not include computed fields
  3. One place to begin getting this information is from a printed document used by the system.
  4. Additional attributes besides those for the entities described on the document can be added to the database.

Before Normalization – Example

Fields in the original data table will be as follows:

SalesOrderNo, Date, CustomerNo, CustomerName, CustomerAdd, ClerkNo, ClerkName, ItemNo, Description, Qty, UnitPrice

Think of this as the baseline – one large table

Normalization:  First Normal Form

  • Separate Repeating Groups into New Tables.
  • Repeating Groups – fields that may be repeated several times for one document/entity.
  • Create a new table containing the repeating data.
  • The primary key of the new table (repeating group) is always a composite key; usually the document number and a field uniquely describing the repeating line, like an item number.

First Normal Form Example

The new table is as follows:

SalesOrderNo, ItemNo, Description, Qty, UnitPrice

The repeating fields will be removed from the original data table, leaving the following.

SalesOrderNo, Date, CustomerNo, CustomerName, CustomerAdd, ClerkNo, ClerkName

These two tables are a database in first normal form

What if we did not Normalize the Database to First Normal Form?

Repetition of Data – SO Header data repeated for every line in sales order.

Normalization:  Second Normal Form

  • Remove Partial Dependencies.
  • Functional Dependency – the value of one attribute in a table is determined entirely by the value of another.
  • Partial Dependency – a type of functional dependency where an attribute is functionally dependent on only part of the primary key (the primary key must be a composite key).
  • Create a separate table with the functionally dependent data and the part of the key on which it depends.  Tables created at this step will usually contain descriptions of resources.

Second Normal Form Example

The new table will contain the following fields:

ItemNo, Description

All of these fields except the primary key will be removed from the original table.  The primary key will be left in the original table to allow linking of data:

SalesOrderNo, ItemNo, Qty, UnitPrice

Never treat price as dependent on item.  Price may be different for different sales orders (discounts, special customers, etc.)

Along with the unchanged table below, these tables make up a database in second normal form:

SalesOrderNo, Date, CustomerNo, CustomerName, CustomerAdd, ClerkNo, ClerkName

What if we did not Normalize the Database to Second Normal Form?

  • Repetition of Data – Description would appear every time we had an order for the item
  • Delete Anomalies – All information about inventory items is stored in the SalesOrderDetail table.  Delete a sales order, delete the item.
  • Insert Anomalies – To insert an inventory item, must insert sales order.
  • Update Anomalies – To change the description, must change it on every SO.

Normalization:  Third Normal Form

  • Remove transitive dependencies.
  • Transitive Dependency – a type of functional dependency where an attribute is functionally dependent on an attribute other than the primary key.  Thus its value is only indirectly determined by the primary key.
  • Create a separate table containing the attribute and the fields that are functionally dependent on it.  Tables created at this step will usually contain descriptions of either resources or agents.  Keep a copy of the key attribute in the original file.

Third Normal Form Example

The new tables would be:

CustomerNo, CustomerName, CustomerAdd

ClerkNo, ClerkName

All of these fields except the primary key will be removed from the original table.  The primary key will be left in the original table to allow linking of data as follows:

SalesOrderNo, Date, CustomerNo, ClerkNo

Together with the unchanged tables below, these tables make up the database in third normal form.

ItemNo, Description

SalesOrderNo, ItemNo, Qty, UnitPrice

What if we did not Normalize the Database to Third Normal Form?

  • Repetition of Data – Detail for Cust/Clerk would appear on every SO
  • Delete Anomalies – Delete a sales order, delete the customer/clerk
  • Insert Anomalies – To insert a customer/clerk, must insert sales order.
  • Update Anomalies – To change the name/address, etc, must change it on every SO.

Completed Tables in Third Normal Form

Customers:  CustomerNo, CustomerName, CustomerAdd

Clerks:  ClerkNo, ClerkName

Inventory Items:  ItemNo, Description

Sales Orders:  SalesOrderNo, Date, CustomerNo, ClerkNo

SalesOrderDetail:  SalesOrderNo, ItemNo, Qty, UnitPrice

 

Partial class

Instead of defining an entire class in one place, you can split the definition into multiple declarations by using the partial keyword. When the application is compiled, the C# compiler will group all the partial classes together and treat them as a single class. There are a couple of good reasons to use partial classes. Programmers can work on different parts of a class without needing to share the same physical file. Also, you can separate your application’s business logic from the designer-generated code.

It is possible to split the definition of a class or a struct, or an interface over two or more source files. Each source file contains a section of the class definition, and all parts are combined when the application is compiled. There are several situations when splitting a class definition is desirable:

When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously.

When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when creating Windows Forms, Web Service wrapper code, and so on. You can create code that uses these classes without having to edit the file created by Visual Studio.

To split a class definition, use the partial keyword modifier, as shown below:

public partial class Employee
{
public void DoWork()
{
}
}

public partial class Employee
{
public void GoToLunch()
{
}
}

Nested types can be partial, even if the type they are nested within is not partial itself. For example:

class Container
{
partial class Nested
{
void Test() { }
}
partial class Nested
{
void Test2() { }
}
}

At compile time, attributes of partial-type definitions are merged. For example, the following declarations:

[System.SerializableAttribute]
partial class Moon { }

[System.ObsoleteAttribute]
partial class Moon { }

are equivalent to:

[System.SerializableAttribute]
[System.ObsoleteAttribute]
class Moon { }

The following are merged together from all the partial-type definitions:

XML comments
interfaces
generic-type parameter attributes
class attributes
members

For example, the following declarations:

partial class Earth : Planet, IRotate { }
partial class Earth : IRevolve { }

are equivalent to:

class Earth : Planet, IRotate, IRevolve { }

There are several rules to follow when working with partial class definitions:

All partial-type definitions meant to be parts of the same type must be modified with partial. For example, the following class declarations generate an error:

public partial class A { }
//public class A { } // Error, must also be marked partial

The partial modifier can only appear immediately before the keywords class, struct, or interface.

Nested partial types are allowed in partial-type definitions, for example:

partial class ClassWithNestedClass
{
partial class NestedClass { }
}

partial class ClassWithNestedClass
{
partial class NestedClass { }
}

All partial-type definitions meant to be parts of the same type must be defined in the same assembly and the same module (.exe or .dll file). Partial definitions cannot span multiple modules.

The class name and generic-type parameters must match on all partial-type definitions. Generic types can be partial. Each partial declaration must use the same parameter names in the same order.

The following keywords on a partial-type definition are optional, but if present on one partial-type definition, cannot conflict with the keywords specified on another partial definition for the same type:

public
private
protected
internal
abstract
sealed
base class
new modifier (nested parts)
generic constraints

Example 1:

In the following example, the fields and the constructor of the class, CoOrds, are declared in one partial class definition, while the member, PrintCoOrds, is declared in another partial class definition.

public partial class CoOrds
{
private int x;
private int y;

public CoOrds(int x, int y)
{
this.x = x;
this.y = y;
}
}

public partial class CoOrds
{
public void PrintCoOrds()
{
System.Console.WriteLine("CoOrds: {0},{1}", x, y);
}

}

class TestCoOrds
{
static void Main()
{
CoOrds myCoOrds = new CoOrds(10, 15);
myCoOrds.PrintCoOrds();
}
}

Output

CoOrds: 10,15

 

Example 2:

The following example shows that you can also develop partial structs and interfaces.

partial interface ITest
{
void Interface_Test();
}

partial interface ITest
{
void Interface_Test2();
}

partial struct S1
{
void Struct_Test() { }
}

partial struct S1
{
void Struct_Test2() { }
}

Generics

Generics allow you to define type-safe classes and methods without compromising performance or productivity.

Generics were added to version 2.0 of the C# language and the common language runtime (CLR). Generics introduce to the .NET Framework the concept of type parameters, which make it possible to design classes and methods that defer the specification of one or more types until the class or method is declared and instantiated by client code. For example, by using a generic type parameter T you can write a single class that other client code can use without incurring the cost or risk of runtime casts or boxing operations, as shown here:

// Declare the generic class.
public class GenericList<T>
{
    void Add(T input) { }
}

class TestGenericList
{
    private class ExampleClass { }

    static void Main()
    {
        // Declare a list of type int.
        GenericList<int> list1 = new GenericList<int>();

        // Declare a list of type string.
        GenericList<string> list2 = new GenericList<string>();

        // Declare a list of type ExampleClass.
        GenericList<ExampleClass> list3 = new GenericList<ExampleClass>();
    }
}
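To show the type safety in action, here is a hedged sketch that gives GenericList a working Add, an indexer and a Count property (the List<T> backing field is an implementation choice for illustration, not part of the original snippet):

```csharp
using System;
using System.Collections.Generic;

public class GenericList<T>
{
    private readonly List<T> items = new List<T>();

    public void Add(T input) { items.Add(input); }

    public T this[int index] { get { return items[index]; } }

    public int Count { get { return items.Count; } }
}

class Demo
{
    static void Main()
    {
        GenericList<int> numbers = new GenericList<int>();
        numbers.Add(42);               // no boxing: stored as int
        int first = numbers[0];        // no cast needed on the way out
        Console.WriteLine(first);      // 42

        // numbers.Add("hello");       // would not compile: type safety at compile time
    }
}
```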

Preventing Session Timeouts in C# ASP .NET

Introduction

C# ASP .NET has a setting in the web.config file which allows selecting the desired session timeout. When the session timeout value expires, the currently logged in user’s session is deleted and the user is directed back to the login page. The default timeout value usually hovers around 20 minutes for ASP .NET’s session timeout. While this is the expected behavior, often clients may require the session timeout to be increased dramatically or even avoid any timeout at all while the user is logged in.

This article describes a solution for web applications which require a session to never timeout or for those who have a session timeout occurring before the value set in the web.config. The solution is invisible and seamless and has been tested in Internet Explorer, Firefox, and Safari.

Why Would a Client Want No Session Timeout?

A typical scenario where a user may want to remain permanently logged in until specifically logging out could include a phone technical support operator. The operator logs into a web application to begin taking calls and modifying data. A phone call could last over an hour, with the operator modifying data in between on a single page, and a session timeout at this point could result in a loss of data for the operator. To resolve this, the client may specify to increase the session timeout to several hours. Certainly, the operator would finish a call within a few hours before a page refresh.

Sliding Expiration is Key

It’s important to note a key property of session state in ASP .NET web applications and IIS: sliding expiration. If sliding expiration is enabled (which it is by default in Visual Studio), the moment a postback occurs within your C# ASP .NET web application, the session timeout counter is reset. This means that as long as the user is navigating pages or using controls that issue a postback, the session will remain active. The session timeout problem occurs, as in the example above, when a user remains on a single page for too long, such as a data-entry page, before clicking the save button.

Increasing the Session Timeout Doesn’t Always Work

At first glance, increasing the session timeout value in C# ASP .NET’s web.config file should resolve the issue. You would assume that by changing the timeout value to 60 minutes in the line below, that a user would remain logged into a web application session for a full 60 minutes.

<authentication mode="Forms">
  <forms name="MyAuth" timeout="60" protection="All" loginUrl="~/Web/Login.aspx" slidingExpiration="true" />
</authentication>

<sessionState mode="InProc" cookieless="false" timeout="60" />

However, there are actually two problems with this. The first is that setting the timeout value to anything greater than 1 hour will result in excessive memory being held on the server, as IIS holds all session memory for the duration of the session. Imagine a timeout value of 5 hours on a high-traffic site, holding all session data for thousands of user sessions. The second problem may appear when testing the application, where often the web application will time out after only 15 minutes. What exactly is happening? While the cause may actually be a value configured in IIS for the session timeout or connection timeout properties (which, in the case of shared hosting, you may not even have access to), it becomes apparent we need to take control of the session timeout into our own hands.

Asking the User to Refresh

Offhand, the most obvious solution would be to ask the user to refresh their web browser at least every 15 minutes if they plan to remain on a single page that long. This is a poor solution for obvious reasons. However, what if we could come up with a method to automatically refresh the page behind the scenes, effectively creating a postback?

The Solution – Meta Refresh and Postback

To resolve this issue, we'll need to automatically refresh a web page in the application in order to create a postback. This can be done with a meta-refresh tag. Of course, to keep the main web page from refreshing constantly, we'll place the refresh inside a tiny IFRAME. The IFRAME itself will run on the server and change a querystring parameter to avoid any browser caching of the page. This ensures the page is always reloaded upon refresh.

Start by adding the following tag to your master page:
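The original markup for this tag did not survive formatting; a minimal sketch of what it might look like follows - a hidden IFRAME that loads the keep-alive page (the id is a placeholder):

```xml
<!-- Hypothetical sketch: a zero-sized frame that hosts KeepSessionAlive.aspx -->
<iframe id="KeepAliveFrame" src="KeepSessionAlive.aspx"
        frameborder="0" width="0" height="0"></iframe>
```

Because the frame is zero-sized, the user never sees it, but the page inside it still posts back to the server on each refresh.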

Next, create a new page named KeepSessionAlive.aspx. In the head section of the page, add the following lines:

<meta id="MetaRefresh" http-equiv="refresh" content="21600;url=KeepSessionAlive.aspx" runat="server" />

<script language="javascript">
    window.status = "<%=WindowStatusText%>";
</script>

The key to this tag is the content value. By default, we set the value to 21600 seconds, which is equal to 6 hours. However, we will be setting the value ourselves in the Page_Load of this page, so the default can be ignored.

Add the following code to the Page_Load of KeepSessionAlive.aspx.cs:

protected string WindowStatusText = "";

protected void Page_Load(object sender, EventArgs e)
{
    if (User.Identity.IsAuthenticated)
    {
        // Refresh this page 60 seconds before session timeout,
        // effectively resetting the session timeout counter.
        MetaRefresh.Attributes["content"] = Convert.ToString((Session.Timeout * 60) - 60)
            + ";url=KeepSessionAlive.aspx?q=" + DateTime.Now.Ticks;

        WindowStatusText = "Last refresh " + DateTime.Now.ToShortDateString()
            + " " + DateTime.Now.ToShortTimeString();
    }
}

It's important to note that we include a random querystring parameter on the end of the target URL. Without this parameter, many web browsers would cache KeepSessionAlive.aspx and never send the full postback. The random parameter keeps the web browser issuing a full postback, which keeps our session alive. The auto-refresh will actually occur 1 minute before the session is due to expire.

The final important step is to change your web.config session timeout value to be less than IIS's own timeout values. If your value is greater than IIS's, the auto-refresh will never occur, since IIS would have already reset your session state before the refresh timer activates. A value such as 10 minutes appears to work well. Remember, even though the session timeout value is 10 minutes, the auto-refresh method, combined with sliding expiration, will keep the session alive. Alternate solutions include setting the web.config timeout values to 20 or 30 minutes and setting the meta-refresh value to 5 minutes.

<authentication mode="Forms">
    <forms name="MyAuth" timeout="10" protection="All" loginUrl="~/Web/Login.aspx" slidingExpiration="true" />
</authentication>

<sessionState mode="InProc" cookieless="false" timeout="10" />

Testing the Results and Advantages

After making the changes as shown above, log into your web application to establish a session and try sitting on the same page for 20 minutes or longer. You should be able to verify that the session remains alive and active, long after 10 minutes, without being booted back to the login page.

This method actually has two added advantages over the standard web.config session timeout value. The first advantage is that you can keep a session permanently active: as long as the user's web browser is open, the session will not be logged out. The second advantage is that as soon as the user closes the web browser, a session timeout will occur after only 10 minutes, quickly freeing up the server memory (rather than holding onto the session memory for 20, 30, 60 minutes or longer before cleanup).

Don’t Forget About Security

It's important to note there are security implications to keeping a user's session permanently active on a single page (until the web browser is closed). Particularly, if the user walks away from their desk, there is a chance for an attacker to jump right into the web application and gain access. Without a session timeout, the web page would remain open. However, if the PC has its own locked-PC timeout (i.e., a screensaver), this may help alleviate the issue. In either case, security should always be considered when making session timeout changes.

Conclusion

Session timeouts in C# ASP .NET can be unpredictable and often rely not only on the web.config session timeout value, but also on various timeout values within IIS, the server, and the cookie. By taking advantage of the sliding expiration feature of ASP .NET, we can tailor the session timeout to our specific needs, providing a seamless experience for the user and preventing session timeouts completely in a memory-efficient manner.

Web Services

Web Services are applications that provide services on the internet. Web services allow for programmatic access of business logic over the Web. Web services typically rely on XML-based protocols, messages, and interface descriptions for communication and access. SOAP over HTTP is the most commonly used protocol for invoking Web services. SOAP defines a standardized format in XML which can be exchanged between two entities over standard protocols such as HTTP.

Example: Google's search web service allows other applications to delegate the task of searching the internet to the Google web service and to use the results it produces in their own applications.

What are Web Services?

  • Web services are application components
  • Web services communicate using open protocols
  • Web services are self-contained and self-describing
  • Web services can be discovered using UDDI
  • Web services can be used by other applications
  • XML is the basis for Web services

Web services platform elements:

  • SOAP (Simple Object Access Protocol)
  • UDDI (Universal Description, Discovery and Integration)
  • WSDL (Web Services Description Language)

SOAP is an XML-based protocol to let applications exchange information over HTTP.

Or, more simply: SOAP is a protocol for accessing a Web service.

  • SOAP stands for Simple Object Access Protocol
  • SOAP is a communication protocol
  • SOAP is a format for sending messages
  • SOAP is designed to communicate via the Internet
  • SOAP is platform independent
  • SOAP is language independent
  • SOAP is based on XML
  • SOAP is simple and extensible
  • SOAP allows you to get around firewalls
  • SOAP is a W3C standard

WSDL is an XML-based language for locating and describing Web services.

  • WSDL stands for Web Services Description Language
  • WSDL is based on XML
  • WSDL is used to describe Web services
  • WSDL is used to locate Web services
  • WSDL is a W3C standard
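As a rough, hand-written illustration (the service, message, and operation names below are invented, not taken from any real service), the skeleton of a WSDL document looks something like this:

```xml
<definitions name="HelloService"
             targetNamespace="http://tempuri.org/hello"
             xmlns="http://schemas.xmlsoap.org/wsdl/">
  <!-- the data being exchanged -->
  <message name="SayHelloRequest">...</message>
  <message name="SayHelloResponse">...</message>

  <!-- the abstract operations the service offers -->
  <portType name="HelloPortType">
    <operation name="SayHello">
      <input message="SayHelloRequest" />
      <output message="SayHelloResponse" />
    </operation>
  </portType>

  <!-- binding: how the operations map onto SOAP over HTTP -->
  <!-- service: the concrete URL where the service can be found -->
</definitions>
```

The "describing" role comes from the message, portType, and binding sections; the "locating" role comes from the service section, which lists the endpoint address.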

UDDI stands for Universal Description, Discovery and Integration. It is an XML-based standard for describing, publishing, and finding Web services. It is a platform-independent, open framework and specification for a distributed registry of Web services.

  • UDDI stands for Universal Description, Discovery and Integration
  • UDDI is a directory for storing information about web services
  • UDDI is a directory of web service interfaces described by WSDL
  • UDDI communicates via SOAP
  • UDDI is built into the Microsoft .NET platform

Web services use XML to code and to decode data, and SOAP to transport it (using open protocols).

Interoperability has Highest Priority

  • Once all major platforms could access the Web using Web browsers, different platforms could interact. For these platforms to work together, Web applications were developed.
  • Web applications are simply applications that run on the Web. They are built around Web browser standards and can be used by any browser on any platform.

Uses of Web service:

  • Application integration: Web services within an intranet are commonly used to integrate business applications running on different platforms.

For example, a .NET client running on Windows 2000 can easily invoke a Java Web service running on a mainframe or Unix machine to retrieve data from a legacy application.

  • Business integration: Web services allow trading partners to engage in e-business while leveraging the existing Internet infrastructure. Organizations can send electronic purchase orders to suppliers and receive electronic invoices. Doing e-business with Web services means a low barrier to entry, because Web services can be added to existing applications running on any platform without changing legacy code.
  • Commercial Web services: these focus on selling content and business services to clients over the Internet, similar to familiar Web pages. Unlike Web pages, commercial Web services target applications as their direct users.

Reusable application-components.

  • Web services can offer application-components like: currency conversion, weather reports, or even language translation as services.

Connect existing software.

  • Web services can help to solve the interoperability problem by giving different applications a way to link their data.
  • With Web services you can exchange data between different applications and different platforms.

DISCO:

DISCO is the abbreviated form of Discovery. It is basically used to group common services together on a server, and it provides links to the schema documents of the services it describes.
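As an illustrative sketch (the URLs below are placeholders, and the exact schema may vary by .NET version), a simple .disco discovery document pointing at one service could look like:

```xml
<?xml version="1.0"?>
<discovery xmlns="http://schemas.xmlsoap.org/disco/">
  <!-- ref points at the service's WSDL contract; docRef at the service page itself -->
  <contractRef ref="http://localhost/MyService/Service.asmx?WSDL"
               docRef="http://localhost/MyService/Service.asmx"
               xmlns="http://schemas.xmlsoap.org/disco/scl/" />
</discovery>
```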

Disco.exe:

The Web Services Discovery tool discovers the URLs of XML Web services located on a Web server and saves documents related to each XML Web service on a local disk.

Create a Web Service:

This sample explains the creation of a sample web service and how to consume it.

Step 1: Create a new web service by clicking File -> New -> Web Site and selecting "ASP.NET Web Service".

Step 2:

Create a class and the methods which need to be exposed as a service. Decorate the class with the "WebService" attribute and the methods with the "WebMethod" attribute.

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
// To allow this Web Service to be called from script
// using ASP.NET AJAX, uncomment the following line.
// [System.Web.Script.Services.ScriptService]
public class Service : System.Web.Services.WebService
{
    public Service()
    {
        // Uncomment the following line if using designed components
        // InitializeComponent();
    }

    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }

    [WebMethod]
    public string SayHello(string name)
    {
        return "Hello " + name;
    }
}

Step 3:

Run the web service

Step 4: Create the client application to consume the service by clicking File->New->Project and select the Console Application.

Step 5: Right click the project file and select “Add Service Reference”

Step 6:

Create a new instance of the proxy class and call the web method "SayHello":

class Program
{
    static void Main(string[] args)
    {
        ServiceReference1.ServiceSoapClient proxy =
            new ServiceReference1.ServiceSoapClient();
        Console.WriteLine(proxy.SayHello("Ram"));
        Console.ReadLine();
    }
}
 

Step 7:

The output window is shown.

Difference between Add Reference and Add Service reference:

Add Reference is used to add .NET assemblies and COM components to the project, whereas Add Service Reference is used to create a proxy for the web service.

Where on the Internet would you look for Web services?

http://www.uddi.org

When would you use .NET Remoting and when Web services?

When both the service and the client are on the .NET platform, .NET Remoting is more efficient; if the server and the client are on different platforms, use a web service for communication.

Authentication for Web Services (using SOAP headers):

It had to be simple for the client applications to authenticate, and the web-based administration system had to be used. This ruled out Windows authentication (which would otherwise be fairly easy to use for the clients of this web service). Passing username and password information in SOAP headers greatly simplifies the authentication request.

Using the code

I wanted to make it really easy for the client to understand:

protected System.Web.UI.WebControls.DataGrid dgData;

private void Page_Load(object sender, System.EventArgs e)
{
    // simple client
    AuthWebService.WebService webService = new AuthWebService.WebService();
    AuthWebService.AuthHeader authentication = new AuthWebService.AuthHeader();

    authentication.Username = "test";
    authentication.Password = "test";
    webService.AuthHeaderValue = authentication;

    // Bind the results - do something here
    DataSet dsData = webService.SensitiveData();
    dgData.DataSource = dsData;
    dgData.DataBind();
}

Basically, all the client needs to do is create an authentication object, fill out the username and password, and then pass them to the web service object. The web service code is also pretty simple; the .NET Framework lets you create custom SOAP headers by deriving from the SoapHeader class, so we add a username and password:

using System.Web.Services.Protocols;

public class AuthHeader : SoapHeader

{

    public string Username;

    public string Password;

}

The next step is to identify the web service methods that need authentication; in the included example it's the method SensitiveData. To force the use of our new SOAP header, we need to add the following attribute to our method:

[SoapHeader("Authentication", Required = true)]

So our full definition for our web service method is:

public AuthHeader Authentication;

[SoapHeader("Authentication", Required = true)]
[WebMethod(Description = "Returns some sample data")]
public DataSet SensitiveData()
{
    DataSet data = new DataSet();

    // Do our authentication;
    // this can be via a database or whatever
    if (Authentication.Username == "test" && Authentication.Password == "test")
    {
        // they are allowed access to our sensitive data,
        // so just create some dummy data
        DataTable dtTable1 = new DataTable();
        DataColumn drCol1 = new DataColumn("Data", System.Type.GetType("System.String"));
        dtTable1.Columns.Add(drCol1);

        DataRow drRow = dtTable1.NewRow();
        drRow["Data"] = "Sensitive Data";
        dtTable1.Rows.Add(drRow);
        dtTable1.AcceptChanges();

        data.Tables.Add(dtTable1);
    }
    else
    {
        data = null;
    }

    return data;
}

 

I should also mention that when I say SOAP headers, I actually mean the soap:Header element in a SOAP request; it has nothing to do with the HTTP headers sent with the request. The SOAP request looks something like:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <AuthHeader xmlns="http://tempuri.org/">
      <Username>string</Username>
      <Password>string</Password>
    </AuthHeader>
  </soap:Header>
  <soap:Body>
    <SensitiveData xmlns="http://tempuri.org/" />
  </soap:Body>
</soap:Envelope>