Wednesday, October 28, 2009

Codd Rules

Rule 1 : The information Rule.
"All information in a relational data base is represented explicitly at the logical level and in exactly one way - by values in tables."

Everything within the database exists in tables and is accessed via table access routines.

Rule 2 : Guaranteed access Rule.
"Each and every datum (atomic value) in a relational data base is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name."

To access any data item you specify the table and the column in which it exists; there is no reading of characters 10 to 20 of a 255-byte string.

Rule 3 : Systematic treatment of null values.
"Null values (distinct from the empty character string or a string of blank characters and distinct from zero or any other number) are supported in fully relational DBMS for representing missing information and inapplicable information in a systematic way, independent of data type."

If data does not exist or does not apply, a value of NULL is stored; the RDBMS understands NULL as meaning missing or inapplicable data, independent of the column's data type.

Rule 4 : Dynamic on-line catalog based on the relational model.
"The data base description is represented at the logical level in the same way as-ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to the regular data."

The data dictionary is held within the RDBMS, so there is no need for off-line volumes to tell you the structure of the database.

Rule 5 : Comprehensive data sub-language Rule.
"A relational system may support several languages and various modes of terminal use (for example, the fill-in-the-blanks mode). However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all the following items

Data Definition
View Definition
Data Manipulation (Interactive and by program).
Integrity Constraints
Authorization.

Every RDBMS should provide a language that allows the user to both query and manipulate the contents of the database.

Rule 6 : View updating Rule.
"All views that are theoretically updatable are also updatable by the system."

If a view is theoretically updatable (for example, a view on a single base table that includes the primary key), then the RDBMS must allow the user to update through it, with the changes applied to the underlying base tables.

Rule 7 : High-level insert, update and delete.
"The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update and deletion of data."

These operations must work on sets of rows, not just one row at a time; for example, the user should be able to modify several base tables at once by operating on a view for which they act as base tables.

Rule 8 : Physical data independence.
"Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods."

The user should not be aware of where, or upon which media, the data files are stored.

Rule 9 : Logical data independence.
"Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit un-impairment are made to the base tables."

User programs and users should not be affected by information-preserving changes to the structure of the tables (such as the addition of extra columns).

Rule 10 : Integrity independence.
"Integrity constraints specific to a particular relational data base must be definable in the relational data sub-language and storable in the catalog, not in the application programs."

If a column only accepts certain values, it is the RDBMS that enforces these constraints, not the user program. This means an invalid value can never be entered into the column; if the constraints were instead enforced by programs, there would always be a chance that a buggy program might allow incorrect values into the system.

Rule 11 : Distribution independence.
"A relational DBMS has distribution independence."

The RDBMS may be spread across more than one system and across several networks; to the end user, however, remote tables should appear no different from local ones.

Rule 12 : Non-subversion Rule.
"If a relational system has a low-level (single-record-at-a-time) language, that low level cannot be used to subvert or bypass the integrity Rules and constraints expressed in the higher level relational language (multiple-records-at-a-time)."

The RDBMS should prevent users from accessing the data without going through the RDBMS's own data-read functions.
In Rule 5 Codd stated that an RDBMS requires a query language; however, he did not explicitly state that SQL should be the query tool, just that there should be one, and many of the initial products had their own tools: Oracle had UFI (User Friendly Interface), Ingres had QUEL, and IBM's never-released System R prototype had a language called SEQUEL. The acronym SQL is often pronounced "sequel" because it was SEQUEL that provided the core functionality of SQL.
Even when the vendors eventually all started offering SQL, the flavours were, and still are, radically different, with widely varying syntax. This situation was somewhat resolved in the late 1980s when ANSI brought out its first definition of the SQL syntax.
That standard has since been revised (the second major version is commonly known as SQL2, or SQL-92), and all vendors now offer a standard core SQL. ANSI SQL is somewhat limited, however, so all RDBMS providers offer extensions to it, which differ from vendor to vendor.

Tuesday, October 20, 2009

ADO.Net Interview Questions

1. What is ADO.Net?

ActiveX Data Objects for .NET (ADO.NET) is the primary relational data access model for Microsoft .NET-based applications. ADO.NET provides consistent data access to database management systems (DBMSs) such as SQL Server and Oracle. ADO.NET is designed to meet the requirements of the web-based application model: disconnected data architecture, integration with XML, a common data representation, the ability to combine data from multiple data sources, and optimized interaction with the database.

2. Explain the ADO.Net Architecture?

The ADO.NET architecture includes three data providers for implementing connectivity with databases: the SQL Server .NET Data Provider, the OLE DB .NET Data Provider, and the ODBC .NET Data Provider. You can access data through a data provider in two ways: using a DataReader or a DataAdapter. The design goals of ADO.NET were to:

Leverage current ADO knowledge
Support the N-tier programming model
Provide support for XML

In distributed applications, the concept of working with disconnected data has become very common. A disconnected model means that once you have retrieved the data that you need, the connection to the data source is dropped—you work with the data locally. The reason why this model has become so popular is that it frees up precious database server resources, which leads to highly scalable applications. The ADO.NET solution for disconnected data is the DataSet object.

Data Access in ADO.NET relies on two components:

DataSet
Data Provider.
DataSet


The ADO.NET DataSet is explicitly designed for data access independent of any data source. As a result, it can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet contains a collection of one or more DataTable objects made up of rows and columns of data, as well as primary key, foreign key, constraint, and relation information about the data in the DataTable objects.
The DataSet is a disconnected, in-memory representation of data; it can be considered a local copy of the relevant portions of the database. The DataSet is persisted in memory, and the data in it can be manipulated and updated independently of the database. When work with the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database, or a Microsoft Access database.



Data Provider

The Data Provider is responsible for providing and maintaining the connection to the database. A Data Provider is a set of related components that work together to provide data in an efficient, performance-driven manner. The .NET Framework currently comes with two main Data Providers: the SQL Data Provider, which is designed to work only with Microsoft SQL Server 7.0 or later, and the OleDb Data Provider, which allows us to connect to other types of databases such as Access and Oracle. Each Data Provider consists of the following component classes:

The Connection object which provides a connection to the database
The Command object which is used to execute a command
The DataReader object which provides a forward-only, read-only, connected recordset
The DataAdapter object which populates a disconnected DataSet with data and performs update

Data access with ADO.NET can be summarized as follows:
A connection object establishes the connection for the application with the database. The command object provides direct execution of the command to the database. If the command returns more than a single value, the command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill the Dataset object. The database can be updated using the command object or the DataAdapter.
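As a rough illustration of that flow (the connection string, table and column names are hypothetical), a minimal C# sketch:

using System;
using System.Data;
using System.Data.SqlClient;

// Hypothetical connection string; adjust for your environment.
string connStr = "Data Source=TestServer;Initial Catalog=Northwind;Integrated Security=SSPI;";

using (SqlConnection conn = new SqlConnection(connStr))
{
    conn.Open();

    // Connected access: Command + DataReader.
    SqlCommand cmd = new SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn);
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
            Console.WriteLine("{0} - {1}", rdr.GetString(0), rdr.GetString(1));
    }

    // Disconnected access: DataAdapter fills a DataSet.
    SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn);
    DataSet ds = new DataSet();
    adapter.Fill(ds, "Customers"); // the connection is used only while filling
}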


Component classes that make up the Data Providers

The Connection Object


The Connection object creates the connection to the database. Microsoft Visual Studio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.
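For illustration (the connection string values are hypothetical), a connection is typically opened, used and closed like this; wrapping it in a using block guarantees the connection is closed even if an exception occurs:

using System.Data.SqlClient;

using (SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind"))
{
    nwindConn.Open();
    // ... execute commands against nwindConn here ...
}   // Dispose() closes the connection, returning it to the pool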







The Command Object


The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly. Command objects provide three methods that are used to execute commands on the database:
ExecuteNonQuery: Executes commands that have no return values such as INSERT, UPDATE or DELETE
ExecuteScalar: Returns a single value from a database query
ExecuteReader: Returns a result set by way of a DataReader object
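A short sketch of the three methods (the SQL statements and an already-open connection conn are assumptions for illustration):

SqlCommand cmd = conn.CreateCommand();

cmd.CommandText = "UPDATE Customers SET City = 'London' WHERE CustomerID = 'ALFKI'";
int rowsAffected = cmd.ExecuteNonQuery();       // INSERT/UPDATE/DELETE: returns rows affected

cmd.CommandText = "SELECT COUNT(*) FROM Customers";
int customerCount = (int)cmd.ExecuteScalar();   // single value: first column of first row

cmd.CommandText = "SELECT CustomerID, CompanyName FROM Customers";
SqlDataReader rdr = cmd.ExecuteReader();        // result set via a DataReader; Close() it when done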



The DataReader Object


The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when you do not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead in terms of system performance but requires the exclusive use of an open Connection object for the lifetime of the DataReader.



The DataAdapter Object


The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter fills a DataTable or DataSet with data from the database using its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling its Update method. The DataAdapter provides four properties that represent database commands:



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.
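A minimal sketch of the Fill/Update cycle (the connection conn and the Customers table are assumptions; here a SqlCommandBuilder generates the insert, update and delete commands from the select command):

SqlDataAdapter adapter = new SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn);
SqlCommandBuilder builder = new SqlCommandBuilder(adapter); // derives Insert/Update/DeleteCommand

DataSet ds = new DataSet();
adapter.Fill(ds, "Customers");   // populate the disconnected DataSet

ds.Tables["Customers"].Rows[0]["CompanyName"] = "New Name";  // edit locally

adapter.Update(ds, "Customers"); // changed rows are written back to the database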


3. What are the advantages and drawbacks of using ADO.NET?

Pros

ADO.NET is rich with plenty of features that are bound to impress even the most skeptical of programmers. If this weren’t the case, Microsoft wouldn’t even be able to get anyone to use the Beta. What we’ve done here is come up with a short list of some of the more outstanding benefits to using the ADO.NET architecture and the System.Data namespace.

* Performance – there is no doubt that ADO.NET is extremely fast. The actual figures vary depending on who performed the test and which benchmark was being used, but ADO.NET performs much, much faster at the same tasks than its predecessor, ADO. Some of the reasons why ADO.NET is faster than ADO are discussed in the ADO versus ADO.NET section later in this chapter.

* Optimized SQL Provider – in addition to performing well under general circumstances, ADO.NET includes a SQL Server Data Provider that is highly optimized for interaction with SQL Server. It uses SQL Server’s own TDS (Tabular Data Stream) format for exchanging information. Without question, your SQL Server 7 and above data access operations will run blazingly fast utilizing this optimized Data Provider.

* XML Support (and Reliance) – everything you do in ADO.NET at some point will boil down to the use of XML. In fact, many of the classes in ADO.NET, such as the DataSet, are so intertwined with XML that they simply cannot exist or function without utilizing the technology. You’ll see later when we compare and contrast the “old” and the “new” why the reliance on XML for internal storage provides many, many advantages, both to the framework and to the programmer utilizing the class library.

* Disconnected Operation Model – the core ADO.NET class, the DataSet, operates in an entirely disconnected fashion. This may be new to some programmers, but it is a remarkably efficient and scalable architecture. Because the disconnected model allows for the DataSet class to be unaware of the origin of its data, an unlimited number of supported data sources can be plugged into code without any hassle in the future.

* Rich Object Model – the entire ADO.NET architecture is built on a hierarchy of class inheritance and interface implementation. Once you start looking for things you need within this namespace, you’ll find that the logical inheritance of features and base class support makes the entire system extremely easy to use, and very customizable to suit your own needs. It is just another example of how everything in the .NET framework is pushing toward a trend of strong application design and strong OOP implementations.


Cons

Hard as it may be to believe, there are a couple of drawbacks or disadvantages to using the ADO.NET architecture. I’m sure others can find many more faults than we list here, but we decided to stick with a short list of some of the more obvious and important shortcomings of the technology.

* Managed-Only Access – for a few obvious reasons, and some far more technical, you cannot utilize the ADO.NET architecture from anything but managed code. This means that there is no COM interoperability allowed for ADO.NET. Therefore, in order to take advantage of the advanced SQL Server Data Provider and any other feature like DataSets, XML internal data storage, etc, your code must be running under the CLR.

* Only Three Managed Data Providers (so far) – unfortunately, if you need to access any data that requires a driver that cannot be used through either an OLE DB provider or the SQL Server Data Provider, then you may be out of luck. However, the good news is that the ODBC .NET Data Provider is available for download from Microsoft. At that point the downside becomes one of performance, in which you are invoking multiple layers of abstraction as well as crossing the COM Interop gap, incurring some initial overhead as well.

* Learning Curve – despite the misleading name, ADO.NET is not simply a new version of ADO, nor should it even be considered a direct successor. ADO.NET should be thought of more as the data access class library for use with the .NET framework. The difficulty in learning to use ADO.NET to its fullest is that a lot of it does seem familiar. It is this that causes some common pitfalls. Programmers need to learn that even though some syntax may appear the same, there is actually a considerable amount of difference in the internal workings of many classes. For example (this will be discussed in far more detail later), an ADO.NET DataSet is nothing at all like a disconnected ADO RecordSet. Some may consider a learning curve a drawback, but I consider learning curves more like scheduling issues. There’s a learning curve in learning anything new; it’s just up to you to schedule that curve into your time so that you can learn the new technology at a pace that fits your schedule.


4. Explain what a DiffGram is and its usage?

A DiffGram is an XML format that is used to identify current and original versions of data elements. The DataSet uses the DiffGram format to load and persist its contents, and to serialize its contents for transport across a network connection. When a DataSet is written as a DiffGram, it populates the DiffGram with all the necessary information to accurately recreate the contents, though not the schema, of the DataSet, including column values from both the Original and Current row versions, row error information, and row order.
When sending and retrieving a DataSet from an XML Web service, the DiffGram format is implicitly used. Additionally, when loading the contents of a DataSet from XML using the ReadXml method, or when writing the contents of a DataSet in XML using the WriteXml method, you can select that the contents be read or written as a DiffGram.
The DiffGram format is divided into three sections: the current data, the original (or "before") data, and an errors section, as shown in the following example.





<?xml version="1.0"?>
<diffgr:diffgram
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">

<DataInstance>
</DataInstance>

<diffgr:before>
</diffgr:before>

<diffgr:errors>
</diffgr:errors>

</diffgr:diffgram>

The DiffGram format consists of the following blocks of data:

<DataInstance>
The name of this element, DataInstance, is used for explanation purposes in this documentation. A DataInstance element represents a DataSet or a row of a DataTable. Instead of DataInstance, the element would contain the name of the DataSet or DataTable. This block of the DiffGram format contains the current data, whether it has been modified or not. An element, or row, that has been modified is identified with the diffgr:hasChanges annotation.

<diffgr:before>
This block of the DiffGram format contains the original version of a row. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.

<diffgr:errors>
This block of the DiffGram format contains error information for a particular row in the DataInstance block. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.



Attribute
Description

diffgr:hasChanges
The row has been modified (see the related row in <diffgr:before>) or inserted.

diffgr:hasErrors
The row has an error (see the related row in <diffgr:errors>).

diffgr:id
Identifies the ID used to couple rows across sections: TableName+RowIdentifier.

diffgr:parentId
Identifies the ID used to identify the parent of the current row.

diffgr:error
Contains the error text for the row in <diffgr:errors>.

msdata:rowOrder
Tracks the ordinal position of the row in the DataSet.

msdata:hidden
Identifies columns marked as hidden (msdata:hiddenColumn=…).



5. Can you edit data in the Repeater control?

NO.

6. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

You have to use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.
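For example (the query, the open connection conn, and the table name are illustrative):

SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn);
DataSet ds = new DataSet();
adapter.Fill(ds, "Customers"); // loads the result set into ds.Tables["Customers"]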

7. Which are the different IsolationLevels ?

Isolation Level
Description

ReadCommitted
The default for SQL Server. This level ensures that data written by one transaction will only be accessible in a second transaction after the first transaction commits.

ReadUncommitted
This permits your transaction to read data within the database, even data that has not yet been committed by another transaction. For example, if two users were accessing the same database, and the first inserted some data without concluding their transaction (by means of a Commit or Rollback), then the second user with their isolation level set to ReadUncommitted could read the data.

RepeatableRead
This level, which extends the ReadCommitted level, ensures that if the same statement is issued within the transaction, regardless of other potential updates made to the database, the same data will always be returned. This level does require extra locks to be held on the data, which could adversely affect performance. This level guarantees that, for each row in the initial query, no changes can be made to that data. It does, however, permit "phantom" rows to show up — these are completely new rows that another transaction might have inserted while your transaction was running.

Serializable
This is the most "exclusive" transaction level, which in effect serializes access to data within the database. With this isolation level, phantom rows can never show up, so a SQL statement issued within a serializable transaction will always retrieve the same data. The negative performance impact of a Serializable transaction should not be underestimated — if you don't absolutely need to use this level of isolation, stay away from it.



8. How can XML files be read and written using a DataSet?

The DataSet exposes methods such as ReadXml and WriteXml to read and write XML.
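For example (file names are hypothetical; ds is a populated DataSet):

ds.WriteXml("customers.xml");                        // data only
ds.WriteXml("customers.xml", XmlWriteMode.DiffGram); // data with current and original versions
ds.WriteXmlSchema("customers.xsd");                  // schema only, as XSD

DataSet ds2 = new DataSet();
ds2.ReadXml("customers.xml"); // loads the data (and any inline schema)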

9. What are the different rowversions available?

DataRow Version Value
Description

Current
The value existing at present within the column. If no edit has occurred, this will be the same as the original value. If an edit (or edits) have occurred, the value will be the last valid value entered.

Default
The default value (in other words, any default set up for the column).

Original
The value of the column when originally selected from the database. If the DataRow's AcceptChanges method is called, this value will update to the Current value.

Proposed
When changes are in progress for a row, it is possible to retrieve this modified value. If you call BeginEdit() on the row and make changes, each column will have a proposed value until either EndEdit() or CancelEdit() is called.
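A short sketch of when each version is visible (ds is a populated DataSet; table and column names are illustrative):

DataRow row = ds.Tables["Customers"].Rows[0];

object original = row["CompanyName", DataRowVersion.Original];

row.BeginEdit();
row["CompanyName"] = "Proposed Name";
object proposed = row["CompanyName", DataRowVersion.Proposed]; // exists only during the edit
row.EndEdit();   // the proposed value becomes the current value

object current = row["CompanyName", DataRowVersion.Current];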



10. Explain ACID properties?.

The ACID model is one of the oldest and most important concepts of database theory. It sets forward four goals that every database management system must strive to achieve: atomicity, consistency, isolation and durability. No database that fails to meet any of these four goals can be considered reliable.

Let’s take a moment to examine each one of these characteristics in detail:



Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.


Consistency states that only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules. On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules.


Isolation requires that multiple transactions occurring at the same time not impact each other’s execution. For example, if Joe issues a transaction against a database at the same time that Mary issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform Joe’s entire transaction before executing Mary’s or vice-versa. This prevents Joe’s transaction from reading intermediate data produced as a side effect of part of Mary’s transaction that will not eventually be committed to the database. Note that the isolation property does not ensure which transaction will execute first, merely that they will not interfere with each other.


Durability ensures that any transaction committed to the database will not be lost. Durability is ensured through the use of database backups and transaction logs that facilitate the restoration of committed transactions in spite of any subsequent software or hardware failures.


11. Differences Between ADO and ADO.NET

ADO.NET is an evolution of ADO. The following table lists several data access features and how each feature differs between ADO and ADO.NET.



Feature
ADO
ADO.NET

Memory-resident data representation
Uses the Recordset object, which looks like a single database table of rows and columns
Uses the DataSet object, which can contain one or more tables represented by DataTable objects

Relationships between multiple tables
Requires the JOIN query to assemble data from multiple database tables in a single result table. Also offers hierarchical recordsets, but they are hard to use
Supports the DataRelation object to associate rows in one DataTable object with rows in another DataTable object

Data navigation
Traverses rows in a Recordset sequentially, by using the .MoveNext method
The DataSet uses a navigation paradigm for nonsequential access to rows in a table. Accessing the data is more like accessing data in a collection or array. This is possible because of the Rows collection of the DataTable; it allows you to access rows by index. Follows relationships to navigate from rows in one table to corresponding rows in another table

Disconnected access
Provided by the Recordset but it has to be explicitly coded for. The default for a Recordset object is to be connected via the ActiveConnection property. You communicate to a database with calls to an OLE DB provider
Communicates to a database with standardized calls to the DataAdapter object, which communicates to an OLE DB data provider, or directly to a SQL Server data provider

Programmability
All Recordset field data types are COM Variant data types, and usually correspond to field names in a database table
Uses the strongly typed programming characteristic of XML. Data is self-describing because names for code items correspond to the business problem solved by the code. Data in DataSet and DataReader objects can be strongly typed, thus making code easier to read and to write

Sharing disconnected data between tiers or components
Uses COM marshaling to transmit a disconnected record set. This supports only those data types defined by the COM standard. Requires type conversions, which demand system resources
Transmits a DataSet as XML. The XML format places no restrictions on data types and requires no type conversions

Transmitting data through firewalls
Problematic, because firewalls are typically configured to prevent system-level requests such as COM marshaling
Supported, because ADO.NET DataSet objects use XML, which can pass through firewalls

Scalability
Because ADO defaults to connected Recordset objects, database locks and active database connections are held for long durations, contending for limited database resources
Disconnected access to database data without retaining database locks or active database connections for lengthy periods limits contention for limited database resources



12. What are the different types of Commands available with the DataAdapter?

The SqlDataAdapter has four command objects



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand


13. What is a Dataset?

A major component of ADO.NET is the DataSet object, which you can think of as being similar to an in-memory relational database. DataSet objects contain DataTable objects, relationships, and constraints, allowing them to replicate an entire data source, or selected parts of it, in a disconnected fashion.
A DataSet object is always disconnected from the source whose data it contains, and as a consequence it doesn't care where the data comes from—it can be used to manipulate data from a traditional database or an XML document, or anything in between. In order to connect a DataSet to a data source, you need to use a data adapter as an intermediary between the DataSet and the .NET data provider.

Datasets are the result of bringing together ADO and XML. A dataset contains one or more tables of XML data, known as DataTables; these tables can be treated separately, or can have relationships defined between them. Indeed, these relationships give you ADO data shaping without needing to master the SHAPE language, which many people are not comfortable with.
The dataset is a disconnected, in-memory cache of data. The dataset object model looks like this:
Dataset
    DataTableCollection
        DataTable
            DataView
            DataRowCollection
                DataRow
            DataColumnCollection
                DataColumn
            ChildRelations
            ParentRelations
            Constraints
            PrimaryKey
    DataRelationCollection
Let’s take a look at each of these:
DataTableCollection: Since a DataSet is an in-memory database, it has this collection, which holds the data from multiple tables in a single DataSet object.
DataTable: Within the DataTableCollection, DataTable objects represent the individual tables of the dataset.
DataView: Just as a database has views, a DataSet can have DataViews, which we can use to sort and filter the data.
DataRowCollection: Each DataTable has a DataRowCollection to represent its rows.
DataRow: Each row of the DataRowCollection is represented by a DataRow.
DataColumnCollection: Each DataTable has a DataColumnCollection to represent its columns.
DataColumn: Each column of the DataColumnCollection is represented by a DataColumn.
PrimaryKey: The dataset defines a primary key for the table, and primary key validation takes place without going to the database.
Constraints: We can define various constraints on the tables and enable them through the DataSet's EnforceConstraints property; the constraints are then checked whenever we enter data into a DataTable.
DataRelationCollection: Since we can have more than one table in the dataset, we can define relationships between these tables using this collection and maintain a parent-child relationship.


14. How you will set the datarelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the table's Constraints collection. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.
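A minimal sketch of both constraint styles (the dataset custDS and its table and column names are hypothetical):

// Unique constraint: set the Unique property of the parent column.
custDS.Tables["Customers"].Columns["CustID"].Unique = true;

// Foreign-key constraint with explicit referential integrity rules.
ForeignKeyConstraint fk = new ForeignKeyConstraint("FK_CustOrders",
    custDS.Tables["Customers"].Columns["CustID"],
    custDS.Tables["Orders"].Columns["CustID"]);
fk.UpdateRule = Rule.Cascade;                // applied when a parent record is updated
fk.DeleteRule = Rule.Cascade;                // applied when a parent record is deleted
fk.AcceptRejectRule = AcceptRejectRule.None; // applied when a change is accepted or rejected
custDS.Tables["Orders"].Constraints.Add(fk);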

15. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

Use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.

16. How do you handle data concurrency in .NET ?

In general, there are three common ways to manage concurrency in a database:



Pessimistic concurrency control: A row is unavailable to users from the time the record is fetched until it is updated in the database.
Optimistic concurrency control: A row is unavailable to other users only while the data is actually being updated. The update examines the row in the database and determines whether any changes have been made. Attempting to update a record that has already been changed results in a concurrency violation.
"Last in wins": A row is unavailable to other users only while the data is actually being updated. However, no effort is made to compare updates against the original record; the record is simply written out, potentially overwriting any changes made by other users since you last refreshed the records.
Pessimistic Concurrency

Pessimistic concurrency is typically used for two reasons. First, in some situations there is high contention for the same records. The cost of placing locks on the data is less than the cost of rolling back changes when concurrency conflicts occur.
Pessimistic concurrency is also useful for situations where it is detrimental for the record to change during the course of a transaction. A good example is an inventory application. Consider a company representative checking inventory for a potential customer. You typically want to lock the record until an order is generated, which would generally flag the item with a status of ordered and remove it from available inventory. If no order is generated, the lock would be released so that other users checking inventory get an accurate count of available inventory.
However, pessimistic concurrency control is not possible in a disconnected architecture. Connections are open only long enough to read the data or to update it, so locks cannot be sustained for long periods. Moreover, an application that holds onto locks for long periods is not scalable.


Optimistic Concurrency


In optimistic concurrency, locks are set and held only while the database is being accessed. The locks prevent other users from attempting to update records at the same instant. The data is always available except for the exact moment that an update is taking place. For more information, see Using Optimistic Concurrency.
When an update is attempted, the original version of a changed row is compared against the existing row in the database. If the two are different, the update fails with a concurrency error. It is up to you at that point to reconcile the two rows, using business logic that you create.


Last in Wins


With "last in wins," no check of the original data is made and the update is simply written to the database. It is understood that the following scenario can occur:



User A fetches a record from the database.
User B fetches the same record from the database, modifies it, and writes the updated record back to the database.
User A modifies the 'old' record and writes it back to the database.
In the above scenario, the changes User B made were never seen by User A. Be sure that this situation is acceptable if you plan to use the "last in wins" approach of concurrency control.


Concurrency Control in ADO.NET and Visual Studio


ADO.NET and Visual Studio use optimistic concurrency, because the data architecture is based on disconnected data. Therefore, you need to add business logic to resolve issues with optimistic concurrency.
If you choose to use optimistic concurrency, there are two general ways to determine if changes have occurred: the version approach (true version numbers or date-time stamps) and the saving-all-values approach.


The Version Number Approach


In the version number approach, the record to be updated must have a column that contains a date-time stamp or version number. The date-time stamp or a version number is saved on the client when the record is read. This value is then made part of the update.
One way to handle concurrency is to update only if the value in the WHERE clause matches the value on the record. The SQL representation of this approach is:



UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE DateTimeStamp = @origDateTimeStamp

Alternatively, the comparison can be made using the version number:


UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE RowVersion = @origRowVersionValue


If the date-time stamps or version numbers match, the record in the data store has not changed and can be safely updated with the new values from the dataset. An error is returned if they don't match. You can write code to implement this form of concurrency checking in Visual Studio. You will also have to write code to respond to any update conflicts. To keep the date-time stamp or version number accurate, you need to set up a trigger on the table to update it when a change to a row occurs.
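As an illustrative sketch of how the original value is bound to the WHERE-clause parameter in ADO.NET (the command text follows the example above; the parameter types and sizes are assumptions):

SqlCommand updateCmd = new SqlCommand(
    "UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 " +
    "WHERE RowVersion = @origRowVersionValue", conn);

updateCmd.Parameters.Add("@newvalue1", SqlDbType.NVarChar, 50, "Column1");
updateCmd.Parameters.Add("@newvalue2", SqlDbType.NVarChar, 50, "Column2");

// Bind the concurrency check to the value read when the row was fetched.
SqlParameter orig = updateCmd.Parameters.Add("@origRowVersionValue", SqlDbType.Timestamp, 8, "RowVersion");
orig.SourceVersion = DataRowVersion.Original;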


The Saving-All-Values Approach


An alternative to using a date-time stamp or version number is to get copies of all the fields when the record is read. The DataSet object in ADO.NET maintains two versions of each modified record: an original version (that was originally read from the data source) and a modified version, representing the user updates. When attempting to write the record back to the data source, the original values in the data row are compared against the record in the data source. If they match, it means that the database record has not changed since it was read. In that case, the changed values from the dataset are successfully written to the database.
Each of the data adapter's four commands (DELETE, INSERT, SELECT, and UPDATE) has its own parameters collection. Each command has parameters for the original values as well as the current (or modified) values.
The following example shows the command text for a dataset command that updates a typical Customers table. The command is specified for dynamic SQL and optimistic concurrency.



UPDATE Customers SET
    CustomerID = @currCustomerID,
    CompanyName = @currCompanyName,
    ContactName = @currContactName,
    ContactTitle = @currContactTitle,
    Address = @currAddress,
    City = @currCity,
    PostalCode = @currPostalCode,
    Phone = @currPhone,
    Fax = @currFax
WHERE (CustomerID = @origCustomerID)
    AND (Address = @origAddress OR @origAddress IS NULL AND Address IS NULL)
    AND (City = @origCity OR @origCity IS NULL AND City IS NULL)
    AND (CompanyName = @origCompanyName OR @origCompanyName IS NULL AND CompanyName IS NULL)
    AND (ContactName = @origContactName OR @origContactName IS NULL AND ContactName IS NULL)
    AND (ContactTitle = @origContactTitle OR @origContactTitle IS NULL AND ContactTitle IS NULL)
    AND (Fax = @origFax OR @origFax IS NULL AND Fax IS NULL)
    AND (Phone = @origPhone OR @origPhone IS NULL AND Phone IS NULL)
    AND (PostalCode = @origPostalCode OR @origPostalCode IS NULL AND PostalCode IS NULL);
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address, City, PostalCode, Phone, Fax
FROM Customers
WHERE (CustomerID = @currCustomerID)

Note that the nine SET statement parameters represent the current values that will be written to the database, whereas the nine WHERE statement parameters represent the original values that are used to locate the original record.
The first nine parameters in the SET statement correspond to the first nine parameters in the parameters collection. These parameters would have their SourceVersion property set to Current.
The next nine parameters in the WHERE statement are used for optimistic concurrency. These placeholders would correspond to the next nine parameters in the parameters collection, and each of these parameters would have their SourceVersion property set to Original.
The SELECT statement is used to refresh the dataset after the update has occurred. It is generated when you set the Refresh the DataSet option in the Advanced SQL Generation Options dialog box.


17. What are relation objects in dataset and how & where to use them?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table. Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.

The following code example creates a DataRelation using two DataTable objects in a DataSet. Each DataTable contains a column named CustID, which serves as a link between the two DataTable objects. The example adds a single DataRelation to the Relations collection of the DataSet. The first argument in the example specifies the name of the DataRelation being created. The second argument sets the parent DataColumn and the third argument sets the child DataColumn.

custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


18. Difference between OLEDB Provider and SqlClient ?

The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast: faster than the Oracle provider, and faster than accessing the database via the OleDb layer. It is faster because it talks to SQL Server directly in its native TDS protocol, with no intermediate layer (which automatically gives you better performance), and it was written with lots of help from the SQL Server team.

19. What are the different namespaces used in the project to connect the database? What data providers available in .net to connect to database?

Following are different Namespaces:



System.Data.OleDb - classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient - classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data source in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


20. What is Data Reader?

You can use the ADO.NET DataReader to retrieve a read-only, forward-only stream of data from a database. Using the DataReader can increase application performance and reduce system overhead because only one row at a time is ever in memory.
After creating an instance of the Command object, you create a DataReader by calling Command.ExecuteReader to retrieve rows from a data source, as shown in the following example.



SqlDataReader myReader = myCommand.ExecuteReader();



You use the Read method of the DataReader object to obtain a row from the results of the query.


while (myReader.Read())
{
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
}
myReader.Close();


21. What is Data Set?

The DataSet is a memory-resident representation of data that provides a consistent relational programming model regardless of the data source. It can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet represents a complete set of data including related tables, constraints, and relationships among the tables. The methods and objects in a DataSet are consistent with those in the relational database model. The DataSet can also persist and reload its contents as XML and its schema as XML Schema definition language (XSD) schema.

22. What is Data Adapter?

The DataAdapter serves as a bridge between a DataSet and a data source for retrieving and saving data. The DataAdapter provides this bridge by mapping Fill, which changes the data in the DataSet to match the data in the data source, and Update, which changes the data in the data source to match the data in the DataSet. If you are connecting to a Microsoft SQL Server database, you can increase overall performance by using the SqlDataAdapter along with its associated SqlCommand and SqlConnection. For other OLE DB-supported databases, use the DataAdapter with its associated OleDbCommand and OleDbConnection objects.

23. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

The Fill() method is used to load the generated dataset with data.

24. Explain different methods and Properties of DataReader which you have used in your project?

Following are the methods and properties:

Read
GetString
GetInt32
while (myReader.Read())
{
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
}
myReader.Close();


25. What happens when we issue Dataset.ReadXml command?

Reads the XML schema and data into the DataSet.

26. What Method checks if a datareader is closed or opened?

The IsClosed property (a property rather than a method) checks whether a DataReader is closed or open; it returns true when the reader is closed.

27. Which methods get the XML and the schema from a DataSet?

GetXml() and GetXmlSchema().

28. What are the differences between DataSet.Clone and DataSet.Copy?

The Difference is as follows:

Clone - Copies the structure of the DataSet, including all DataTable schemas, relations, and constraints. Does not copy any data.
Copy - Copies both the structure and data for this DataSet.
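For example (ds is an existing DataSet):

DataSet structureOnly = ds.Clone(); // tables, relations and constraints, but no rows
DataSet fullCopy = ds.Copy();       // same structure plus all of the data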


29. What are the differences between the Recordset and the DataSet objects?

Tables represented by the object: The ADO Recordset object represents only one table at a given time, while the DataSet object in ADO.NET can represent any number of tables, keys, constraints and relations, which makes it very much like an RDBMS.
Navigation: Navigating the Recordset object depends on the cursor used to create the object, with limited functionality for moving back and forth, while the DataSet represents data in "collections" that can be accessed through indexers in a random-access fashion.
Connection Model: The Recordset is designed to work as a "connected object" with a server-side cursor in mind, while the DataSet is designed to work as a disconnected object containing a hierarchy of data in XML format.
Database Updates: Updating a database through the use of a Recordset object is direct, since it is tied to the database. On the other hand, the DataSet, as an independent data store, must use a database-specific DataAdapter object to post updates to the database.


30. Which ADO.Net objects fall under connected database model and disconnected database model?

The DataReader object falls under connected model and DataSet, DataTable, DataAdapter objects fall under disconnected database model.


31. How to use ImportRow method?

The ImportRow method of a DataTable copies a row into the DataTable with all of the property settings and data of the source row. It creates a new row using the destination table's current schema and preserves the source row's values and its DataRowState (unlike Rows.Add, which marks the row as Added).


DataTable dt = new DataTable();  // fill this table (e.g. with a DataAdapter) before you use it
DataTable copyto = dt.Clone();   // Clone() copies the schema, so ImportRow finds matching columns

foreach (DataRow dr in dt.Rows)
{
    copyto.ImportRow(dr);
}


32. What are the pros and cons of using DataReader object?

The DataReader object is a forward-only resultset and is faster to traverse than its counterpart, the DataTable. However, it holds an active connection to the database until all records have been retrieved from it or it is closed explicitly. This can be a problem when the resultset holds a large number of records and the application has many concurrent users.

33. What are different execute methods of ADO.NET command object?

The ExecuteScalar method returns a single value: the first column of the first row of the resultset obtained from the execution of the SQL query.
The ExecuteNonQuery method executes a DML SQL query such as an insert, delete or update and returns the number of rows affected by the action.
The ExecuteReader method returns a DataReader object, which is a forward-only resultset.
The ExecuteXmlReader method is available for SQL Server 2000 or later; upon execution it builds an XmlReader object from the SQL query.


34. What is the difference between data reader and data adapter?

The DataReader is a forward-only, read-only cursor. If you access data through a DataReader you can show the data on a web form or control, but you cannot page through the records (because it is forward-only).

The DataReader is the best fit for simply showing data (where there is no need to work on the data).

The DataAdapter not only connects to the database (through a Command object) but provides four types of command (InsertCommand, UpdateCommand, DeleteCommand, SelectCommand). It supports the disconnected architecture of .NET, so we can populate the records into a DataSet; the DataAdapter is the best fit when you need to work on the data.


35. Difference between SqlCommand and SqlCommandBuilder?

SqlCommand is used to retrieve or update data in the database.

You can use SELECT, INSERT, UPDATE and DELETE statements with a SqlCommand; SqlCommand executes these commands against the database.

SqlCommandBuilder is used to build SQL commands automatically: it generates the INSERT, UPDATE and DELETE commands for a DataAdapter based on the DataAdapter's SELECT command.


36. Can you edit data in the Repeater control?

NO.

37. What are the different rowversions available?

There are four row versions:
Current:
The current values for the row. This row version does not exist for rows with a RowState of Deleted.
Default:
The default version for the current DataRowState. For a DataRowState value of Added, Modified or Unchanged, the default version is Current. For a DataRowState of Deleted, the version is Original. For a DataRowState value of Detached, the version is Proposed.
Original:
The row contains its original values.
Proposed:
The proposed values for the row. This row version exists during an edit operation on a row, or for a row that is not part of a DataRowCollection.


38. Explain DataSet.AcceptChanges and DataAdapter.Update methods?

A dataset maintains the row state of each row within a table. When a dataset is loaded, every row's state is Unchanged. Whenever a particular row within a DataTable is modified, the dataset changes that row's state to Modified, Added or Deleted, based on the action performed on the row.

AcceptChanges() changes the row state back to Unchanged.

Update() writes any changes made to the dataset back to the database. It checks the row state of each row within a table: if it finds a row in the Added state, that row is inserted; if it is Modified, it is updated; if it is Deleted, a DELETE statement is executed.

But if AcceptChanges() is called before Update(), nothing is written to the database, since every row state has become Unchanged.
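A short sketch of why the order matters (the adapter and dataset are assumed to be set up as in the earlier examples):

ds.Tables["Customers"].Rows[0]["City"] = "London"; // row state becomes Modified

adapter.Update(ds, "Customers"); // sends the UPDATE, then marks the row Unchanged

// Wrong order: AcceptChanges() first would reset every row state to Unchanged,
// so the following Update() would find nothing to write to the database.
// ds.AcceptChanges();
// adapter.Update(ds, "Customers");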


39. How you will set the datarelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the table's Constraints collection. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.

40. What connections does Microsoft SQL Server support?

Windows Authentication (via Active Directory) and SQL Server authentication (via Microsoft SQL Server usernames and passwords).



41. Which one is trusted and which one is untrusted?

Windows Authentication is trusted because the username and password are checked against Active Directory; SQL Server authentication is untrusted because SQL Server is the only verifier participating in the transaction.

42. What is Connection pooling?

The connection represents an open and unique link to a data source. In a distributed system, this often involves a network connection. Depending on the underlying data source, the programming interface of the various connection objects may differ quite a bit. A connection object is specific to a particular type of data source, such as SQL Server and Oracle. Connection objects can't be used interchangeably across different data sources, but all share a common set of methods and properties grouped in the IDbConnection interface.
In ADO.NET, connection objects are implemented within data providers as sealed classes (that is, they are not further inheritable). This means that the behavior of a connection class can never be modified or overridden, just configured through properties and attributes. In ADO.NET, all connection classes support connection pooling, although each class may implement it differently. Connection pooling is implicit, meaning that you don't need to enable it because the provider manages this automatically.
ADO.NET pools connections that share the same configuration (connection string). It can maintain more than one pool (one for each configuration). An interesting note: connection pooling is utilized by default unless otherwise specified. If you close and dispose of all connections, then there will be no pool (since there are no available connections).
While leaving database connections continuously open can be troublesome, it can be advantageous for applications that are in constant communication with a database by negating the need to re-open connections. Some database administrators may frown on the practice since multiple connections (not all of which may be useful) to the database are open. Using connection pooling depends upon available server resources and application requirements (i.e., does it really need it).

Using connection pooling

Connection pooling is enabled by default. You may override the default behavior with the pooling setting in the connection string. The following SQL Server connection string does not utilize connection pooling:
Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Pooling=False;
You can use the same approach with other .NET Data Providers. You may enable pooling by setting Pooling to True (or by eliminating the Pooling variable, since pooling is the default behavior). In addition, the default maximum size of the connection pool is 100, but you may override this as well with connection string variables. You may use the following variables to control the minimum and maximum size of the pool, as well as transaction support:


• Max Pool Size: The maximum number of connections allowed in the pool. The default value is 100.
• Min Pool Size: The minimum number of connections allowed in the pool. The default value is zero.
• Enlist: Signals whether the pooler automatically enlists the connection in the creation thread's current transaction context. The default value is true.


The following SQL Server connection string uses connection pooling with a minimum size of five and a maximum size of 50:


Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Max Pool Size=50;
Min Pool Size=5;Pooling=True;


43. What are the two fundamental objects in ADO.NET ?

The DataReader and the DataSet are the two fundamental objects in ADO.NET.

44. What is the use of connection object ?

A connection object establishes an open link to a data source and is used by a Command object to execute commands against that source.

An OleDbConnection object is used with an OLE DB provider
A SqlConnection object uses Tabular Data Stream (TDS) with Microsoft SQL Server


45. What are the various objects in Dataset ?

A Dataset has a collection of DataTable objects within its Tables collection. Each DataTable object contains a collection of DataRow objects and a collection of DataColumn objects. There are also collections for the primary keys, constraints, and default values used in the table (the constraints collection), and for the parent and child relationships between the tables. Finally, there is a DefaultView object for each table, used to create a DataView based on the table so that the data can be searched, filtered or otherwise manipulated while it is displayed.


46. How can we force the connection object to close after my datareader is closed ?

The Command object's ExecuteReader method takes a CommandBehavior parameter, with which we can specify that the connection should be closed automatically when the DataReader is closed:
pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection)


47. How can we get only schema using dataReader?

pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.SchemaOnly)

48. Explain how to use stored procedures with ADO.net?

Using Stored Procedures with a Command


Stored procedures offer many advantages in data-driven applications. Using stored procedures, database operations can be encapsulated in a single command, optimized for best performance, and enhanced with additional security. While a stored procedure can be called by simply passing the stored procedure name followed by parameter arguments as an SQL statement, using the Parameters collection of the ADO.NET Command object enables you to more explicitly define stored procedure parameters as well as to access output parameters and return values.
To call a stored procedure, set the CommandType of the Command object to StoredProcedure. Once the CommandType is set to StoredProcedure, you can use the Parameters collection to define parameters, as in the following example.
Note: The OdbcCommand requires that you supply the full ODBC CALL syntax when calling a stored procedure.


SqlClient
[Visual Basic]
Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As SqlCommand = New SqlCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As SqlParameter = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As SqlDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");

SqlCommand salesCMD = new SqlCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

SqlParameter myParm = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

SqlDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


OleDb
[Visual Basic]
Dim nwindConn As OleDbConnection = New OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As OleDbCommand = New OleDbCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OleDbParameter = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As OleDbDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


OleDbConnection nwindConn = new OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" +
"Initial Catalog=northwind");

OleDbCommand salesCMD = new OleDbCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter myParm = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

OleDbDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


Odbc
[Visual Basic]


Dim nwindConn As OdbcConnection = New OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" & _
"Database=northwind")
nwindConn.Open()

Dim salesCMD As OdbcCommand = New OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OdbcParameter = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15)
myParm.Value = "Beverages"

Dim myReader As OdbcDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()
[C#]
OdbcConnection nwindConn = new OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" +
"Database=northwind");
nwindConn.Open();

OdbcCommand salesCMD = new OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter myParm = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15);
myParm.Value = "Beverages";

OdbcDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


A Parameter object can be created using the Parameter constructor, or by calling the Add method of the Parameters collection of a Command. Parameters.Add will take as input either constructor arguments or an existing Parameter object. When setting the Value of a Parameter to a null reference, use DBNull.Value.
For parameters other than Input parameters, you must set the ParameterDirection property to specify whether the parameter type is InputOutput, Output, or ReturnValue. The following example shows the difference between creating Input, Output, and ReturnValue parameters.


[Visual Basic]


Dim sampleCMD As SqlCommand = New SqlCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As SqlParameter = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As SqlDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


SqlCommand sampleCMD = new SqlCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

SqlParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

SqlDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);
OleDb


[Visual Basic]


Dim sampleCMD As OleDbCommand = New OleDbCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OleDbParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OleDbDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OleDbCommand sampleCMD = new OleDbCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OleDbDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Odbc
[Visual Basic]
Dim sampleCMD As OdbcCommand = New OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OdbcParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OdbcDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OdbcCommand sampleCMD = new OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OdbcDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Using Parameters with a SqlCommand


When using parameters with a SqlCommand, the names of the parameters added to the Parameters collection must match the names of the parameter markers in your stored procedure. The .NET Framework Data Provider for SQL Server treats parameters in the stored procedure as named parameters and searches for the matching parameter markers.
The .NET Framework Data Provider for SQL Server does not support the question mark (?) placeholder for passing parameters to an SQL statement or a stored procedure. In this case, you must use named parameters, as in the following example.
SELECT * FROM Customers WHERE CustomerID = @CustomerID
Using Parameters with an OleDbCommand or OdbcCommand
When using parameters with an OleDbCommand or OdbcCommand, the order of the parameters added to the Parameters collection must match the order of the parameters defined in your stored procedure. The .NET Framework Data Provider for OLE DB and the .NET Framework Data Provider for ODBC treat parameters in a stored procedure as placeholders and apply parameter values in order. In addition, return value parameters must be the first parameters added to the Parameters collection.
The .NET Framework Data Provider for OLE DB and .NET Framework Data Provider for ODBC do not support named parameters for passing parameters to an SQL statement or a stored procedure. In this case, you must use the question mark (?) placeholder, as in the following example.
SELECT * FROM Customers WHERE CustomerID = ?
As a result, the order in which Parameter objects are added to the Parameters collection must directly correspond to the position of the question mark placeholder for the parameter.
Deriving Parameter Information
Parameters can also be derived from a stored procedure using the CommandBuilder class. Both the SqlCommandBuilder and OleDbCommandBuilder classes provide a static method, DeriveParameters, which will automatically populate the Parameters collection of a Command object with parameter information from a stored procedure. Note that DeriveParameters will overwrite any existing parameter information for the Command.
Deriving parameter information does require an added trip to the data source for the information. If parameter information is known at design-time, you can improve the performance of your application by setting the parameters explicitly.
The following code example shows how to populate the Parameters collection of a Command object using CommandBuilder.DeriveParameters.


[Visual Basic]


Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;")
Dim salesCMD As SqlCommand = New SqlCommand("Sales By Year", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

nwindConn.Open()
SqlCommandBuilder.DeriveParameters(salesCMD)
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;");
SqlCommand salesCMD = new SqlCommand("Sales By Year", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

nwindConn.Open();
SqlCommandBuilder.DeriveParameters(salesCMD);
nwindConn.Close();


49. How can we fine tune the command object when we are expecting a single row or a single value ?

The CommandBehavior enumeration provides two values, SingleResult and SingleRow. If you are expecting a single result set (for example, a single value), pass CommandBehavior.SingleResult and the query is optimized accordingly; if you are expecting a single row, pass CommandBehavior.SingleRow and the query is optimized for returning a single row.
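
For example (a sketch reusing the pobjCommand object from the earlier answers):

[C#]
// A lookup by primary key is expected to return at most one row.
SqlDataReader rdr = pobjCommand.ExecuteReader(CommandBehavior.SingleRow);
if (rdr.Read())
    Console.WriteLine(rdr[0]);
rdr.Close();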

50. How can you obtain data as XML from SQL Server?

[Visual Basic]


Dim custCMD As SqlCommand = New SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn)
Dim myXR As System.Xml.XmlReader = custCMD.ExecuteXmlReader()


[C#]


SqlCommand custCMD = new SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn);
System.Xml.XmlReader myXR = custCMD.ExecuteXmlReader();


51. How to add Existing Constraints to a DataSet?

The Fill method of the DataAdapter fills a DataSet only with table columns and rows from a data source; though constraints are commonly set by the data source, the Fill method does not add this schema information to the DataSet by default. To populate a DataSet with existing primary key constraint information from a data source, you can either call the FillSchema method of the DataAdapter, or set the MissingSchemaAction property of the DataAdapter to AddWithKey before calling Fill. This will ensure that primary key constraints in the DataSet reflect those at the data source. Foreign key constraint information is not included and will need to be created explicitly.

Adding schema information to a DataSet before filling it with data ensures that primary key constraints are included with the DataTable objects in the DataSet. As a result, when additional calls to Fill the DataSet are made, the primary key column information is used to match new rows from the data source with current rows in each DataTable, and current data in the tables is overwritten with data from the data source. Without the schema information, the new rows from the data source are appended to the DataSet, resulting in duplicate rows.

Using FillSchema or setting the MissingSchemaAction to AddWithKey requires extra processing at the data source to determine primary key column information. This additional processing can hinder performance. If you know the primary key information at design time, it is recommended that you specify the primary key column or columns explicitly in order to achieve optimal performance.
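
(In the snippets below, custDA is assumed to be a DataAdapter that has already been created with a SELECT command against the Customers table.)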



[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.FillSchema(custDS, SchemaType.Source, "Customers")
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.FillSchema(custDS, SchemaType.Source, "Customers");
custDA.Fill(custDS, "Customers");

[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey;
custDA.Fill(custDS, "Customers");


52. How to add relation between tables?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table.
Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.



[Visual Basic]
custDS.Relations.Add("CustOrders", _
custDS.Tables("Customers").Columns("CustID"), _
custDS.Tables("Orders").Columns("CustID"))

[C#]
custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


53. How to get the data changes in dataset?

GetChanges : Gets a copy of the DataSet containing all changes made to it since it was last loaded, or since AcceptChanges was called.



[Visual Basic]
Private Sub UpdateDataSet(ByVal myDataSet As DataSet)
' Check for changes with the HasChanges method first.
If Not myDataSet.HasChanges(DataRowState.Modified) Then Exit Sub
' Create temporary DataSet variable.
Dim xDataSet As DataSet
' GetChanges for modified rows only.
xDataSet = myDataSet.GetChanges(DataRowState.Modified)
' Check the DataSet for errors.
If xDataSet.HasErrors Then
' Insert code to resolve errors.
End If
' After fixing errors, update the data source with the DataAdapter
' used to create the DataSet.
myOleDbDataAdapter.Update(xDataSet)
End Sub



[C#]
private void UpdateDataSet(DataSet myDataSet)
{
    // Check for changes with the HasChanges method first.
    if (!myDataSet.HasChanges(DataRowState.Modified)) return;
    // Create a temporary DataSet containing the modified rows only.
    DataSet xDataSet = myDataSet.GetChanges(DataRowState.Modified);
    if (xDataSet.HasErrors)
    {
        // Insert code to resolve errors.
    }
    // After fixing errors, update the data source with the DataAdapter
    // used to create the DataSet.
    myOleDbDataAdapter.Update(xDataSet);
}


54. What are the various methods provided by the DataSet object to generate XML?

ReadXml: reads an XML document into the DataSet.
GetXml: returns a string containing the DataSet's data as an XML document.
WriteXml: writes the DataSet's XML data to a file or stream.
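
A quick sketch of all three (the file name is a placeholder):

[C#]
custDS.WriteXml("Customers.xml");      // write the DataSet's data to disk as XML
string xml = custDS.GetXml();          // get the same XML as a string

DataSet newDS = new DataSet();
newDS.ReadXml("Customers.xml");        // load the XML back into a DataSet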


55. What is Dataview and what’s the use of Dataview?

Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation. A major function of the DataView is to allow data binding on both Windows Forms and Web Forms.

The DataView has four main methods:

Find: takes an array of values and returns the index of the first matching row.
FindRows: also takes an array of values, but returns an array of matching DataRowView objects.
AddNew: adds a new row to the DataView.
Delete: deletes the specified row from the DataView.

If we want to manipulate the data of a DataTable object through a view, we can create a DataView for it (the table's DefaultView property returns one) and use these methods.

Additionally, a DataView can be customized to present a subset of data from the DataTable. This capability allows you to have two controls bound to the same DataTable, but showing different versions of the data. For example, one control may be bound to a DataView showing all of the rows in the table, while a second may be configured to display only the rows that have been deleted from the DataTable. The DataTable also has a DefaultView property which returns the default DataView for the table. For example, if you wish to create a custom view on the table, set the RowFilter on the DataView returned by the DefaultView.

To create a filtered and sorted view of data, set the RowFilter and Sort properties. Then use the Item property to return a single DataRowView.

You can also add and delete from the set of rows using the AddNew and Delete methods. When you use those methods, the RowStateFilter property can set to specify that only deleted rows or new rows be displayed by the DataView.
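
A short sketch of a filtered, sorted view (assuming the Customers table from earlier answers; the Country column is hypothetical):

[C#]
DataView view = custDS.Tables["Customers"].DefaultView;
view.RowFilter = "Country = 'UK'";     // present a subset of the rows
view.Sort = "CustID ASC";              // Find searches on the Sort column(s)

int i = view.Find("ALFKI");            // returns -1 if no row matches
if (i >= 0)
    Console.WriteLine(view[i]["CustID"]);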



56. What is CommandBuilder?

What the CommandBuilder can do is relieve you of the responsibility of writing your own action queries by automatically constructing the SQL code, ADO.NET Command objects, and their associated Parameters collections given a SelectCommand.



The CommandBuilder expects you to provide a viable, executable, and simple SelectCommand associated with a DataAdapter. It also expects a viable Connection. That's because the CommandBuilder opens the Connection associated with the DataAdapter and makes a round trip to the server each and every time it's asked to construct the action queries. It closes the Connection when it's done.



Dim cn As SqlConnection
Dim da As SqlDataAdapter
Dim cb As SqlCommandBuilder
cn = New SqlConnection("data source=demoserver…")
da = New SqlDataAdapter("SELECT Au_ID, au_lname, City FROM authors", cn)
cb = New SqlCommandBuilder(da)
' The CommandBuilder hooks itself to the DataAdapter and can now generate
' the Insert, Update, and Delete commands from the SELECT statement.


57. What's the difference between optimistic locking and pessimistic locking?

In pessimistic locking, when a user wants to update data the record is locked, and until the lock is released no one else can update it. Other users can only view the data while the lock is held.


In optimistic locking, multiple users can open the same record for updating, which maximizes concurrency. The record is only locked while it is actually being updated. In practice this is the preferred approach; nowadays browser-based applications are very common, and pessimistic locking is not a practical solution for them.


The basic difference between optimistic and pessimistic locking is the time at which the lock on a row or page occurs. A pessimistic lock is enforced while the row is being edited, whereas an optimistic lock occurs at the time the row is updated. Obviously the time between an edit and an update can be very short, but pessimistic locking allows the database provider to prevent a user from overwriting changes to a row made by another user while he was updating it. There is no provision for this under optimistic locking, and the last user to perform the update wins.
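
A common way to implement the optimistic approach is to include the original column values in the UPDATE statement's WHERE clause, so that the update affects zero rows if another user changed the row first. A minimal sketch (the table, columns, and conn connection are assumptions for illustration):

[C#]
SqlCommand upd = new SqlCommand(
    "UPDATE Authors SET City = @NewCity " +
    "WHERE Au_ID = @ID AND City = @OldCity", conn);   // original value guards the update
upd.Parameters.Add("@NewCity", SqlDbType.VarChar, 20).Value = "London";
upd.Parameters.Add("@ID", SqlDbType.Int).Value = 1;
upd.Parameters.Add("@OldCity", SqlDbType.VarChar, 20).Value = "Oakland";

if (upd.ExecuteNonQuery() == 0)
{
    // Zero rows affected: someone else updated the row first (optimistic conflict).
}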


58. How to implement pessimistic locking?

The basic steps for pessimistic locking are as follows (see the sketch after the list):

Create a transaction with an IsolationLevel of RepeatableRead.
Set the DataAdapter’s SelectCommand property to use the transaction you created.
Make the changes to the data.
Set DataAdapter’s Insert, Update, and Delete command properties to use the transaction you created.
Call the DataAdapter’s Update method.
Commit the transaction.
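
A minimal sketch of those steps (the connection string and the DataAdapter's commands are assumed to be set up already; only the transaction wiring is shown):

[C#]
SqlConnection conn = new SqlConnection(connString);
conn.Open();

// 1. Create a transaction with an IsolationLevel of RepeatableRead.
SqlTransaction tran = conn.BeginTransaction(IsolationLevel.RepeatableRead);

// 2. Make the SelectCommand use the transaction, then read the data.
da.SelectCommand.Transaction = tran;
da.Fill(ds, "Customers");

// 3. ...make the changes to the data in ds here...

// 4. Enlist the action commands in the same transaction.
da.InsertCommand.Transaction = tran;
da.UpdateCommand.Transaction = tran;
da.DeleteCommand.Transaction = tran;

// 5. Push the changes, then 6. commit.
da.Update(ds, "Customers");
tran.Commit();
conn.Close();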


59. How to use transactions in ADO.net?

Transactions are a feature offered by most enterprise-class databases for making sure data integrity is maintained when data is modified. A transaction at its most basic level consists of two required steps—Begin, and then either Commit or Rollback. The Begin call defines the start of the transaction boundary, and the call to either Commit or Rollback defines the end of it. Within the transaction boundary, all of the statements executed are considered to be part of a unit for accomplishing the given task, and must succeed or fail as one. Commit (as the name suggests) commits the data modifications if everything was successful, and Rollback undoes the data modifications if an error occurs. All of the .NET data providers provide similar classes and methods to accomplish these operations.



The ADO.NET data providers offer transaction functionality through the Connection, Command, and Transaction classes. A typical transaction would follow a process similar to this:



Open the transaction using Connection.BeginTransaction().
Enlist statements or stored procedure calls in the transaction by setting the Command.Transaction property of the Command objects associated with them.
Depending on the provider, optionally use Transaction.Save() or Transaction.Begin() to create a savepoint or a nested transaction to enable a partial rollback.
Commit or roll back the transaction using Transaction.Commit() or Transaction.Rollback().


using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;

…public void SPTransaction(int partID, int numberMoved, int siteID)
{
// Create and open the connection.
SqlConnection conn = new SqlConnection();
string connString = "Server=SqlInstance;Database=Test;"
+ "Integrated Security=SSPI";
conn.ConnectionString = connString;
conn.Open();

// Create the commands and related parameters.
// cmdDebit debits inventory from the WarehouseInventory
// table by calling the DebitWarehouseInventory
// stored procedure.
SqlCommand cmdDebit =
new SqlCommand("DebitWarehouseInventory", conn);
cmdDebit.CommandType = CommandType.StoredProcedure;
cmdDebit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
cmdDebit.Parameters["@PartID"].Direction =
ParameterDirection.Input;
cmdDebit.Parameters.Add("@Debit", SqlDbType.Int, 0, "Quantity");
cmdDebit.Parameters["@Debit"].Direction =
ParameterDirection.Input;

// cmdCredit adds inventory to the SiteInventory
// table by calling the CreditSiteInventory
// stored procedure.
SqlCommand cmdCredit =
new SqlCommand("CreditSiteInventory", conn);
cmdCredit.CommandType = CommandType.StoredProcedure;
cmdCredit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
cmdCredit.Parameters["@PartID"].Direction =
ParameterDirection.Input;
cmdCredit.Parameters.Add
("@Credit", SqlDbType.Int, 0, "Quantity");
cmdCredit.Parameters["@Credit"].Direction =
ParameterDirection.Input;
cmdCredit.Parameters.Add("@SiteID", SqlDbType.Int, 0, "SiteID");
cmdCredit.Parameters["@SiteID"].Direction =
ParameterDirection.Input;

// Begin the transaction and enlist the commands.
SqlTransaction tran = conn.BeginTransaction();
cmdDebit.Transaction = tran;
cmdCredit.Transaction = tran;

try
{
// Execute the commands.
cmdDebit.Parameters["@PartID"].Value = partID;
cmdDebit.Parameters["@Debit"].Value = numberMoved;
cmdDebit.ExecuteNonQuery();

cmdCredit.Parameters["@PartID"].Value = partID;
cmdCredit.Parameters["@Credit"].Value = numberMoved;
cmdCredit.Parameters["@SiteID"].Value = siteID;
cmdCredit.ExecuteNonQuery();

// Commit the transaction.
tran.Commit();
}
catch(SqlException ex)
{
// Roll back the transaction.
tran.Rollback();

// Additional error handling if needed.
}
finally
{
// Close the connection.
conn.Close();
}
}


60. What's the difference between DataSet.Clone and DataSet.Copy?

The Clone method of the DataSet class copies only the schema of a DataSet object. It returns a new DataSet object that has the same schema as the existing DataSet object, including all DataTable schemas, relations, and constraints. It does not copy any data from the existing DataSet object into the new DataSet.

The Copy method of the DataSet class copies both the structure and data of a DataSet object. It returns a new DataSet object having the same structure (including all DataTable schemas, relations, and constraints) and data as the existing DataSet object.
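
A quick sketch of the difference:

[C#]
DataSet schemaOnly = custDS.Clone();   // same schema, no rows
DataSet fullCopy = custDS.Copy();      // same schema plus all the data

Console.WriteLine(schemaOnly.Tables["Customers"].Rows.Count);   // always 0
Console.WriteLine(fullCopy.Tables["Customers"].Rows.Count);     // same count as custDS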



61. Difference between OLEDB Provider and SqlClient ?

The SqlClient .NET classes are highly optimized for the .NET / SQL Server combination and achieve optimal results. The SqlClient data provider is fast. It's faster than the Oracle provider, and faster than accessing the database via the OleDb layer, because it talks to SQL Server directly in its native Tabular Data Stream (TDS) protocol rather than going through an extra OLE DB layer, and it was written with lots of help from the SQL Server team.

62. What are the different namespaces used in the project to connect the database? What data providers available in .net to connect to database?

System.Data.OleDb – classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient – classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


63. How to check if a datareader is closed or opened?

IsClosed is a property (not a method) of the DataReader; it returns True if the reader has been closed.
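
For example:

[C#]
if (!pobjDataReader.IsClosed)
    pobjDataReader.Close();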

.Net Framework Interview Questions

1. When was .NET announced?

Bill Gates delivered a keynote at Forum 2000, held June 22, 2000, outlining the .NET 'vision'. The July 2000 PDC had a number of sessions on .NET technology, and delegates were given CDs containing a pre-release version of the .NET framework/SDK and Visual Studio.NET.

2. When was the first version of .NET released?

The final version of the 1.0 SDK and runtime was made publicly available around 6pm PST on 15-Jan-2002. At the same time, the final version of Visual Studio.NET was made available to MSDN subscribers.

3. What platforms does the .NET Framework run on?

The .NET Framework 1.0 runtime supports Windows 98, Windows Me, Windows NT 4.0 (SP6a), Windows 2000, and Windows XP; Windows 95 is not supported. Some parts of the framework do not work on all platforms - for example, ASP.NET is supported only on Windows 2000 and Windows XP, and Windows 98/Me cannot be used for development. In addition, the .NET Compact Framework targets mobile devices, and third-party implementations such as the Mono project target other operating systems.

4. Explain .NET Framework architecture?

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:
• To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
• To provide a code-execution environment that minimizes software deployment and versioning conflicts.
• To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.
• To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
• To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
• To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.

.NET Framework in context


The following sections describe the main components and features of the .NET Framework in greater detail.
Features of the Common Language Runtime

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.

The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:

• Console applications.
• Windows GUI applications (Windows Forms).
• ASP.NET applications.
• XML Web services.
• Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.
Client Application Development

Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers.

Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server.

The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.



ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.

XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet.

If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms pages in any language that supports the .NET Framework. In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application.

The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL (the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework.

If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development.

Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS.


5. What are the mobile devices supported by .net platform?

The Microsoft .NET Compact Framework is designed to run on mobile devices such as mobile phones, Personal Digital Assistants (PDAs), and embedded devices. The easiest way to develop and test a Smart Device Application is to use an emulator.
These devices are divided into two main divisions:

1. Those that are directly supported by .NET (Pocket PCs, i-Mode phones, and WAP devices)
2. Those that are not (Palm OS and J2ME-powered devices).

6. What is the CLR?

Common Language Runtime (CLR) is a run-time environment that manages the execution of .NET code and provides services like memory management, debugging, security, etc. The CLR is also known as Virtual Execution System (VES). The CLR is a multi-language execution environment. There are currently over 15 compilers being built by Microsoft and other companies that produce code that will execute in the CLR.

The Common Language Runtime (CLR) provides a solid foundation for developers to build various types of applications. Whether you're writing an ASP.NET application, a Windows Forms application, a Web Service, a mobile code application, a distributed application, or an application that combines several of these application models, the CLR provides the following benefits for application developers:

• Vastly simplified development
• Seamless integration of code written in various languages
• Evidence-based security with code identity
• Assembly-based deployment that eliminates DLL Hell
• Side-by-side versioning of reusable components
• Code reuse through implementation inheritance
• Automatic object lifetime management
• Self describing objects

All languages have a runtime, and it is the responsibility of that runtime to take care of code execution for the program. For example, VC++ has MSVCRT40.DLL, VB6 has MSVBVM60.DLL, and Java has the Java Virtual Machine; similarly, .NET has the CLR.
Following are the responsibilities of the CLR:

• Garbage Collection :- The CLR automatically manages memory, eliminating memory leaks. When objects are no longer referenced, the garbage collector automatically releases their memory, providing efficient memory management.
• Code Access Security :- CAS grants rights to a program depending on the security configuration of the machine. For example, a program may have rights to edit or create a new file, but the machine's security configuration does not allow it to delete a file. CAS ensures that the code runs within the machine's security configuration.
• Code Verification :- This ensures proper code execution and type safety while the code runs. It prevents the code from performing illegal operations such as accessing invalid memory locations.
• IL (Intermediate Language)-to-native translation and optimization :- The CLR uses the JIT compiler to compile the IL code to machine code and then executes it. Depending on the platform, the CLR also determines the optimized way of running the IL code.

7. What is BCL?

The Base Class Library (BCL) is a library of types and functionalities available to all languages using the .NET Framework. In order to make the programmer's job easier, .NET includes the BCL in order to encapsulate a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation. It is much larger in scope than standard libraries for most other languages, including C++, and would be comparable in scope to the standard libraries of Java. The BCL is sometimes incorrectly referred to as the Framework Class Library (FCL), which is a superset including the Microsoft namespaces.

The Base Class Libraries (BCL) provides the fundamental building blocks for any application you develop, be it an ASP.NET application, a Windows Forms application, or a Web Service. The BCL generally serves as your main point of interaction with the runtime. BCL classes include:

• System
• System.CodeDom
• System.Collections
• System.Diagnostics
• System.Globalization
• System.IO
• System.Resources
• System.Text
• System.Text.RegularExpressions

8. What is the CLS?

Common Language Specification. This is a subset of the CTS which all .NET languages are expected to support. The idea is that any program which uses CLS-compliant types can interoperate with any .NET program written in any language. In theory this allows very tight interop between different .NET languages - for example allowing a C# class to inherit from a VB class.





9. What is a CTS?

Common Type System. This is the range of types that the .NET runtime understands, and therefore that .NET applications can use. However note that not all .NET languages will support all the types in the CTS. The CTS is a superset of the CLS.

10. What is IL? (MSIL, CIL)


Intermediate Language. Also known as MSIL (Microsoft Intermediate Language) or CIL (Common Intermediate Language). All .NET source code (of any language) is compiled to MSIL. When compiling the source code to managed code, the compiler translates the source into Microsoft intermediate language (MSIL). This is a CPU-independent set of instructions that can efficiently be converted to native code. Microsoft intermediate language (MSIL) is a translation used as the output of a number of compilers. It is the input to a just-in-time (JIT) compiler. The Common Language Runtime includes a JIT compiler for the conversion of MSIL to native code.

Before Microsoft Intermediate Language (MSIL) can be executed, it must be converted by the .NET Framework just-in-time (JIT) compiler to native code. This is CPU-specific code that runs on the same computer architecture as the JIT compiler. Rather than using time and memory to convert all of the MSIL in a portable executable (PE) file to native code, the JIT compiler converts the MSIL as needed while executing, then caches the resulting native code so it is accessible for any subsequent calls.


11. What is MSIL Assembler (Ilasm.exe)

The MSIL Assembler generates a portable executable (PE) file from MSIL assembly language. You can run the resulting executable, which contains MSIL and the required metadata, to determine whether the MSIL performs as expected.

12. What is MSIL Disassembler (Ildasm.exe)?

The MSIL Disassembler is a companion tool to the MSIL Assembler (Ilasm.exe). Ildasm.exe takes a portable executable (PE) file that contains Microsoft intermediate language (MSIL) code and creates a text file suitable as input to Ilasm.exe.

13. Can I look at the IL for an assembly?

Yes. MS supply a tool called Ildasm that can be used to view the metadata and IL for an assembly.

14. Can source code be reverse-engineered from IL?

Yes, it is often relatively straightforward to regenerate high-level source from IL. Lutz Roeder's Reflector does a very good job of turning IL into C# or VB.NET.

15. How can I stop my code being reverse-engineered from IL?

You can buy an IL obfuscation tool. These tools work by 'optimising' the IL in such a way that reverse-engineering becomes much more difficult. Of course if you are writing web services then reverse-engineering is not a problem as clients do not have access to your IL.

16. Can I write IL programs directly?

Yes
.assembly MyAssembly {}
.class MyApp {
  .method static void Main() {
    .entrypoint
    ldstr "Hello, IL!"
    call void System.Console::WriteLine(class System.Object)
    ret
  }
}

Just put this into a file called hello.il, and then run ilasm hello.il. An exe assembly will be generated.

17. Can I do things in IL that I can't do in C#?

Yes. A couple of simple examples are that you can throw exceptions that are not derived from System.Exception, and you can have non-zero-based arrays.

18. What is JIT?

Just-In-Time compiler - it converts the language that you write in .NET into machine language that a computer can understand. There are two types of JIT: one is memory-optimized and the other is performance-optimized.

JIT (Just-In-Time) is a compiler which converts MSIL code to native code (i.e., CPU-specific code that runs on the same computer architecture).

Because the common language runtime supplies a JIT compiler for each supported CPU architecture, developers can write a set of MSIL that can be JIT-compiled and run on computers with different architectures. However, your managed code will run only on a specific operating system if it calls platform-specific native APIs, or a platform-specific class library.

JIT compilation takes into account the fact that some code might never get called during execution. Rather than using time and memory to convert all the MSIL in a portable executable (PE) file to native code, it converts the MSIL as needed during execution and stores the resulting native code so that it is accessible for subsequent calls. The loader creates and attaches a stub to each of a type's methods when the type is loaded. On the initial call to the method, the stub passes control to the JIT compiler, which converts the MSIL for that method into native code and modifies the stub to direct execution to the location of the native code. Subsequent calls of the JIT-compiled method proceed directly to the native code that was previously generated, reducing the time it takes to JIT-compile and run the code.

19. What is a Managed Code?

Managed code is code that has its execution managed by the .NET Framework Common Language Runtime. It refers to a contract of cooperation between natively executing code and the runtime. This contract specifies that at any point of execution, the runtime may stop an executing CPU and retrieve information specific to the current CPU instruction address. Information that must be query-able generally pertains to runtime state, such as register or stack memory contents.

The necessary information is encoded in an Intermediate Language (IL) and associated metadata, or symbolic information that describes all of the entry points and the constructs exposed in the IL (e.g., methods, properties) and their characteristics. The Common Language Infrastructure (CLI) Standard (of which the CLR is the primary commercial implementation) describes how the information is to be encoded, and programming languages that target the runtime emit the correct encoding.

Managed code runs in the Common Language Runtime. The runtime offers a wide variety of services to your running code. In the usual course of events, it first loads and verifies the assembly to make sure the IL is okay. Then, just in time, as methods are called, the runtime arranges for them to be compiled to machine code suitable for the machine the assembly is running on, and caches this machine code to be used the next time the method is called. (This is called Just In Time, or JIT compiling, or often just Jitting.)

As the assembly runs, the runtime continues to provide services such as security, memory management, threading, and the like. The application is managed by the runtime.

20. What is a Unmanaged Code?

Unmanaged code is what you used to build applications before Visual Studio .NET 2002 was released: Visual Basic 6, Visual C++ 6, and so on. It compiled directly to machine code that ran on the machine where you compiled it—and on other machines as long as they had the same chip, or nearly the same. It didn't get services such as security or memory management from an invisible runtime; it got them from the operating system. And importantly, it got them from the operating system explicitly, by asking for them, usually by calling an API provided in the Windows SDK. More recent unmanaged applications got operating system services through COM calls.

Unlike the other Microsoft languages in Visual Studio, Visual C++ can create unmanaged applications. When you create a project and select an application type whose name starts with MFC, ATL, or Win32, you're creating an unmanaged application.

This can lead to some confusion: when you create a "Managed C++" application, the build product is an assembly of IL with an .exe extension. When you create an MFC application, the build product is a Windows executable file of native code, also with an .exe extension. The internal layout of the two files is utterly different. You can use the Intermediate Language Disassembler, ildasm, to look inside an assembly and see the metadata and IL. Try pointing ildasm at an unmanaged exe and you'll be told it has no valid CLR (Common Language Runtime) header and can't be disassembled. Same extension, completely different files.

21. What is portable executable (PE)?

The file format defining the structure that all executable files (EXE) and Dynamic Link Libraries (DLL) must use to allow them to be loaded and executed by Windows. PE is derived from the Microsoft Common Object File Format (COFF). The EXE and DLL files created using the .NET Framework obey the PE/COFF formats and also add additional header and data sections to the files that are only used by the CLR.

22. What is an Assembly?

An assembly consists of one or more files (dlls, exe's, html files etc.), and represents a group of resources, type definitions, and implementations of those types. An assembly may also contain references to other assemblies. These resources, types and references are described in a block of data called a manifest. The manifest is part of the assembly, thus making the assembly self-describing.

An assembly is completely self-describing. An assembly contains metadata information, which is used by the CLR for everything from type checking and security to actually invoking the component's methods. As all information is in the assembly itself, it is independent of the registry. This is the basic advantage compared to COM, where the version was stored in the registry.

In the Microsoft .NET framework an assembly is a partially compiled code library for use in deployment, versioning and security. In the Microsoft Windows implementation of .NET, an assembly is a PE (portable executable) file. There are two types, process assemblies (EXE) and library assemblies (DLL). A process assembly represents a process which will use classes defined in library assemblies. In version 1.1 of the CLR classes can only be exported from library assemblies; in version 2.0 this restriction is relaxed. The compiler will have a switch to determine if the assembly is a process or library and will set a flag in the PE file. .NET does not use the extension to determine if the file is a process or library. This means that a library may have either .dll or .exe as its extension.

The code in an assembly is compiled into MSIL, which is then compiled into machine language at runtime by the CLR.

An assembly can consist of one or more files. Code files are called modules. An assembly can contain more than one code module and since it is possible to use different languages to create code modules this means that it is technically possible to use several different languages to create an assembly. In practice this rarely happens, principally because Visual Studio only allows developers to create assemblies that consist of a single code module.

23. What is GAC?

Each computer where the common language runtime is installed has a machine-wide code cache called the global assembly cache. The global assembly cache stores assemblies specifically designated to be shared by several applications on the computer. You should share assemblies by installing them into the global assembly cache only when you need to.

There are three ways to add an assembly to the GAC:

• Install it with Windows Installer 2.0
• Use the Gacutil.exe tool
• Drag and drop the assembly into the cache with Windows Explorer

Before an assembly can go into the GAC it must be given a strong name:

- Create a key pair using the sn.exe tool, e.g.: sn -k keyPair.snk
- Within AssemblyInfo.cs reference the generated key file, e.g.: [assembly: AssemblyKeyFile("keyPair.snk")]
- Recompile the project, then install it into the GAC either by dragging and dropping it into the assembly folder (C:\WINDOWS\assembly or C:\WINNT\assembly, handled by the shfusion.dll shell extension), or by running: gacutil -i abc.dll


24. What are different types of Assembly?

Private assemblies :
When a developer compiles code the compiler will put the name of every library assembly it uses in the compiled assembly's .NET metadata. When the CLR executes the code in the assembly it will use this metadata to locate the assembly using a technology called Fusion. If the called assembly does not have a strong name, then Fusion will only use the short name (the PE file name) to locate the library. In effect this means that the assembly can only exist in the application folder, or in a subfolder, and hence it is called a private assembly because it can only be used by a specific application. Versioning is switched off for assemblies that do not have strong names, and so this means that it is possible for a different version of an assembly to be loaded than the one that was used to create the calling assembly.

The compiler will store the complete name (including version) of strongly named assembly in the metadata of the calling assembly. When the called assembly is loaded, Fusion will ensure that only an assembly with the exact name, including the version, is loaded. Fusion is configurable, and so you can provide an application configuration file to tell Fusion to use a specific version of a library when another version is requested.

Shared assemblies :

Shared assemblies are stored in the GAC. This is a system-wide cache and all applications on the machine can use any assembly in the cache. To the casual user it appears that the GAC is a single folder, however, it is actually implemented using FAT32 or NTFS nested folders which means that there can be multiple versions (or cultures) of the same assembly.

25. What is Fusion ?

Filesystems in common use by Windows (FAT32, NTFS, CDFS, etc.) are restrictive because the names of files do not include information like versioning or localization. This means that two different versions of a file cannot exist in the same folder unless their names include versioning information. Fusion is the Windows loader technology that allows versioning and culture information to be used in the name of a .NET assembly stored on these filesystems. Fusion is the system for loading managed assemblies into a process, and it is also currently used to load Win32 assemblies, independently of managed assembly loading.

Fusion uses a specific search order when it looks for an assembly.

1. If the assembly is strongly named, Fusion will first look in the GAC.
2. Fusion will then look for redirection information in the application's configuration file. If the library is strongly named then this can specify that another version should be loaded, or it can specify an absolute address of a folder on the local hard disk, or the URL of a file on a web server. If the library is not strongly named, then the configuration file can specify a subfolder beneath the application folder to be used in the search path.
3. Fusion will then look for the assembly in the application folder with either the .exe or .dll extension.
4. Fusion will look for a subfolder with the same name as the short name (PE file name) of the assembly and look for the assembly in that folder with either the .exe or .dll extension.

If Fusion cannot find the assembly, the assembly image is bad, or if the reference to the assembly doesn't match the version of the assembly found, it will throw an exception. In addition, information about the name of the assembly, and the paths that it checked, will be stored. This information may be viewed by using the Fusion log viewer (fuslogvw), or if a custom location is configured, directly from the HTML log files generated.

26. What is a satellite assembly?

In a multilingual .NET application, to support multilingual functionality you can have modules which are customized for localization. These assemblies are called satellite assemblies. You can distribute these assemblies separately from the core modules.

A definition from MSDN says something like this: "A .NET Framework assembly containing resources specific to a given language. Using satellite assemblies, you can place the resources for different languages in different assemblies, and the correct assembly is loaded into memory only if the user elects to view the application in that language."

This means that you develop your application in a default language and add the flexibility to react to a change in locale. Say, for example, you developed your application for the en-US locale and it has multilingual support. When you deploy your code in, say, India, you want labels and messages shown in the national language rather than in English.

Satellite assemblies give this flexibility. You create any simple text file with translated strings, create resources, and put them into the bin\debug folder. That's it. The next time, your code will read the CurrentCulture property of the current thread and accordingly load the appropriate resource.

This is called the hub and spoke model. It requires that you place resources in specific locations so that they can be located and used easily. If you do not compile and name resources as expected, or if you do not place them in the correct locations, the common language runtime will not be able to locate them. As a result, the runtime uses the default resource set.

Every assembly contains an assembly manifest, a set of metadata with information about the assembly. The assembly manifest contains these items:

• The assembly name and version
• The culture or language the assembly supports (not required in all assemblies)
• The public key for any strong name assigned to the assembly (not required in all assemblies)
• A list of files in the assembly with hash information
• Information on exported types
• Information on referenced assemblies
In addition, you can add other information to the manifest by using assembly attributes. Assembly attributes are declared inside of a file in an assembly, and are text strings that describe the assembly. For example, you can set a friendly name for an assembly with the AssemblyTitle attribute:
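For instance, a minimal example (the title string here is made up for illustration):

using System.Reflection;

[assembly: AssemblyTitle("My Sample Application")]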


27. How to create satellite assembly?

• Create a folder with a specific culture name (for example, en-US) in the application's bin\debug folder.
• Create a .resx file in that folder. Place all translated strings into it.
• Create a .resources file by using the following command from the .NET command prompt. (localizationsample is the name of the application namespace. If your application uses a nested namespace structure like MyApp.YourApp.MyName.YourName as the type of namespace, just use the uppermost namespace for creating resources files—MyApp.)

resgen Strings.en-US.resx LocalizationSample.Strings.en-US.resources
al /embed:LocalizationSample.Strings.en-US.resources /out:LocalizationSample.resources.dll /c:en-US



The above step creates two files, LocalizationSample.Strings.en-US.resources and LocalizationSample.resources.dll. Here, LocalizationSample is the namespace of the application.

• In the code, find the user's language; for example, en-US. This is culture specific.
• Give the assembly name as the name of .resx file. In this case, it is Strings.

Using a Satellite Assembly

Follow these steps:

// Set the culture of the current thread to the user's culture
// (specCult is the culture name found in the previous step, e.g. "en-US";
//  asmName is the name of the .resx file, e.g. "Strings")
Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(specCult);
Thread.CurrentThread.CurrentUICulture = new CultureInfo(specCult);

// Create a ResourceManager for the resource base name;
// it loads the matching satellite assembly automatically
ResourceManager resMgr = new ResourceManager(typeof(Form1).Namespace + "." + asmName,
    this.GetType().Assembly);
btnTest.Text = resMgr.GetString("Jayant");


28. What is Shadow Copy?

In order to replace a COM component on a live web server, it was necessary to stop the entire website, copy the new files and then restart the website. This is not feasible for web servers that need to be always running. .NET components are different. They can be overwritten at any time using a mechanism called Shadow Copy. It prevents Portable Executable (PE) files like DLLs and EXEs from being locked. Whenever new versions of the PEs are released, they are automatically detected by the CLR and the changed components are automatically loaded. They are used to process all new requests not currently executing, while the older version still serves the currently executing requests. As the older version drains out its remaining requests, the update completes.

29. What is DLL Hell?

DLL hell is the problem that occurs when an installation of a newer application might break or hinder other applications as newer DLLs are copied into the system and the older applications do not support or are not compatible with them. .NET overcomes this problem by supporting multiple versions of an assembly at any given time. This is also called side-by-side component versioning.

30. What is GUID , why we use it and where?

GUID is short for Globally Unique Identifier, a unique 128-bit number that is produced by the Windows OS or by some Windows applications to identify a particular component, application, file, database entry, and/or user. For instance, a Web site may generate a GUID and assign it to a user's browser to record and track the session. A GUID is also used in the Windows registry to identify COM DLLs. Knowing where to look in the registry and having the correct GUID yields a lot of information about a COM object (i.e., information in the type library, its physical location, etc.). Windows also identifies user accounts by a username (computer/domain and username) and assigns it a GUID. Some database administrators even use GUIDs as primary key values in databases.

GUIDs can be created in a number of ways, but usually they are a combination of a few unique settings based on a specific point in time (e.g., an IP address, network MAC address, clock date/time, etc.).

31. How to create GUID?

Start guidgen.exe; it generates a GUID when it starts, or whenever you click the New GUID button in the Create GUID dialog box.

To run guidgen.exe from the IDE

• On the Tools menu, click Create GUID. The Create GUID tool appears with a GUID in the Result box.
• Select the format you want for the GUID.
• Click Copy.
• The GUID is copied to the Clipboard so that you can paste it into your source code.
• If you want to generate another GUID, click New GUID.
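You can also generate a GUID in code using the System.Guid type; a minimal sketch:

using System;

class GuidDemo
{
    static void Main()
    {
        Guid g = Guid.NewGuid();   // generate a new GUID programmatically
        Console.WriteLine(g);      // prints something like 936da01f-9abd-4d9d-80c7-02af85c822a8
    }
}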

32. What is NameSpace?

A namespace is a logical naming scheme for grouping related types. Class types that logically belong together can be put into a common namespace. Namespaces prevent naming collisions and provide scoping. They are imported with "using" in C# or "Imports" in Visual Basic. It seems as if these directives specify a particular assembly, but they don't. A namespace can span multiple assemblies, and an assembly can define multiple namespaces. When the compiler needs the definition of a class type, it tracks through each of the imported namespaces to the type name and searches each referenced assembly until it is found. Namespaces can be nested. This is very similar to packages in Java as far as scoping is concerned.
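A minimal sketch (the namespace and class names are hypothetical):

using System;                   // import a namespace

namespace MyCompany.Utilities   // dotted names give nested namespaces
{
    public class Logger
    {
        public void Log(string msg) { Console.WriteLine(msg); }
    }
}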

33. What is Difference between NameSpace and Assembly ?

The concept of a namespace is not related to that of an assembly. A single assembly may contain many types whose hierarchical names have different namespace roots, and a logical namespace root may span multiple assemblies. In the .NET Framework, a namespace is a logical design-time naming convention, whereas an assembly establishes the name scope for types at run time.
Namespace:
• Namespace is logical grouping unit.
• It is a Collection of names wherein each name is Unique.
• They form the logical boundary for a Group of classes.
• Namespace must be specified in Project-Properties.

Assembly:
• Assembly is physical grouping unit.
• It is an Output Unit. It is a unit of Deployment & a unit of versioning. Assemblies contain MSIL code.
• Assemblies are Self-Describing. [e.g. metadata, manifest]
• An assembly is the primary building block of a .NET Framework application. It is a collection of functionality that is built, versioned, and deployed as a single implementation unit (as one or more files). All managed types and resources are marked either as accessible only within their implementation unit, or by code outside that unit.

34. How can you view Assembly?

Using the ILDASM (Ildasm.exe) tool.

35. What is Manifest?

Every assembly, whether static or dynamic, contains a collection of data that describes how the elements in the assembly relate to each other. The assembly manifest contains this assembly metadata. An assembly manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all metadata needed to define the scope of the assembly and resolve references to resources and classes. The assembly manifest can be stored in either a PE file (an .exe or .dll) with Microsoft intermediate language (MSIL) code or in a standalone PE file that contains only assembly manifest information.

For an assembly with one associated file, the manifest is incorporated into the PE file to form a single-file assembly. You can create a multifile assembly with a standalone manifest file or with the manifest incorporated into one of the PE files in the assembly.

Each assembly's manifest performs the following functions:
• Enumerates the files that make up the assembly.
• Governs how references to the assembly's types and resources map to the files that contain their declarations and implementations.
• Enumerates other assemblies on which the assembly depends.
• Provides a level of indirection between consumers of the assembly and the assembly's implementation details.
• Renders the assembly self-describing.
Assembly Manifest Contents
The following table shows the information contained in the assembly manifest. The first four items — the assembly name, version number, culture, and strong name information — make up the assembly's identity.

• Assembly name: A text string specifying the assembly's name.
• Version number: A major and minor version number, and a revision and build number. The common language runtime uses these numbers to enforce version policy.
• Culture: Information on the culture or language the assembly supports. This information should be used only to designate an assembly as a satellite assembly containing culture- or language-specific information. (An assembly with culture information is automatically assumed to be a satellite assembly.)
• Strong name information: The public key from the publisher, if the assembly has been given a strong name.
• List of all files in the assembly: A hash of each file contained in the assembly and a file name. Note that all files that make up the assembly must be in the same directory as the file containing the assembly manifest.
• Type reference information: Information used by the runtime to map a type reference to the file that contains its declaration and implementation. This is used for types that are exported from the assembly.
• Information on referenced assemblies: A list of other assemblies that are statically referenced by the assembly. Each reference includes the dependent assembly's name, assembly metadata (version, culture, operating system, and so on), and its public key if the assembly is strong named.


36. Where is the version information of an assembly stored?

The manifest contains the version details.

37. Is versioning applicable to private assemblies?

No - versioning concepts apply only to shared (strong-named) assemblies; versioning is switched off for private assemblies that do not have strong names.

38. What is strong names?

Strong Name is a technology introduced with the .NET platform and it brings many possibilities into .NET applications. But many .NET developers still see Strong Names as security enablers (which is very wrong!) rather than as a technology for uniquely identifying assemblies.

Assemblies can be assigned a cryptographic signature, called a strong name, which provides name uniqueness for the assembly and prevents someone from taking over the name of your assembly (name spoofing). If you are deploying an assembly that will be shared among many applications on the same machine, it must have a strong name. Even if you only use the assembly within your application, using a strong name ensures that the correct version of the assembly gets loaded.

Strong Names are not any security enhancement; they enable unique identification and side-by-side code execution.
Strong Names are used for:
• Versioning
• Authentication

Versioning solves the well-known problem called "DLL hell". Signed assemblies are unique, and Strong Names solve the problem of namespace collisions (developers can distribute their assemblies even under the same file names). Assemblies signed with Strong Names are uniquely identified, and are protected and stored in different spaces.



39. What is the difference between C# Boolean and C++ Boolean?

In C#, Boolean values (true and false) do not equate to integer variables: there is no implicit conversion between bool and int, whereas C++ freely converts between Booleans and integers (non-zero meaning true).
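A minimal sketch of the C# side of the difference:

int x = 1;
// bool b = x;        // compile-time error in C#: no implicit int-to-bool conversion
bool b = (x != 0);    // an explicit comparison is required
// if (x) { ... }     // also illegal in C#, though legal in C++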

40. What is Delay signing ?

An organization can have a closely guarded key pair that developers do not have access to on a daily basis. The public key is often available, but access to the private key is restricted to only a few individuals. When developing assemblies with strong names, each assembly that references the strong-named target assembly contains the token of the public key used to give the target assembly a strong name. This requires that the public key be available during the development process.

You can use delayed or partial signing at build time to reserve space in the portable executable (PE) file for the strong name signature, but defer the actual signing until some later stage (typically just before shipping the assembly).

The following steps outline the process to delay sign an assembly:

1. Obtain the public key portion of the key pair from the organization that will do the eventual signing. Typically this key is in the form of an .snk file, which can be created using the Strong Name tool (Sn.exe) provided by the .NET Framework SDK.

2. Annotate the source code for the assembly with two custom attributes from System.Reflection:

• AssemblyKeyFileAttribute, which passes the name of the file containing the public key as a parameter to its constructor.
• AssemblyDelaySignAttribute, which indicates that delay signing is being used by passing true as a parameter to its constructor.
For example:

[Visual Basic]
<Assembly: AssemblyKeyFileAttribute("myKey.snk")>
<Assembly: AssemblyDelaySignAttribute(True)>

[C#]
[assembly:AssemblyKeyFileAttribute("myKey.snk")]
[assembly:AssemblyDelaySignAttribute(true)]

3. The compiler inserts the public key into the assembly manifest and reserves space in the PE file for the full strong name signature. The real public key must be stored while the assembly is built so that other assemblies that reference this assembly can obtain the key to store in their own assembly reference.
4. Because the assembly does not have a valid strong name signature, the verification of that signature must be turned off. You can do this by using the -Vr option with the Strong Name tool.
The following example turns off verification for an assembly called myAssembly.dll.

sn -Vr myAssembly.dll

5. Later, usually just before shipping, you submit the assembly to your organization's signing authority for the actual strong name signing using the -R option with the Strong Name tool.
The following example signs an assembly called myAssembly.dll with a strong name using the sgKey.snk key pair.

sn -R myAssembly.dll sgKey.snk



41. What is garbage collection?

Short:
Garbage collection is a CLR feature which automatically manages memory. Programmers often forget to release objects while coding. The CLR automatically releases objects when they are no longer referenced and in use. The CLR runs garbage collection non-deterministically to find unused objects and clean them up. One side effect of this non-determinism is that we cannot assume an object is destroyed the moment it goes out of the scope of a function. Therefore, we should not put code into a class destructor to release resources.
Detailed:
Every program uses resources of one sort or another -- memory buffers, network connections, database resources and so on. In fact, in an object-oriented environment, every type identifies some resource available for a program's use. To use any of these resources, memory must be allocated to represent the type.
The steps required to access a resource are as follows:
1. Allocate memory for the type that represents the resource.
2. Initialize the memory to set the initial state of the resource and to make the resource usable.
3. Use the resource by accessing the instance members of the type (repeat as necessary).
4. Tear down the state of the resource to clean up.
5. Free the memory.
The garbage collector (GC) of .NET completely absolves the developer from tracking memory usage and knowing when to free memory.
The Microsoft .NET CLR (common language runtime) requires that all resources be allocated from the managed heap. You never free objects from the managed heap -- objects are automatically freed when they are no longer needed by the application.

Memory is not infinite. The garbage collector must perform a collection in order to free some memory. The garbage collector's optimizing engine determines the best time to perform a collection (the exact criteria are guarded by Microsoft) based upon the allocations being made. When the garbage collector performs a collection, it checks for objects in the managed heap that are no longer being used by the application and performs the necessary operations to reclaim their memory.

However, for automatic memory management, the garbage collector has to know the location of the roots -- i.e. it should know when an object is no longer in use by the application. This knowledge is made available to the GC in .NET by the inclusion of a concept known as metadata. Every data type used in .NET software includes metadata that describes it. With the help of metadata, the CLR knows the layout of each of the objects in memory, which helps the garbage collector in the compaction phase of garbage collection. Without this knowledge the garbage collector wouldn't know where one object instance ends and the next begins.

Garbage collection algorithm

Application roots
Every application has a set of roots. Roots identify storage locations, which refer to objects on the managed heap or to objects that are set to null.
For example:
• All the global and static object pointers in an application.
• Any local variable/parameter object pointers on a thread's stack.
• Any CPU registers containing pointers to objects in the managed heap.
• Pointers to the objects from the freachable queue

The list of active roots is maintained by the just-in-time (JIT) compiler and common language runtime, and is made accessible to the garbage collector's algorithm.

Implementation

Garbage collection in .NET is done using tracing collection and specifically the CLR implements the mark/compact collector. This method consists of two phases as described below.
Phase 1: Mark

Find memory that can be reclaimed.
When the garbage collector starts running, it makes the assumption that all objects in the heap are garbage. In other words, it assumes that none of the application's roots refer to any objects in the heap.

The following steps are included in phase one:
1. The GC identifies live object references or application roots.
2. It starts walking the roots and building a graph of all objects reachable from the roots.
3. If the GC attempts to add an object already present in the graph, then it stops walking down that path. This serves two purposes. First, it helps performance significantly, since it doesn't walk through a set of objects more than once. Second, it prevents infinite loops should you have any circular linked lists of objects. Thus cycles are handled properly.

Once all the roots have been checked, the garbage collector's graph contains the set of all objects that are somehow reachable from the application's roots; any objects that are not in the graph are not accessible by the application, and are therefore considered garbage.

Finalization

The .NET Framework's garbage collection implicitly keeps track of the lifetime of the objects that an application creates, but it cannot do the same for the unmanaged resources (i.e. a file, a window or a network connection) that objects encapsulate.

The unmanaged resources must be explicitly released once the application has finished using them. The .NET Framework provides the Object.Finalize method, which the garbage collector runs on the object to clean up its unmanaged resources prior to reclaiming the memory used by the object. Since the Finalize method does nothing by default, it must be overridden if explicit cleanup is required.

It would not be surprising if you consider Finalize just another name for destructors in C++. Though both have been assigned the responsibility of freeing the resources used by objects, they have very different semantics. In C++, destructors are executed immediately when the object goes out of scope, whereas a Finalize method is called once garbage collection gets around to cleaning up the object.

The potential existence of finalizers complicates the job of garbage collection in .NET by adding some extra steps before freeing an object.

Whenever a new object with a finalize method is allocated on the heap, a pointer to the object is placed in an internal data structure called the finalization queue. When an object is not reachable, the garbage collector considers the object garbage. The garbage collector scans the finalization queue looking for pointers to these objects. When a pointer is found, the pointer is removed from the finalization queue and appended to another internal data structure called Freachable queue, making the object no longer a part of the garbage. At this point, the garbage collector has finished identifying garbage. The garbage collector compacts the reclaimable memory and the special runtime thread empties the freachable queue, executing each object's finalize method.

The next time the garbage collector is invoked, it sees that the finalized objects are truly garbage and the memory for those objects is then, simply freed.

Thus when an object requires finalization, it dies, then lives (is resurrected) and finally dies again. It is recommended to avoid using the Finalize method unless required. Finalize methods increase memory pressure by not letting the memory and the resources used by the object be released until two garbage collections have occurred. Since you do not have control over the order in which Finalize methods are executed, it may lead to unpredictable results.

Weak references

Weak references are a means of performance enhancement, used to reduce the pressure placed on the managed heap by large objects.

When a root points to an object, it's called a strong reference to the object, and the object cannot be collected because the application's code can reach the object.

When an object has a weak reference to it, it basically means that if there is a memory requirement and the garbage collector runs, the object can be collected; and when the application later attempts to access the object, the access will fail. On the other hand, to access a weakly referenced object, the application must obtain a strong reference to the object. If the application obtains this strong reference before the garbage collector collects the object, then the GC cannot collect the object because a strong reference to the object exists.

The managed heap contains two internal data structures whose sole purpose is to manage weak references: the short weak reference table and the long weak reference table.

Weak references are of two types:

1. A short weak reference doesn't track resurrection -- i.e. the object which has a short weak reference to itself is collected immediately without running its finalization method.
2. A long weak reference tracks resurrection -- i.e. the garbage collector collects an object pointed to by the long weak reference table only after determining that the object's storage is reclaimable: if the object has a Finalize method, the Finalize method must have been called and the object must not have been resurrected.

These two tables simply contain pointers to objects allocated within the managed heap. Initially, both tables are empty. When you create a WeakReference object, an object is not allocated from the managed heap. Instead, an empty slot in one of the weak reference tables is located; short weak references use the short weak reference table and long weak references use the long weak reference table.
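A minimal sketch of using WeakReference (the byte array is just a stand-in for a large object):

using System;

class WeakRefDemo
{
    static void Main()
    {
        byte[] data = new byte[100000];
        WeakReference weak = new WeakReference(data, false);   // false = short weak reference

        data = null;    // drop the strong reference
        GC.Collect();   // force a collection, for demonstration only

        byte[] strong = weak.Target as byte[];   // re-obtain a strong reference before use
        Console.WriteLine(strong != null ? "still alive" : "collected");
    }
}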

Consider an example of what happens when the garbage collector runs. [Figures 1 and 2, showing the state of all the internal data structures before and after the GC runs, are not reproduced here; the steps below refer to objects A through H from those figures.]

Now here's what happens when a garbage collection (GC) runs:
1. The garbage collector builds a graph of all the reachable objects. In the above example, the graph will include objects B, C, E, G.
2. The garbage collector scans the short weak reference table. If a pointer in the table refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the slot in the short weak reference table is set to null. In the above example, slot of object D is set to null since it is not a part of the graph.
3. The garbage collector scans the finalization queue. If a pointer in the queue refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the pointer is moved from the finalization queue to the freachable queue. At this point, the object is added to the graph, since the object is now considered reachable. In the above example, though objects A, D, F are not included in the graph, they are treated as reachable objects because they are part of the finalization queue. Finalization queue thus gets emptied.
4. The garbage collector scans the long weak reference table. If a pointer in the table refers to an object that is not part of the graph (which now contains the objects pointed to by entries in the freachable queue), then the pointer identifies an unreachable object and the slot is set to null. Since both the objects C and F are a part of the graph (of the previous step), none of them are set to null in the long reference table.
5. The garbage collector compacts the memory, squeezing out the holes left by the unreachable objects. In the above example, object H is the only object that gets removed from the heap, and its memory is reclaimed.

Generations

Since garbage collection cannot complete without stopping the entire program, it can cause arbitrarily long pauses at arbitrary times during the execution of the program. Garbage collection pauses can also prevent programs from responding to events quickly enough to satisfy the requirements of real-time systems.

One feature of the garbage collector that exists purely to improve performance is called generations. A generational garbage collector takes into account two facts that have been empirically observed in most programs in a variety of languages:
1. Newly created objects tend to have short lives.
2. The older an object is, the longer it will survive.

Generational collectors group objects by age and collect younger objects more often than older objects. When initialized, the managed heap contains no objects. All new objects added to the heap can be said to be in generation 0, until the heap gets filled up, which invokes garbage collection. As most objects are short-lived, only a small percentage of young objects are likely to survive their first collection. Once an object survives the first garbage collection, it gets promoted to generation 1. Newer objects after GC can then be said to be in generation 0. The garbage collector gets invoked next only when the sub-heap of generation 0 gets filled up. All objects in generation 1 that survive get compacted and promoted to generation 2. All survivors in generation 0 also get compacted and promoted to generation 1. Generation 0 then contains no objects, but all newer objects after GC go into generation 0.

Thus, as objects "mature" (survive multiple garbage collections) in their current generation, they are moved to the next older generation. Generation 2 is the maximum generation supported by the runtime's garbage collector. When future collections occur, any surviving objects currently in generation 2 simply stay in generation 2.
Therefore, dividing the heap into generations of objects and collecting and compacting younger generation objects improves the efficiency of the basic underlying garbage collection algorithm by reclaiming a significant amount of space from the heap, and also being faster than if the collector had examined the objects in all generations.

A garbage collector that can perform generational collections, each of which is guaranteed (or at least very likely) to require less than a certain maximum amount of time, can help make the runtime suitable for real-time environments and also prevent pauses that are noticeable to the user.

Myths related to garbage collection


GC is necessarily slower than manual memory management.

Counter explanation: Not necessarily. Modern garbage collectors appear to run as quickly as manual storage allocators (malloc/free or new/delete). Garbage collection probably will not run as quickly as a customized memory allocator designed for use in a specific program. On the other hand, the extra code required to make manual memory management work properly (for example, explicit reference counting) is often more expensive than a garbage collector would be.

GC will necessarily make my program pause.

Counter explanation: Since garbage collectors usually stop the entire program while seeking and collecting garbage objects, they cause pauses long enough to be noticed by the users. But with the advent of modern optimization techniques, these noticeable pauses can be eliminated.

Manual memory management won't cause pauses.

Counter explanation: Manual memory management does not guarantee performance. It may cause pauses for considerable periods either on allocation or deallocation.

Programs with GC are huge and bloated; GC isn't suitable for small programs or systems.

Counter explanation: Though using garbage collection is advantageous in complex systems, there is no reason for garbage collection to introduce any significant overhead at any scale.

I've heard that GC uses twice as much memory.

Counter explanation: This may be true of primitive GCs, but this is not generally true of .NET garbage collection. The data structures used for GC need be no larger than those for manual memory management.


42. Is it true that objects don't always get destroyed immediately when the last reference goes away?

Yes. The garbage collector offers no guarantees about the time when an object will be destroyed and its memory reclaimed.

43. Why doesn't the .NET runtime offer deterministic destruction?

Because of the garbage collection algorithm. The .NET garbage collector works by periodically running through a list of all the objects that are currently being referenced by an application. All the objects that it doesn't find during this search are ready to be destroyed and the memory reclaimed. The implication of this algorithm is that the runtime doesn't get notified immediately when the final reference on an object goes away - it only finds out during the next 'sweep' of the heap.

Furthermore, this type of algorithm works best by performing the garbage collection sweep as rarely as possible. Normally, heap exhaustion is the trigger for a collection sweep.

44. Is the lack of deterministic destruction in .NET a problem?

It's certainly an issue that affects component design. If you have objects that maintain expensive or scarce resources (e.g. database locks), you need to provide some way to tell the object to release the resource when it is done. Microsoft recommend that you provide a method called Dispose() for this purpose. However, this causes problems for distributed objects - in a distributed system who calls the Dispose() method? Some form of reference-counting or ownership-management mechanism is needed to handle distributed objects - unfortunately the runtime offers no help with this.

45. Should I implement Finalize on my class? Should I implement IDisposable?

This issue is a little more complex than it first appears. There are really two categories of class that require deterministic destruction - the first category manipulate unmanaged types directly, whereas the second category manipulate managed types that require deterministic destruction. An example of the first category is a class with an IntPtr member representing an OS file handle. An example of the second category is a class with a System.IO.FileStream member.

For the first category, it makes sense to implement IDisposable and override Finalize. This allows the object user to 'do the right thing' by calling Dispose, but also provides a fallback of freeing the unmanaged resource in the Finalizer, should the calling code fail in its duty. However this logic does not apply to the second category of class, with only managed resources. In this case implementing Finalize is pointless, as managed member objects cannot be accessed in the Finalizer. This is because there is no guarantee about the ordering of Finalizer execution. So only the Dispose method should be implemented. (If you think about it, it doesn't really make sense to call Dispose on member objects from a Finalizer anyway, as the member object's Finalizer will do the required cleanup.)

Note that some developers argue that implementing a Finalizer is always a bad idea, as it hides a bug in your code (i.e. the lack of a Dispose call). A less radical approach is to implement Finalize but include a Debug.Assert at the start, thus signalling the problem in developer builds but allowing the cleanup to occur in release builds.
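A minimal sketch of the first-category pattern in C# (the class name is hypothetical, and CloseHandle stands in for whatever API frees your particular resource):

using System;
using System.Runtime.InteropServices;

class OsHandleHolder : IDisposable
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr hObject);

    IntPtr handle;          // the unmanaged OS handle (hypothetical)
    bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // cleanup already done; skip finalization
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (handle != IntPtr.Zero)
            {
                CloseHandle(handle);    // free the unmanaged resource
                handle = IntPtr.Zero;
            }
            disposed = true;
        }
    }

    ~OsHandleHolder()   // fallback if the caller never calls Dispose
    {
        Dispose(false);
    }
}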


46. Do I have any control over the garbage collection algorithm?

A little. For example the System.GC class exposes a Collect method, which forces the garbage collector to collect all unreferenced objects immediately.
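For example, a common (if blunt) idiom when you really do want everything collected, including objects awaiting finalization:

GC.Collect();                    // collect all generations immediately
GC.WaitForPendingFinalizers();   // let queued finalizers run
GC.Collect();                    // reclaim objects that were waiting on finalization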

Also there is a gcConcurrent setting that can be specified via the application configuration file. This specifies whether or not the garbage collector performs some of its collection activities on a separate thread. The setting only applies on multi-processor machines, and defaults to true.

47. How can I find out what the garbage collector is doing?

Lots of interesting statistics are exported from the .NET runtime via the '.NET CLR xxx' performance counters. Use Performance Monitor to view them.

48. What is the lapsed listener problem?

The lapsed listener problem is one of the primary causes of leaks in .NET applications. It occurs when a subscriber (or 'listener') signs up for a publisher's event, but fails to unsubscribe. The failure to unsubscribe means that the publisher maintains a reference to the subscriber as long as the publisher is alive. For some publishers, this may be the duration of the application.

This situation causes two problems. The obvious problem is the leakage of the subscriber object. The other problem is the performance degradation due to the publisher sending redundant notifications to 'zombie' subscribers.

There are at least a couple of solutions to the problem. The simplest is to make sure the subscriber is unsubscribed from the publisher, typically by adding an Unsubscribe() method to the subscriber.
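A minimal sketch (Publisher and Subscriber are hypothetical names): unless every += subscription is balanced by a -=, the publisher's delegate list keeps the subscriber alive for as long as the publisher lives.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        if (SomethingHappened != null)
            SomethingHappened(this, EventArgs.Empty);
    }
}

class Subscriber
{
    public void Subscribe(Publisher p)   { p.SomethingHappened += OnSomething; }
    public void Unsubscribe(Publisher p) { p.SomethingHappened -= OnSomething; }   // prevents the leak

    void OnSomething(object sender, EventArgs e) { Console.WriteLine("notified"); }
}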

49. What is the difference between Finalize and Dispose (Garbage collection) ?

Class instances often encapsulate control over resources that are not managed by the runtime, such as window handles (HWND), database connections, and so on. Therefore, you should provide both an explicit and an implicit way to free those resources. Provide implicit control by implementing the protected Finalize Method on an object (destructor syntax in C# and the Managed Extensions for C++). The garbage collector calls this method at some point after there are no longer any valid references to the object. In some cases, you might want to provide programmers using an object with the ability to explicitly release these external resources before the garbage collector frees the object. If an external resource is scarce or expensive, better performance can be achieved if the programmer explicitly releases resources when they are no longer being used. To provide explicit control, implement the Dispose method provided by the IDisposable Interface. The consumer of the object should call this method when it is done using the object.

Dispose can be called even if other references to the object are alive. Note that even when you provide explicit control by way of Dispose, you should provide implicit cleanup using the Finalize method. Finalize provides a backup to prevent resources from permanently leaking if the programmer fails to call Dispose.

50. What is Reflection in .NET? Namespace? How will you load an assembly which is not referenced by current assembly?

All .NET compilers produce metadata about the types defined in the modules they produce. This metadata is packaged along with the module (modules in turn are packaged together in assemblies), and can be accessed by a mechanism called reflection. The System.Reflection namespace contains classes that can be used to interrogate the types for a module/assembly.

Using reflection to access .NET metadata is very similar to using ITypeLib/ITypeInfo to access type library data in COM, and it is used for similar purposes - e.g. determining data type sizes for marshaling data across context/process/machine boundaries.

Reflection can also be used to dynamically invoke methods (see System.Type.InvokeMember), or even create types dynamically at run-time (see System.Reflection.Emit.TypeBuilder).

Reflection generally means that a program can gain knowledge about its own structure. With .NET, Reflection describes the ability - subject to security settings - to inspect the metadata of a .NET application and of the data types and functions contained therein.

At runtime, it is therefore possible for an application to determine its own functionality. For example, an application could be developed which can be extended with new functionality simply by adding the respective assemblies - without changing the main program.
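A minimal sketch of loading and using an assembly that is not referenced at compile time (the file, type and method names are hypothetical):

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Load an assembly from a file path, with no compile-time reference
        Assembly asm = Assembly.LoadFrom("MyPlugin.dll");

        // Interrogate its types and invoke a method dynamically
        Type t = asm.GetType("MyPlugin.Greeter");
        object obj = Activator.CreateInstance(t);
        t.InvokeMember("SayHello",
                       BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
                       null, obj, new object[0]);
    }
}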

51. When do I need to use GC.KeepAlive?

It's very unintuitive, but the runtime can decide that an object is garbage much sooner than you expect. More specifically, an object can become garbage while a method is executing on the object, which is contrary to most developers' expectations.

Example:

using System;
using System.Runtime.InteropServices;

class Win32
{
    [DllImport("kernel32.dll")]
    public static extern IntPtr CreateEvent(IntPtr lpEventAttributes,
        bool bManualReset, bool bInitialState, string lpName);

    [DllImport("kernel32.dll", SetLastError=true)]
    public static extern bool CloseHandle(IntPtr hObject);

    [DllImport("kernel32.dll")]
    public static extern bool SetEvent(IntPtr hEvent);
}

class EventUser
{
    public EventUser()
    {
        hEvent = Win32.CreateEvent(IntPtr.Zero, false, false, null);
    }

    ~EventUser()
    {
        Win32.CloseHandle(hEvent);
        Console.WriteLine("EventUser finalized");
    }

    public void UseEvent()
    {
        UseEventInStatic(this.hEvent);
    }

    static void UseEventInStatic(IntPtr hEvent)
    {
        //GC.Collect();
        bool bSuccess = Win32.SetEvent(hEvent);
        Console.WriteLine("SetEvent " + (bSuccess ? "succeeded" : "FAILED!"));
    }

    IntPtr hEvent;
}

class App
{
    static void Main(string[] args)
    {
        EventUser eventUser = new EventUser();
        eventUser.UseEvent();
    }
}
If you run this code, it'll probably work fine, and you'll get the following output:

SetEvent succeeded
EventUser finalized

However, if you uncomment the GC.Collect() call in the UseEventInStatic() method, you'll get this output:

EventUser finalized
SetEvent FAILED!
(Note that you need to use a release build to reproduce this problem.)

So what's happening here? Well, at the point where UseEvent() calls UseEventInStatic(), a copy is taken of the hEvent field, and there are no further references to the EventUser object anywhere in the code. So as far as the runtime is concerned, the EventUser object is garbage and can be collected. Normally of course the collection won't happen immediately, so you'll get away with it, but sooner or later a collection will occur at the wrong time, and your app will fail.

A solution to this problem is to add a call to GC.KeepAlive(this) to the end of the UseEvent method.
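For example (a sketch of the fix described above):

public void UseEvent()
{
    UseEventInStatic(this.hEvent);
    GC.KeepAlive(this);   // keeps the EventUser object reachable until the call returns
}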

52. Explain how the objects are created and destroyed ? or Explain object life time?

An instance of a class, an object, is created by using the New keyword. Initialization tasks often must be performed on new objects before they are used. Common initialization tasks include opening files, connecting to databases, and reading values of registry keys. Microsoft Visual Basic 2005 controls the initialization of new objects using procedures called constructors (special methods that allow control over initialization).

After an object leaves scope, it is released by the common language runtime (CLR). Visual Basic 2005 controls the release of system resources using procedures called destructors. Together, constructors and destructors support the creation of robust and predictable class libraries.

Sub New and Sub Finalize

The Sub New and Sub Finalize procedures in Visual Basic 2005 initialize and destroy objects; they replace the Class_Initialize and Class_Terminate methods used in Visual Basic 6.0 and earlier versions. Unlike Class_Initialize, the Sub New constructor can run only once when a class is created. It cannot be called explicitly anywhere other than in the first line of code of another constructor from either the same class or from a derived class. Furthermore, the code in the Sub New method always runs before any other code in a class. Visual Basic 2005 implicitly creates a Sub New constructor at run time if you do not explicitly define a Sub New procedure for a class.

Before releasing objects, the CLR automatically calls the Finalize method for objects that define a Sub Finalize procedure. The Finalize method can contain code that needs to execute just before an object is destroyed, such as code for closing files and saving state information. There is a slight performance penalty for executing Sub Finalize, so you should define a Sub Finalize method only when you need to release objects explicitly.

The garbage collector in the CLR does not (and cannot) dispose of unmanaged objects, objects that the operating system executes directly, outside the CLR environment. This is because different unmanaged objects must be disposed of in different ways. That information is not directly associated with the unmanaged object; it must be found in the documentation for the object. A class that uses unmanaged objects must dispose of them in its Finalize method.

The Finalize destructor is a protected method that can be called only from the class it belongs to, or from derived classes. The system calls Finalize automatically when an object is destroyed, so you should not explicitly call Finalize from outside of a derived class's Finalize implementation.

Unlike Class_Terminate, which executes as soon as an object is set to nothing, there is usually a delay between when an object loses scope and when Visual Basic 2005 calls the Finalize destructor. Visual Basic 2005 allows for a second kind of destructor, Dispose, which can be explicitly called at any time to immediately release resources.

A Finalize destructor should not throw exceptions, because they cannot be handled by the application and can cause the application to terminate.

IDisposable Interface

Class instances often control resources not managed by the CLR, such as Windows handles and database connections. These resources must be disposed of in the Finalize method of the class, so that they will be released when the object is destroyed by the garbage collector. However, the garbage collector destroys objects only when the CLR requires more free memory. This means that the resources may not be released until long after the object goes out of scope.
To supplement garbage collection, your classes can provide a mechanism to actively manage system resources if they implement the IDisposable interface. IDisposable has one method, Dispose, which clients should call when they finish using an object. You can use the Dispose method to immediately release resources and perform tasks such as closing files and database connections. Unlike the Finalize destructor, the Dispose method is not called automatically. Clients of a class must explicitly call Dispose when you want to immediately release resources.
Implementing IDisposable
A class that implements the IDisposable interface should include these sections of code:

• A field for keeping track of whether the object has been disposed:

Protected disposed As Boolean = False

• An overload of the Dispose method that frees the class's resources. This method should be called only by the class's own Dispose and Finalize methods:

Protected Overridable Sub Dispose(ByVal disposing As Boolean)
    If Not Me.disposed Then
        If disposing Then
            ' Insert code to free managed resources.
        End If
        ' Insert code to free unmanaged resources.
    End If
    Me.disposed = True
End Sub

• An implementation of Dispose that contains only the following code:

Public Sub Dispose() Implements IDisposable.Dispose
    Dispose(True)
    GC.SuppressFinalize(Me)
End Sub

• An override of the Finalize method that contains only the following code:

Protected Overrides Sub Finalize()
    Dispose(False)
    MyBase.Finalize()
End Sub
Deriving from a Class that Implements IDisposable

A class that derives from a base class that implements the IDisposable interface does not need to override any of the base methods unless it uses additional resources that need to be disposed. In that situation, the derived class should override the base class's Dispose(disposing) method to dispose of the derived class's resources. This override must call the base class's Dispose(disposing) method.

Protected Overrides Sub Dispose(ByVal disposing As Boolean)
    If Not Me.disposed Then
        If disposing Then
            ' Insert code to free managed resources.
        End If
        ' Insert code to free unmanaged resources.
    End If
    MyBase.Dispose(disposing)
End Sub
A derived class should not override the base class's Dispose and Finalize methods. When those methods are called on an instance of the derived class, the base class's implementation of them calls the derived class's override of the Dispose(disposing) method.

Visualization

The following diagram shows which methods are inherited and which methods are overridden in the derived class.

[Diagram: inherited and overridden Dispose/Finalize methods in the derived class (image not reproduced)]

When this Dispose-Finalize pattern is followed, the resources of the derived class and the base class are correctly disposed. The following diagram shows which methods get called when the classes are disposed and finalized.

[Diagram: call sequence when the classes are disposed and finalized (image not reproduced)]

Garbage Collection and the Finalize Destructor

The .NET Framework uses a reference-tracing garbage-collection system to periodically release unused resources. Visual Basic 6.0 and earlier versions used a different system, called reference counting, to manage resources. Although both systems perform the same function automatically, there are a few important differences.

The CLR periodically destroys objects when the system determines that such objects are no longer needed. Objects are released more quickly when system resources are in short supply, and less frequently otherwise. The delay between when an object loses scope and when the CLR releases it means that, unlike with objects in Visual Basic 6.0 and earlier versions, you cannot determine exactly when the object will be destroyed. In such a situation, objects are said to have non-deterministic lifetime. In most cases, non-deterministic lifetime does not change how you write applications, as long as you remember that the Finalize destructor may not immediately execute when an object loses scope.

Another difference between the garbage-collection systems involves the use of Nothing. To take advantage of reference counting in Visual Basic 6.0 and earlier versions, programmers sometimes assigned Nothing to object variables to release the references those variables held. If the variable held the last reference to the object, the object's resources were released immediately. In Visual Basic 2005, while there may be cases in which this procedure is still valuable, performing it never causes the referenced object to release its resources immediately. To release resources immediately, use the object's Dispose method, if available. The only time you should set a variable to Nothing is when its lifetime is long relative to the time the garbage collector takes to detect orphaned objects.
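
As an illustration, the Using statement added in Visual Basic 2005 calls Dispose deterministically when the block exits; this is a sketch, and the SqlConnection connection string is a placeholder:

' Dispose is called automatically when the Using block exits,
' even if an exception is thrown inside it.
Using conn As New System.Data.SqlClient.SqlConnection("...connection string...")
    conn.Open()
    ' Work with the connection here.
End Using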


53. What are the different types of JIT?

In Microsoft .NET there are three types of JIT compilers:
• Pre-JIT. Pre-JIT compiles the complete MSIL code of an assembly into native code in a single compilation cycle. This is done at the time of deployment of the application.
• Econo-JIT. Econo-JIT compiles only those methods that are called at runtime. However, these compiled methods are removed when they are no longer required.
• Normal-JIT. Normal-JIT compiles only those methods that are called at runtime. These methods are compiled the first time they are called, and then they are stored in cache. When the same methods are called again, the compiled code from cache is used for execution.

54. What are Value types and Reference types?

Reference Type:
Reference types are allocated on the managed CLR heap, just like object types. A reference type is a data type that is stored as a reference to the value's location: the value of a reference type is the location of the sequence of bits that represent the type's data. Reference types can be self-describing types, pointer types, or interface types.

Value Type:

Value types are allocated on the stack, just like primitive types in VBScript, VB6 and C/C++. Value types are not instantiated using New and go out of scope when the function they are defined within returns.
Value types in the CLR are defined as types that derive from System.ValueType.

A data type that fully describes a value by specifying the sequence of bits that constitutes the value's representation. Type information for a value type instance is not stored with the instance at run time, but it is available in metadata. Value type instances can be treated as objects using boxing.
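
A short sketch (PointValue and PointReference are hypothetical names) illustrates the copy semantics that follow from these definitions:

Structure PointValue           ' value type: implicitly derives from System.ValueType
    Public X As Integer
End Structure

Class PointReference           ' reference type: allocated on the managed heap
    Public X As Integer
End Class

Sub Demo()
    Dim v1 As PointValue
    v1.X = 1
    Dim v2 As PointValue = v1     ' copies the bits; v2 is independent of v1
    v2.X = 2                      ' v1.X is still 1

    Dim r1 As New PointReference()
    r1.X = 1
    Dim r2 As PointReference = r1 ' copies only the reference
    r2.X = 2                      ' r1.X is now 2 as well
End Sub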

55. What is the concept of Boxing and Unboxing?

Boxing:

The conversion of a value type instance to an object, which implies that the instance will carry full type information at run time and will be allocated in the heap. The Microsoft intermediate language (MSIL) instruction set's box instruction converts a value type to an object by making a copy of the value type and embedding it in a newly allocated object.

Un-Boxing:

The conversion of an object instance to a value type.
Dim x As Integer
Dim y As Object
x = 10
' Boxing: the Integer value is copied into an object on the heap.
y = x
' Unboxing: the value is copied back out of the object
' (the explicit CType is required with Option Strict On).
x = CType(y, Integer)


56. What is the difference between constants, readonly, and static?

Constants : The value is fixed at compile time and can never be changed.
Read-only : The value can be assigned only at declaration or in the constructor of the class, so it is initialized once per instance at run time.
Static : The member belongs to the type rather than to any instance, and its value can be initialized only once (in VB.NET the equivalent keyword is Shared).
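
A brief Visual Basic sketch of all three (the Settings class is hypothetical):

Public Class Settings
    Public Const MaxRetries As Integer = 3        ' constant: fixed at compile time

    Public ReadOnly CreatedAt As DateTime         ' read-only: assigned only in the constructor
    Public Shared InstanceCount As Integer = 0    ' static/Shared: one copy for the whole type

    Public Sub New()
        CreatedAt = DateTime.Now                  ' a ReadOnly field may be set here only
        InstanceCount += 1
    End Sub
End Class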


57. What's the difference between VB.NET and C#?

Advantages VB.NET

• Support for optional parameters, which makes COM interoperability much easier (see the sketch after this answer).
• With Option Strict Off, late binding is supported. Legacy VB functionality can be used via the Microsoft.VisualBasic namespace.
• The With construct, which C# does not have.
• The VB.NET part of Visual Studio .NET compiles your code in the background. While this is considered an advantage for small projects, people creating very large projects have found that the IDE slows down considerably as the project gets larger.
Advantages of C#

• XML documentation is generated from source code; this has since been incorporated into VB in Whidbey (Visual Studio 2005).
• Operator overloading, which is not in current VB.NET but is being introduced in Whidbey.
• The using statement, which makes unmanaged resource disposal simple.
• Access to unsafe code. This allows pointer arithmetic and the like, and can improve performance in some situations. However, it is not to be used lightly, as much of the normal safety of C# is lost (as the name implies). This is the major difference: you can access unmanaged code in C# and not in VB.NET.
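
As an illustration of two of the VB.NET advantages above, here is a sketch using optional parameters and the With construct (the Customer class and Log routine are hypothetical):

Class Customer
    Public Name As String
    Public City As String
End Class

Sub Log(ByVal message As String, Optional ByVal severity As Integer = 1)
    Console.WriteLine("{0}: {1}", severity, message)
End Sub

Sub Demo()
    Log("started")              ' severity defaults to 1
    Dim c As New Customer()
    With c                      ' the With construct that C# lacks
        .Name = "Acme"
        .City = "Oslo"
    End With
End Sub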


58. What's the difference between System exceptions and Application exceptions?

The difference between ApplicationException and SystemException is that SystemExceptions are thrown by the CLR, while ApplicationExceptions are thrown by applications. For example, SqlException inherits from SystemException. SystemException is included here only to make this list complete; there should not be any circumstances in which one would need to inherit from it.

System.Exception

If extending the base exception class with additional members, inherit from System.Exception. The name of such inherited classes should end with “Exception.”

Properly, System.Exception should have been declared as abstract with a recommendation to be inherited only by concrete exception classes. Doing so would have avoided the question of which to use right from the start, and would have also helped MS with versioning. However, since things are what they are, in this situation it makes sense to inherit and extend System.Exception. The rule to tend towards a flat hierarchy wins. Inherit from System.Exception when creating a class which adds members.

System.ApplicationException

Applications often provide their own custom exception types (e.g. CmsException, SharePointException) which do not add properties or methods, but simply subclass System.ApplicationException with a new name. If more detailed exceptions are required for the application, they will then inherit from this base class. This makes it convenient to throw application-specific exceptions that can be identified distinctly inside a Try-Catch block. When doing the same for your own applications, inherit from System.ApplicationException.
For both System.Exception and System.ApplicationException, the rules are consistent in this way: they work with the Framework as it is, not as we would like it to be. Since ApplicationException exists and is in common use, it would be a deviation to derive application-specific exceptions directly from System.Exception. So while you still won't catch ApplicationException directly, you should certainly catch its descendants.
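
A minimal sketch of such an application-specific exception (OrderException is a hypothetical name):

Public Class OrderException
    Inherits ApplicationException

    Public Sub New(ByVal message As String)
        MyBase.New(message)
    End Sub

    Public Sub New(ByVal message As String, ByVal inner As Exception)
        MyBase.New(message, inner)
    End Sub
End Class

' Usage: Throw New OrderException("Order 42 could not be processed.")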

59. Which namespace is used for loading assemblies at run time, and what are the methods?

System.Reflection. The Assembly class in this namespace provides the key methods: Assembly.Load loads an assembly by name, Assembly.LoadFrom loads one from a file path, and Assembly.CreateInstance instantiates a type from the loaded assembly.
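
A sketch of loading an assembly and creating a type at run time (the file path and type name are placeholders):

' Load an assembly from disk and inspect it via System.Reflection.
Dim asm As System.Reflection.Assembly = _
    System.Reflection.Assembly.LoadFrom("C:\libs\MyLibrary.dll")

' List the types the assembly contains.
For Each t As Type In asm.GetTypes()
    Console.WriteLine(t.FullName)
Next

' Create an instance of a type by its full name.
Dim obj As Object = asm.CreateInstance("MyLibrary.Widget")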

60. What is Code Access Security?

CAS is the part of the .NET security model that determines whether or not a piece of code is allowed to run, and what resources it can use when it is running. For example, it is CAS that will prevent a .NET web applet from formatting your hard disk.

61. How does CAS work?

The CAS security policy revolves around two key concepts - code groups and permissions. Each .NET assembly is a member of a particular code group, and each code group is granted the permissions specified in a named permission set.

For example, using the default security policy, a control downloaded from a web site belongs to the 'Zone - Internet' code group, which adheres to the permissions defined by the 'Internet' named permission set. (Naturally the 'Internet' named permission set represents a very restrictive range of permissions.)

62. Who defines the CAS code groups?

Microsoft defines some default ones, but you can modify these and even create your own. To see the code groups defined on your system, run 'caspol -lg' from the command-line. On my system it looks like this:

Level = Machine
Code Groups:
1. All code: Nothing
1.1. Zone - MyComputer: FullTrust
1.1.1. Honor SkipVerification requests: SkipVerification
1.2. Zone - Intranet: LocalIntranet
1.3. Zone - Internet: Internet
1.4. Zone - Untrusted: Nothing
1.5. Zone - Trusted: Internet
1.6. StrongName -
0024000004800000940000000602000000240000525341310004000003
000000CFCB3291AA715FE99D40D49040336F9056D7886FED46775BC7BB5430BA4444FEF8348EBD06
F962F39776AE4DC3B7B04A7FE6F49F25F740423EBF2C0B89698D8D08AC48D69CED0FC8F83B465E08
07AC11EC1DCC7D054E807A43336DDE408A5393A48556123272CEEEE72F1660B71927D38561AABF5C
AC1DF1734633C602F8F2D5: Everything
Note the hierarchy of code groups - the top of the hierarchy is the most general ('All code'), which is then sub-divided into several groups, each of which in turn can be sub-divided. Also note that (somewhat counter-intuitively) a sub-group can be associated with a more permissive permission set than its parent.

63. How do I define my own code group?

Use caspol. For example, suppose you trust code from www.mydomain.com and you want it have full access to your system, but you want to keep the default restrictions for all other internet sites. To achieve this, you would add a new code group as a sub-group of the 'Zone - Internet' group, like this:
caspol -ag 1.3 -site www.mydomain.com FullTrust
Now if you run caspol -lg you will see that the new group has been added as group 1.3.1:
...
1.3. Zone - Internet: Internet
1.3.1. Site - www.mydomain.com: FullTrust
...
Note that the numeric label (1.3.1) is just a caspol invention to make the code groups easy to manipulate from the command-line. The underlying runtime never sees it.
 