Tuesday, October 20, 2009

ADO.Net Interview Questions

1. What is ADO.Net?

ADO.NET (ActiveX Data Objects .NET) is the primary relational data access model for Microsoft .NET-based applications. ADO.NET provides consistent data access to database management systems (DBMSs) such as SQL Server and Oracle. ADO.NET is designed to meet the requirements of the web application model: disconnected data architecture, integration with XML, a common data representation, the ability to combine data from multiple data sources, and optimized interaction with the database.

2. Explain the ADO .Net Architecture?

The ADO.NET architecture includes three data providers for implementing connectivity with databases: the SQL Server .NET Data Provider, the OLE DB .NET Data Provider, and the ODBC .NET Data Provider. You can access data through a data provider in two ways: using a DataReader or using a DataAdapter. ADO.NET was designed to:

Leverage current ADO knowledge
Support the N-Tier programming model
Provide support for XML

In distributed applications, the concept of working with disconnected data has become very common. A disconnected model means that once you have retrieved the data that you need, the connection to the data source is dropped—you work with the data locally. The reason why this model has become so popular is that it frees up precious database server resources, which leads to highly scalable applications. The ADO.NET solution for disconnected data is the DataSet object.

Data Access in ADO.NET relies on two components:

DataSet
Data Provider.
DataSet


The ADO.NET DataSet is explicitly designed for data access independent of any data source. As a result, it can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet contains a collection of one or more DataTable objects made up of rows and columns of data, as well as primary key, foreign key, constraint, and relation information about the data in those DataTable objects.
The DataSet is a disconnected, in-memory representation of data. It can be considered a local copy of the relevant portions of the database. The DataSet is persisted in memory, and the data in it can be manipulated and updated independently of the database. When work with the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database, or a Microsoft Access database.



Data Provider

The Data Provider is responsible for providing and maintaining the connection to the database. A data provider is a set of related components that work together to provide data in an efficient, performance-driven manner. The .NET Framework originally shipped with two data providers: the SQL Server Data Provider, which is designed to work only with Microsoft SQL Server 7.0 or later, and the OleDb Data Provider, which allows connections to other types of databases such as Access and Oracle. Each data provider consists of the following component classes:

The Connection object, which provides a connection to the database
The Command object, which is used to execute a command
The DataReader object, which provides a forward-only, read-only, connected recordset
The DataAdapter object, which populates a disconnected DataSet with data and performs updates

Data access with ADO.NET can be summarized as follows:
A Connection object establishes the connection for the application with the database. The Command object provides direct execution of commands against the database. If the command returns more than a single value, the Command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill a DataSet object. The database can be updated using the Command object or the DataAdapter.
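As a brief sketch of that flow (the connection string, table, and column names below are illustrative assumptions, not part of the original text):

using System;
using System.Data;
using System.Data.SqlClient;

class DataAccessFlow
{
    static void Main()
    {
        // Placeholder connection string for illustration only.
        string connStr = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Connected access: a Command that returns rows hands back a DataReader.
            SqlCommand cmd = new SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} - {1}", reader.GetString(0), reader.GetString(1));
            }

            // Disconnected access: a DataAdapter fills a DataSet that lives in memory.
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");
        }
    }
}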


Component classes that make up the Data Providers

The Connection Object


The Connection object creates the connection to the database. Microsoft Visual Studio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.
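For instance, a minimal sketch (the server and database names below are placeholders):

using System.Data.SqlClient;

// The SqlConnection holds all the information needed to open a link to SQL Server.
SqlConnection nwindConn = new SqlConnection(
    "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;");

nwindConn.Open();   // open before executing commands against it
// ... execute commands here ...
nwindConn.Close();  // close (or wrap in a using block) to release the connection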







The Command Object


The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands against a database across a data connection. Command objects can be used to execute stored procedures, execute SQL commands, or return complete tables directly. Command objects provide three methods that are used to execute commands on the database:
ExecuteNonQuery: Executes commands that return no result set, such as INSERT, UPDATE or DELETE
ExecuteScalar: Returns a single value from a database query
ExecuteReader: Returns a result set by way of a DataReader object
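As a sketch of the three calls (assuming an open SqlConnection named conn and a hypothetical Customers table):

// ExecuteNonQuery: no rows come back, only the number of rows affected.
SqlCommand updateCmd = new SqlCommand("UPDATE Customers SET City = 'London' WHERE CustomerID = 'ALFKI'", conn);
int rowsAffected = updateCmd.ExecuteNonQuery();

// ExecuteScalar: returns the first column of the first row of the result.
SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
int customerCount = (int)countCmd.ExecuteScalar();

// ExecuteReader: returns a DataReader for forward-only access to the rows.
SqlCommand selectCmd = new SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn);
SqlDataReader reader = selectCmd.ExecuteReader();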



The DataReader Object


The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when you do not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead in terms of system performance but requires the exclusive use of an open Connection object for the lifetime of the DataReader.



The DataAdapter Object


The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter is used to fill a DataTable or DataSet with data from the database via its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling the Update method. The DataAdapter provides four properties that represent database commands:



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.
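A minimal sketch of this Fill/Update cycle (assuming an existing SqlConnection conn and a Customers table; here a SqlCommandBuilder is used to generate the INSERT, UPDATE, and DELETE commands from the SELECT):

SqlDataAdapter adapter = new SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn);

// Derives InsertCommand, UpdateCommand and DeleteCommand from the SelectCommand.
SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

DataSet ds = new DataSet();
adapter.Fill(ds, "Customers");                                // load the disconnected DataSet

ds.Tables["Customers"].Rows[0]["CompanyName"] = "New Name";   // change data in memory

adapter.Update(ds, "Customers");                              // write the changes back to the database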


3. What are the advantages and drawbacks of using ADO.NET?

Pros

ADO.NET is rich with plenty of features that are bound to impress even the most skeptical of programmers. If this weren’t the case, Microsoft wouldn’t even be able to get anyone to use the Beta. What we’ve done here is come up with a short list of some of the more outstanding benefits to using the ADO.NET architecture and the System.Data namespace.

* Performance – there is no doubt that ADO.NET is extremely fast. The actual figures vary depending on who performed the test and which benchmark was being used, but ADO.NET performs much, much faster at the same tasks than its predecessor, ADO. Some of the reasons why ADO.NET is faster than ADO are discussed in the ADO versus ADO.NET section later in this chapter.

* Optimized SQL Provider – in addition to performing well under general circumstances, ADO.NET includes a SQL Server Data Provider that is highly optimized for interaction with SQL Server. It uses SQL Server’s own TDS (Tabular Data Stream) format for exchanging information. Without question, your SQL Server 7 and above data access operations will run blazingly fast utilizing this optimized Data Provider.

* XML Support (and Reliance) – everything you do in ADO.NET at some point will boil down to the use of XML. In fact, many of the classes in ADO.NET, such as the DataSet, are so intertwined with XML that they simply cannot exist or function without utilizing the technology. You’ll see later when we compare and contrast the “old” and the “new” why the reliance on XML for internal storage provides many, many advantages, both to the framework and to the programmer utilizing the class library.

* Disconnected Operation Model – the core ADO.NET class, the DataSet, operates in an entirely disconnected fashion. This may be new to some programmers, but it is a remarkably efficient and scalable architecture. Because the disconnected model allows for the DataSet class to be unaware of the origin of its data, an unlimited number of supported data sources can be plugged into code without any hassle in the future.

* Rich Object Model – the entire ADO.NET architecture is built on a hierarchy of class inheritance and interface implementation. Once you start looking for things you need within this namespace, you’ll find that the logical inheritance of features and base class support makes the entire system extremely easy to use, and very customizable to suit your own needs. It is just another example of how everything in the .NET framework is pushing toward a trend of strong application design and strong OOP implementations.


Cons

Hard as it may be to believe, there are a couple of drawbacks or disadvantages to using the ADO.NET architecture. I’m sure others can find many more faults than we list here, but we decided to stick with a short list of some of the more obvious and important shortcomings of the technology.

* Managed-Only Access – for a few obvious reasons, and some far more technical, you cannot utilize the ADO.NET architecture from anything but managed code. This means that there is no COM interoperability allowed for ADO.NET. Therefore, in order to take advantage of the advanced SQL Server Data Provider and any other feature like DataSets, XML internal data storage, etc, your code must be running under the CLR.

* Only Three Managed Data Providers (so far) – unfortunately, if you need to access any data that requires a driver that cannot be used through either an OLEDB provider or the SQL Server Data Provider, then you may be out of luck. However, the good news is that the OLEDB provider for ODBC is available for download from Microsoft. At that point the down-side becomes one of performance, in which you are invoking multiple layers of abstraction as well as crossing the COM InterOp gap, incurring some initial overhead as well.

* Learning Curve – despite the misleading name, ADO.NET is not simply a new version of ADO, nor should it even be considered a direct successor. ADO.NET should be thought of more as the data access class library for use with the .NET framework. The difficulty in learning to use ADO.NET to its fullest is that a lot of it does seem familiar. It is this that causes some common pitfalls. Programmers need to learn that even though some syntax may appear the same, there is actually a considerable amount of difference in the internal workings of many classes. For example (this will be discussed in far more detail later), an ADO.NET DataSet is nothing at all like a disconnected ADO RecordSet. Some may consider a learning curve a drawback, but I consider learning curves more like scheduling issues. There’s a learning curve in learning anything new; it’s just up to you to schedule that curve into your time so that you can learn the new technology at a pace that fits your schedule.


4. Explain what a DiffGram is and its usage?

A DiffGram is an XML format that is used to identify current and original versions of data elements. The DataSet uses the DiffGram format to load and persist its contents, and to serialize its contents for transport across a network connection. When a DataSet is written as a DiffGram, it populates the DiffGram with all the necessary information to accurately recreate the contents, though not the schema, of the DataSet, including column values from both the Original and Current row versions, row error information, and row order.
When sending and retrieving a DataSet from an XML Web service, the DiffGram format is implicitly used. Additionally, when loading the contents of a DataSet from XML using the ReadXml method, or when writing the contents of a DataSet in XML using the WriteXml method, you can select that the contents be read or written as a DiffGram.
The DiffGram format is divided into three sections: the current data, the original (or "before") data, and an errors section, as shown in the following example.

<?xml version="1.0"?>
<diffgr:diffgram
    xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
    xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <DataInstance>
  </DataInstance>

  <diffgr:before>
  </diffgr:before>

  <diffgr:errors>
  </diffgr:errors>

</diffgr:diffgram>
The DiffGram format consists of the following blocks of data:

<DataInstance>
The name of this element, DataInstance, is used for explanation purposes in this documentation. A DataInstance element represents a DataSet or a row of a DataTable. Instead of DataInstance, the element would contain the name of the DataSet or DataTable. This block of the DiffGram format contains the current data, whether it has been modified or not. An element, or row, that has been modified is identified with the diffgr:hasChanges annotation.

<diffgr:before>
This block of the DiffGram format contains the original version of a row. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.

<diffgr:errors>
This block of the DiffGram format contains error information for a particular row in the DataInstance block. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.



The DiffGram format uses the following annotations:

diffgr:hasChanges: The row has been modified (see the related row in the diffgr:before block) or inserted.

diffgr:hasErrors: The row has an error (see the related row in the diffgr:errors block).

diffgr:id: The identifier used to couple rows across sections, in the form TableName+RowIdentifier.

diffgr:parentId: The identifier used to identify the parent of the current row.

diffgr:error: Contains the error text for the row in the diffgr:errors block.

msdata:rowOrder: Tracks the ordinal position of the row in the DataSet.

msdata:hidden: Identifies columns marked as hidden (msdata:hiddenColumn=…).
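As a short sketch, a populated DataSet can be saved and reloaded in this format through the XmlWriteMode and XmlReadMode enumerations (the file name below is illustrative):

// Write current and original values, row errors and row order as a DiffGram.
ds.WriteXml("customers.diffgram.xml", XmlWriteMode.DiffGram);

// Reload the DiffGram; row versions and row states are restored.
ds.ReadXml("customers.diffgram.xml", XmlReadMode.DiffGram);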



5. Can you edit data in the Repeater control?

NO.

6. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

You have to use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.

7. What are the different IsolationLevels?

ReadCommitted: The default for SQL Server. This level ensures that data written by one transaction will only be accessible in a second transaction after the first transaction commits.

ReadUncommitted: This permits your transaction to read data within the database, even data that has not yet been committed by another transaction. For example, if two users were accessing the same database, and the first inserted some data without concluding their transaction (by means of a Commit or Rollback), then the second user with their isolation level set to ReadUncommitted could read the data.

RepeatableRead: This level, which extends the ReadCommitted level, ensures that if the same statement is issued within the transaction, regardless of other potential updates made to the database, the same data will always be returned. This level does require extra locks to be held on the data, which could adversely affect performance. This level guarantees that, for each row in the initial query, no changes can be made to that data. It does, however, permit "phantom" rows to show up — these are completely new rows that another transaction might have inserted while your transaction was running.

Serializable: This is the most "exclusive" transaction level, which in effect serializes access to data within the database. With this isolation level, phantom rows can never show up, so a SQL statement issued within a serializable transaction will always retrieve the same data. The negative performance impact of a Serializable transaction should not be underestimated — if you don't absolutely need to use this level of isolation, stay away from it.
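In ADO.NET the isolation level is chosen when a transaction is started, as in this sketch (assuming an open SqlConnection named conn):

// Begin a transaction with an explicit isolation level.
SqlTransaction tx = conn.BeginTransaction(IsolationLevel.RepeatableRead);

SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn, tx);
int orderCount = (int)cmd.ExecuteScalar();

tx.Commit();   // or tx.Rollback() if an error occurs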



8. How can XML files be read and written using a DataSet?

The DataSet exposes the ReadXml and WriteXml methods to read and write XML.

9. What are the different rowversions available?

Current: The value existing at present within the column. If no edit has occurred, this will be the same as the original value. If an edit (or edits) have occurred, the value will be the last valid value entered.

Default: The default value (in other words, any default set up for the column).

Original: The value of the column when originally selected from the database. If the DataRow's AcceptChanges method is called, this value will update to the Current value.

Proposed: When changes are in progress for a row, it is possible to retrieve this modified value. If you call BeginEdit() on the row and make changes, each column will have a proposed value until either EndEdit() or CancelEdit() is called.



10. Explain the ACID properties.

The ACID model is one of the oldest and most important concepts of database theory. It sets forth four goals that every database management system must strive to achieve: atomicity, consistency, isolation, and durability. No database that fails to meet any of these four goals can be considered reliable.

Let’s take a moment to examine each one of these characteristics in detail:



Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.


Consistency states that only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules. On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules.


Isolation requires that multiple transactions occurring at the same time not impact each other’s execution. For example, if Joe issues a transaction against a database at the same time that Mary issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform Joe’s entire transaction before executing Mary’s or vice-versa. This prevents Joe’s transaction from reading intermediate data produced as a side effect of part of Mary’s transaction that will not eventually be committed to the database. Note that the isolation property does not ensure which transaction will execute first, merely that they will not interfere with each other.


Durability ensures that any transaction committed to the database will not be lost. Durability is ensured through the use of database backups and transaction logs that facilitate the restoration of committed transactions in spite of any subsequent software or hardware failures.


11. Differences Between ADO and ADO.NET

ADO.NET is an evolution of ADO. The following table lists several data access features and how each feature differs between ADO and ADO.NET.



Memory-resident data representation
ADO: Uses the Recordset object, which holds single rows of data, much like a database table.
ADO.NET: Uses the DataSet object, which can contain one or more tables represented by DataTable objects.

Relationships between multiple tables
ADO: Requires a JOIN query to assemble data from multiple database tables in a single result table. Also offers hierarchical recordsets, but they are hard to use.
ADO.NET: Supports the DataRelation object to associate rows in one DataTable object with rows in another DataTable object.

Data navigation
ADO: Traverses rows in a Recordset sequentially, using the MoveNext method.
ADO.NET: The DataSet uses a navigation paradigm for nonsequential access to rows in a table. Accessing the data is more like accessing data in a collection or array; this is possible because of the Rows collection of the DataTable, which allows you to access rows by index. Relationships can be followed to navigate from rows in one table to corresponding rows in another table.

Disconnected access
ADO: Provided by the Recordset, but it has to be explicitly coded for. The default for a Recordset object is to be connected via the ActiveConnection property. You communicate with a database through calls to an OLE DB provider.
ADO.NET: Communicates with a database through standardized calls to the DataAdapter object, which communicates with an OLE DB data provider, or directly to a SQL Server data provider.

Programmability
ADO: All Recordset field data types are COM Variant data types, and usually correspond to field names in a database table.
ADO.NET: Uses the strongly typed programming characteristic of XML. Data is self-describing because names for code items correspond to the business problem solved by the code. Data in DataSet and DataReader objects can be strongly typed, making code easier to read and to write.

Sharing disconnected data between tiers or components
ADO: Uses COM marshaling to transmit a disconnected record set. This supports only those data types defined by the COM standard and requires type conversions, which demand system resources.
ADO.NET: Transmits a DataSet as XML. The XML format places no restrictions on data types and requires no type conversions.

Transmitting data through firewalls
ADO: Problematic, because firewalls are typically configured to prevent system-level requests such as COM marshaling.
ADO.NET: Supported, because ADO.NET DataSet objects use XML, which can pass through firewalls.

Scalability
ADO: Because the defaults in ADO are connected Recordset objects, database locks and active database connections held for long durations contend for limited database resources.
ADO.NET: Disconnected access to database data, without retaining database locks or active database connections for lengthy periods, limits contention for limited database resources.



12. What are the different types of commands available with the DataAdapter?

The SqlDataAdapter has four command objects



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand


13. What is a Dataset?

A major component of ADO.NET is the DataSet object, which you can think of as being similar to an in-memory relational database. DataSet objects contain DataTable objects, relationships, and constraints, allowing them to replicate an entire data source, or selected parts of it, in a disconnected fashion.
A DataSet object is always disconnected from the source whose data it contains, and as a consequence it doesn't care where the data comes from—it can be used to manipulate data from a traditional database or an XML document, or anything in between. In order to connect a DataSet to a data source, you need to use a data adapter as an intermediary between the DataSet and the .NET data provider.

Datasets are the result of bringing together ADO and XML. A DataSet contains one or more tables of tabular data in XML form, known as DataTables; these tables can be treated separately, or can have relationships defined between them. Indeed, these relationships give you ADO data shaping without needing to master the SHAPE language, which many people are not comfortable with.
The DataSet is a disconnected, in-memory cache of data. The DataSet object model looks like this:
DataSet
    DataTableCollection
        DataTable
            DataView
            DataRowCollection
                DataRow
            DataColumnCollection
                DataColumn
            ChildRelations
            ParentRelations
            Constraints
            PrimaryKey
    DataRelationCollection
Let's take a look at each of these:
DataTableCollection: Since a DataSet is an in-memory database, it has this collection, which holds the data of multiple tables in a single DataSet object.
DataTable: Within the DataTableCollection, DataTable objects represent the individual tables of the DataSet.
DataView: Just as a database can have views, a DataTable can have DataViews, which can be used to sort and filter the data.
DataRowCollection: Each DataTable has a DataRowCollection, which represents the rows in that table.
DataRow: Each row of the DataRowCollection is represented by a DataRow.
DataColumnCollection: Each DataTable has a DataColumnCollection, which represents the columns in that table.
DataColumn: Each column of the DataColumnCollection is represented by a DataColumn.
PrimaryKey: The DataSet defines the primary key for a table, and primary key validation takes place without going to the database.
Constraints: We can define various constraints on the tables and set the DataSet's EnforceConstraints property to true; the constraints are then checked whenever data is entered into a DataTable.
DataRelationCollection: Since a DataSet can contain more than one table, this collection lets us define relationships between the tables and maintain parent-child relationships.
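A short sketch of building this object model in code (table and column names are made up for illustration):

DataSet ds = new DataSet("Shop");

DataTable customers = ds.Tables.Add("Customers");
DataColumn custId = customers.Columns.Add("CustID", typeof(int));
customers.Columns.Add("Name", typeof(string));
customers.PrimaryKey = new DataColumn[] { custId };     // primary key validated in memory

DataTable orders = ds.Tables.Add("Orders");
orders.Columns.Add("OrderID", typeof(int));
DataColumn orderCust = orders.Columns.Add("CustID", typeof(int));

// Parent-child relationship; this also adds a UniqueConstraint and a ForeignKeyConstraint.
ds.Relations.Add("CustOrders", custId, orderCust);
ds.EnforceConstraints = true;

customers.Rows.Add(1, "Alice");
orders.Rows.Add(100, 1);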


14. How you will set the datarelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the DataRelation object's ParentKeyConstraint. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.

15. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

Use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.

16. How do you handle data concurrency in .NET ?

In general, there are three common ways to manage concurrency in a database:



Pessimistic concurrency control: A row is unavailable to users from the time the record is fetched until it is updated in the database.
Optimistic concurrency control: A row is unavailable to other users only while the data is actually being updated. The update examines the row in the database and determines whether any changes have been made. Attempting to update a record that has already been changed results in a concurrency violation.
"Last in wins": A row is unavailable to other users only while the data is actually being updated. However, no effort is made to compare updates against the original record; the record is simply written out, potentially overwriting any changes made by other users since you last refreshed the records.
Pessimistic Concurrency

Pessimistic concurrency is typically used for two reasons. First, in some situations there is high contention for the same records. The cost of placing locks on the data is less than the cost of rolling back changes when concurrency conflicts occur.
Pessimistic concurrency is also useful for situations where it is detrimental for the record to change during the course of a transaction. A good example is an inventory application. Consider a company representative checking inventory for a potential customer. You typically want to lock the record until an order is generated, which would generally flag the item with a status of ordered and remove it from available inventory. If no order is generated, the lock would be released so that other users checking inventory get an accurate count of available inventory.
However, pessimistic concurrency control is not possible in a disconnected architecture. Connections are open only long enough to read the data or to update it, so locks cannot be sustained for long periods. Moreover, an application that holds onto locks for long periods is not scalable.


Optimistic Concurrency


In optimistic concurrency, locks are set and held only while the database is being accessed. The locks prevent other users from attempting to update records at the same instant. The data is always available except for the exact moment that an update is taking place. For more information, see Using Optimistic Concurrency.
When an update is attempted, the original version of a changed row is compared against the existing row in the database. If the two are different, the update fails with a concurrency error. It is up to you at that point to reconcile the two rows, using business logic that you create.


Last in Wins


With "last in wins," no check of the original data is made and the update is simply written to the database. It is understood that the following scenario can occur:



User A fetches a record from the database.
User B fetches the same record from the database, modifies it, and writes the updated record back to the database.
User A modifies the 'old' record and writes it back to the database.
In the above scenario, the changes User B made were never seen by User A. Be sure that this situation is acceptable if you plan to use the "last in wins" approach of concurrency control.


Concurrency Control in ADO.NET and Visual Studio


ADO.NET and Visual Studio use optimistic concurrency, because the data architecture is based on disconnected data. Therefore, you need to add business logic to resolve issues with optimistic concurrency.
If you choose to use optimistic concurrency, there are two general ways to determine if changes have occurred: the version approach (true version numbers or date-time stamps) and the saving-all-values approach.


The Version Number Approach


In the version number approach, the record to be updated must have a column that contains a date-time stamp or version number. The date-time stamp or a version number is saved on the client when the record is read. This value is then made part of the update.
One way to handle concurrency is to update only if the value in the WHERE clause matches the value in the record. The SQL representation of this approach is:



UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE DateTimeStamp = @origDateTimeStamp

Alternatively, the comparison can be made using the version number:


UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE RowVersion = @origRowVersionValue


If the date-time stamps or version numbers match, the record in the data store has not changed and can be safely updated with the new values from the dataset. An error is returned if they don't match. You can write code to implement this form of concurrency checking in Visual Studio. You will also have to write code to respond to any update conflicts. To keep the date-time stamp or version number accurate, you need to set up a trigger on the table to update it when a change to a row occurs.
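A sketch of wiring up such an update command by hand (the table and column names are hypothetical); the concurrency parameter is bound with SourceVersion set to Original so the WHERE clause compares against the value that was originally read:

SqlCommand updateCmd = new SqlCommand(
    "UPDATE Table1 SET Column1 = @newvalue1 WHERE RowVersion = @origRowVersionValue", conn);

// The new value comes from the current version of Column1 in the DataSet row.
updateCmd.Parameters.Add("@newvalue1", SqlDbType.NVarChar, 50, "Column1");

// The concurrency check uses the value read when the row was originally fetched.
SqlParameter origVersion = updateCmd.Parameters.Add("@origRowVersionValue", SqlDbType.Timestamp, 8, "RowVersion");
origVersion.SourceVersion = DataRowVersion.Original;

SqlDataAdapter adapter = new SqlDataAdapter("SELECT Column1, RowVersion FROM Table1", conn);
adapter.UpdateCommand = updateCmd;
// adapter.Update(dataSet, "Table1") now raises a concurrency error if RowVersion has changed.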


The Saving-All-Values Approach


An alternative to using a date-time stamp or version number is to get copies of all the fields when the record is read. The DataSet object in ADO.NET maintains two versions of each modified record: an original version (that was originally read from the data source) and a modified version, representing the user updates. When attempting to write the record back to the data source, the original values in the data row are compared against the record in the data source. If they match, it means that the database record has not changed since it was read. In that case, the changed values from the dataset are successfully written to the database.
The data adapter has a parameters collection for each of its four commands (DELETE, INSERT, SELECT, and UPDATE). Each command has parameters for the original values as well as the current (or modified) values.
The following example shows the command text for a dataset command that updates a typical Customers table. The command is specified for dynamic SQL and optimistic concurrency.



UPDATE Customers SET CustomerID = @currCustomerID, CompanyName = @currCompanyName, ContactName = @currContactName, ContactTitle = @currContactTitle, Address = @currAddress, City = @currCity, PostalCode = @currPostalCode, Phone = @currPhone, Fax = @currFax
WHERE (CustomerID = @origCustomerID)
AND (Address = @origAddress OR @origAddress IS NULL AND Address IS NULL)
AND (City = @origCity OR @origCity IS NULL AND City IS NULL)
AND (CompanyName = @origCompanyName OR @origCompanyName IS NULL AND CompanyName IS NULL)
AND (ContactName = @origContactName OR @origContactName IS NULL AND ContactName IS NULL)
AND (ContactTitle = @origContactTitle OR @origContactTitle IS NULL AND ContactTitle IS NULL)
AND (Fax = @origFax OR @origFax IS NULL AND Fax IS NULL)
AND (Phone = @origPhone OR @origPhone IS NULL AND Phone IS NULL)
AND (PostalCode = @origPostalCode OR @origPostalCode IS NULL AND PostalCode IS NULL);
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address, City, PostalCode, Phone, Fax FROM Customers WHERE (CustomerID = @currCustomerID)

Note that the nine SET statement parameters represent the current values that will be written to the database, whereas the nine WHERE statement parameters represent the original values that are used to locate the original record.
The first nine parameters in the SET statement correspond to the first nine parameters in the parameters collection. These parameters would have their SourceVersion property set to Current.
The next nine parameters in the WHERE statement are used for optimistic concurrency. These placeholders would correspond to the next nine parameters in the parameters collection, and each of these parameters would have their SourceVersion property set to Original.
The SELECT statement is used to refresh the dataset after the update has occurred. It is generated when you set the Refresh the DataSet option in the Advanced SQL Generations Options dialog box.


17. What are relation objects in dataset and how & where to use them?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table. Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.

The following code example creates a DataRelation using two DataTable objects in a DataSet. Each DataTable contains a column named CustID, which serves as a link between the two DataTable objects. The example adds a single DataRelation to the Relations collection of the DataSet. The first argument in the example specifies the name of the DataRelation being created. The second argument sets the parent DataColumn and the third argument sets the child DataColumn.

custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


18. Difference between OLEDB Provider and SqlClient ?

The SqlClient .NET classes are highly optimized for the .NET / SQL Server combination and achieve optimal results. The SqlClient data provider is fast. It's faster than the Oracle provider, and faster than accessing the database via the OleDb layer. It's faster because it talks to SQL Server natively (which automatically gives you better performance), and it was written with lots of help from the SQL Server team.

19. What are the different namespaces used in a project to connect to a database? What data providers are available in .NET to connect to databases?

Following are different Namespaces:



System.Data.OleDb - classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient - classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


20. What is Data Reader?

You can use the ADO.NET DataReader to retrieve a read-only, forward-only stream of data from a database. Using the DataReader can increase application performance and reduce system overhead because only one row at a time is ever in memory.
After creating an instance of the Command object, you create a DataReader by calling Command.ExecuteReader to retrieve rows from a data source, as shown in the following example.



SqlDataReader myReader = myCommand.ExecuteReader();



You use the Read method of the DataReader object to obtain a row from the results of the query.


while (myReader.Read())
{
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
}
myReader.Close();


21. What is Data Set?

The DataSet is a memory-resident representation of data that provides a consistent relational programming model regardless of the data source. It can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet represents a complete set of data including related tables, constraints, and relationships among the tables. The methods and objects in a DataSet are consistent with those in the relational database model. The DataSet can also persist and reload its contents as XML and its schema as XML Schema definition language (XSD) schema.

22. What is Data Adapter?

The DataAdapter serves as a bridge between a DataSet and a data source for retrieving and saving data. The DataAdapter provides this bridge by mapping Fill, which changes the data in the DataSet to match the data in the data source, and Update, which changes the data in the data source to match the data in the DataSet. If you are connecting to a Microsoft SQL Server database, you can increase overall performance by using the SqlDataAdapter along with its associated SqlCommand and SqlConnection. For other OLE DB-supported databases, use the DataAdapter with its associated OleDbCommand and OleDbConnection objects.

23. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

Fill() method is used to load the generated data set with Data.

24. Explain different methods and Properties of DataReader which you have used in your project?

Following are the methods and properties:

Read
GetString
GetInt32

while (myReader.Read())
{
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
}
myReader.Close();


25. What happens when we issue Dataset.ReadXml command?

Reads XML schema and data into the DataSet

26. What Method checks if a datareader is closed or opened?

The IsClosed property (not a method) of the DataReader indicates whether it is closed or open.

27. What methods get the XML and the schema from a DataSet?

GetXml() returns the XML representation of the DataSet, and GetXmlSchema() returns its XSD schema.

28. Differences between dataset.clone and dataset.copy?

The Difference is as follows:

Clone - Copies the structure of the DataSet, including all DataTable schemas, relations, and constraints. Does not copy any data.
Copy - Copies both the structure and data for this DataSet.


29. What are the differences between the Recordset and the DataSet objects?

Tables represented by the object: The ADO Recordset object represents only one table at a given time, while the DataSet object in ADO.NET can represent any number of tables, keys, constraints, and relations, which makes it very much like an RDBMS.
Navigation: Navigating the Recordset object depends on the cursor used to create the object, with limited functionality for moving back and forth, while the DataSet represents data in collections that can be accessed through indexers in a random-access fashion.
Connection Model: The Recordset is designed to work as a connected object with a server-side cursor in mind, while the DataSet is designed to work as a disconnected object containing a hierarchy of data in XML format.
Database Updates: Updating a database through a Recordset object is direct, since it is tied to the database. On the other hand, the DataSet, as an independent data store, must use a database-specific DataAdapter object to post updates to the database.


30. Which ADO.Net objects fall under connected database model and disconnected database model?

The DataReader object falls under the connected model, while the DataSet, DataTable, and DataAdapter objects fall under the disconnected database model.


31. How to use ImportRow method?

The ImportRow method of DataTable copies a row into a DataTable with all of the property settings and data of the row. It creates a new row that matches the destination table's schema and preserves the source row's original and current values as well as its DataRowState (unlike Rows.Add, which always sets the state to Added).


DataTable dt = new DataTable();     // fill this table from a data source before use
DataTable copyto = dt.Clone();      // Clone copies the schema only, so imported rows fit

foreach (DataRow dr in dt.Rows)
{
    copyto.ImportRow(dr);           // copies values, row versions and row state
}


32. What are the pros and cons of using DataReader object?

The DataReader object is a forward-only resultset and is faster to traverse than its counterpart, the DataTable. However, it holds an active connection to the database until all records are retrieved from it or it is closed explicitly. This can be a problem when the resultset holds a large number of records and the application has many concurrent users.

33. What are different execute methods of ADO.NET command object?

ExecuteScalar returns a single value: the first column of the first row of the resultset obtained from executing the SQL query.
ExecuteNonQuery executes a DML SQL query such as an insert, delete, or update and returns the number of rows affected by the action.
ExecuteReader returns a DataReader object, which is a forward-only resultset.
ExecuteXmlReader is available for SQL Server 2000 or later. Upon execution it builds an XmlReader object from the query results (typically a SELECT ... FOR XML query).


34. What is the difference between data reader and data adapter?

The DataReader is a forward-only, read-only cursor. If you access data through a DataReader, it can display the data on a web form or control, but you cannot implement paging over those records (because it is forward-only).

The DataReader is the best fit for simply showing data, where there is no need to work on the data afterwards.

The DataAdapter not only connects to the database (through a Command object); it provides four command types (InsertCommand, UpdateCommand, DeleteCommand, SelectCommand) and supports the disconnected architecture of .NET, so it can populate records into a DataSet. The DataAdapter is the best fit when you need to work on the data.


35. Difference between SqlCommand and SqlCommandBuilder?

SqlCommand is used to retrieve or update data in the database.

You can use SELECT, INSERT, UPDATE, and DELETE statements with a SqlCommand; the SqlCommand executes these commands against the database.

SqlCommandBuilder is used to build the INSERT, UPDATE, and DELETE commands for a DataAdapter automatically, based on its SELECT command.


36. Can you edit data in the Repeater control?

NO.

37. What are the different rowversions available?

There are four types of Rowversions.
Current:
The current values for the row. This row version does not exist for rows with a RowState of Deleted.
Default:
The default version for the current DataRowState. For a DataRowState value of Added, Modified, or Unchanged, the default version is Current. For a DataRowState of Deleted, the version is Original. For a DataRowState value of Detached, the version is Proposed.
Original:
The row contains its original values.
Proposed:
The proposed values for the row. This row version exists during an edit operation on a row, or for a row that is not part of a DataRowCollection


38. Explain DataSet.AcceptChanges and DataAdapter.Update methods?

The DataSet maintains the RowState of each row within a table. As a DataSet is loaded, every row's state is Unchanged. Whenever a particular row within a DataTable is modified, the DataSet changes that row's state to Modified, Added, or Deleted, based on the action performed on the row.

The AcceptChanges() method changes the row states back to Unchanged.

The DataAdapter's Update() method writes any changes made to the DataSet back to the database. It checks the row state of each row within a table: if it finds a row in the Added state, that row is inserted; if the row is Modified, it is updated; if it is Deleted, a delete statement is executed.

But if AcceptChanges() is called before Update(), nothing will be written to the database, since every row state has already become Unchanged.


39. How you will set the datarelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the DataRelation object's ParentKeyConstraint. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.

40. What connections does Microsoft SQL Server support?

Windows Authentication (via Active Directory) and SQL Server authentication (via SQL Server usernames and passwords).



41. Which one is trusted and which one is untrusted?

Windows Authentication is trusted because the username and password are checked against Active Directory; SQL Server authentication is untrusted, since SQL Server is the only verifier participating in the transaction.
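For example, the two styles show up in the connection string (server and credential values below are placeholders):

// Trusted connection: Windows Authentication checked against Active Directory.
"Data Source=MyServer;Initial Catalog=Northwind;Integrated Security=SSPI;"

// Untrusted connection: SQL Server authentication with a SQL Server login.
"Data Source=MyServer;Initial Catalog=Northwind;User ID=AppUser;Password=Secret1;"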

42. What is Connection pooling?

The connection represents an open and unique link to a data source. In a distributed system, this often involves a network connection. Depending on the underlying data source, the programming interface of the various connection objects may differ quite a bit. A connection object is specific to a particular type of data source, such as SQL Server and Oracle. Connection objects can't be used interchangeably across different data sources, but all share a common set of methods and properties grouped in the IDbConnection interface.
In ADO.NET, connection objects are implemented within data providers as sealed classes (that is, they are not further inheritable). This means that the behavior of a connection class can never be modified or overridden, just configured through properties and attributes. In ADO.NET, all connection classes support connection pooling, although each class may implement it differently. Connection pooling is implicit, meaning that you don't need to enable it because the provider manages this automatically.
ADO.NET pools connections that share the same configuration (connection string). It can maintain more than one pool (actually, one for each configuration). An interesting note: connection pooling is utilized by default unless otherwise specified. If you close and dispose of all connections, then there will be no pool (since there are no available connections).
While leaving database connections continuously open can be troublesome, it can be advantageous for applications that are in constant communication with a database by negating the need to re-open connections. Some database administrators may frown on the practice since multiple connections (not all of which may be useful) to the database are open. Using connection pooling depends upon available server resources and application requirements (i.e., does it really need it).

Using connection pooling

Connection pooling is enabled by default. You may override the default behavior with the pooling setting in the connection string. The following SQL Server connection string does not utilize connection pooling:
Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Pooling=False;
You can use the same approach with other .NET Data Providers. You may enable it by setting it to True (or eliminating the Pooling variable to use the default behavior). In addition, the default size of the connection pool is 100, but you may override this as well with connection string variables. You may use the following variables to control the minimum and maximum size of the pool as well as transaction support:


• Max Pool Size: The maximum number of connections allowed in the pool. The default value is 100.
• Min Pool Size: The minimum number of connections allowed in the pool. The default value is zero.
• Enlist: Signals whether the pooler automatically enlists the connection in the creation thread's current transaction context. The default value is true.


The following SQL Server connection string uses connection pooling with a minimum size of five and a maximum size of 50:


Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Max Pool Size=50;
Min Pool Size=5;Pooling=True;


43. What are the two fundamental objects in ADO.NET ?

The DataReader and the DataSet are the two fundamental objects in ADO.NET.

44. What is the use of connection object ?

A connection object is used to connect to a data source and is required by a Command object in order to execute against that data source.

An OleDbConnection object is used with an OLE DB provider
A SqlConnection object uses Tabular Data Stream (TDS) with Microsoft SQL Server


45. What are the various objects in Dataset ?

A DataSet has a collection of DataTable objects within its Tables collection. Each DataTable object contains a collection of DataRow objects and a collection of DataColumn objects. There are also collections for the primary keys, constraints, and default values used in the table (the constraints collection), and for the parent and child relationships between the tables. Finally, there is a DefaultView object for each table. This is used to create a DataView based on the table, so that the data can be searched, filtered, or otherwise manipulated while being displayed.


46. How can we force the connection object to close after my datareader is closed ?

The Command object's ExecuteReader method takes a CommandBehavior parameter, through which we can specify that the connection should be closed automatically when the DataReader is closed:
pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection)


47. How can we get only schema using dataReader?

pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.SchemaOnly)

48. Explain how to use stored procedures with ADO.net?

Using Stored Procedures with a Command


Stored procedures offer many advantages in data-driven applications. Using stored procedures, database operations can be encapsulated in a single command, optimized for best performance, and enhanced with additional security. While a stored procedure can be called by simply passing the stored procedure name followed by parameter arguments as an SQL statement, using the Parameters collection of the ADO.NET Command object enables you to more explicitly define stored procedure parameters as well as to access output parameters and return values.
To call a stored procedure, set the CommandType of the Command object to StoredProcedure. Once the CommandType is set to StoredProcedure, you can use the Parameters collection to define parameters, as in the following example.
Note: The OdbcCommand requires that you supply the full ODBC CALL syntax when calling a stored procedure.


SqlClient
[Visual Basic]
Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As SqlCommand = New SqlCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As SqlParameter = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As SqlDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");

SqlCommand salesCMD = new SqlCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

SqlParameter myParm = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

SqlDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


OleDb
[Visual Basic]
Dim nwindConn As OleDbConnection = New OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As OleDbCommand = New OleDbCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OleDbParameter = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As OleDbDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


OleDbConnection nwindConn = new OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" +
"Initial Catalog=northwind");

OleDbCommand salesCMD = new OleDbCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter myParm = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

OleDbDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


Odbc
[Visual Basic]


Dim nwindConn As OdbcConnection = New OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" & _
"Database=northwind")
nwindConn.Open()

Dim salesCMD As OdbcCommand = New OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OdbcParameter = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15)
myParm.Value = "Beverages"

Dim myReader As OdbcDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()
[C#]
OdbcConnection nwindConn = new OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" +
"Database=northwind");
nwindConn.Open();

OdbcCommand salesCMD = new OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter myParm = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15);
myParm.Value = "Beverages";

OdbcDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


A Parameter object can be created using the Parameter constructor, or by calling the Add method of the Parameters collection of a Command. Parameters.Add will take as input either constructor arguments or an existing Parameter object. When setting the Value of a Parameter to a null reference, use DBNull.Value.
For parameters other than Input parameters, you must set the ParameterDirection property to specify whether the parameter type is InputOutput, Output, or ReturnValue. The following example shows the difference between creating Input, Output, and ReturnValue parameters.


[Visual Basic]


Dim sampleCMD As SqlCommand = New SqlCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As SqlParameter = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As SqlDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


SqlCommand sampleCMD = new SqlCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

SqlParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

SqlDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);
OleDb


[Visual Basic]


Dim sampleCMD As OleDbCommand = New OleDbCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OleDbParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OleDbDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OleDbCommand sampleCMD = new OleDbCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OleDbDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Odbc
[Visual Basic]
Dim sampleCMD As OdbcCommand = New OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OdbcParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OdbcDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OdbcCommand sampleCMD = new OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OdbcDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
    Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Using Parameters with a SqlCommand


When using parameters with a SqlCommand, the names of the parameters added to the Parameters collection must match the names of the parameter markers in your stored procedure. The .NET Framework Data Provider for SQL Server treats parameters in the stored procedure as named parameters and searches for the matching parameter markers.
The .NET Framework Data Provider for SQL Server does not support the question mark (?) placeholder for passing parameters to an SQL statement or a stored procedure. In this case, you must use named parameters, as in the following example.
SELECT * FROM Customers WHERE CustomerID = @CustomerID
Using Parameters with an OleDbCommand or OdbcCommand
When using parameters with an OleDbCommand or OdbcCommand, the order of the parameters added to the Parameters collection must match the order of the parameters defined in your stored procedure. The .NET Framework Data Provider for OLE DB and the .NET Framework Data Provider for ODBC treat parameters in a stored procedure as placeholders and apply the parameter values in order. In addition, return value parameters must be the first parameters added to the Parameters collection.
The .NET Framework Data Provider for OLE DB and .NET Framework Data Provider for ODBC do not support named parameters for passing parameters to an SQL statement or a stored procedure. In this case, you must use the question mark (?) placeholder, as in the following example.
SELECT * FROM Customers WHERE CustomerID = ?
As a result, the order in which Parameter objects are added to the Parameters collection must directly correspond to the position of the question mark placeholder for the parameter.
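To make the two conventions concrete, here is a small illustrative sketch (the connection objects sqlConn and oleConn are assumptions and are assumed to already exist):


[C#]


// SqlClient uses named parameters; the name must match the marker in the command text.
SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Customers WHERE CustomerID = @CustomerID", sqlConn);
sqlCmd.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = "ALFKI";

// OleDb (and Odbc) use positional placeholders; values are applied in the
// order the parameters are added to the Parameters collection.
OleDbCommand oleCmd = new OleDbCommand("SELECT * FROM Customers WHERE CustomerID = ?", oleConn);
oleCmd.Parameters.Add("CustomerID", OleDbType.Char, 5).Value = "ALFKI";
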
Deriving Parameter Information
Parameters can also be derived from a stored procedure using the CommandBuilder class. Both the SqlCommandBuilder and OleDbCommandBuilder classes provide a static method, DeriveParameters, which will automatically populate the Parameters collection of a Command object with parameter information from a stored procedure. Note that DeriveParameters will overwrite any existing parameter information for the Command.
Deriving parameter information does require an added trip to the data source for the information. If parameter information is known at design-time, you can improve the performance of your application by setting the parameters explicitly.
The following code example shows how to populate the Parameters collection of a Command object using CommandBuilder.DeriveParameters.


[Visual Basic]


Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;")
Dim salesCMD As SqlCommand = New SqlCommand("Sales By Year", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

nwindConn.Open()
SqlCommandBuilder.DeriveParameters(salesCMD)
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;");
SqlCommand salesCMD = new SqlCommand("Sales By Year", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

nwindConn.Open();
SqlCommandBuilder.DeriveParameters(salesCMD);
nwindConn.Close();


49. How can we fine-tune the command object when we are expecting a single row or a single value?

The CommandBehavior enumeration provides the values SingleResult and SingleRow. If you expect the query to return only a single result set, pass CommandBehavior.SingleResult to ExecuteReader; if you expect only a single row, pass CommandBehavior.SingleRow. The provider can then optimize execution accordingly.
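For example, a minimal sketch (assuming nwindConn is an open SqlConnection, as in the earlier examples):


[C#]


// Hint the provider that at most one row is expected.
SqlCommand singleRowCMD = new SqlCommand("SELECT CompanyName FROM Customers WHERE CustomerID = @CustomerID", nwindConn);
singleRowCMD.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = "ALFKI";

using (SqlDataReader rdr = singleRowCMD.ExecuteReader(CommandBehavior.SingleRow))
{
    if (rdr.Read())
        Console.WriteLine(rdr.GetString(0));
}
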

50. How can you obtain data as XML from SQL Server?

[Visual Basic]


Dim custCMD As SqlCommand = New SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn)
Dim myXR As System.Xml.XmlReader = custCMD.ExecuteXmlReader()


[C#]


SqlCommand custCMD = new SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn);
System.Xml.XmlReader myXR = custCMD.ExecuteXmlReader();
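

The ExecuteXmlReader call returns a System.Xml.XmlReader positioned on the FOR XML stream. A minimal, illustrative way to consume and then release it (continuing the C# snippet above) is:


[C#]


// Iterate the XML returned by FOR XML AUTO, ELEMENTS and echo element names.
while (myXR.Read())
{
    if (myXR.NodeType == System.Xml.XmlNodeType.Element)
        Console.WriteLine(myXR.Name);
}
myXR.Close();        // release the reader so the connection can be reused
nwindConn.Close();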


51. How to add Existing Constraints to a DataSet?

The Fill method of the DataAdapter fills a DataSet only with table columns and rows from a data source; though constraints are commonly set by the data source, the Fill method does not add this schema information to the DataSet by default. To populate a DataSet with existing primary key constraint information from a data source, you can either call the FillSchema method of the DataAdapter, or set the MissingSchemaAction property of the DataAdapter to AddWithKey before calling Fill. This ensures that primary key constraints in the DataSet reflect those at the data source. Foreign key constraint information is not included and must be created explicitly.

Adding schema information to a DataSet before filling it with data ensures that primary key constraints are included with the DataTable objects in the DataSet. As a result, when additional calls to Fill the DataSet are made, the primary key column information is used to match new rows from the data source with current rows in each DataTable, and current data in the tables is overwritten with data from the data source. Without the schema information, the new rows from the data source are appended to the DataSet, resulting in duplicate rows.

Using FillSchema or setting the MissingSchemaAction to AddWithKey requires extra processing at the data source to determine primary key column information. This additional processing can hinder performance. If you know the primary key information at design time, it is recommended that you specify the primary key column or columns explicitly in order to achieve optimal performance.



[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.FillSchema(custDS, SchemaType.Source, "Customers")
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.FillSchema(custDS, SchemaType.Source, "Customers");
custDA.Fill(custDS, "Customers");

[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey;
custDA.Fill(custDS, "Customers");


52. How to add relation between tables?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table.
Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.



[Visual Basic]
custDS.Relations.Add("CustOrders", _
custDS.Tables("Customers").Columns("CustID"), _
custDS.Tables("Orders").Columns("CustID"))

[C#]
custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


53. How to get the data changes in dataset?

GetChanges : Gets a copy of the DataSet containing all changes made to it since it was last loaded, or since AcceptChanges was called.



[Visual Basic]
Private Sub UpdateDataSet(ByVal myDataSet As DataSet)
    ' Check for changes with the HasChanges method first.
    If Not myDataSet.HasChanges(DataRowState.Modified) Then Exit Sub
    ' Create temporary DataSet variable.
    Dim xDataSet As DataSet
    ' GetChanges for modified rows only.
    xDataSet = myDataSet.GetChanges(DataRowState.Modified)
    ' Check the DataSet for errors.
    If xDataSet.HasErrors Then
        ' Insert code to resolve errors.
    End If
    ' After fixing errors, update the data source with the DataAdapter
    ' used to create the DataSet.
    myOleDbDataAdapter.Update(xDataSet)
End Sub



[C#]
private void UpdateDataSet(DataSet myDataSet)
{
    // Check for changes with the HasChanges method first.
    if (!myDataSet.HasChanges(DataRowState.Modified)) return;
    // GetChanges for modified rows only.
    DataSet xDataSet = myDataSet.GetChanges(DataRowState.Modified);
    // Check the DataSet for errors.
    if (xDataSet.HasErrors)
    {
        // Insert code to resolve errors.
    }
    // After fixing errors, update the data source with the DataAdapter
    // used to create the DataSet.
    myOleDbDataAdapter.Update(xDataSet);
}


54. What are the various methods provided by the dataset object to generate XML?

ReadXml : Reads an XML document into the DataSet.
GetXml : Returns a string containing the DataSet's data as XML.
WriteXml : Writes the DataSet's XML data to a file or stream.
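
A small illustrative sketch of these methods (custDS is assumed to be a filled DataSet; the file name is arbitrary):


[C#]


string xml = custDS.GetXml();        // the DataSet's data as an XML string
custDS.WriteXml("Customers.xml");    // write the XML data to disk

DataSet newDS = new DataSet();
newDS.ReadXml("Customers.xml");      // load the XML document back into a DataSet
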


55. What is Dataview and what’s the use of Dataview?

Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation. A major function of the DataView is to allow data binding on both Windows Forms and Web Forms.

The DataView has four main methods:
Find
Takes an array of values (matching the Sort key) and returns the index of the first matching row.
FindRows
Also takes an array of values, but returns a collection of DataRowView objects.
To manipulate the data of a DataTable, create a DataView for it (for example, through the table's DefaultView property) and use the following methods:
AddNew
Adds a new row to the DataView.
Delete
Deletes the specified row from the DataView.

Additionally, a DataView can be customized to present a subset of data from the DataTable. This capability allows you to have two controls bound to the same DataTable, but showing different versions of the data. For example, one control may be bound to a DataView showing all of the rows in the table, while a second may be configured to display only the rows that have been deleted from the DataTable. The DataTable also has a DefaultView property which returns the default DataView for the table. For example, if you wish to create a custom view on the table, set the RowFilter on the DataView returned by the DefaultView.

To create a filtered and sorted view of data, set the RowFilter and Sort properties. Then use the Item property to return a single DataRowView.

You can also add and delete rows using the AddNew and Delete methods. When you use those methods, the RowStateFilter property can be set to specify that only deleted rows or new rows be displayed by the DataView.
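
A short illustrative sketch of a filtered, sorted DataView (the table and column names are assumptions based on the Northwind examples above):


[C#]


DataView custView = custDS.Tables["Customers"].DefaultView;
custView.RowFilter = "Country = 'Germany'";   // show only a subset of the rows
custView.Sort = "CompanyName ASC";            // sort the filtered view

// Find searches on the Sort key and returns the row index, or -1 if not found.
int i = custView.Find("Alfreds Futterkiste");
if (i >= 0)
    Console.WriteLine(custView[i]["ContactName"]);
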



56. What is CommandBuilder?

The CommandBuilder relieves you of the responsibility of writing your own action queries: given a SelectCommand, it automatically constructs the INSERT, UPDATE, and DELETE SQL, the corresponding ADO.NET Command objects, and their associated Parameters collections.



The CommandBuilder expects you to provide a viable, executable, and simple SelectCommand associated with a DataAdapter. It also expects a viable Connection. That's because the CommandBuilder opens the Connection associated with the DataAdapter and makes a round trip to the server each and every time it's asked to construct the action queries. It closes the Connection when it's done.



[Visual Basic]

Dim cn As SqlConnection
Dim da As SqlDataAdapter
Dim cb As SqlCommandBuilder
cn = New SqlConnection("data source=demoserver…")
da = New SqlDataAdapter("SELECT Au_ID, au_lname, City FROM authors", cn)
cb = New SqlCommandBuilder(da)
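
Once the CommandBuilder is attached to the DataAdapter, the derived action queries are used automatically when Update is called. A hedged C# sketch (the connection string, table, and column names mirror the example above and are illustrative only):


[C#]


SqlConnection cn = new SqlConnection("data source=demoserver…");   // illustrative, as above
SqlDataAdapter da = new SqlDataAdapter("SELECT Au_ID, au_lname, City FROM authors", cn);
SqlCommandBuilder cb = new SqlCommandBuilder(da);

DataSet ds = new DataSet();
da.Fill(ds, "Authors");
ds.Tables["Authors"].Rows[0]["City"] = "Oakland";   // modify a row locally
da.Update(ds, "Authors");                           // CommandBuilder supplies the UPDATE statement
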


57. What's the difference between optimistic locking and pessimistic locking?

In pessimistic locking, when a user wants to update data, the record is locked until the update completes, and no other user can update it in the meantime; other users can only view the data.


In optimistic locking, multiple users can open the same record for updating, which maximizes concurrency. The record is locked only at the moment it is actually being updated. This is the preferred approach in practice: browser-based applications are very common nowadays, and pessimistic locking is rarely a practical solution for them.


The basic difference between optimistic and pessimistic locking is the time at which the lock on a row or page occurs. A pessimistic lock is enforced while the row is being edited, whereas an optimistic lock occurs only at the time the row is being updated. The time between an edit and an update can be very short, but pessimistic locking allows the database provider to prevent one user from overwriting changes that another user made in the meantime. There is no such provision under optimistic locking, and the last user to perform the update wins.


58. How to implement pessimistic locking?

The basic steps for pessimistic locking are as follows (a short illustrative sketch follows the list):

Create a transaction with an IsolationLevel of RepeatableRead.
Set the DataAdapter’s SelectCommand property to use the transaction you created.
Make the changes to the data.
Set DataAdapter’s Insert, Update, and Delete command properties to use the transaction you created.
Call the DataAdapter’s Update method.
Commit the transaction.
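
A minimal C# sketch of these steps, assuming conn is a SqlConnection, da is a SqlDataAdapter whose SelectCommand and action commands are already configured, and ds is the DataSet being edited (all names illustrative):


[C#]


conn.Open();

// 1. Create a transaction with RepeatableRead isolation so the rows read
//    inside the transaction remain locked until it completes.
SqlTransaction tran = conn.BeginTransaction(IsolationLevel.RepeatableRead);

// 2. Enlist the SelectCommand in the transaction and read the data.
da.SelectCommand.Transaction = tran;
da.Fill(ds, "Customers");

// 3. Make the changes to the data.
ds.Tables["Customers"].Rows[0]["CompanyName"] = "New Name";

// 4. Enlist the action commands in the same transaction.
da.InsertCommand.Transaction = tran;
da.UpdateCommand.Transaction = tran;
da.DeleteCommand.Transaction = tran;

// 5. Push the changes back to the database, then 6. commit.
da.Update(ds, "Customers");
tran.Commit();
conn.Close();
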


59. How to use transactions in ADO.net?

Transactions are a feature offered by most enterprise-class databases for making sure data integrity is maintained when data is modified. A transaction at its most basic level consists of two required steps—Begin, and then either Commit or Rollback. The Begin call defines the start of the transaction boundary, and the call to either Commit or Rollback defines the end of it. Within the transaction boundary, all of the statements executed are considered to be part of a unit for accomplishing the given task, and must succeed or fail as one. Commit (as the name suggests) commits the data modifications if everything was successful, and Rollback undoes the data modifications if an error occurs. All of the .NET data providers provide similar classes and methods to accomplish these operations.



The ADO.NET data providers offer transaction functionality through the Connection, Command, and Transaction classes. A typical transaction would follow a process similar to this:



Open the transaction using Connection.BeginTransaction().
Enlist statements or stored procedure calls in the transaction by setting the Command.Transaction property of the Command objects associated with them.
Depending on the provider, optionally use Transaction.Save() or Transaction.Begin() to create a savepoint or a nested transaction to enable a partial rollback.
Commit or roll back the transaction using Transaction.Commit() or Transaction.Rollback().


using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;

…public void SPTransaction(int partID, int numberMoved, int siteID)
{
    // Create and open the connection.
    SqlConnection conn = new SqlConnection();
    string connString = "Server=SqlInstance;Database=Test;"
        + "Integrated Security=SSPI";
    conn.ConnectionString = connString;
    conn.Open();

    // Create the commands and related parameters.
    // cmdDebit debits inventory from the WarehouseInventory
    // table by calling the DebitWarehouseInventory
    // stored procedure.
    SqlCommand cmdDebit = new SqlCommand("DebitWarehouseInventory", conn);
    cmdDebit.CommandType = CommandType.StoredProcedure;
    cmdDebit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
    cmdDebit.Parameters["@PartID"].Direction = ParameterDirection.Input;
    cmdDebit.Parameters.Add("@Debit", SqlDbType.Int, 0, "Quantity");
    cmdDebit.Parameters["@Debit"].Direction = ParameterDirection.Input;

    // cmdCredit adds inventory to the SiteInventory
    // table by calling the CreditSiteInventory
    // stored procedure.
    SqlCommand cmdCredit = new SqlCommand("CreditSiteInventory", conn);
    cmdCredit.CommandType = CommandType.StoredProcedure;
    cmdCredit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
    cmdCredit.Parameters["@PartID"].Direction = ParameterDirection.Input;
    cmdCredit.Parameters.Add("@Credit", SqlDbType.Int, 0, "Quantity");
    cmdCredit.Parameters["@Credit"].Direction = ParameterDirection.Input;
    cmdCredit.Parameters.Add("@SiteID", SqlDbType.Int, 0, "SiteID");
    cmdCredit.Parameters["@SiteID"].Direction = ParameterDirection.Input;

    // Begin the transaction and enlist the commands.
    SqlTransaction tran = conn.BeginTransaction();
    cmdDebit.Transaction = tran;
    cmdCredit.Transaction = tran;

    try
    {
        // Execute the commands.
        cmdDebit.Parameters["@PartID"].Value = partID;
        cmdDebit.Parameters["@Debit"].Value = numberMoved;
        cmdDebit.ExecuteNonQuery();

        cmdCredit.Parameters["@PartID"].Value = partID;
        cmdCredit.Parameters["@Credit"].Value = numberMoved;
        cmdCredit.Parameters["@SiteID"].Value = siteID;
        cmdCredit.ExecuteNonQuery();

        // Commit the transaction.
        tran.Commit();
    }
    catch (SqlException ex)
    {
        // Roll back the transaction.
        tran.Rollback();

        // Additional error handling if needed.
    }
    finally
    {
        // Close the connection.
        conn.Close();
    }
}


60. What's the difference between DataSet.Clone and DataSet.Copy?

The Clone method of the DataSet class copies only the schema of a DataSet object. It returns a new DataSet object that has the same schema as the existing DataSet object, including all DataTable schemas, relations, and constraints. It does not copy any data from the existing DataSet object into the new DataSet.

The Copy method of the DataSet class copies both the structure and data of a DataSet object. It returns a new DataSet object having the same structure (including all DataTable schemas, relations, and constraints) and data as the existing DataSet object.
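
For example (custDS is assumed to be a filled DataSet from the earlier examples):


[C#]


DataSet schemaOnly = custDS.Clone();   // same tables, relations, and constraints, but no rows
DataSet fullCopy = custDS.Copy();      // same schema plus all of the data

Console.WriteLine(schemaOnly.Tables["Customers"].Rows.Count);   // 0
Console.WriteLine(fullCopy.Tables["Customers"].Rows.Count);     // same row count as custDS
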



61. Difference between OLEDB Provider and SqlClient ?

The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast: faster than the Oracle provider, and faster than accessing the database through the OleDb layer, because it talks to SQL Server directly over its native protocol rather than through an extra OLE DB layer, and it was written with considerable help from the SQL Server team.

62. What are the different namespaces used in the project to connect the database? What data providers available in .net to connect to database?

System.Data.OleDb – classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient – classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


63. How do you check whether a DataReader is open or closed?

Use the DataReader's IsClosed property. It returns true if the reader has been closed and false while it is still open.
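
For example (myReader is assumed to be a SqlDataReader from the earlier examples):


[C#]


// Close the reader only if it has not been closed already.
if (!myReader.IsClosed)
{
    myReader.Close();
}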
