Thursday, June 25, 2009

SQL Interview Questions with Answers

What is RDBMS?

Relational Database Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of
tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS can recombine the data items from different files, providing powerful tools for data usage.

What is normalization?

Database normalization is a data design and organization process applied to data structures, based on rules that help build relational databases. In relational database design, it is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and
defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are different normalization forms?

1NF: Eliminate Repeating Groups

Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.

2NF: Eliminate Redundant Data

If an attribute depends on only part of a multi-valued key, remove it to a separate table.

3NF: Eliminate Columns Not Dependent On Key

If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.

BCNF: Boyce-Codd Normal Form

If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.

4NF: Isolate Independent Multiple Relationships

No table may contain two or more 1:n or n:m relationships that are not directly related.

5NF: Isolate Semantically Related Multiple Relationships

There may be practical constraints on information that justify separating logically related many-to-many relationships.

ONF: Optimal Normal Form

A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.

DKNF: Domain-Key Normal Form

A model free from all modification anomalies.
Remember, these normalization guidelines are cumulative. For a database to be in
3NF, it must first fulfill all the criteria of a 2NF and 1NF database.

What is Stored Procedure?

A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and
improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
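For illustration, a minimal sketch of creating and calling a procedure (the table, column and procedure names here are hypothetical):
-- Create a simple procedure with one input parameter
CREATE PROCEDURE usp_GetEmployeesByDept
    @DeptId INT
AS
BEGIN
    SELECT EmployeeId, EmployeeName
    FROM Employees            -- assumed table
    WHERE DeptId = @DeptId
END
GO
-- Any client can now call it with its own input data
EXEC usp_GetEmployeesByDept @DeptId = 10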

What is Trigger?

A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed directly;
the DBMS automatically fires the trigger as a result of a data modification to the associated table.
Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the
procedure, while triggers are implicitly executed. In addition, triggers can also execute stored procedures.

Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
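A minimal sketch of an AFTER trigger (the Employees and EmployeeAudit tables are hypothetical):
-- Fires automatically whenever rows are deleted from Employees
CREATE TRIGGER trg_Employees_Delete
ON Employees
AFTER DELETE
AS
BEGIN
    -- "deleted" is the virtual table holding the removed rows
    INSERT INTO EmployeeAudit (EmployeeId, DeletedOn)
    SELECT EmployeeId, GETDATE()
    FROM deleted
END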

What is View?

A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data
in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using a
standard T-SQL SELECT command and can come from one to many different base tables or even other views.
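A minimal sketch (the Employees table and its columns are hypothetical):
-- The view stores only the query definition, not the result rows
CREATE VIEW vw_ActiveEmployees
AS
SELECT EmployeeId, EmployeeName
FROM Employees
WHERE IsActive = 1
GO
SELECT * FROM vw_ActiveEmployees   -- reads through to the base table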

What is Index?

An index is a physical structure containing pointers to the data. Indices are created in an existing table
to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of
a table, and each index is given a name. The users cannot see the indexes; they are just used to speed
up queries. Effective indexes are one of the best ways to improve performance in a database
application. A table scan happens when there is no index available to help a query. In a table scan, SQL
Server examines every row in the table to satisfy the query results. Table scans are sometimes
unavoidable, but on large tables, scans have a severe impact on performance.
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this
reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references
to the table itself.
What is the difference between a clustered and a non-clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically
stored. Therefore a table can have only one clustered index. The leaf nodes of a clustered index contain
the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match
the physically stored order of the rows on disk. The leaf nodes of a nonclustered index do not consist of
the data pages. Instead, the leaf nodes contain index rows.
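By way of illustration, a hedged sketch using a hypothetical Employees table:
-- Only one clustered index is allowed, since it dictates physical row order
CREATE CLUSTERED INDEX IX_Employees_Id ON Employees (EmployeeId)
-- Many nonclustered indexes are allowed; their leaf level holds index rows
CREATE NONCLUSTERED INDEX IX_Employees_Name ON Employees (EmployeeName)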
What are the different index configurations a table can have?
A table can have one of the following index configurations:
No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes
What is a cursor?
A cursor is a database object used by applications to manipulate data in a set on a row-by-row basis,
instead of the typical SQL commands that operate on all the rows in the set at one time.


In order to work with a cursor we need to perform some steps in the following order:
Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
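Putting those steps together, a minimal sketch (the Employees table is hypothetical):
DECLARE @Name VARCHAR(100)
DECLARE emp_cursor CURSOR FOR          -- 1. declare
    SELECT EmployeeName FROM Employees
OPEN emp_cursor                        -- 2. open
FETCH NEXT FROM emp_cursor INTO @Name  -- 3. fetch
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name                        -- 4. process the fetched row
    FETCH NEXT FROM emp_cursor INTO @Name
END
CLOSE emp_cursor                       -- 5. close
DEALLOCATE emp_cursor                  -- 6. deallocate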
What is the use of DBCC commands?
DBCC stands for Database Consistency Checker. We use these commands to check the consistency of
the databases, i.e., for maintenance, validation tasks and status checks.
E.g. DBCC CHECKDB - Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC - Checks that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables in the file group for any damage.
What is a Linked Server?
Linked Servers is a concept in SQL Server by which we can add another SQL Server to a group and query
both SQL Server dbs using T-SQL statements. With a linked server, you can create very clean, easy-to-follow
SQL statements that allow remote data to be retrieved, joined and combined with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
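A hedged sketch (the server name, credentials and remote Northwind database are hypothetical):
EXEC sp_addlinkedserver @server = 'REMOTESRV', @srvproduct = '',
     @provider = 'SQLOLEDB', @datasrc = 'remotehost'
EXEC sp_addlinkedsrvlogin @rmtsrvname = 'REMOTESRV', @useself = 'false',
     @rmtuser = 'remoteuser', @rmtpassword = 'remotepassword'
-- Query the remote table with a four-part name: server.database.owner.object
SELECT * FROM REMOTESRV.Northwind.dbo.Customers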

What is Collation?

Collation refers to a set of rules that determine how data is sorted and compared. Character data is
sorted using rules that define the correct character sequence, with options for specifying case sensitivity,
accent marks, kana character types and character width.
What are different type of Collation Sensitivity?
Case sensitivity
A and a, B and b, etc.
Accent sensitivity
a and á, o and ó, etc.
Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana
sensitive.
Width sensitivity
When a single-byte character (half-width) and the same character when represented as a double-byte
character (full-width) are treated differently then it is width sensitive.
What's the difference between a primary key and a unique key?
Both primary key and unique key enforce uniqueness of the column on which they are defined. But by
default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered
index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key
allows one NULL only.
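A minimal sketch of the difference (hypothetical table):
CREATE TABLE Employees (
    EmployeeId INT PRIMARY KEY,    -- clustered index by default, NULLs not allowed
    Email VARCHAR(100) UNIQUE      -- nonclustered index by default, one NULL allowed
)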
How to implement one-to-one, one-to-many and many-to-many relationships while
designing tables?
One-to-One relationship can be implemented as a single table and rarely as two tables with primary
and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and
foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables
forming the composite primary key of the junction table.
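For the many-to-many case, a hedged sketch with hypothetical Students and Courses tables:
-- Junction table: keys from both sides form the composite primary key
CREATE TABLE StudentCourses (
    StudentId INT NOT NULL REFERENCES Students (StudentId),
    CourseId  INT NOT NULL REFERENCES Courses (CourseId),
    PRIMARY KEY (StudentId, CourseId)
)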
What is a NOLOCK?


Using the NOLOCK query optimiser hint is generally considered good practice in order to improve
concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are
taken when data is read. The result is a Dirty Read, which means that another process could be
updating the data at the exact time you are reading it. There are no guarantees that your query will
retrieve the most recent data. The advantage to performance is that your reading of data will not block
updates from taking place, and updates will not block your reading of data. SELECT statements take
Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but
other processes are blocked from modifying the data. The updates will queue until all the reads have
completed, and reads requested after the update will wait for the updates to complete. The result to
your system is delay (blocking).
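Usage is simply a table hint on the SELECT (the Orders table is hypothetical):
-- Takes no shared locks; may return uncommitted (dirty) data
SELECT OrderId, OrderDate
FROM Orders WITH (NOLOCK)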
What is difference between DELETE & TRUNCATE commands?
The DELETE command removes rows from a table based on the condition that we provide in a WHERE
clause. TRUNCATE will actually remove all the rows from a table, and there will be no data in the table
after we run the TRUNCATE command.
TRUNCATE
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the
page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes
and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE is minimally logged, it cannot activate a trigger.
TRUNCATE cannot be rolled back unless it is issued inside an explicit transaction.
TRUNCATE is a DDL command.
TRUNCATE resets the identity of the table.
DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition
and its data, use the DROP TABLE statement.
DELETE can be used with or without a WHERE clause.
DELETE activates triggers.
DELETE can be rolled back.
DELETE is a DML command.
DELETE does not reset the identity of the table.
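Side by side, with a hypothetical Orders table:
DELETE FROM Orders WHERE OrderDate < '20080101'  -- row-by-row, fully logged, fires triggers
TRUNCATE TABLE Orders                            -- deallocates pages, resets the identity seed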
Difference between Function and Stored Procedure?
A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas
stored procedures cannot be.
UDFs that return tables can be treated as another rowset. This can be used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other
rowset operations.
When is the UPDATE STATISTICS command used?
This command is used after a large amount of data processing has occurred. If a large number of
deletions, modifications or bulk copies into the tables has occurred, the statistics need to be updated to
take these changes into account. UPDATE STATISTICS refreshes the statistics on the indexes of these tables
accordingly.
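Usage is straightforward (the table and index names are hypothetical):
UPDATE STATISTICS Employees                      -- refresh all statistics on the table
UPDATE STATISTICS Employees IX_Employees_Name    -- or just one index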
What types of Joins are possible with Sql Server?
Joins are used in queries to explain how different tables are related. Joins also let you select data from
a table depending upon data from another table.
Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT
OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.


What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?

HAVING specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT
statement and is typically used in a GROUP BY clause. When GROUP BY is not used, HAVING
behaves like a WHERE clause. The HAVING clause is basically used only with the GROUP BY clause in a
query, whereas the WHERE clause is applied to each row before the rows take part in the GROUP BY processing.
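A short illustration with a hypothetical Employees table:
-- WHERE filters rows before grouping; HAVING filters the groups afterwards
SELECT DeptId, COUNT(*) AS Headcount
FROM Employees
WHERE IsActive = 1
GROUP BY DeptId
HAVING COUNT(*) > 5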
What is sub-query? Explain properties of sub-query.
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed
arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of
parentheses. Sub-queries are generally used to return a single row as an atomic value, though they
may be used to compare values against multiple rows with the IN keyword.
A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT
statement, if executed independently of the T-SQL statement in which it is nested, will return a result
set; that is, a subquery SELECT statement can stand alone and is not dependent on the statement in
which it is nested. A subquery SELECT statement can return any number of values and can be found
in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a
T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery
can be used anywhere an expression can be used.
Properties of Sub-Query
A subquery must be enclosed in parentheses.
A subquery must be placed on the right-hand side of the comparison operator.
A subquery cannot contain an ORDER BY clause.
A query can contain more than one subquery.
What are types of sub-queries?
Single-row subquery, where the subquery returns only one row;
Multiple-row subquery, where the subquery returns multiple rows; and
Multiple-column subquery, where the subquery returns multiple columns.
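Hedged examples of the first two types, using hypothetical Employees and Departments tables:
-- Single-row subquery returning an atomic value
SELECT EmployeeName FROM Employees
WHERE Salary = (SELECT MAX(Salary) FROM Employees)
-- Multiple-row subquery used with IN
SELECT EmployeeName FROM Employees
WHERE DeptId IN (SELECT DeptId FROM Departments WHERE Location = 'NYC')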
What is SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of
Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to
analyze later. For example, you can monitor a production environment to see which stored procedures
are hampering performance by executing too slowly.
Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too
large, you can filter them based on the information you want, so that only a subset of the event data is
collected. Monitoring too many events adds overhead to the server and the monitoring process and can
cause the trace file or trace table to grow very large, especially when the monitoring process takes
place over a long period of time.
What are User-Defined Functions?
User-Defined Functions allow users to define their own T-SQL functions that can accept zero or more parameters
and return a single scalar data value or a table data type.
What kind of User-Defined Functions can be created?
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-
Valued and Multi-statement Table-valued.
Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp
data types are not supported. These are the type of user-defined functions that most developers are
used to in other programming languages. You pass in 0 to many parameters and you get a return
value.


Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative
to a view as the user-defined function can pass parameters into a T-SQL select command and in
essence provide us with a parameterized, non-updateable view of the underlying tables.
Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional
alternative to a view as the function can support multiple T-SQL statements to build the final result
where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL
select command, or a group of them, gives us the capability to in essence create a parameterized,
non-updateable view of the data in the underlying tables. Within the create function command you
must define the table structure that is being returned. After creating this type of user-defined function,
it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored
procedure, which can also return record sets.
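A minimal sketch of an inline table-valued function (all names are hypothetical):
CREATE FUNCTION dbo.fn_EmployeesByDept (@DeptId INT)
RETURNS TABLE
AS
RETURN (
    SELECT EmployeeId, EmployeeName
    FROM Employees           -- assumed table
    WHERE DeptId = @DeptId
)
GO
-- Unlike a stored procedure, the function can sit in a FROM clause
SELECT * FROM dbo.fn_EmployeesByDept(10)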
Which TCP/IP port does SQL Server run on? How can it be changed?
SQL Server runs on port 1433 by default. It can be changed from the Server Network Utility's TCP/IP properties –> Port
number, on both the client and the server.
What are the authentication modes in SQL Server? How can it be changed?
Windows mode and mixed mode (SQL & Windows).
To change authentication mode in SQL Server click Start, Programs, Microsoft SQL Server and click SQL
Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group.
Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the
Security page.
Where are SQL Server user names and passwords stored?
They are stored in the master database, in the sysxlogins table.
Which command using Query Analyzer will give you the version of SQL server and operating
system?
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')
What is SQL server agent?
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It
is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the
implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to
schedule your own jobs and scripts.
Can a stored procedure call itself (i.e., a recursive stored procedure)? How many levels of SP nesting
are possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves.
Recursion can be defined as a method of problem solving wherein the solution is arrived at by
repetitively applying it to subsets of the problem. A common application of recursive logic is to perform
numeric computations that lend themselves to repetitive evaluation by the same processing steps.
Stored procedures are nested when one stored procedure calls another or executes managed code by
referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code
references up to 32 levels.
What is @@ERROR?
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there
was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement,
it must be saved to a local variable if it needs to be processed further after checking it.


What is RAISERROR?

Stored procedures report errors to client applications via the RAISERROR command. RAISERROR
doesn't change the flow of a procedure; it merely displays an error message, sets the @@ERROR
automatic variable, and optionally writes the message to the SQL Server error log and the NT
application event log.
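A minimal sketch (the message text and variable are hypothetical):
DECLARE @OrderId INT
SET @OrderId = 42
-- Severity 16 signals a user-correctable error; 1 is the state
RAISERROR ('Order %d was not found.', 16, 1, @OrderId)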
What is log shipping?
Log shipping is the process of automating the backup of database and transaction log files on a
production SQL server, and then restoring them onto a standby server. Only Enterprise Edition
supports log shipping. In log shipping, the transaction log file from one server is automatically applied
to the backup database on the other server. If one server fails, the other server will have the same db and
can be used as the disaster recovery plan. The key feature of log shipping is that it will
automatically back up transaction logs throughout the day and automatically restore them on the
standby server at a defined interval.
What is the difference between a local and a global temporary table?
A local temporary table exists only for the duration of a connection or, if defined inside a compound
statement, for the duration of the compound statement.
A global temporary table is visible to all connections. It is dropped when the connection that created it
closes and all other connections stop referencing it; neither the rows nor the table definition persist
after that.
What command do we use to rename a db?
sp_renamedb ‘oldname’ , ‘newname’
If someone is using the db it will not accept sp_renamedb. In that case first bring the db to single-user mode using
sp_dboption, then use sp_renamedb to rename the database, and finally use sp_dboption to bring the database back to
multi-user mode.
What are the sp_configure and SET commands?
Use sp_configure to display or change server-level settings. To change database-level settings, use
ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
What are the different types of replication? Explain.
The SQL Server 2000-supported replication types are as follows:
•Transactional
•Snapshot
•Merge
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not
monitor for updates to the data. Snapshot replication is best used as a method for replicating data that
changes infrequently or where the most up-to-date values (low latency) are not a requirement. When
synchronization occurs, the entire snapshot is generated and sent to Subscribers.
In transactional replication, an initial snapshot of data is applied at Subscribers, and then when data
modifications are made at the Publisher, the individual transactions are captured and propagated to
Subscribers.
Merge replication is the process of distributing data from Publisher to Subscribers, allowing the
Publisher and Subscribers to make updates while connected or disconnected, and then merging the
updates between sites when they are connected.
What are the OS services that the SQL Server installation adds?
MS SQL Server service, SQL Server Agent service, and DTC (Distributed Transaction Coordinator).
What are three SQL keywords used to change or set someone’s permissions?
GRANT, DENY, and REVOKE.
What does it mean to have quoted_identifier on? What are the implications of having it off?
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and
literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers
cannot be quoted and must follow all Transact-SQL rules for identifiers.
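A short illustration (the table and its "Order" column are hypothetical):
SET QUOTED_IDENTIFIER ON
SELECT "Order" FROM OrderDetails   -- "Order" is treated as a column identifier
SET QUOTED_IDENTIFIER OFF
SELECT "Order" FROM OrderDetails   -- "Order" is now just a string literal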
What is the STUFF function and how does it differ from the REPLACE function?
The STUFF function is used to overwrite existing characters. Using the syntax STUFF(string_expression, start,
length, replacement_characters), string_expression is the string that will have characters substituted,
start is the starting position, length is the number of characters in the string that are substituted, and
replacement_characters are the new characters interjected into the string.
The REPLACE function is used to replace all occurrences of existing characters. Using the syntax
REPLACE(string_expression, search_string, replacement_string), every incidence of
search_string found in string_expression is replaced with replacement_string.
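Two quick examples of the difference:
SELECT STUFF('abcdef', 2, 3, 'XY')    -- removes 'bcd', inserts 'XY': returns 'aXYef'
SELECT REPLACE('abcabc', 'b', 'Z')    -- replaces every 'b': returns 'aZcaZc'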
Using query analyzer, name 3 ways to get an accurate count of the number of records in a
table?
SELECT * FROM table1
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2
How to rebuild Master Database?
Shutdown Microsoft SQL Server 2000, and then run Rebuildm.exe. This is located in the Program
Files\Microsoft SQL Server\80\Tools\Binn directory.
In the Rebuild Master dialog box, click Browse.
In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in
the shared network directory from which SQL Server 2000 was installed, and then click OK.
Click Settings. In the Collation Settings dialog box, verify or change settings used for the master
database and all other databases.
Initially, the default collation settings are shown, but these may not match the collation selected during
setup. You can select the same settings used during setup or select new collation settings. When done,
click OK.
In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database.
To continue, you may need to stop a server that is running.
Source: http://msdn2.microsoft.com/en-us/library/aa197950(SQL.80).aspx
What are the basic functions of the master, msdb, model and tempdb databases?
The Master database holds information for all databases located on the SQL Server instance and is the
glue that holds the engine together. Because SQL Server cannot start without a functioning master
database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS
packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in
the instance.
What are primary keys and foreign keys?
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be
null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys
and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship
between tables.
What is data integrity? Explain constraints?
Data integrity is an important feature in SQL Server. When used properly, it ensures that data is
accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should
have a primary key constraint to uniquely identify each row and only one primary key constraint can be
created for each table. The primary key constraints are used to enforce entity integrity.
A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values
are entered. The unique key constraints are used to enforce entity integrity as the primary key
constraints.
A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the
corresponding data values. A foreign key in one table points to a primary key in another table. Foreign
keys prevent actions that would leave rows with foreign key values when there are no primary keys
with that value. The foreign key constraints are used to enforce referential integrity.
A CHECK constraint is used to limit the values that can be placed in a column. The check constraints
are used to enforce domain integrity.
A NOT NULL constraint enforces that the column will not accept null values. The not null constraints
are used to enforce domain integrity, as the check constraints.
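One hypothetical table exercising each of these constraint types:
CREATE TABLE Orders (
    OrderId  INT         NOT NULL PRIMARY KEY,                   -- entity integrity
    OrderNo  VARCHAR(20) NOT NULL UNIQUE,                        -- entity integrity
    CustId   INT         NOT NULL REFERENCES Customers (CustId), -- referential integrity
    Quantity INT         NOT NULL CHECK (Quantity > 0)           -- domain integrity
)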
What are the properties of the Relational tables?
Relational tables have six properties:
•Values are atomic.
•Column values are of the same kind.
•Each row is unique.
•The sequence of columns is insignificant.
•The sequence of rows is insignificant.
•Each column must have a unique name.
What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding
redundant data. It is sometimes necessary because current DBMSs implement the relational model
poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while
providing physical storage of data that is tuned for high performance. De-normalization is a technique
to move from higher to lower normal forms of database modeling in order to speed up database access.
How to get @@ERROR and @@ROWCOUNT at the same time?
If @@ROWCOUNT is checked after the error-checking statement, it will have 0 as its value, as it will
have been reset.
And if @@ROWCOUNT is checked before the error-checking statement, @@ERROR will have been reset.
To get @@ERROR and @@ROWCOUNT at the same time, capture both in the same statement and store them in local
variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
What is Identity?
Identity (or AutoNumber) is a column that automatically generates numeric values. A start and
increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique values,
but these cannot be controlled. Identity/GUID columns do not need to be indexed.
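A hedged sketch showing both kinds of generated columns (hypothetical table):
CREATE TABLE Invoices (
    InvoiceId INT IDENTITY(1,1) PRIMARY KEY,     -- seed 1, increment 1
    RowGuid   UNIQUEIDENTIFIER DEFAULT NEWID(),  -- GUID value cannot be controlled
    Amount    MONEY
)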
What is a Scheduled Jobs or What is a Scheduled Tasks?
Scheduled tasks let users automate processes that run on regular or predictable cycles. Users can
schedule administrative tasks, such as cube processing, to run during times of slow business activity.
Users can also determine the order in which tasks run by creating job steps within a SQL Server Agent
job, e.g. back up a database or update the stats of tables. Job steps give the user control over the flow of
execution. If one job fails, the user can configure SQL Server Agent to continue to run the remaining
tasks or to stop execution.
What is a table called if it has neither a clustered nor a nonclustered index? What is it
used for?
An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by
pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from a table,
do the bulk of the inserts, and then restore those indexes.
What is BCP? When is it used?
BulkCopy (BCP) is a tool used to copy huge amounts of data from tables and views. BCP does not copy the
structures from source to destination.
How do you load large data to the SQL server database?
BulkCopy is a tool used to copy huge amounts of data from tables. The BULK INSERT command helps to
import a data file into a database table or view in a user-specified format.
Can we rewrite subqueries into simple select statements or with joins?
Subqueries can often be rewritten to use a standard outer join, resulting in faster performance. As we
may know, an outer join uses the plus sign (+) operator (in Oracle syntax) to tell the database to return all non-matching
rows with NULL values. Hence we combine the outer join with a NULL test in the WHERE clause to
reproduce the result set without using a subquery.
Can SQL Server be linked to other servers like Oracle?
SQL Server can be linked to any server provided it has an OLE DB provider from Microsoft to allow a link.
E.g. Oracle has an OLE DB provider for Oracle that Microsoft provides to add it as a linked server to the SQL
Server group.
How to know which index a table is using?
SELECT table_name, index_name FROM user_constraints
How to copy the tables, schema and views from one SQL server to another?
Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and
programmable objects that lets user extract, transform, and consolidate data from disparate sources
into single or multiple destinations.
What is Self Join?
This is a particular case when one table joins to itself, with one or two aliases to avoid confusion. A self
join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it
involves a relationship with only one table. The common example is when a company has a hierarchical
reporting structure whereby one member of staff reports to another.
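A sketch of that example (hypothetical Employees table with a ManagerId column):
-- The same table appears twice under two aliases
SELECT e.EmployeeName AS Employee, m.EmployeeName AS Manager
FROM Employees e
INNER JOIN Employees m ON e.ManagerId = m.EmployeeId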
What is Cross Join?
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved
in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied
by the number of rows in the second table. The common example is when a company wants to combine
each product with a pricing table to analyze each product at each price.
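A sketch of that example (hypothetical Products and PriceTiers tables):
-- Every product is paired with every price tier: a Cartesian product
SELECT p.ProductName, t.TierPrice
FROM Products p
CROSS JOIN PriceTiers t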
Which virtual table does a trigger use?
Inserted and Deleted.
List few advantages of Stored Procedure.
•Stored procedures can reduce network traffic and latency, boosting application performance.
•Stored procedure execution plans can be reused, staying cached in SQL Server's memory,
reducing server overhead.
•Stored procedures help promote code reuse.
•Stored procedures can encapsulate logic. You can change stored procedure code without
affecting clients.
•Stored procedures provide better security to your data.
What is Data Warehousing?
A data warehouse is a repository of data that is:
•Subject-oriented, meaning that the data in the database is organized so that all the data
elements relating to the same real-world event or object are linked together;
•Time-variant, meaning that the changes to the data in the database are tracked and recorded
so that reports can be produced showing changes over time;
•Non-volatile, meaning that data in the database is never over-written or deleted, once
committed, the data is static, read-only, but retained for future reporting;
•Integrated, meaning that the database contains data from most or all of an organization's
operational applications, and that this data is made consistent.
What is OLTP(OnLine Transaction Processing)?
OLTP (online transaction processing) systems' relational database design uses the discipline of data
modeling and generally follows the Codd rules of data normalization in order to ensure absolute data
integrity. Using these rules, complex information is broken down into its simplest structures (tables),
where all of the individual atomic-level elements relate to each other and satisfy the normalization
rules.
How are SQL Server 2000 and XML linked? Can XML be used to access data?
FOR XML (ROW, AUTO, EXPLICIT)
You can execute SQL queries against existing relational databases to return results as XML rather than
standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve
XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO,
or EXPLICIT.
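For instance, against the Northwind sample database (assumed here):
SELECT CustomerID, CompanyName
FROM Customers
FOR XML AUTO   -- returns the rows as XML elements instead of a standard rowset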
OPENXML
OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML
document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to
access XML data within the Transact-SQL context by transferring data from an XML document into the
relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the
relational environment.
What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods
chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query and is a very useful
tool for a developer to understand the performance characteristics of a query or stored procedure since
the plan is the one that SQL Server will place in its cache and use to execute the stored procedure or
query. Within Query Analyzer there is an option called "Show Execution Plan" (located on the Query
drop-down menu). If this option is turned on, it will display the query execution plan in a separate window
when the query is run.

Wednesday, June 24, 2009

RDBMS Interview Questions

1. What is a database?
A database is a logically coherent collection of data with some inherent meaning, representing some aspect of the real world, and which is designed, built and populated with data for a specific purpose.
2. What is DBMS?
It is a collection of programs that enables users to create and maintain a database. In other words, it is general-purpose software that provides the users with the processes of defining, constructing and manipulating the database for various applications.
3. What is a Database system?
The database and DBMS software together are called a database system.
4. Advantages of DBMS?
Ø Redundancy is controlled.
Ø Unauthorised access is restricted.
Ø Providing multiple user interfaces.
Ø Enforcing integrity constraints.
Ø Providing backup and recovery.
5. Disadvantage in File Processing System?
Ø Data redundancy & inconsistency.
Ø Difficulty in accessing data.
Ø Data isolation.
Ø Data integrity.
Ø Concurrent access is not possible.
Ø Security Problems.
6. Describe the three levels of data abstraction?
There are three levels of abstraction:
Ø Physical level: The lowest level of abstraction describes how data are stored.
Ø Logical level: The next higher level of abstraction describes what data are stored in the database and what relationships exist among those data.
Ø View level: The highest level of abstraction describes only part of the entire database.
7. Define the "integrity rules"
There are two Integrity rules.
Ø Entity Integrity: States that "a primary key cannot have a NULL value."
Ø Referential Integrity: States that "a foreign key can be either a NULL value or should be the primary key value of another relation."
8. What is extension and intension?
Extension -
It is the number of tuples present in a table at any instance. This is time dependent.
Intension -
It is a constant value that gives the name, structure of table and the constraints laid on it.
9. What is System R? What are its two major subsystems?
System R was designed and developed over the period 1974-79 at IBM San Jose Research Center. It is a prototype and its purpose was to demonstrate that it is possible to build a relational system that can be used in a real-life environment to solve real-life problems, with performance at least comparable to that of existing systems.
Its two subsystems are
Ø Research Storage System
Ø Relational Data System
10. How is the data structure of System R different from the relational structure?
Unlike relational systems, in System R:
Ø Domains are not supported
Ø Enforcement of candidate key uniqueness is optional
Ø Enforcement of entity integrity is optional
Ø Referential integrity is not enforced
11. What is Data Independence?
Data independence means that "the application is independent of the storage structure and access strategy of data". In other words, the ability to modify the schema definition at one level should not affect the schema definition at the next higher level.
Two types of Data Independence:
Ø Physical Data Independence: Modification at the physical level should not affect the logical level.
Ø Logical Data Independence: Modification at the logical level should not affect the view level.
NOTE: Logical Data Independence is more difficult to achieve.
12. What is a view? How it is related to data independence?
A view may be thought of as a virtual table, that is, a table that does not really exist in its own right but is instead derived from one or more underlying base tables. In other words, there is no stored file that directly represents the view; instead, a definition of the view is stored in the data dictionary.
Growth and restructuring of base tables is not reflected in views. Thus the view can insulate users from the effects of restructuring and growth in the database. Hence it accounts for logical data independence.
13. What is Data Model?
A collection of conceptual tools for describing data, data relationships, data semantics and constraints.
14. What is E-R model?
This data model is based on the real world, which consists of basic objects called entities and of relationships among these objects. Entities are described in a database by a set of attributes.
15. What is Object Oriented model?
This model is based on a collection of objects. An object contains values stored in instance variables within the object. An object also contains bodies of code that operate on the object; these bodies of code are called methods. Objects that contain the same types of values and the same methods are grouped together into classes.
16. What is an Entity?
It is a 'thing' in the real world with an independent existence.
17. What is an Entity type?
It is a collection (set) of entities that have the same attributes.
18. What is an Entity set?
It is a collection of all entities of a particular entity type in the database.
19. What is an Extension of entity type?
The collections of entities of a particular entity type are grouped together into an entity set.
20. What is Weak Entity set?
An entity set may not have sufficient attributes to form a primary key; if its primary key comprises its partial key together with the primary key of its parent entity, then it is said to be a weak entity set.
21. What is an attribute?
It is a particular property, which describes the entity.
22. What is a Relation Schema and a Relation?
A relation schema, denoted by R(A1, A2, …, An), is made up of the relation name R and the list of attributes Ai that it contains. A relation is defined as a set of tuples. Let r be the relation containing the set of tuples (t1, t2, t3, ..., tn). Each tuple is an ordered list of n values t = (v1, v2, ..., vn).
23. What is degree of a Relation?
It is the number of attributes in its relation schema.
24. What is Relationship?
It is an association among two or more entities.
25. What is Relationship set?
The collection (or set) of similar relationships.
26. What is Relationship type?
Relationship type defines a set of associations or a relationship set among a given set of entity types.
27. What is degree of Relationship type?
It is the number of entity types participating.
25. What is DDL (Data Definition Language)?
A database schema is specified by a set of definitions expressed in a special language called DDL.
26. What is VDL (View Definition Language)?
It specifies user views and their mappings to the conceptual schema.
27. What is SDL (Storage Definition Language)?
This language is to specify the internal schema. This language may specify the mapping between two schemas.
28. What is Data Storage - Definition Language?
The storage structures and access methods used by database system are specified by a set of definition in a special type of DDL called data storage-definition language.
29. What is DML (Data Manipulation Language)?
This is a language that enables users to access or manipulate data as organised by the appropriate data model.
Ø Procedural DML or Low level: DML requires a user to specify what data are needed and how to get those data.
Ø Non-Procedural DML or High level: DML requires a user to specify what data are needed without specifying how to get those data.
31. What is DML Compiler?
It translates DML statements in a query language into low-level instructions that the query evaluation engine can understand.
32. What is Query evaluation engine?
It executes the low-level instructions generated by the DML compiler.
33. What is DDL Interpreter?
It interprets DDL statements and records them in tables containing metadata.
34. What is Record-at-a-time?
The low-level or procedural DML can specify and retrieve each record from a set of records. This style of retrieval is called record-at-a-time.
35. What is Set-at-a-time or Set-oriented?
The high-level or non-procedural DML can specify and retrieve many records in a single DML statement. This style of retrieval is called set-at-a-time or set-oriented.
36. What is Relational Algebra?
It is a procedural query language. It consists of a set of operations that take one or two relations as input and produce a new relation.
37. What is Relational Calculus?
It is an applied predicate calculus specifically tailored for relational databases proposed by E.F. Codd. E.g. of languages based on it are DSL ALPHA, QUEL.
38. How does tuple-oriented relational calculus differ from domain-oriented relational calculus?
The tuple-oriented calculus uses tuple variables, i.e., variables whose only permitted values are tuples of a relation, e.g. QUEL.
The domain-oriented calculus has domain variables, i.e., variables that range over the underlying domains instead of over relations, e.g. ILL, DEDUCE.
39. What is normalization?
It is a process of analysing the given relation schemas based on their Functional Dependencies (FDs) and primary key to achieve the properties
Ø Minimizing redundancy
Ø Minimizing insertion, deletion and update anomalies.
40. What is Functional Dependency?
A functional dependency, denoted by X → Y between two sets of attributes X and Y that are subsets of R, specifies a constraint on the possible tuples that can form a relation state r of R. The constraint is that for any two tuples t1 and t2 in r, if t1[X] = t2[X] then t1[Y] = t2[Y]. This means the value of the X component of a tuple uniquely determines the value of the Y component.
41. When is a functional dependency F said to be minimal?
Ø Every dependency in F has a single attribute for its right-hand side.
Ø We cannot replace any dependency X → A in F with a dependency Y → A, where Y is a proper subset of X, and still have a set of dependencies that is equivalent to F.
Ø We cannot remove any dependency from F and still have a set of dependencies that is equivalent to F.
42. What is Multivalued dependency?
A multivalued dependency, denoted by X →→ Y and specified on relation schema R, where X and Y are both subsets of R, specifies the following constraint on any relation r of R: if two tuples t1 and t2 exist in r such that t1[X] = t2[X], then two tuples t3 and t4 should also exist in r with the following properties, where Z = (R - (X ∪ Y)):
Ø t3[X] = t4[X] = t1[X] = t2[X]
Ø t3[Y] = t1[Y] and t4[Y] = t2[Y]
Ø t3[Z] = t2[Z] and t4[Z] = t1[Z]
43. What is Lossless join property?
It guarantees that spurious tuple generation does not occur with respect to the relation schemas after decomposition.
44. What is 1 NF (Normal Form)?
The domain of attribute must include only atomic (simple, indivisible) values.
45. What is Fully Functional dependency?
A functional dependency X → Y is a full functional dependency if removal of any attribute A from X means that the dependency does not hold any more.
46. What is 2NF?
A relation schema R is in 2NF if it is in 1NF and every non-prime attribute A in R is fully functionally dependent on the primary key.
47. What is 3NF?
A relation schema R is in 3NF if it is in 2NF and, for every FD X → A, either of the following is true:
Ø X is a super key of R.
Ø A is a prime attribute of R.
In other words, every non-prime attribute is non-transitively dependent on the primary key.
48. What is BCNF (Boyce-Codd Normal Form)?
A relation schema R is in BCNF if it is in 3NF and satisfies the additional constraint that for every FD X → A, X must be a candidate key.
49. What is 4NF?
A relation schema R is said to be in 4NF if, for every multivalued dependency X →→ Y that holds over R, one of the following is true:
Ø The dependency is trivial (Y is a subset of X, or X ∪ Y = R), or
Ø X is a super key.
50. What is 5NF?
A relation schema R is said to be in 5NF if, for every join dependency {R1, R2, ..., Rn} that holds over R, one of the following is true:
Ø Ri = R for some i, or
Ø The join dependency is implied by the set of FDs over R in which the left side is a key of R.
51. What is Domain-Key Normal Form?
A relation is said to be in DKNF if all constraints and dependencies that should hold on the relation can be enforced simply by enforcing the domain constraints and key constraints on the relation.
52. What are partial, alternate, artificial, compound and natural keys?
Partial Key:
It is a set of attributes that can uniquely identify weak entities that are related to the same owner entity. It is sometimes called the discriminator.
Alternate Key:
All Candidate Keys excluding the Primary Key are known as Alternate Keys.
Artificial Key:
If no obvious key, either stand-alone or compound, is available, then the last resort is to simply create a key by assigning a unique number to each record or occurrence. This is known as developing an artificial key.
Compound Key:
If no single data element uniquely identifies occurrences within a construct, then combining multiple elements to create a unique identifier for the construct is known as creating a compound key.
Natural Key:
When one of the data elements stored within a construct is utilized as the primary key, then it is called the natural key.
53. What is indexing and what are the different kinds of indexing?
Indexing is a technique for locating specific data quickly.
Types:
Ø Binary search style indexing
Ø B-Tree indexing
Ø Inverted list indexing
Ø Memory resident table
Ø Table indexing
54. What is a system catalog or catalog relation? What is it better known as?
An RDBMS maintains a description of all the data that it contains: information about every relation and index that it contains. This information, called metadata, is stored in a collection of relations maintained by the system. It is also known as the data dictionary.
55. What is meant by query optimization?
The phase that identifies an efficient execution plan for evaluating a query that has the least estimated cost is referred to as query optimization.
56. What is join dependency and inclusion dependency?
Join Dependency:
A join dependency is a generalization of multivalued dependency. A JD {R1, R2, ..., Rn} is said to hold over a relation R if R1, R2, ..., Rn is a lossless-join decomposition of R. There is no set of sound and complete inference rules for JDs.
Inclusion Dependency:
An Inclusion Dependency is a statement of the form that some columns of a relation are contained in other columns. A foreign key constraint is an example of inclusion dependency.
57. What is durability in DBMS?
Once the DBMS informs the user that a transaction has successfully completed, its effects should persist even if the system crashes before all its changes are reflected on disk. This property is called durability.
58. What do you mean by atomicity and aggregation?
Atomicity:
Either all actions are carried out or none are. Users should not have to worry about the effect of incomplete transactions. DBMS ensures this by undoing the actions of incomplete transactions.
Aggregation:
A concept which is used to model a relationship between a collection of entities and relationships. It is used when we need to express a relationship among relationships.
59. What is a Phantom Deadlock?
In distributed deadlock detection, the delay in propagating local information might cause the deadlock detection algorithms to identify deadlocks that do not really exist. Such situations are called phantom deadlocks and they lead to unnecessary aborts.
60. What is a checkpoint and When does it occur?
A Checkpoint is like a snapshot of the DBMS state. By taking checkpoints, the DBMS can reduce the amount of work to be done during restart in the event of subsequent crashes.
61. What are the different phases of transaction?
Different phases are
Ø Analysis phase
Ø Redo Phase
Ø Undo phase
62. What do you mean by flat file database?
It is a database in which there are no programs or user access languages. It has no cross-file capabilities but is user-friendly and provides user-interface management.
63. What is "transparent DBMS"?
It is one which keeps its physical structure hidden from the user.
64. Brief theory of Network, Hierarchical schemas and their properties
A network schema uses a graph data structure to organize records (an example of such a database management system is CTCG), while a hierarchical schema uses a tree data structure (an example of such a system is IMS).
65. What is a query?
A query with respect to DBMS relates to user commands that are used to interact with a database. The query language can be classified into data definition language and data manipulation language.
66. What do you mean by Correlated subquery?
Subqueries, or nested queries, are used to bring back a set of rows to be used by the parent query. Depending on how the subquery is written, it can be executed once for the parent query or it can be executed once for each row returned by the parent query. If the subquery is executed for each row of the parent, this is called a correlated subquery.
A correlated subquery can be easily identified if it contains any references to the parent query's columns in its WHERE clause. Columns from the subquery cannot be referenced anywhere else in the parent query. The following example demonstrates a correlated subquery: the subquery references CUST.CNUM from the parent query.
E.g. Select * From CUST Where '10/03/1990' IN (Select ODATE From ORDER Where CUST.CNUM = ORDER.CNUM)
67. What are the primitive operations common to all record management systems?
Addition, deletion and modification.
68. Name the buffer in which all the commands that are typed in are stored
'Edit' Buffer
69. What are the unary operations in Relational Algebra?
PROJECTION and SELECTION.
70. Are the resulting relations of PRODUCT and JOIN operation the same?
No.
PRODUCT: Concatenation of every row in one relation with every row in another.
JOIN: Concatenation of rows from one relation and related rows from another.
71. What is RDBMS KERNEL?
Two important pieces of RDBMS architecture are the kernel, which is the software, and the data dictionary, which consists of the system-level data structures used by the kernel to manage the database.
You might think of an RDBMS as an operating system (or set of subsystems), designed specifically for controlling data access; its primary functions are storing, retrieving, and securing data. An RDBMS maintains its own list of authorized users and their associated privileges; manages memory caches and paging; controls locking for concurrent resource usage; dispatches and schedules user requests; and manages space usage within its table-space structures.
72. Name the sub-systems of a RDBMS
I/O, Security, Language Processing, Process Control, Storage Management, Logging and Recovery, Distribution Control, Transaction Control, Memory Management, Lock Management
73. Which part of the RDBMS takes care of the data dictionary? How?
Data dictionary is a set of tables and database objects that is stored in a special area of the database and maintained exclusively by the kernel.
74. What is the job of the information stored in data-dictionary?
The information in the data dictionary validates the existence of the objects, provides access to them, and maps the actual physical storage location.
75. Not only does the RDBMS take care of locating data, it also
determines an optimal access path to store or retrieve the data.
76. How do you communicate with an RDBMS?
You communicate with an RDBMS using Structured Query Language (SQL).
77. Define SQL and state the differences between SQL and other conventional programming Languages
SQL is a nonprocedural language that is designed specifically for data access operations on normalized relational database structures. The primary difference between SQL and other conventional programming languages is that SQL statements specify what data operations should be performed rather than how to perform them.
78. Name the three major set of files on disk that compose a database in Oracle
There are three major sets of files on disk that compose a database. All the files are binary. These are
Ø Database files
Ø Control files
Ø Redo logs
The most important of these are the database files where the actual data resides. The control files and the redo logs support the functioning of the architecture itself.
All three sets of files must be present, open, and available to Oracle for any data on the database to be useable. Without these files, you cannot access the database, and the database administrator might have to recover some or all of the database using a backup, if there is one.
79. What is an Oracle Instance?
The Oracle system processes, also known as Oracle background processes, provide functions for the user processes - functions that would otherwise be done by the user processes themselves.
Oracle database-wide system memory is known as the SGA, the system global area or shared global area. The data and control structures in the SGA are shareable, and all the Oracle background processes and user processes can use them.
The combination of the SGA and the Oracle background processes is known as an Oracle instance.
80. What are the four Oracle system processes that must always be up and running for the database to be useable?
The four Oracle system processes that must always be up and running for the database to be useable include DBWR (Database Writer), LGWR (Log Writer), SMON (System Monitor), and PMON (Process Monitor).
81. What are database files, control files and log files. How many of these files should a database have at least? Why?
Database Files
The database files hold the actual data and are typically the largest in size. Depending on their sizes, the tables (and other objects) for all the user accounts can go in one database file, but that is not an ideal situation because it does not make the database structure very flexible for controlling access to storage for different users, putting the database on different disk drives, or backing up and restoring just part of the database.
You must have at least one database file, but usually more than one file is used. In terms of accessing and using the data in the tables and other objects, the number (or location) of the files is immaterial.
The database files are fixed in size and never grow bigger than the size at which they were created.
Control Files
The control files and redo logs support the rest of the architecture. Any database must have at least one control file, although you typically have more than one to guard against loss. The control file records the name of the database, the date and time it was created, the location of the database files and redo logs, and the synchronization information to ensure that all three sets of files are always in step. Every time you add a new database file or redo log file to the database, the information is recorded in the control files.
Redo Logs
Any database must have at least two redo logs. These are the journals for the database; the redo logs record all changes to the user objects or system objects. If any type of failure occurs, the changes recorded in the redo logs can be used to bring the database to a consistent state without losing any committed transactions. In the case of non-data loss failure, Oracle can apply the information in the redo logs automatically without intervention from the DBA.
The redo log files are fixed in size and never grow dynamically from the size at which they were created.
82. What is ROWID?
The ROWID is a unique database-wide physical address for every row on every table. Once assigned (when the row is first inserted into the database), it never changes until the row is deleted or the table is dropped.
The ROWID consists of the following three components, the combination of which uniquely identifies the physical storage location of the row.
- The Oracle database file number, which contains the block with the row
- The Oracle block address, which contains the row
- The row within the block (because each block can hold many rows)
The ROWID is used internally in indexes as a quick means of retrieving rows with a particular key value. Application developers also use it in SQL statements as a quick way to access a row once they know the ROWID.
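As an illustration, the following statements (against a hypothetical EMP table) first expose the ROWID pseudo-column and then use it in the classic idiom for deleting duplicate rows:

SELECT ROWID, empno, ename FROM emp;

DELETE FROM emp e
WHERE ROWID > (SELECT MIN(ROWID)
               FROM emp d
               WHERE d.empno = e.empno);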
83. What is Oracle Block? Can two Oracle Blocks have the same address?
Oracle "formats" the database files into a number of Oracle blocks when they are first created-making it easier for the RDBMS software to manage the files and easier to read data into the memory areas.
The block size should be a multiple of the operating system block size. Regardless of the block size, the entire block is not available for holding data; Oracle takes up some space to manage the contents of the block. This block header has a minimum size, but it can grow.
These Oracle blocks are the smallest unit of storage. Increasing the Oracle block size can improve performance, but it should be done only when the database is first created.
Each Oracle block is numbered sequentially for each database file starting at 1. Two blocks can have the same block address if they are in different database files.
84. What is database Trigger?
A database trigger is a PL/SQL block that can be defined to automatically execute for insert, update, and delete statements against a table. The trigger can be defined to execute once for the entire statement or once for every row that is inserted, updated, or deleted. For any one table, there are twelve events for which you can define database triggers. A database trigger can call database procedures that are also written in PL/SQL.
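A minimal sketch of a row-level trigger, assuming hypothetical EMP and AUDIT_LOG tables:

CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  -- record that a row of EMP was touched and when
  INSERT INTO audit_log (table_name, changed_on)
  VALUES ('EMP', SYSDATE);
END;
/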
85. Name two utilities that Oracle provides, which are used for backup and recovery.
Along with the RDBMS software, Oracle provides two utilities that you can use to back up and restore the database. These utilities are Export and Import.
The Export utility dumps the definitions and data for the specified part of the database to an operating system binary file. The Import utility reads the file produced by an export, recreates the definitions of objects, and inserts the data.
If Export and Import are used as the means of backing up and recovering the database, changes made since the last export cannot be recovered. The best you can do is recover the database to the time when the export was last performed.
86. What are stored procedures? What are the advantages of using them?
Stored procedures are database objects that perform a user-defined operation. A stored procedure can have a set of compound SQL statements. A stored procedure executes the SQL commands and returns the result to the client. Stored procedures are used to reduce network traffic.
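A minimal sketch of such a procedure, assuming a hypothetical EMP table:

CREATE OR REPLACE PROCEDURE raise_salary (p_empno IN NUMBER, p_pct IN NUMBER) AS
BEGIN
  -- apply a percentage raise to one employee
  UPDATE emp
  SET sal = sal * (1 + p_pct / 100)
  WHERE empno = p_empno;
END;
/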
87. How are exceptions handled in PL/SQL? Give some of the internal exceptions' names.
PL/SQL exception handling is a mechanism for dealing with run-time errors encountered during procedure execution. Use of this mechanism enables execution to continue if the error is not severe enough to cause procedure termination.
The exception handler must be defined within a subprogram specification. Errors cause the program to raise an exception with a transfer of control to the exception-handler block. After the exception handler executes, control returns to the block in which the handler was defined. If there are no more executable statements in the block, control returns to the caller.
User-Defined Exceptions
PL/SQL enables the user to define exception handlers in the declarations area of subprogram specifications. The user accomplishes this by naming an exception, as in the following example:
ot_failure EXCEPTION;
In this case, the exception name is ot_failure. Code associated with this handler is written in the EXCEPTION specification area as follows:
EXCEPTION
when OT_FAILURE then
out_status_code := g_out_status_code;
out_msg := g_out_msg;
The following is an example of a subprogram exception:
EXCEPTION
when NO_DATA_FOUND then
g_out_status_code := 'FAIL';
RAISE ot_failure;
Within this exception is the RAISE statement that transfers control back to the ot_failure exception handler. This technique of raising the exception is used to invoke all user-defined exceptions.
System-Defined Exceptions
Exceptions internal to PL/SQL are raised automatically upon error. NO_DATA_FOUND is a system-defined exception. The table below gives a complete list of internal exceptions.
PL/SQL internal exceptions.
Exception Name Oracle Error
CURSOR_ALREADY_OPEN ORA-06511
DUP_VAL_ON_INDEX ORA-00001
INVALID_CURSOR ORA-01001
INVALID_NUMBER ORA-01722
LOGIN_DENIED ORA-01017
NO_DATA_FOUND ORA-01403
NOT_LOGGED_ON ORA-01012
PROGRAM_ERROR ORA-06501
STORAGE_ERROR ORA-06500
TIMEOUT_ON_RESOURCE ORA-00051
TOO_MANY_ROWS ORA-01422
TRANSACTION_BACKED_OUT ORA-00061
VALUE_ERROR ORA-06502
ZERO_DIVIDE ORA-01476
In addition to this list of exceptions, there is a catch-all exception named OTHERS that traps all errors for which specific error handling has not been established.
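A compact sketch combining a specific internal exception with the catch-all OTHERS handler:

DECLARE
  v_result NUMBER;
BEGIN
  v_result := 1 / 0;  -- deliberately raises ZERO_DIVIDE
EXCEPTION
  WHEN ZERO_DIVIDE THEN
    DBMS_OUTPUT.PUT_LINE('Division by zero');
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
END;
/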
88. Does PL/SQL support "overloading"? Explain
The concept of overloading in PL/SQL relates to the idea that you can define procedures and functions with the same name. PL/SQL does not look only at the referenced name, however, to resolve a procedure or function call. The count and data types of formal parameters are also considered.
PL/SQL also attempts to resolve any procedure or function calls in locally defined packages before looking at globally defined packages or internal functions. To further ensure calling the proper procedure, you can use the dot notation. Prefacing a procedure or function name with the package name fully qualifies any procedure or function reference.
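A sketch of overloading in a hypothetical package specification; the two procedures share a name and are distinguished by the data types of their formal parameters:

CREATE OR REPLACE PACKAGE emp_api AS
  PROCEDURE find_emp (p_empno IN NUMBER);     -- look up by employee number
  PROCEDURE find_emp (p_ename IN VARCHAR2);   -- look up by employee name
END emp_api;
/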
89. Tables derived from the ERD
a) Are totally unnormalised
b) Are always in 1NF
c) Can be further denormalised
d) May have multi-valued attributes
(b) Are always in 1NF
90. Spurious tuples may occur due to
i. Bad normalization
ii. Theta joins
iii. Updating tables from join
a) i & ii b) ii & iii
c) i & iii d) ii & iii
(a) i & ii because theta joins are joins made on keys that are not primary keys.
91. {A, B, C} is a set of attributes. The functional dependencies are as follows
AB -> B
AC -> C
C -> B
a) is in 1NF
b) is in 2NF
c) is in 3NF
d) is in BCNF
(a) It is in 1NF. Since (AC)+ = {A, B, C}, AC is the primary key. C → B is a given FD where C is not a key and B is not a prime attribute, so the relation is not in 3NF. Further, B is not fully functionally dependent on the key AC (it depends on C alone), so the relation is not in 2NF either. Thus the given relation is only in 1NF.
92. In mapping of ERD to DFD
a) entities in ERD should correspond to an existing entity/store in DFD
b) entity in DFD is converted to attributes of an entity in ERD
c) relations in ERD have a 1 to 1 correspondence to processes in DFD
d) relationships in ERD have a 1 to 1 correspondence to flows in DFD
(a) entities in ERD should correspond to an existing entity/store in DFD
93. A dominant entity is the entity
a) on the N side in a 1 : N relationship
b) on the 1 side in a 1 : N relationship
c) on either side in a 1 : 1 relationship
d) nothing to do with 1 : 1 or 1 : N relationship
(b) on the 1 side in a 1 : N relationship
94. Select 'NORTH', CUSTOMER From CUST_DTLS Where REGION = 'N' Order By
CUSTOMER Union Select 'EAST', CUSTOMER From CUST_DTLS Where REGION = 'E' Order By CUSTOMER
The above is
a) Not an error
b) Error - the strings in single quotes 'NORTH' and 'EAST'
c) Error - the string should be in double quotes
d) Error - ORDER BY clause
(d) Error - the ORDER BY clause. In a UNION, ORDER BY may appear only once, after the final SELECT, so the first ORDER BY in the statement above is illegal.
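A legal form of the statement keeps a single ORDER BY after the final SELECT:

SELECT 'NORTH', CUSTOMER FROM CUST_DTLS WHERE REGION = 'N'
UNION
SELECT 'EAST', CUSTOMER FROM CUST_DTLS WHERE REGION = 'E'
ORDER BY CUSTOMER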
95. What is Storage Manager?
It is a program module that provides the interface between the low-level data stored in the database and the application programs and queries submitted to the system.

96. What is Buffer Manager?
It is a program module which is responsible for fetching data from disk storage into main memory and deciding what data to cache in memory.
97. What is Transaction Manager?
It is a program module which ensures that the database remains in a consistent state despite system failures, and that concurrent transaction executions proceed without conflicts.
98. What is File Manager?
It is a program module which manages the allocation of space on disk storage and the data structures used to represent information stored on disk.
99. What is Authorization and Integrity manager?
It is the program module which tests for the satisfaction of integrity constraints and checks the authority of users to access data.
100. What are stand-alone procedures?
Procedures that are not part of a package are known as stand-alone because they are defined independently. A good example of a stand-alone procedure is one written in a SQL*Forms application. These types of procedures are not available for reference from other Oracle tools. Another limitation of stand-alone procedures is that they are compiled at run time, which slows execution.
101. What are cursors? Give the different types of cursors.
PL/SQL uses cursors for all database access statements. The language supports two types of cursors:
- Implicit
- Explicit
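An implicit cursor is created automatically for single-row SELECT INTO and DML statements; an explicit cursor is declared and controlled by the programmer, as in this minimal sketch against a hypothetical EMP table:

DECLARE
  CURSOR c_emp IS SELECT ename FROM emp;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_ename;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_ename);
  END LOOP;
  CLOSE c_emp;
END;
/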
102. What is cold backup and hot backup (in case of Oracle)?
- Cold Backup:
It is copying the three sets of files (database files, redo logs, and control file) when the instance is shut down. This is a straight file copy, usually from the disk directly to tape. You must shut down the instance to guarantee a consistent copy.
If a cold backup is performed, the only option available in the event of data file loss is restoring all the files from the latest backup. All work performed on the database since the last backup is lost.
- Hot Backup:
Some sites (such as worldwide airline reservations systems) cannot shut down the database while making a backup copy of the files, so the cold backup is not an available option.
A different means of backing up the database must be used: the hot backup. Issue a SQL command to indicate to Oracle, on a tablespace-by-tablespace basis, that the files of the tablespace are to be backed up. Users can continue to make full use of the files, including making changes to the data. Once the user has indicated that he/she wants to back up the tablespace files, he/she can use the operating system to copy those files to the desired backup destination.
The database must be running in ARCHIVELOG mode for the hot backup option.
If a data loss failure does occur, the lost database files can be restored using the hot backup and the online and offline redo logs created since the backup was done. The database is restored to the most consistent state without any loss of committed transactions.
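The tablespace-by-tablespace commands look like the following sketch (USERS is a hypothetical tablespace name):

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's data files with operating system tools
ALTER TABLESPACE users END BACKUP;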
103. What are Armstrong's rules? How do we say that they are complete and/or sound?
The well-known inference rules for FDs are:
- Reflexive rule: If Y is a subset of (or equal to) X, then X → Y.
- Augmentation rule: If X → Y, then XZ → YZ.
- Transitive rule: If X → Y and Y → Z, then X → Z.
- Decomposition rule: If X → YZ, then X → Y.
- Union or additive rule: If X → Y and X → Z, then X → YZ.
- Pseudo-transitive rule: If X → Y and WY → Z, then WX → Z.
Of these, the first three are known as Armstrong's rules. They are sound because every FD they derive from a given set of FDs actually holds; they are complete because all the remaining inference rules, and indeed every FD implied by the set, can be derived using just these three.
104. How can you find the minimal key of a relational schema?
A minimal key is one which can identify each tuple of the given relation schema uniquely. To find a minimal key it is required to compute the closure, that is, the set of all attributes that are dependent on any given set of attributes under the given set of functional dependencies.
Algorithm I: Determining X+, the closure of X, given a set of FDs F
1. Set X+ = X
2. Set Old X+ = X+
3. For each FD Y → Z in F, if Y is a subset of X+ then add Z to X+
4. Repeat steps 2 and 3 until Old X+ = X+
Algorithm II: Determining a minimal key K for relation schema R, given a set of FDs F
1. Set K to R, that is, make K the set of all attributes in R
2. For each attribute A in K
a. Compute (K - A)+ with respect to F
b. If (K - A)+ = R then set K = K - {A}
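As a worked example on a hypothetical schema, take R(A, B, C) with F = {A → B, B → C}. Algorithm I gives A+ = {A, B, C} = R. Algorithm II starts with K = {A, B, C}: A cannot be removed, since (BC)+ = {B, C} ≠ R; removing B leaves K = {A, C}, since (AC)+ = {A, B, C} = R; removing C then leaves K = {A}, since (A)+ = {A, B, C} = R. So {A} is a minimal key.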
105. What do you understand by dependency preservation?
Given a relation R and a set of FDs F, dependency preservation states that the closure of the union of the projections of F onto each decomposed relation Ri equals the closure of F, i.e.,
(πR1(F) ∪ … ∪ πRn(F))+ = F+
If the decomposition is not dependency preserving, then some dependency is lost in the decomposition.
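For instance, with a hypothetical R(A, B, C) and F = {A → B, B → C}, the decomposition into R1(A, B) and R2(A, C) is lossless but not dependency preserving: B → C cannot be checked within any single Ri, so it is lost.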
106. What is meant by proactive, retroactive and simultaneous update?
Proactive Update:
Updates that are applied to the database before they become effective in the real world.
Retroactive Update:
Updates that are applied to the database after they become effective in the real world.
Simultaneous Update:
Updates that are applied to the database at the same time as they become effective in the real world.
107. What are the different types of JOIN operations?
Equi Join: This is the most common type of join, which involves only equality comparisons. The disadvantage in this type of join is that the joining column appears redundantly in the result (a natural join removes this duplicate column).

What is the difference between stored procedures and functions (UDF)?

A procedure or function is an object stored in the database and run as a unit to solve a specific problem or perform a set of related tasks. Procedures and functions permit the caller to provide parameters that can be input only, output only, or input and output values. Procedures and functions let you combine the ease and flexibility of SQL with the procedural functionality of a structured programming language.
1. Return type: A function must return at least one value. It is not mandatory for a stored procedure to return any value, but it can return one or more values through output parameters.
2. Compilation: Functions are parsed and compiled at run time, whereas stored procedures are stored in the database in compiled form.
3. State of the database: Functions cannot affect the state of the database; they cannot invoke DDL (Data Definition Language) statements, but they can use DML (Data Manipulation Language). Stored procedures can affect the state of the database using DDL and can issue COMMIT.
4. Invocation: Functions can be invoked from a SQL statement, e.g. SELECT, whereas stored procedures cannot be invoked from a SELECT command.
5. Usage: Functions are generally used for computing values, whereas stored procedures are used to apply business logic.

Interview Questions on ADO.net

1. What is ADO.Net?

ActiveX Data Objects (ADO).NET is the primary relational data access model for Microsoft .NET-based applications. ADO.NET provides consistent data access from database management systems (DBMS) such as SQL Server, Oracle, etc. ADO.NET is expressly designed to meet the requirements of the web-based application model: disconnected data architecture, integration with XML, common data representation, the ability to combine data from multiple data sources, and optimized interaction with the database.

2. Explain the ADO .Net Architecture?

ADO.NET Architecture includes three data providers for implementing connectivity with databases: the SQL Server .NET Data Provider, the OLEDB .NET Data Provider, and the ODBC .NET Data Provider. You can access data through a data provider in two ways: using either a DataReader or a DataAdapter. The design goals of ADO.NET were to:

Leverage current ADO knowledge
Support the N-Tier programming model
Provide support for XML

In distributed applications, the concept of working with disconnected data has become very common. A disconnected model means that once you have retrieved the data that you need, the connection to the data source is dropped—you work with the data locally. The reason why this model has become so popular is that it frees up precious database server resources, which leads to highly scalable applications. The ADO.NET solution for disconnected data is the DataSet object.

Data Access in ADO.NET relies on two components:

DataSet
Data Provider.
DataSet


The ADO.NET DataSet is explicitly designed for data access independent of any data source. As a result, it can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet contains a collection of one or more DataTable objects made up of rows and columns of data, as well as primary key, foreign key, constraint, and relation information about the data in the DataTable objects.
The DataSet is a disconnected, in-memory representation of data. It can be considered a local copy of the relevant portions of the database. The DataSet is persisted in memory, and the data in it can be manipulated and updated independent of the database. When the use of the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database, or a Microsoft Access database.



Data Provider

The Data Provider is responsible for providing and maintaining the connection to the database. A DataProvider is a set of related components that work together to provide data in an efficient and performance driven manner. The .NET Framework currently comes with two DataProviders: the SQL Data Provider which is designed only to work with Microsoft's SQL Server 7.0 or later and the OleDb DataProvider which allows us to connect to other types of databases like Access and Oracle. Each DataProvider consists of the following component classes:

The Connection object which provides a connection to the database
The Command object which is used to execute a command
The DataReader object which provides a forward-only, read only, connected recordset
The DataAdapter object which populates a disconnected DataSet with data and performs update

Data access with ADO.NET can be summarized as follows:
A connection object establishes the connection for the application with the database. The command object provides direct execution of the command to the database. If the command returns more than a single value, the command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill the Dataset object. The database can be updated using the command object or the DataAdapter.


Component classes that make up the Data Providers

The Connection Object


The Connection object creates the connection to the database. Microsoft Visual Studio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.
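A minimal sketch of opening a connection (the connection string and database name are assumptions):

SqlConnection conn = new SqlConnection(
    "Server=localhost;Database=Northwind;Integrated Security=SSPI;");
conn.Open();
// use the connection here, then release it
conn.Close();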







The Command Object


The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly. Command objects provide three methods that are used to execute commands on the database:
ExecuteNonQuery: Executes commands that have no return values such as INSERT, UPDATE or DELETE
ExecuteScalar: Returns a single value from a database query
ExecuteReader: Returns a result set by way of a DataReader object
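A short sketch of the first two methods, assuming an open SqlConnection named conn and a hypothetical Customers table:

SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
int total = (int)cmd.ExecuteScalar();        // single value back

cmd.CommandText = "UPDATE Customers SET City = 'Pune' WHERE CustomerID = 'ALFKI'";
int rowsAffected = cmd.ExecuteNonQuery();    // no resultset, just a row count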



The DataReader Object


The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when you do not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead in terms of system performance but requires the exclusive use of an open Connection object for the lifetime of the DataReader.



The DataAdapter Object


The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is essentially the middleman, facilitating all communication between the database and a DataSet. The DataAdapter fills a DataTable or DataSet with data from the database using its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling its Update method. The DataAdapter provides four properties that represent database commands:



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.
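A minimal Fill/Update round trip, assuming an open SqlConnection named conn and a hypothetical Customers table; a SqlCommandBuilder is used here to generate the three action commands from the SELECT:

SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn);
SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
DataSet ds = new DataSet();
adapter.Fill(ds, "Customers");                    // disconnected snapshot
ds.Tables["Customers"].Rows[0]["City"] = "Pune";  // local edit
adapter.Update(ds, "Customers");                  // changes written back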


3. What are the advantages and drawbacks of using ADO.NET?

Pros

ADO.NET is rich with plenty of features that are bound to impress even the most skeptical of programmers. If this weren’t the case, Microsoft wouldn’t even be able to get anyone to use the Beta. What we’ve done here is come up with a short list of some of the more outstanding benefits to using the ADO.NET architecture and the System.Data namespace.

* Performance – there is no doubt that ADO.NET is extremely fast. The actual figures vary depending on who performed the test and which benchmark was being used, but ADO.NET performs much, much faster at the same tasks than its predecessor, ADO. Some of the reasons why ADO.NET is faster than ADO are discussed in the ADO versus ADO.NET section later in this chapter.

* Optimized SQL Provider – in addition to performing well under general circumstances, ADO.NET includes a SQL Server Data Provider that is highly optimized for interaction with SQL Server. It uses SQL Server’s own TDS (Tabular Data Stream) format for exchanging information. Without question, your SQL Server 7 and above data access operations will run blazingly fast utilizing this optimized Data Provider.

* XML Support (and Reliance) – everything you do in ADO.NET at some point will boil down to the use of XML. In fact, many of the classes in ADO.NET, such as the DataSet, are so intertwined with XML that they simply cannot exist or function without utilizing the technology. You’ll see later when we compare and contrast the “old” and the “new” why the reliance on XML for internal storage provides many, many advantages, both to the framework and to the programmer utilizing the class library.

* Disconnected Operation Model – the core ADO.NET class, the DataSet, operates in an entirely disconnected fashion. This may be new to some programmers, but it is a remarkably efficient and scalable architecture. Because the disconnected model allows for the DataSet class to be unaware of the origin of its data, an unlimited number of supported data sources can be plugged into code without any hassle in the future.

* Rich Object Model – the entire ADO.NET architecture is built on a hierarchy of class inheritance and interface implementation. Once you start looking for things you need within this namespace, you’ll find that the logical inheritance of features and base class support makes the entire system extremely easy to use, and very customizable to suit your own needs. It is just another example of how everything in the .NET framework is pushing toward a trend of strong application design and strong OOP implementations.


Cons

Hard as it may be to believe, there are a couple of drawbacks or disadvantages to using the ADO.NET architecture. I’m sure others can find many more faults than we list here, but we decided to stick with a short list of some of the more obvious and important shortcomings of the technology.

* Managed-Only Access – for a few obvious reasons, and some far more technical, you cannot utilize the ADO.NET architecture from anything but managed code. This means that there is no COM interoperability allowed for ADO.NET. Therefore, in order to take advantage of the advanced SQL Server Data Provider and any other feature like DataSets, XML internal data storage, etc, your code must be running under the CLR.

* Only Three Managed Data Providers (so far) – unfortunately, if you need to access any data that requires a driver that cannot be used through either an OLEDB provider or the SQL Server Data Provider, then you may be out of luck. However, the good news is that the OLEDB provider for ODBC is available for download from Microsoft. At that point the down-side becomes one of performance, in which you are invoking multiple layers of abstraction as well as crossing the COM InterOp gap, incurring some initial overhead as well.

* Learning Curve – despite the misleading name, ADO.NET is not simply a new version of ADO, nor should it even be considered a direct successor. ADO.NET should be thought of more as the data access class library for use with the .NET framework. The difficulty in learning to use ADO.NET to its fullest is that a lot of it does seem familiar. It is this that causes some common pitfalls. Programmers need to learn that even though some syntax may appear the same, there is actually a considerable amount of difference in the internal workings of many classes. For example (this will be discussed in far more detail later), an ADO.NET DataSet is nothing at all like a disconnected ADO RecordSet. Some may consider a learning curve a drawback, but I consider learning curves more like scheduling issues. There’s a learning curve in learning anything new; it’s just up to you to schedule that curve into your time so that you can learn the new technology at a pace that fits your schedule.


4. Explain what a DiffGram is and its usage?

A DiffGram is an XML format that is used to identify current and original versions of data elements. The DataSet uses the DiffGram format to load and persist its contents, and to serialize its contents for transport across a network connection. When a DataSet is written as a DiffGram, it populates the DiffGram with all the necessary information to accurately recreate the contents, though not the schema, of the DataSet, including column values from both the Original and Current row versions, row error information, and row order.
When sending and retrieving a DataSet from an XML Web service, the DiffGram format is implicitly used. Additionally, when loading the contents of a DataSet from XML using the ReadXml method, or when writing the contents of a DataSet in XML using the WriteXml method, you can select that the contents be read or written as a DiffGram.
The DiffGram format is divided into three sections: the current data, the original (or "before") data, and an errors section, as shown in the following example.

<?xml version="1.0"?>
<diffgr:diffgram
  xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
  xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <DataInstance>
  </DataInstance>

  <diffgr:before>
  </diffgr:before>

  <diffgr:errors>
  </diffgr:errors>

</diffgr:diffgram>
The DiffGram format consists of the following blocks of data:

<DataInstance>
The name of this element, DataInstance, is used for explanation purposes in this documentation. A DataInstance element represents a DataSet or a row of a DataTable. Instead of DataInstance, the element would contain the name of the DataSet or DataTable. This block of the DiffGram format contains the current data, whether it has been modified or not. An element, or row, that has been modified is identified with the diffgr:hasChanges annotation.

<diffgr:before>
This block of the DiffGram format contains the original version of a row. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.

<diffgr:errors>
This block of the DiffGram format contains error information for a particular row in the DataInstance block. Elements in this block are matched to elements in the DataInstance block using the diffgr:id annotation.



Attribute
Description

diffgr:hasChanges
The row has been modified (see the related row in the diffgr:before block) or inserted.

diffgr:hasErrors
The row has an error (see the related row in the diffgr:errors block).

diffgr:id
Identifies the ID used to couple rows across sections: TableName+RowIdentifier.

diffgr:parentId
Identifies the ID used to identify the parent of the current row.

diffgr:error
Contains the error text for the row in the DataInstance block.

msdata:rowOrder
Tracks the ordinal position of the row in the DataSet.

msdata:hidden
Identifies columns marked as hidden (msdata:hiddenColumn).



5. Can you edit data in the Repeater control?

NO.

6. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

You have to use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.

7. Which are the different IsolationLevels?

Isolation Level
Description

ReadCommitted
The default for SQL Server. This level ensures that data written by one transaction will only be accessible in a second transaction after the first transaction commits.

ReadUncommitted
This permits your transaction to read data within the database, even data that has not yet been committed by another transaction. For example, if two users were accessing the same database, and the first inserted some data without concluding their transaction (by means of a Commit or Rollback), then the second user with their isolation level set to ReadUncommitted could read the data.

RepeatableRead
This level, which extends the ReadCommitted level, ensures that if the same statement is issued within the transaction, regardless of other potential updates made to the database, the same data will always be returned. This level does require extra locks to be held on the data, which could adversely affect performance. This level guarantees that, for each row in the initial query, no changes can be made to that data. It does, however, permit "phantom" rows to show up: these are completely new rows that another transaction might have inserted while your transaction was running.

Serializable
This is the most "exclusive" transaction level, which in effect serializes access to data within the database. With this isolation level, phantom rows can never show up, so a SQL statement issued within a serializable transaction will always retrieve the same data. The negative performance impact of a Serializable transaction should not be underestimated: if you don't absolutely need to use this level of isolation, stay away from it.



8. How can XML files be read and written using a DataSet?

The DataSet exposes methods such as ReadXml and WriteXml to read and write XML.

9. What are the different rowversions available?

DataRow Version Value
Description

Current
The value existing at present within the column. If no edit has occurred, this will be the same as the original value. If an edit (or edits) have occurred, the value will be the last valid value entered.

Default
The default value (in other words, any default set up for the column).

Original
The value of the column when originally selected from the database. If the DataRow's AcceptChanges method is called, this value will update to the Current value.

Proposed
When changes are in progress for a row, it is possible to retrieve this modified value. If you call BeginEdit() on the row and make changes, each column will have a proposed value until either EndEdit() or CancelEdit() is called.



10. Explain the ACID properties.

The ACID model is one of the oldest and most important concepts of database theory. It sets forward four goals that every database management system must strive to achieve: atomicity, consistency, isolation and durability. No database that fails to meet any of these four goals can be considered reliable.

Let’s take a moment to examine each one of these characteristics in detail:



Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.


Consistency states that only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules. On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules.


Isolation requires that multiple transactions occurring at the same time not impact each other’s execution. For example, if Joe issues a transaction against a database at the same time that Mary issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform Joe’s entire transaction before executing Mary’s or vice-versa. This prevents Joe’s transaction from reading intermediate data produced as a side effect of part of Mary’s transaction that will not eventually be committed to the database. Note that the isolation property does not ensure which transaction will execute first, merely that they will not interfere with each other.


Durability ensures that any transaction committed to the database will not be lost. Durability is ensured through the use of database backups and transaction logs that facilitate the restoration of committed transactions in spite of any subsequent software or hardware failures.


11. Differences Between ADO and ADO.NET

ADO.NET is an evolution of ADO. The following table lists several data access features and how each feature differs between ADO and ADO.NET.



Feature
ADO
ADO.NET

Memory-resident data representation
Uses the Recordset object, which holds single rows of data, much like a database table
Uses the DataSet object, which can contain one or more tables represented by DataTable objects

Relationships between multiple tables
Requires the JOIN query to assemble data from multiple database tables in a single result table. Also offers hierarchical recordsets, but they are hard to use
Supports the DataRelation object to associate rows in one DataTable object with rows in another DataTable object

Data navigation
Traverses rows in a Recordset sequentially, by using the .MoveNext method
The DataSet uses a navigation paradigm for nonsequential access to rows in a table. Accessing the data is more like accessing data in a collection or array. This is possible because of the Rows collection of the DataTable; it allows you to access rows by index. Follows relationships to navigate from rows in one table to corresponding rows in another table

Disconnected access
Provided by the Recordset but it has to be explicitly coded for. The default for a Recordset object is to be connected via the ActiveConnection property. You communicate to a database with calls to an OLE DB provider
Communicates to a database with standardized calls to the DataAdapter object, which communicates to an OLE DB data provider, or directly to a SQL Server data provider

Programmability
All Recordset field data types are COM Variant data types, and usually correspond to field names in a database table
Uses the strongly typed programming characteristic of XML. Data is self-describing because names for code items correspond to the business problem solved by the code. Data in DataSet and DataReader objects can be strongly typed, thus making code easier to read and to write

Sharing disconnected data between tiers or components
Uses COM marshaling to transmit a disconnected record set. This supports only those data types defined by the COM standard. Requires type conversions, which demand system resources
Transmits a DataSet as XML. The XML format places no restrictions on data types and requires no type conversions

Transmitting data through firewalls
Problematic, because firewalls are typically configured to prevent system-level requests such as COM marshaling
Supported, because ADO.NET DataSet objects use XML, which can pass through firewalls

Scalability
Because ADO defaults to connected Recordset objects, database locks and active database connections are held for long durations, contending for limited database resources
Disconnected access to database data without retaining database locks or active database connections for lengthy periods limits contention for limited database resources



12. What are the different types of commands available with the DataAdapter?

The SqlDataAdapter has four command objects



SelectCommand
InsertCommand
DeleteCommand
UpdateCommand


13. What is a Dataset?

A major component of ADO.NET is the DataSet object, which you can think of as being similar to an in-memory relational database. DataSet objects contain DataTable objects, relationships, and constraints, allowing them to replicate an entire data source, or selected parts of it, in a disconnected fashion.
A DataSet object is always disconnected from the source whose data it contains, and as a consequence it doesn't care where the data comes from—it can be used to manipulate data from a traditional database or an XML document, or anything in between. In order to connect a DataSet to a data source, you need to use a data adapter as an intermediary between the DataSet and the .NET data provider.

Datasets are the result of bringing together ADO and XML. A dataset contains one or more tables of data, known as DataTables; these tables can be treated separately or can have relationships defined between them. Indeed, these relationships give you ADO data SHAPING without needing to master the SHAPE language, which many people are not comfortable with.
The dataset is a disconnected, in-memory cache of data. The dataset object model looks like this:
Dataset
DataTableCollection
DataTable
DataView
DataRowCollection
DataRow
DataColumnCollection
DataColumn
ChildRelations
ParentRelations
Constraints
PrimaryKey
DataRelationCollection
Let’s take a look at each of these:
DataTableCollection: Since a DataSet is like an in-memory database, it has this collection, which holds the data from multiple tables in a single DataSet object.
DataTable: In the DataTableCollection, we have DataTable objects, which represent the individual tables of the dataset.
DataView: Just as we have views in a database, we can have DataViews, which can be used to sort and filter data.
DataRowCollection: To represent the rows of each table, we have the DataRowCollection.
DataRow: Each row of the DataRowCollection is a DataRow.
DataColumnCollection: To represent the columns of each table, we have the DataColumnCollection.
DataColumn: Each column of the DataColumnCollection is a DataColumn.
PrimaryKey: The DataSet lets you define a primary key for a table, so primary-key validation takes place without going to the database.
Constraints: We can define various constraints on the tables and set the DataSet's EnforceConstraints property so the constraints are applied whenever we enter data in a DataTable.
DataRelationCollection: Since we can have more than one table in the dataset, we can also define relationships between these tables using this collection and maintain a parent-child relationship.


14. How will you set a DataRelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps to enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class (exposed through the DataRelation object's ParentKeyConstraint property). As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.
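A sketch of both constraint styles, assuming a DataSet ds with hypothetical Customers and Orders tables that share a CustID column:

ds.Tables["Customers"].Columns["CustID"].Unique = true;    // unique constraint on the parent
DataRelation rel = ds.Relations.Add("CustOrders",
    ds.Tables["Customers"].Columns["CustID"],              // parent column
    ds.Tables["Orders"].Columns["CustID"]);                // child column; adds a foreign-key constraint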

15. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

Use the Fill method of the DataAdapter control and pass the dataset object as an argument to load the generated data.

16. How do you handle data concurrency in .NET ?

In general, there are three common ways to manage concurrency in a database:



Pessimistic concurrency control: A row is unavailable to users from the time the record is fetched until it is updated in the database.
Optimistic concurrency control: A row is unavailable to other users only while the data is actually being updated. The update examines the row in the database and determines whether any changes have been made. Attempting to update a record that has already been changed results in a concurrency violation.
"Last in wins": A row is unavailable to other users only while the data is actually being updated. However, no effort is made to compare updates against the original record; the record is simply written out, potentially overwriting any changes made by other users since you last refreshed the records.
Pessimistic Concurrency

Pessimistic concurrency is typically used for two reasons. First, in some situations there is high contention for the same records. The cost of placing locks on the data is less than the cost of rolling back changes when concurrency conflicts occur.
Pessimistic concurrency is also useful for situations where it is detrimental for the record to change during the course of a transaction. A good example is an inventory application. Consider a company representative checking inventory for a potential customer. You typically want to lock the record until an order is generated, which would generally flag the item with a status of ordered and remove it from available inventory. If no order is generated, the lock would be released so that other users checking inventory get an accurate count of available inventory.
However, pessimistic concurrency control is not possible in a disconnected architecture. Connections are open only long enough to read the data or to update it, so locks cannot be sustained for long periods. Moreover, an application that holds onto locks for long periods is not scalable.


Optimistic Concurrency


In optimistic concurrency, locks are set and held only while the database is being accessed. The locks prevent other users from attempting to update records at the same instant. The data is always available except for the exact moment that an update is taking place. For more information, see Using Optimistic Concurrency.
When an update is attempted, the original version of a changed row is compared against the existing row in the database. If the two are different, the update fails with a concurrency error. It is up to you at that point to reconcile the two rows, using business logic that you create.


Last in Wins


With "last in wins," no check of the original data is made and the update is simply written to the database. It is understood that the following scenario can occur:



User A fetches a record from the database.
User B fetches the same record from the database, modifies it, and writes the updated record back to the database.
User A modifies the 'old' record and writes it back to the database.
In the above scenario, the changes User B made were never seen by User A. Be sure that this situation is acceptable if you plan to use the "last in wins" approach of concurrency control.


Concurrency Control in ADO.NET and Visual Studio


ADO.NET and Visual Studio use optimistic concurrency, because the data architecture is based on disconnected data. Therefore, you need to add business logic to resolve issues with optimistic concurrency.
If you choose to use optimistic concurrency, there are two general ways to determine if changes have occurred: the version approach (true version numbers or date-time stamps) and the saving-all-values approach.


The Version Number Approach


In the version number approach, the record to be updated must have a column that contains a date-time stamp or version number. The date-time stamp or a version number is saved on the client when the record is read. This value is then made part of the update.
One way to handle concurrency is to update only if the value in the WHERE clause matches the value on the record. The SQL representation of this approach is:



UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE DateTimeStamp = @origDateTimeStamp

Alternatively, the comparison can be made using the version number:


UPDATE Table1 SET Column1 = @newvalue1, Column2 = @newvalue2 WHERE RowVersion = @origRowVersionValue


If the date-time stamps or version numbers match, the record in the data store has not changed and can be safely updated with the new values from the dataset. An error is returned if they don't match. You can write code to implement this form of concurrency checking in Visual Studio. You will also have to write code to respond to any update conflicts. To keep the date-time stamp or version number accurate, you need to set up a trigger on the table to update it when a change to a row occurs.


The Saving-All-Values Approach


An alternative to using a date-time stamp or version number is to get copies of all the fields when the record is read. The DataSet object in ADO.NET maintains two versions of each modified record: an original version (that was originally read from the data source) and a modified version, representing the user updates. When attempting to write the record back to the data source, the original values in the data row are compared against the record in the data source. If they match, it means that the database record has not changed since it was read. In that case, the changed values from the dataset are successfully written to the database.
Each data adapter command has a parameters collection for each of its four commands (DELETE, INSERT, SELECT, and UPDATE). Each command has parameters for both the original values, as well as the current (or modified) values.
The following example shows the command text for a dataset command that updates a typical Customers table. The command is specified for dynamic SQL and optimistic concurrency.



UPDATE Customers
SET CustomerID = @currCustomerID, CompanyName = @currCompanyName,
    ContactName = @currContactName, ContactTitle = @currContactTitle,
    Address = @currAddress, City = @currCity, PostalCode = @currPostalCode,
    Phone = @currPhone, Fax = @currFax
WHERE (CustomerID = @origCustomerID)
  AND (Address = @origAddress OR @origAddress IS NULL AND Address IS NULL)
  AND (City = @origCity OR @origCity IS NULL AND City IS NULL)
  AND (CompanyName = @origCompanyName OR @origCompanyName IS NULL AND CompanyName IS NULL)
  AND (ContactName = @origContactName OR @origContactName IS NULL AND ContactName IS NULL)
  AND (ContactTitle = @origContactTitle OR @origContactTitle IS NULL AND ContactTitle IS NULL)
  AND (Fax = @origFax OR @origFax IS NULL AND Fax IS NULL)
  AND (Phone = @origPhone OR @origPhone IS NULL AND Phone IS NULL)
  AND (PostalCode = @origPostalCode OR @origPostalCode IS NULL AND PostalCode IS NULL);
SELECT CustomerID, CompanyName, ContactName, ContactTitle, Address, City, PostalCode, Phone, Fax
FROM Customers WHERE (CustomerID = @currCustomerID)

Note that the nine SET statement parameters represent the current values that will be written to the database, whereas the nine WHERE statement parameters represent the original values that are used to locate the original record.
The first nine parameters in the SET statement correspond to the first nine parameters in the parameters collection. These parameters would have their SourceVersion property set to Current.
The next nine parameters in the WHERE statement are used for optimistic concurrency. These placeholders would correspond to the next nine parameters in the parameters collection, and each of these parameters would have their SourceVersion property set to Original.
The SELECT statement is used to refresh the dataset after the update has occurred. It is generated when you set the Refresh the DataSet option in the Advanced SQL Generations Options dialog box.


17. What are relation objects in dataset and how & where to use them?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table. Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.

The following code example creates a DataRelation using two DataTable objects in a DataSet. Each DataTable contains a column named CustID, which serves as a link between the two DataTable objects. The example adds a single DataRelation to the Relations collection of the DataSet. The first argument in the example specifies the name of the DataRelation being created. The second argument sets the parent DataColumn and the third argument sets the child DataColumn.

custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


18. Difference between OLEDB Provider and SqlClient?

The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast. It's faster than the Oracle provider, and faster than accessing the database via the OleDb layer. It's faster because it accesses the native library (which automatically gives you better performance), and it was written with lots of help from the SQL Server team.

19. What are the different namespaces used in a project to connect to a database? What data providers are available in .NET to connect to a database?

Following are different Namespaces:



System.Data.OleDb - classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient - classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data source in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


20. What is Data Reader?

You can use the ADO.NET DataReader to retrieve a read-only, forward-only stream of data from a database. Using the DataReader can increase application performance and reduce system overhead because only one row at a time is ever in memory.
After creating an instance of the Command object, you create a DataReader by calling Command.ExecuteReader to retrieve rows from a data source, as shown in the following example.



SqlDataReader myReader = myCommand.ExecuteReader();



You use the Read method of the DataReader object to obtain a row from the results of the query.


while (myReader.Read())
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
myReader.Close();


21. What is Data Set?

The DataSet is a memory-resident representation of data that provides a consistent relational programming model regardless of the data source. It can be used with multiple and differing data sources, used with XML data, or used to manage data local to the application. The DataSet represents a complete set of data including related tables, constraints, and relationships among the tables. The methods and objects in a DataSet are consistent with those in the relational database model. The DataSet can also persist and reload its contents as XML and its schema as XML Schema definition language (XSD) schema.

22. What is Data Adapter?

The DataAdapter serves as a bridge between a DataSet and a data source for retrieving and saving data. The DataAdapter provides this bridge by mapping Fill, which changes the data in the DataSet to match the data in the data source, and Update, which changes the data in the data source to match the data in the DataSet. If you are connecting to a Microsoft SQL Server database, you can increase overall performance by using the SqlDataAdapter along with its associated SqlCommand and SqlConnection. For other OLE DB-supported databases, use the DataAdapter with its associated OleDbCommand and OleDbConnection objects.

23. Which method do you invoke on the DataAdapter control to load your generated dataset with data?

Fill() method is used to load the generated data set with Data.

24. Explain different methods and Properties of DataReader which you have used in your project?

Following are the methods and properties :

Read
GetString
GetInt32
while (myReader.Read())
    Console.WriteLine("\t{0}\t{1}", myReader.GetInt32(0), myReader.GetString(1));
myReader.Close();


25. What happens when we issue the DataSet.ReadXml command?

It reads the XML schema and data into the DataSet.

26. What checks whether a DataReader is closed or opened?

The IsClosed property (it is a property, not a method) returns true when a DataReader is closed and false when it is open.

27. What are the methods to get the XML and the schema from a DataSet?

GetXml() and GetXmlSchema().

28. Differences between DataSet.Clone and DataSet.Copy?

The Difference is as follows:

Clone - Copies the structure of the DataSet, including all DataTable schemas, relations, and constraints. Does not copy any data.
Copy - Copies both the structure and data for this DataSet.
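A short sketch, assuming an existing DataSet ds:

DataSet schemaOnly = ds.Clone();  // same tables, relations and constraints, zero rows
DataSet fullCopy = ds.Copy();     // same structure plus all the data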


29. What are the differences between the Recordset and the DataSet objects?

Tables represented by the object: The ADO Recordset object represents only one table at a given time, while the DataSet object in ADO.NET can represent any number of tables, keys, constraints and relations which makes it very much like an RDBMS.
Navigation: Navigating the Recordset object depends on the cursor used to create the object, with limited functionality for moving back and forth, while the DataSet represents data in "collections" that can be accessed through indexers in a random-access fashion.
Connection Model: The Recordset is designed to work as a “connected object” with a server-side cursor in mind while the DataSet is designed to work as a disconnected object containing hierarchy of data in XML format.
Database Updates: Updating a database through the use of a Recordset object is direct since it is tied to the database. On the other hand, the DataSet as an independent data store must use a database-specific DataAdapter object to post updates to the database.


30. Which ADO.Net objects fall under connected database model and disconnected database model?

The DataReader object falls under connected model and DataSet, DataTable, DataAdapter objects fall under disconnected database model.


31. How to use ImportRow method?

The ImportRow method of DataTable copies a row into the table while preserving all of the property settings of the row, as well as its original and current values and its DataRowState (unlike Rows.Add, which sets the state of the new row to Added). The destination table must already have the same schema.


// dt is assumed to be a DataTable that has already been filled.
DataTable copyto = dt.Clone();   // copy the schema only, no data

foreach (DataRow dr in dt.Rows)
{
copyto.ImportRow(dr);
}


32. What are the pros and cons of using DataReader object?

The DataReader object is a forward-only resultset and is faster to traverse than its counterpart, the DataTable. However, it holds an active connection to the database until all records are retrieved from it or it is closed explicitly. This can be a problem when the resultset holds a large number of records and the application has many concurrent users.

33. What are different execute methods of ADO.NET command object?

ExecuteScalar method returns a single value from the first row and first column of the resultset obtained from the execution of the SQL query.
ExecuteNonQuery method executes a DML SQL query such as an insert, delete or update and returns the number of rows affected by the action.
ExecuteReader method returns a DataReader object, which is a forward-only resultset.
ExecuteXmlReader method is available for SQL Server 2000 or later. Upon execution it builds an XmlReader object from a standard SQL query.
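
A short C# sketch of the first three, assuming an open SqlConnection conn and a hypothetical Customers table:

SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
// ExecuteScalar: first column of the first row of the resultset.
int customerCount = (int)cmd.ExecuteScalar();

cmd.CommandText = "UPDATE Customers SET City = 'Pune' WHERE CustomerID = 'ALFKI'";
// ExecuteNonQuery: number of rows affected by the DML statement.
int rowsAffected = cmd.ExecuteNonQuery();

cmd.CommandText = "SELECT CustomerID, CompanyName FROM Customers";
// ExecuteReader: forward-only resultset.
SqlDataReader rdr = cmd.ExecuteReader();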


34. What is the difference between data reader and data adapter?

The DataReader is a forward-only, read-only cursor. If you access data through a DataReader you can display it on a web form or control, but you cannot implement paging over the records (because the cursor is forward-only).

The DataReader is the best fit for simply showing data, where there is no need to work on the data.

The DataAdapter not only connects to the database (through a Command object) but also provides four command types (InsertCommand, UpdateCommand, DeleteCommand, SelectCommand). It supports the disconnected architecture of .NET, so we can populate the records into a DataSet. The DataAdapter is the best fit when you need to work on the data.


35. Difference between SqlCommand and SqlCommandBuilder?

SqlCommand is used to retrieve or update data in the database.

You can use SELECT, INSERT, UPDATE and DELETE commands with a SqlCommand; SqlCommand executes these commands against the database.

SqlCommandBuilder is used to build the action SQL commands (INSERT, UPDATE, DELETE) automatically from the SelectCommand, as shown in the sketch below.
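
A minimal C# sketch, assuming an open SqlConnection conn and a hypothetical Customers table:

SqlDataAdapter da = new SqlDataAdapter("SELECT CustomerID, City FROM Customers", conn);
// The builder derives the INSERT, UPDATE and DELETE commands from the SELECT.
SqlCommandBuilder cb = new SqlCommandBuilder(da);

DataSet ds = new DataSet();
da.Fill(ds, "Customers");
ds.Tables["Customers"].Rows[0]["City"] = "Pune";
da.Update(ds, "Customers"); // uses the generated UpdateCommand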


36. Can you edit data in the Repeater control?

NO.

37. What are the different rowversions available?

There are four DataRowVersion values.
Current:
The current values for the row. This row version does not exist for rows with a RowState of Deleted.
Default:
The default version for the current DataRowState. For a DataRowState of Added, Modified or Unchanged, the default version is Current. For a DataRowState of Deleted, the version is Original. For a DataRowState of Detached, the version is Proposed.
Original:
The row contains its original values.
Proposed:
The proposed values for the row. This row version exists during an edit operation on a row, or for a row that is not part of a DataRowCollection.
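
For example, after editing a row you can read a specific version back (assuming a filled DataTable dt with a City column):

DataRow row = dt.Rows[0];
row["City"] = "Pune"; // RowState becomes Modified

// The DataRow indexer accepts a DataRowVersion to pick the value you want.
string original = (string)row["City", DataRowVersion.Original];
string current = (string)row["City", DataRowVersion.Current];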


38. Explain DataSet.AcceptChanges and DataAdapter.Update methods?

A DataSet maintains the RowState of each row within a table. When a DataSet is loaded, each row's state is Unchanged. Whenever a particular row within a DataTable is modified, the DataSet changes that row's state to Modified, Added or Deleted, based on the action performed on the row.

AcceptChanges() changes the RowState back to Unchanged.

Update() writes any changes made to the DataSet back to the database. It checks the RowState of each row within a table: if it finds a row in the Added state, that row is inserted; if Modified, it is updated; if Deleted, a DELETE statement is executed.

But if AcceptChanges() is called before Update(), nothing is written to the database, since every RowState has already become Unchanged, as the sketch below illustrates.
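
A minimal sketch of the ordering pitfall, assuming an adapter myAdapter and a DataSet myDataSet that already exist:

// Wrong order: AcceptChanges resets every RowState to Unchanged,
// so the subsequent Update finds nothing to send to the database.
// myDataSet.AcceptChanges();
// myAdapter.Update(myDataSet, "Customers");

// Correct order: Update inspects each RowState and issues the matching
// INSERT, UPDATE or DELETE, accepting the changes as it goes.
myAdapter.Update(myDataSet, "Customers");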


39. How you will set the datarelation between two columns?

ADO.NET provides the DataRelation object to set a relation between two columns. It helps enforce the following constraints: a unique constraint, which guarantees that a column in the table contains no duplicates, and a foreign-key constraint, which can be used to maintain referential integrity. A unique constraint is implemented either by simply setting the Unique property of a data column to true, or by adding an instance of the UniqueConstraint class to the DataRelation object's ParentKeyConstraint. As part of the foreign-key constraint, you can specify referential integrity rules that are applied at three points: when a parent record is updated, when a parent record is deleted, and when a change is accepted or rejected.

40. What connections does Microsoft SQL Server support?

Windows Authentication (via Active Directory) and SQL Server authentication (via Microsoft SQL Server username and passwords).



41. Which one is trusted and which one is untrusted?

Windows Authentication is trusted because the username and password are checked against Active Directory; SQL Server authentication is untrusted, since SQL Server is the only verifier participating in the transaction.

42. What is Connection pooling?

The connection represents an open and unique link to a data source. In a distributed system, this often involves a network connection. Depending on the underlying data source, the programming interface of the various connection objects may differ quite a bit. A connection object is specific to a particular type of data source, such as SQL Server and Oracle. Connection objects can't be used interchangeably across different data sources, but all share a common set of methods and properties grouped in the IDbConnection interface.
In ADO.NET, connection objects are implemented within data providers as sealed classes (that is, they are not further inheritable). This means that the behavior of a connection class can never be modified or overridden, just configured through properties and attributes. In ADO.NET, all connection classes support connection pooling, although each class may implement it differently. Connection pooling is implicit, meaning that you don't need to enable it because the provider manages this automatically.
ADO.NET pools connections with the same connection or configuration (connection string). It can maintain more than one pool (actually, one for each configuration). An interesting note: Connection pooling is utilized (by default) unless otherwise specified. If you close and dispose of all connections, then there will be no pool (since there are no available connections).
While leaving database connections continuously open can be troublesome, it can be advantageous for applications that are in constant communication with a database by negating the need to re-open connections. Some database administrators may frown on the practice, since multiple connections (not all of which may be useful) to the database are kept open. Whether to use connection pooling depends upon available server resources and application requirements (i.e., whether the application really needs it).

Using connection pooling

Connection pooling is enabled by default. You may override the default behavior with the pooling setting in the connection string. The following SQL Server connection string does not utilize connection pooling:
Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Pooling=False;
You can use the same approach with other .NET Data Providers. You may enable it by setting it to True (or eliminating the Pooling variable to use the default behavior). In addition, the default maximum size of the connection pool is 100, but you may override this as well with connection string variables. You may use the following variables to control the minimum and maximum size of the pool as well as transaction support:


• Max Pool Size: The maximum number of connections allowed in the pool. The default value is 100.
• Min Pool Size: The minimum number of connections allowed in the pool. The default value is zero.
• Enlist: Signals whether the pooler automatically enlists the connection in the creation thread's current transaction context. The default value is true.


The following SQL Server connection string uses connection pooling with a minimum size of five and a maximum size of 50:


Data Source=TestServer;Initial Catalog=Northwind;
User ID=Chester;Password=Tester;Max Pool Size=50;
Min Pool Size=5;Pooling=True;


43. What are the two fundamental objects in ADO.NET ?

The DataReader and the DataSet are the two fundamental objects in ADO.NET.

44. What is the use of connection object ?

A connection object is used to open a link to a data source; a Command object then uses it to execute against that source.

An OleDbConnection object is used with an OLE DB provider
A SqlConnection object uses Tabular Data Stream (TDS) with MS SQL Server


45. What are the various objects in Dataset ?

A DataSet has a collection of DataTable objects within its Tables collection. Each DataTable object contains a collection of DataRow objects and a collection of DataColumn objects. There are also collections for the primary keys, constraints, and default values used in the table (the Constraints collection), and for the parent and child relationships between the tables. Finally, each table has a DefaultView property, which returns a DataView based on the table so that the data can be searched, filtered, or otherwise manipulated while being displayed.


46. How can we force the connection object to close after my datareader is closed ?

The Command object's ExecuteReader method takes a CommandBehavior parameter, with which we can specify that the connection should be closed automatically when the DataReader is closed:
pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.CloseConnection)


47. How can we get only schema using dataReader?

pobjDataReader = pobjCommand.ExecuteReader(CommandBehavior.SchemaOnly)

48. Explain how to use stored procedures with ADO.net?

Using Stored Procedures with a Command


Stored procedures offer many advantages in data-driven applications. Using stored procedures, database operations can be encapsulated in a single command, optimized for best performance, and enhanced with additional security. While a stored procedure can be called by simply passing the stored procedure name followed by parameter arguments as an SQL statement, using the Parameters collection of the ADO.NET Command object enables you to more explicitly define stored procedure parameters as well as to access output parameters and return values.
To call a stored procedure, set the CommandType of the Command object to StoredProcedure. Once the CommandType is set to StoredProcedure, you can use the Parameters collection to define parameters, as in the following example.
Note: The OdbcCommand requires that you supply the full ODBC CALL syntax when calling a stored procedure.


SqlClient
[Visual Basic]
Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As SqlCommand = New SqlCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As SqlParameter = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As SqlDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");

SqlCommand salesCMD = new SqlCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

SqlParameter myParm = salesCMD.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

SqlDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


OleDb
[Visual Basic]
Dim nwindConn As OleDbConnection = New OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" & _
"Initial Catalog=northwind")

Dim salesCMD As OleDbCommand = New OleDbCommand("SalesByCategory", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OleDbParameter = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15)
myParm.Value = "Beverages"

nwindConn.Open()

Dim myReader As OleDbDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()


[C#]


OleDbConnection nwindConn = new OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;" +
"Initial Catalog=northwind");

OleDbCommand salesCMD = new OleDbCommand("SalesByCategory", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter myParm = salesCMD.Parameters.Add("@CategoryName", OleDbType.VarChar, 15);
myParm.Value = "Beverages";

nwindConn.Open();

OleDbDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


Odbc
[Visual Basic]


Dim nwindConn As OdbcConnection = New OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" & _
"Database=northwind")
nwindConn.Open()

Dim salesCMD As OdbcCommand = New OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

Dim myParm As OdbcParameter = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15)
myParm.Value = "Beverages"

Dim myReader As OdbcDataReader = salesCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", myReader.GetName(0), myReader.GetName(1))

Do While myReader.Read()
Console.WriteLine("{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1))
Loop

myReader.Close()
nwindConn.Close()
[C#]
OdbcConnection nwindConn = new OdbcConnection("Driver={SQL Server};Server=localhost;Trusted_Connection=yes;" +
"Database=northwind");
nwindConn.Open();

OdbcCommand salesCMD = new OdbcCommand("{ CALL SalesByCategory(?) }", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter myParm = salesCMD.Parameters.Add("@CategoryName", OdbcType.VarChar, 15);
myParm.Value = "Beverages";

OdbcDataReader myReader = salesCMD.ExecuteReader();

Console.WriteLine("\t{0}, {1}", myReader.GetName(0), myReader.GetName(1));

while (myReader.Read())
{
Console.WriteLine("\t{0}, ${1}", myReader.GetString(0), myReader.GetDecimal(1));
}

myReader.Close();
nwindConn.Close();


A Parameter object can be created using the Parameter constructor, or by calling the Add method of the Parameters collection of a Command. Parameters.Add will take as input either constructor arguments or an existing Parameter object. When setting the Value of a Parameter to a null reference, use DBNull.Value.
For parameters other than Input parameters, you must set the ParameterDirection property to specify whether the parameter type is InputOutput, Output, or ReturnValue. The following example shows the difference between creating Input, Output, and ReturnValue parameters.


[Visual Basic]


Dim sampleCMD As SqlCommand = New SqlCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As SqlParameter = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As SqlDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


SqlCommand sampleCMD = new SqlCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

SqlParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", SqlDbType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", SqlDbType.NVarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", SqlDbType.NVarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

SqlDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);
OleDb


[Visual Basic]


Dim sampleCMD As OleDbCommand = New OleDbCommand("SampleProc", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OleDbParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OleDbDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OleDbCommand sampleCMD = new OleDbCommand("SampleProc", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OleDbParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OleDbType.Integer);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OleDbType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OleDbType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OleDbDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Odbc
[Visual Basic]
Dim sampleCMD As OdbcCommand = New OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn)
sampleCMD.CommandType = CommandType.StoredProcedure

Dim sampParm As OdbcParameter = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int)
sampParm.Direction = ParameterDirection.ReturnValue

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12)
sampParm.Value = "Sample Value"

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28)
sampParm.Direction = ParameterDirection.Output

nwindConn.Open()

Dim sampReader As OdbcDataReader = sampleCMD.ExecuteReader()

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1))

Do While sampReader.Read()
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1))
Loop

sampReader.Close()
nwindConn.Close()

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters("@OutputParm").Value)
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters("RETURN_VALUE").Value)


[C#]


OdbcCommand sampleCMD = new OdbcCommand("{ ? = CALL SampleProc(?, ?) }", nwindConn);
sampleCMD.CommandType = CommandType.StoredProcedure;

OdbcParameter sampParm = sampleCMD.Parameters.Add("RETURN_VALUE", OdbcType.Int);
sampParm.Direction = ParameterDirection.ReturnValue;

sampParm = sampleCMD.Parameters.Add("@InputParm", OdbcType.VarChar, 12);
sampParm.Value = "Sample Value";

sampParm = sampleCMD.Parameters.Add("@OutputParm", OdbcType.VarChar, 28);
sampParm.Direction = ParameterDirection.Output;

nwindConn.Open();

OdbcDataReader sampReader = sampleCMD.ExecuteReader();

Console.WriteLine("{0}, {1}", sampReader.GetName(0), sampReader.GetName(1));

while (sampReader.Read())
{
Console.WriteLine("{0}, {1}", sampReader.GetInt32(0), sampReader.GetString(1));
}

sampReader.Close();
nwindConn.Close();

Console.WriteLine(" @OutputParm: {0}", sampleCMD.Parameters["@OutputParm"].Value);
Console.WriteLine("RETURN_VALUE: {0}", sampleCMD.Parameters["RETURN_VALUE"].Value);


Using Parameters with a SqlCommand


When using parameters with a SqlCommand, the names of the parameters added to the Parameters collection must match the names of the parameter markers in your stored procedure. The .NET Framework Data Provider for SQL Server treats parameters in the stored procedure as named parameters and searches for the matching parameter markers.
The .NET Framework Data Provider for SQL Server does not support the question mark (?) placeholder for passing parameters to an SQL statement or a stored procedure. In this case, you must use named parameters, as in the following example.
SELECT * FROM Customers WHERE CustomerID = @CustomerID
Using Parameters with an OleDbCommand or OdbcCommand
When using parameters with an OleDbCommand or OdbcCommand, the order of the parameters added to the Parameters collection must match the order of the parameters defined in your stored procedure. The .NET Framework Data Provider for OLE DB and the .NET Framework Data Provider for ODBC treat parameters in a stored procedure as placeholders and apply parameter values in order. In addition, return value parameters must be the first parameters added to the Parameters collection.
The .NET Framework Data Provider for OLE DB and .NET Framework Data Provider for ODBC do not support named parameters for passing parameters to an SQL statement or a stored procedure. In this case, you must use the question mark (?) placeholder, as in the following example.
SELECT * FROM Customers WHERE CustomerID = ?
As a result, the order in which Parameter objects are added to the Parameters collection must directly correspond to the position of the question mark placeholder for the parameter.
Deriving Parameter Information
Parameters can also be derived from a stored procedure using the CommandBuilder class. Both the SqlCommandBuilder and OleDbCommandBuilder classes provide a static method, DeriveParameters, which will automatically populate the Parameters collection of a Command object with parameter information from a stored procedure. Note that DeriveParameters will overwrite any existing parameter information for the Command.
Deriving parameter information does require an added trip to the data source for the information. If parameter information is known at design-time, you can improve the performance of your application by setting the parameters explicitly.
The following code example shows how to populate the Parameters collection of a Command object using CommandBuilder.DeriveParameters.


[Visual Basic]


Dim nwindConn As SqlConnection = New SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;")
Dim salesCMD As SqlCommand = New SqlCommand("Sales By Year", nwindConn)
salesCMD.CommandType = CommandType.StoredProcedure

nwindConn.Open()
SqlCommandBuilder.DeriveParameters(salesCMD)
nwindConn.Close()


[C#]


SqlConnection nwindConn = new SqlConnection("Data Source=localhost;Initial Catalog=Northwind;Integrated Security=SSPI;");
SqlCommand salesCMD = new SqlCommand("Sales By Year", nwindConn);
salesCMD.CommandType = CommandType.StoredProcedure;

nwindConn.Open();
SqlCommandBuilder.DeriveParameters(salesCMD);
nwindConn.Close();


49. How can we fine tune the command object when we are expecting a single row or a single value ?

The CommandBehavior enumeration provides two values, SingleResult and SingleRow. If you are expecting a single value, pass CommandBehavior.SingleResult and the query is optimized accordingly; if you are expecting a single row, pass CommandBehavior.SingleRow and the query is optimized for a single row, as shown below.
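
For example (cmd is assumed to be a prepared SqlCommand):

// Hint to the provider that at most one row is expected.
SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SingleRow);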

50. How can you Obtaining Data as XML from SQL Server?

[Visual Basic]


Dim custCMD As SqlCommand = New SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn)
Dim myXR As System.Xml.XmlReader = custCMD.ExecuteXmlReader()


[C#]


SqlCommand custCMD = new SqlCommand("SELECT * FROM Customers FOR XML AUTO, ELEMENTS", nwindConn);
System.Xml.XmlReader myXR = custCMD.ExecuteXmlReader();


51. How to add Existing Constraints to a DataSet?

The Fill method of the DataAdapter fills a DataSet only with table columns and rows from a data source; though constraints are commonly set by the data source, the Fill method does not add this schema information to the DataSet by default. To populate a DataSet with existing primary key constraint information from a data source, you can either call the FillSchema method of the DataAdapter, or set the MissingSchemaAction property of the DataAdapter to AddWithKey before calling Fill. This will ensure that primary key constraints in the DataSet reflect those at the data source. Foreign key constraint information is not included and will need to be created explicitly.

Adding schema information to a DataSet before filling it with data ensures that primary key constraints are included with the DataTable objects in the DataSet. As a result, when additional calls to Fill the DataSet are made, the primary key column information is used to match new rows from the data source with current rows in each DataTable, and current data in the tables is overwritten with data from the data source. Without the schema information, the new rows from the data source are appended to the DataSet, resulting in duplicate rows.

Using FillSchema or setting the MissingSchemaAction to AddWithKey requires extra processing at the data source to determine primary key column information. This additional processing can hinder performance. If you know the primary key information at design time, it is recommended that you specify the primary key column or columns explicitly in order to achieve optimal performance.



[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.FillSchema(custDS, SchemaType.Source, "Customers")
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.FillSchema(custDS, SchemaType.Source, "Customers");
custDA.Fill(custDS, "Customers");

[Visual Basic]
Dim custDS As DataSet = New DataSet()

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey
custDA.Fill(custDS, "Customers")


[C#]
DataSet custDS = new DataSet();

custDA.MissingSchemaAction = MissingSchemaAction.AddWithKey;
custDA.Fill(custDS, "Customers");


52. How to add relation between tables?

In a DataSet that contains multiple DataTable objects, you can use DataRelation objects to relate one table to another, to navigate through the tables, and to return child or parent rows from a related table.
Adding a DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent table and a ForeignKeyConstraint to the child table.



[Visual Basic]
custDS.Relations.Add("CustOrders", _
custDS.Tables("Customers").Columns("CustID"), _
custDS.Tables("Orders").Columns("CustID"))

[C#]
custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);


53. How to get the data changes in dataset?

GetChanges: Gets a copy of the DataSet containing all changes made to it since it was last loaded, or since AcceptChanges was called.



[Visual Basic]
Private Sub UpdateDataSet(ByVal myDataSet As DataSet)
' Check for changes with the HasChanges method first.
If Not myDataSet.HasChanges(DataRowState.Modified) Then Exit Sub
' Create temporary DataSet variable.
Dim xDataSet As DataSet
' GetChanges for modified rows only.
xDataSet = myDataSet.GetChanges(DataRowState.Modified)
' Check the DataSet for errors.
If xDataSet.HasErrors Then
' Insert code to resolve errors.
End If
' After fixing errors, update the data source with the DataAdapter
' used to create the DataSet.
myOleDbDataAdapter.Update(xDataSet)
End Sub



[C#]
private void UpdateDataSet(DataSet myDataSet)
{
// Check for changes with the HasChanges method first.
if (!myDataSet.HasChanges(DataRowState.Modified)) return;
// Create a temporary DataSet variable and
// GetChanges for modified rows only.
DataSet xDataSet = myDataSet.GetChanges(DataRowState.Modified);
// Check the DataSet for errors.
if (xDataSet.HasErrors)
{
// Insert code to resolve errors.
}
// After fixing errors, update the data source with the DataAdapter
// used to create the DataSet.
myOleDbDataAdapter.Update(xDataSet);
}


54. What are the various methods provided by the dataset object to generate XML?

ReadXml: Reads an XML document into the DataSet.
GetXml: Returns a string containing the XML representation of the data.
WriteXml: Writes the XML data to disk.
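
For example (the file name is hypothetical, and ds is an existing DataSet):

string xml = ds.GetXml(); // the data as an XML string
string schema = ds.GetXmlSchema(); // the schema as an XSD string
ds.WriteXml("customers.xml"); // persist the data to disk
ds.ReadXml("customers.xml"); // load it back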


55. What is Dataview and what’s the use of Dataview?

Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation. A major function of the DataView is to allow data binding on both Windows Forms and Web Forms.

A DataView has four main methods:
Find
Takes an array of values and returns the index of the matching row.
FindRows
Also takes an array of values, but returns the matching rows as an array of DataRowView objects.
If we want to manipulate the data of a DataTable object, create a DataView on it (the table's DefaultView property returns one), and use the following methods:
AddNew
Adds a new row to the DataView object.
Delete
Deletes the specified row from the DataView object.

Additionally, a DataView can be customized to present a subset of data from the DataTable. This capability allows you to have two controls bound to the same DataTable, but showing different versions of the data. For example, one control may be bound to a DataView showing all of the rows in the table, while a second may be configured to display only the rows that have been deleted from the DataTable. The DataTable also has a DefaultView property which returns the default DataView for the table. For example, if you wish to create a custom view on the table, set the RowFilter on the DataView returned by the DefaultView.

To create a filtered and sorted view of data, set the RowFilter and Sort properties. Then use the Item property to return a single DataRowView.

You can also add to and delete from the set of rows using the AddNew and Delete methods. When you use those methods, the RowStateFilter property can be set to specify that only deleted rows or new rows be displayed by the DataView.
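
A short C# sketch of filtering, sorting and searching, assuming a filled Customers DataTable dt:

DataView view = dt.DefaultView;
view.RowFilter = "Country = 'Germany'"; // show a subset of rows
view.Sort = "CompanyName ASC"; // sort order, also used by Find

// Find searches on the Sort key and returns the row index (-1 if not found).
int i = view.Find("Alfreds Futterkiste");
if (i >= 0)
{
Console.WriteLine(view[i]["CompanyName"]); // the Item property returns a DataRowView
}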



56. What is CommandBuilder?

What the CommandBuilder can do is relieve you of the responsibility of writing your own action queries by automatically constructing the SQL code, ADO.NET Command objects, and their associated Parameters collections given a SelectCommand.



The CommandBuilder expects you to provide a viable, executable, and simple SelectCommand associated with a DataAdapter. It also expects a viable Connection. That's because the CommandBuilder opens the Connection associated with the DataAdapter and makes a round trip to the server each and every time it's asked to construct the action queries. It closes the Connection when it's done.



Dim cn As SqlConnection
Dim da As SqlDataAdapter
Dim cb As SqlCommandBuilder
cn = New SqlConnection("data source=demoserver…")
da = New SqlDataAdapter("SELECT Au_ID, au_lname, City FROM authors", cn)
cb = New SqlCommandBuilder(da)   ' generates the action commands from the SELECT


57. what’s the difference between optimistic locking and pessimistic locking?

In pessimistic locking, when a user wants to update data the record is locked, and until then no one else can update it. Other users can only view the data while a pessimistic lock is held.

In optimistic locking, multiple users can open the same record for updating, which maximizes concurrency. The record is only locked while it is being updated. This is the preferred approach in practice: nowadays browser-based applications are very common, and pessimistic locking is not a practical solution for them.

The basic difference between optimistic and pessimistic locking is the time at which the lock on a row or page occurs. A pessimistic lock is enforced while the row is being edited, whereas an optimistic lock occurs at the time the row is being updated. Obviously the time between an edit and an update can be very short, but pessimistic locking allows the database provider to prevent a user from overwriting changes to a row made by another user while he was updating it. There is no provision for this under optimistic locking, and the last user to perform the update wins.


58. How to implement pessimistic locking?

The basic steps for pessimistic locking are as follows (a minimal sketch follows the list):

Create a transaction with an IsolationLevel of RepeatableRead.
Set the DataAdapter’s SelectCommand property to use the transaction you created.
Make the changes to the data.
Set DataAdapter’s Insert, Update, and Delete command properties to use the transaction you created.
Call the DataAdapter’s Update method.
Commit the transaction.
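
A minimal C# sketch of these steps; the connection string, table and column names are hypothetical:

SqlConnection conn = new SqlConnection("Server=SqlInstance;Database=Test;Integrated Security=SSPI");
conn.Open();

// RepeatableRead holds locks on the rows read until the transaction ends.
SqlTransaction tran = conn.BeginTransaction(IsolationLevel.RepeatableRead);

SqlDataAdapter da = new SqlDataAdapter("SELECT PartID, Quantity FROM Inventory", conn);
da.SelectCommand.Transaction = tran;

DataSet ds = new DataSet();
da.Fill(ds, "Inventory");

// ... make the changes to the data here ...

// Generate the action commands and enlist them in the same transaction.
SqlCommandBuilder cb = new SqlCommandBuilder(da);
da.InsertCommand = cb.GetInsertCommand();
da.UpdateCommand = cb.GetUpdateCommand();
da.DeleteCommand = cb.GetDeleteCommand();
da.InsertCommand.Transaction = tran;
da.UpdateCommand.Transaction = tran;
da.DeleteCommand.Transaction = tran;

da.Update(ds, "Inventory");
tran.Commit();
conn.Close();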


59. How to use transactions in ADO.net?

Transactions are a feature offered by most enterprise-class databases for making sure data integrity is maintained when data is modified. A transaction at its most basic level consists of two required steps—Begin, and then either Commit or Rollback. The Begin call defines the start of the transaction boundary, and the call to either Commit or Rollback defines the end of it. Within the transaction boundary, all of the statements executed are considered to be part of a unit for accomplishing the given task, and must succeed or fail as one. Commit (as the name suggests) commits the data modifications if everything was successful, and Rollback undoes the data modifications if an error occurs. All of the .NET data providers provide similar classes and methods to accomplish these operations.



The ADO.NET data providers offer transaction functionality through the Connection, Command, and Transaction classes. A typical transaction would follow a process similar to this:



Open the transaction using Connection.BeginTransaction().
Enlist statements or stored procedure calls in the transaction by setting the Command.Transaction property of the Command objects associated with them.
Depending on the provider, optionally use Transaction.Save() or Transaction.Begin() to create a savepoint or a nested transaction to enable a partial rollback.
Commit or roll back the transaction using Transaction.Commit() or Transaction.Rollback().


using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;

…public void SPTransaction(int partID, int numberMoved, int siteID)
{
// Create and open the connection.
SqlConnection conn = new SqlConnection();
string connString = "Server=SqlInstance;Database=Test;"
+ "Integrated Security=SSPI";
conn.ConnectionString = connString;
conn.Open();

// Create the commands and related parameters.
// cmdDebit debits inventory from the WarehouseInventory
// table by calling the DebitWarehouseInventory
// stored procedure.
SqlCommand cmdDebit =
new SqlCommand("DebitWarehouseInventory", conn);
cmdDebit.CommandType = CommandType.StoredProcedure;
cmdDebit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
cmdDebit.Parameters["@PartID"].Direction =
ParameterDirection.Input;
cmdDebit.Parameters.Add("@Debit", SqlDbType.Int, 0, "Quantity");
cmdDebit.Parameters["@Debit"].Direction =
ParameterDirection.Input;

// cmdCredit adds inventory to the SiteInventory
// table by calling the CreditSiteInventory
// stored procedure.
SqlCommand cmdCredit =
new SqlCommand("CreditSiteInventory", conn);
cmdCredit.CommandType = CommandType.StoredProcedure;
cmdCredit.Parameters.Add("@PartID", SqlDbType.Int, 0, "PartID");
cmdCredit.Parameters["@PartID"].Direction =
ParameterDirection.Input;
cmdCredit.Parameters.Add
("@Credit", SqlDbType.Int, 0, "Quantity");
cmdCredit.Parameters["@Credit"].Direction =
ParameterDirection.Input;
cmdCredit.Parameters.Add("@SiteID", SqlDbType.Int, 0, "SiteID");
cmdCredit.Parameters["@SiteID"].Direction =
ParameterDirection.Input;

// Begin the transaction and enlist the commands.
SqlTransaction tran = conn.BeginTransaction();
cmdDebit.Transaction = tran;
cmdCredit.Transaction = tran;

try
{
// Execute the commands.
cmdDebit.Parameters["@PartID"].Value = partID;
cmdDebit.Parameters["@Debit"].Value = numberMoved;
cmdDebit.ExecuteNonQuery();

cmdCredit.Parameters["@PartID"].Value = partID;
cmdCredit.Parameters["@Credit"].Value = numberMoved;
cmdCredit.Parameters["@SiteID"].Value = siteID;
cmdCredit.ExecuteNonQuery();

// Commit the transaction.
tran.Commit();
}
catch(SqlException ex)
{
// Roll back the transaction.
tran.Rollback();

// Additional error handling if needed.
}
finally
{
// Close the connection.
conn.Close();
}
}


60. Whats the difference between Dataset.clone and Dataset.copy ?

The Clone method of the DataSet class copies only the schema of a DataSet object. It returns a new DataSet object that has the same schema as the existing DataSet object, including all DataTable schemas, relations, and constraints. It does not copy any data from the existing DataSet object into the new DataSet.

The Copy method of the DataSet class copies both the structure and data of a DataSet object. It returns a new DataSet object having the same structure (including all DataTable schemas, relations, and constraints) and data as the existing DataSet object.



61. Difference between OLEDB Provider and SqlClient ?

The SqlClient .NET classes are highly optimized for the .NET/SQL Server combination and achieve optimal results. The SqlClient data provider is fast: faster than the Oracle provider, and faster than accessing the database via the OleDb layer. It is faster because it accesses the native library directly (which automatically gives you better performance), and it was written with lots of help from the SQL Server team.

62. What are the different namespaces used in the project to connect to the database? What data providers are available in .NET to connect to a database?

System.Data.OleDb – classes that make up the .NET Framework Data Provider for OLE DB-compatible data sources. These classes allow you to connect to an OLE DB data source, execute commands against the source, and read the results.
System.Data.SqlClient – classes that make up the .NET Framework Data Provider for SQL Server, which allows you to connect to SQL Server 7.0, execute commands, and read results. The System.Data.SqlClient namespace is similar to the System.Data.OleDb namespace, but is optimized for access to SQL Server 7.0 and later.
System.Data.Odbc - classes that make up the .NET Framework Data Provider for ODBC. These classes allow you to access ODBC data sources in the managed space.
System.Data.OracleClient - classes that make up the .NET Framework Data Provider for Oracle. These classes allow you to access an Oracle data source in the managed space.


63. How to check if a datareader is closed or opened?

The IsClosed property returns true if the DataReader is closed, false if it is open.
 