Monday, October 4, 2010

Generate dynamic SQL statements in SQL Server

When you need to solve a tricky database problem, the ability to generate SQL statements is a powerful tool--although you must be careful when using it. This article explores how you can use this functionality to generate SQL statements on the fly.

Dynamic SQL statements
A dynamic SQL statement is constructed at execution time, with different conditions generating different SQL statements.

It can be useful to construct these statements dynamically when you need to decide at run time which fields to bring back from your SELECT statements, which criteria to apply to your queries, and perhaps even which tables to query, depending on different conditions.

Because these SQL strings are generated at execution time, they are not parsed for errors until they run, and they may introduce security vulnerabilities into your database. SQL strings can also be a nightmare to debug, which is why I have never been a big fan of dynamically built SQL statements; however, sometimes they are perfect for certain scenarios.

A dynamic example
The question I answer most often is, "How can I pass my WHERE clause into a stored procedure?" I usually see scenarios similar to the following, which is not valid T-SQL syntax:

DECLARE @WhereClause NVARCHAR(2000)
SET @WhereClause = ' Product = ''Computer'''

SELECT * FROM SalesHistory WHERE @WhereClause

In a perfect world, it would make much more sense to do the following:

DECLARE @Product VARCHAR(20)
SET @Product = 'Computer'

SELECT * FROM SalesHistory WHERE Product = @Product

It isn't always this easy. In some scenarios additional criteria are needed, and as tables grow wider, more and more criteria are often required. This can typically be solved by writing different stored procedures for the different criteria, but sometimes the criteria differ so much from one execution to the next that covering all of the possibilities in a stored procedure is burdensome. While these stored procedures can be made to take into account every possible WHERE clause depending on different parameters, this often leads to degraded query performance because of the sheer number of conditions in the WHERE clause.

Let's take a look at how to build a simple dynamic query. First, I need a table and some data to query. The script below creates my SalesHistory table and loads data into it:

CREATE TABLE [dbo].[SalesHistory]
(
[SaleID] [int] IDENTITY(1,1),
[Product] [varchar](10) NULL,
[SaleDate] [datetime] NULL,
[SalePrice] [money] NULL
)
GO

SET NOCOUNT ON

DECLARE @i INT
SET @i = 1
WHILE (@i <=5000)
BEGIN

INSERT INTO [SalesHistory](Product, SaleDate, SalePrice)
VALUES ('Computer', DATEADD(ww, @i, '3/11/1919'),
DATEPART(ms, GETDATE()) + (@i + 57))

INSERT INTO [SalesHistory](Product, SaleDate, SalePrice)
VALUES('BigScreen', DATEADD(ww, @i, '3/11/1927'),
DATEPART(ms, GETDATE()) + (@i + 13))

INSERT INTO [SalesHistory](Product, SaleDate, SalePrice)
VALUES('PoolTable', DATEADD(ww, @i, '3/11/1908'),
DATEPART(ms, GETDATE()) + (@i + 29))

SET @i = @i + 1

END

Now I will build my stored procedure that accepts a WHERE clause. For the purpose of this example, I will assume that the WHERE clause was built dynamically from the calling client application.

CREATE PROCEDURE usp_GetSalesHistory
(
@WhereClause NVARCHAR(2000) = NULL
)
AS
BEGIN
DECLARE @SelectStatement NVARCHAR(2000)
DECLARE @FullStatement NVARCHAR(4000)

SET @SelectStatement = 'SELECT TOP 5 * FROM SalesHistory '
SET @FullStatement = @SelectStatement + ISNULL(@WhereClause,'')

PRINT @FullStatement
EXECUTE sp_executesql @FullStatement

/*
--can also execute the same statement using EXECUTE()
EXECUTE (@FullStatement)
*/
END

I set the @WhereClause parameter to allow NULL values because we may not always want to pass a value in for the @WhereClause.

For every execution of this stored procedure, every field is returned for the TOP 5 rows from SalesHistory. If there is a value passed in for the @WhereClause parameter, the executing statement will append that string to the @SelectStatement string. Then I use the stored procedure sp_executesql to execute the dynamically built SQL string.

sp_executesql or EXECUTE()
There are two ways to execute dynamic SQL in SQL Server: use the sp_executesql system stored procedure or the EXECUTE() operator. Sometimes the two methods can produce the same result, but there are differences in how they behave.

The system stored procedure sp_executesql allows parameters to be passed into and out of the dynamic SQL statement, whereas EXECUTE() does not. Because values can be bound to the dynamic statement as parameters rather than concatenated into the string, sp_executesql is less susceptible to SQL injection attacks than EXECUTE(). Passing SQL strings to sp_executesql also increases the chance that the execution plan for the SQL string will remain cached, which should lead to better performance when the same SQL statement is executed again. In my opinion, sp_executesql also results in code that is cleaner and easier to read and maintain. These reasons are why sp_executesql is the preferred way to execute dynamic SQL statements.
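To make that parameter support concrete, here is a minimal sketch (not from the original article) that binds the search value to the dynamic string as a parameter instead of concatenating it, using the SalesHistory table created above:

-- Minimal sketch: the value is bound as a parameter, so it cannot change the query text
DECLARE @sql NVARCHAR(2000)
DECLARE @params NVARCHAR(500)
DECLARE @Product VARCHAR(20)

SET @Product = 'Computer'
SET @sql = N'SELECT TOP 5 * FROM SalesHistory WHERE Product = @Product'
SET @params = N'@Product VARCHAR(20)'

EXECUTE sp_executesql @sql, @params, @Product = @Product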

In my previous example, I looked at how you can build a simple SQL statement by passing a WHERE clause into a stored procedure. But what if I want to get a list of parameter values from my dynamically built SQL statement? I would have to use sp_executesql because it is the only one of my two options that allows for input and output parameters.

I am going to slightly modify my original stored procedure so that it will assign the total number of records returned from the SQL statement to an output parameter.

DROP PROCEDURE usp_GetSalesHistory
GO
CREATE PROCEDURE usp_GetSalesHistory
(
@WhereClause NVARCHAR(2000) = NULL,
@TotalRowsReturned INT OUTPUT
)
AS
BEGIN
DECLARE @SelectStatement NVARCHAR(2000)
DECLARE @FullStatement NVARCHAR(4000)
DECLARE @ParameterList NVARCHAR(500)

SET @ParameterList = '@TotalRowsReturned INT OUTPUT'

SET @SelectStatement = 'SELECT @TotalRowsReturned
= COUNT(*) FROM SalesHistory '
SET @FullStatement = @SelectStatement + ISNULL(@WhereClause,'')

PRINT @FullStatement
EXECUTE sp_executesql @FullStatement, @ParameterList,
@TotalRowsReturned = @TotalRowsReturned OUTPUT
END
GO

In the above procedure, I need to declare a parameter list to pass into the sp_executesql stored procedure because a value is being assigned to the variable at run time. The only other change to the sp_executesql call is that I am assigning the output parameter from the call to the local @TotalRowsReturned parameter in my usp_GetSalesHistory stored procedure.

I can call my usp_GetSalesHistory stored procedure much as I did before, with the addition of an output parameter to capture the number of rows returned.

DECLARE @WhereClause NVARCHAR(2000), @TotalRowsReturned INT
SET @WhereClause = 'WHERE Product = ''Computer'''

EXECUTE usp_GetSalesHistory
@WhereClause = @WhereClause,
@TotalRowsReturned = @TotalRowsReturned OUTPUT

SELECT @TotalRowsReturned

Caution
Although I am not a huge fan of using dynamic SQL statements, I believe they are a great option to have in your tool belt.

If you decide to incorporate dynamic SQL into your production level code, be careful. The code is not parsed until it is executed, and it can potentially introduce security vulnerabilities that you do not want.

If you are careful with your dynamic SQL statement, it can help you create solutions to some pretty tricky problems.

http://www.zdnetasia.com/generate-dynamic-sql-statements-in-sql-server-62040277.htm

Sunday, September 19, 2010

Wcf Tutorial Site

http://www.dotnetspark.com/kb/1373-step-by-step-tutorial-rest-enabled-service.aspx

http://www.wcftutorial.net/WCF-Development-Tools.aspx

Wednesday, August 25, 2010

sub query and Join in sql

http://technet.microsoft.com/en-us/library/cc917631.aspx

Temporary Tables in Sql

Temporary Tables

The simple answer is yes, you can. Let's look at a simple CREATE TABLE statement:

CREATE TABLE #Yaks (
YakID int,
YakName char(30) )

You'll notice I prefixed the table with a pound sign (#). This tells SQL Server that this table is a local temporary table. This table is only visible to this session of SQL Server. When I close this session, the table will be automatically dropped. You can treat this table just like any other table with a few exceptions. The only major one is that you can't have foreign key constraints on a temporary table. The others are covered in Books Online.

Temporary tables are created in tempdb. If you run this query:

CREATE TABLE #Yaks (
YakID int,
YakName char(30) )

select name
from tempdb..sysobjects
where name like '#yak%'

drop table #yaks

You'll get something like this:

name
------------------------------------------------------------------------------------
#Yaks_________________________ . . . ___________________________________00000000001D

(1 row(s) affected)

except that I took about fifty underscores out to make it readable. SQL Server stores the object with a unique number appended to the end of the name. It does all this for you automatically; you just have to refer to #Yaks in your code.

If two different users both create a #Yaks table each will have their own copy of it. The exact same code will run properly on both connections. Any temporary table created inside a stored procedure is automatically dropped when the stored procedure finishes executing. If stored procedure A creates a temporary table and calls stored procedure B, then B will be able to use the temporary table that A created. It's generally considered good coding practice to explicitly drop every temporary table you create. If you are running scripts through SQL Server Management Studio or Query Analyzer the temporary tables are kept until you explicitly drop them or until you close the session.
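To make that nesting rule concrete, here is a minimal sketch (the procedure and table names are made up for illustration):

CREATE PROCEDURE ProcB AS
BEGIN
    -- #Scratch is not created here; it belongs to the caller (ProcA)
    SELECT COUNT(*) AS ScratchRows FROM #Scratch
END
GO

CREATE PROCEDURE ProcA AS
BEGIN
    CREATE TABLE #Scratch (YakID int)
    INSERT INTO #Scratch (YakID) VALUES (1)
    EXEC ProcB    -- ProcB can see #Scratch created by ProcA
END               -- #Scratch is dropped automatically when ProcA finishes
GO

EXEC ProcA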

Now let's get back to your question. The best way to use a temporary table is to create it and then fill it with data. This goes something like this:

CREATE TABLE #TibetanYaks(
YakID int,
YakName char(30) )

INSERT INTO #TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

-- Do some stuff with the table

drop table #TibetanYaks

Obviously, this DBA knows their yaks as they're selecting the famed Tibetan yaks, the Cadillac of yaks. Temporary tables are usually pretty quick. Since you are creating and deleting them on the fly, they are usually only cached in memory.
Table Variables

If you are using SQL Server 2000 or higher, you can take advantage of the TABLE variable type. These are similar to temporary tables but lighter weight, and although they are often described as always staying in memory, they are backed by tempdb just like temporary tables. The code above using a table variable might look like this:

DECLARE @TibetanYaks TABLE (
YakID int,
YakName char(30) )

INSERT INTO @TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

-- Do some stuff with the table

Table variables don't need to be dropped when you are done with them.
Which to Use

* If you have fewer than 100 rows, generally use a table variable. Otherwise use a temporary table. This is because SQL Server won't create statistics on table variables.
* If you need to create indexes on it, then you must use a temporary table.
* When using temporary tables, always create them and create any indexes before you use them (see the sketch below). This will help reduce recompilations. The impact of this is reduced starting in SQL Server 2005, but it's still a good idea.
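Here is a minimal sketch of that create-then-index pattern, reusing the Tibetan yaks example (the index name is mine):

CREATE TABLE #TibetanYaks(
YakID int,
YakName char(30) )

CREATE INDEX IX_TibetanYaks_YakID ON #TibetanYaks (YakID)

INSERT INTO #TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

-- Do some stuff with the table

drop table #TibetanYaks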

Answering the Question

And all this brings us back to your question. The final answer to your question might look something like this:

DECLARE @TibetanYaks TABLE (
YakID int,
YakName char(30) )

INSERT INTO @TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

UPDATE @TibetanYaks
SET YakName = UPPER(YakName)

SELECT *
FROM @TibetanYaks

Global Temporary Tables

You can also create global temporary tables. These are named with two pound signs. For example, ##YakHerders is a global temporary table. Global temporary tables are visible to all SQL Server connections. When you create one of these, all the users can see it. These are rarely used in SQL Server.
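A minimal sketch (the column values are made up):

-- Connection 1 creates the global temporary table
CREATE TABLE ##YakHerders (
HerderID int,
HerderName char(30) )

INSERT INTO ##YakHerders (HerderID, HerderName) VALUES (1, 'Bill')

-- Connection 2 (a completely separate session) can query the same table
SELECT * FROM ##YakHerders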

Tuesday, August 24, 2010

Managing Distributed Transactions with ADO.NET 2.0 using TransactionScope

Ideally, your code will look something like the following:

using (TransactionScope scope = new TransactionScope())

{

biz1.updateSale(someguid);

biz2.insertUserInfo(sometext);

biz3.addNote(someguid);



scope.Complete();

}
http://msdn.microsoft.com/en-us/library/ms172152.aspx
And that's it. This assumes two things: 1) you are using a System.Transactions-aware connection to SQL Server (such as SqlConnection), and 2) you are opening the database connection inside the TransactionScope. A good example of this can be found here: http://msdn2.microsoft.com/en-us/library/ms172152.aspx . And in general here: http://msdn2.microsoft.com/en-us/library/0abf6ykb.aspx


Introduction:

A transaction is a unit of work. You use transactions to ensure the consistency and integrity of a database. If a transaction is successful, all of the data modifications performed during the transaction are committed and made permanent. If an error occurs during a transaction, you can roll back the transaction to undo the data modifications that occurred during the transaction.

This article describes how to manage distributed transactions that can span multiple data sources in a Microsoft .NET application.

Distributed Transactions:

A distributed transaction spans multiple data sources. Distributed transactions enable you to incorporate several distinct operations, which occur on different systems, into an atomic action that either succeeds or fails completely.

Properties of a transaction:

* Atomicity:
A transaction is an atomic unit of work, an indivisible set of operations. The operations that you perform within a transaction usually share a common purpose, and are interdependent. Either all of the operations in a transaction should complete, or none of them should. Atomicity helps to avoid data inconsistencies by eliminating the chance of processing a subset of operations.
* Consistency:
A transaction preserves the consistency of data. A transaction transforms one consistent state of data into another consistent state of data. Some of the responsibility for maintaining consistency falls to the application developer, who must ensure that the application enforces all known integrity constraints.
* Isolation:
A transaction is a unit of isolation. Isolation requires that each transaction appear to be the only transaction manipulating the data store, even though other transactions might be running concurrently. Transactions attain the highest level of isolation when they have the ability to be serialized; at this level of isolation, the results obtained from a set of concurrent transactions are identical to the results obtained by running each transaction serially. A high degree of isolation can limit the number of concurrent transactions, and consequently, applications often reduce the isolation level in exchange for better throughput.
* Durability:
A transaction is the unit of recovery for a set of operations. If a transaction succeeds, the system guarantees that its updates will persist, even if the computer crashes immediately after the application performs a commit operation. Specialized logging allows the system restart procedure to complete unfinished operations so that the transaction is durable.

Creating Distributed Transactions:

The .NET Framework 2.0 includes the System.Transactions namespace, which provides extensive support for distributed transactions across a range of transaction managers, including data sources and message queues. The System.Transactions namespace defines the TransactionScope class, which enables you to create and manage distributed transactions.

To create and use distributed transactions, create a TransactionScope object, and specify whether you want to create a new transaction context or enlist in an existing transaction context. You can also exclude operations from a transaction context if appropriate.

You can open multiple database connections within the same transaction scope. The transaction scope decides whether to create a local transaction or a distributed transaction, and it automatically promotes a local transaction to a distributed transaction if necessary, based on the following rules:

* When you create a TransactionScope object, it initially creates a local, lightweight transaction. Lightweight transactions are more efficient than distributed transactions because they do not incur the overhead of the Microsoft Distributed Transaction Coordinator (DTC).
* If the first connection that you open in a transaction scope is to a SQL Server 2005 database, the connection enlists in the local transaction. The resource manager for SQL Server 2005 works with the System.Transactions namespace and supports automatic promotion of local transactions to distributed transactions. Therefore, the transaction scope is able to defer creating a distributed transaction unless and until it becomes necessary later.
* If the first connection that you open in a transaction scope is to anything other than a SQL Server 2005 database, the transaction scope promotes the local transaction to a distributed transaction immediately. This immediate promotion occurs because the resource managers for these other databases do not support automatic promotion of local transactions to distributed transactions.
* When you open subsequent connections in the transaction scope, the transaction scope promotes the transaction to a distributed transaction, regardless of the type of the database.

Steps of creating distributed transaction:

1. Instantiate a TransactionScope object.
2. Open a connection with the database.
3. Perform your database operations (insert, update & delete).
4. If your operations completed successfully, mark your transaction as complete.
5. Dispose the TransactionScope object.

If all the update operations succeed in a transaction scope, call the Complete method on the TransactionScope object to indicate that the transaction completed successfully. To terminate a transaction, call the Dispose method on the TransactionScope object. When you dispose a TransactionScope, the transaction is either committed or rolled back, depending on whether you called the Complete method:

* If you called the Complete method on the TransactionScope object before its disposal, the transaction manager commits the transaction.
* If you did not call the Complete method on the TransactionScope object before its disposal, the transaction manager rolls back the transaction.
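To make the second rule concrete, here is a minimal sketch (connString and the INSERT statement are placeholders): if anything throws before Complete is called, disposing the scope rolls the work back.

using (TransactionScope scope = new TransactionScope())
using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
    new SqlCommand("INSERT INTO Person VALUES (1, 'AA')", conn).ExecuteNonQuery();

    throw new InvalidOperationException("simulated failure");

    // Never reached, so the scope is disposed without Complete()
    // having been called, and the insert above is rolled back.
    // scope.Complete();
}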

TransactionScope How To:

A transaction scope defines a block of code that participates in a transaction. If the code block completes successfully, the transaction manager commits the transaction. Otherwise, the transaction manager rolls back the transaction. Below are guidelines on how you can create and use a TransactionScope instance:

1. Add a reference to the System.Transactions assembly.
2. Import the System.Transactions namespace into your application.
3. If you want to specify the nesting behavior for the transaction, declare a TransactionScopeOption variable, and assign it a suitable value.
4. If you want to specify the isolation level or timeout for the transaction, create a TransactionOptions instance, and set its IsolationLevel and TimeOut properties.
5. Create a new TransactionScope object in a using statement (a Using statement in Microsoft Visual Basic).
Pass a TransactionScopeOption variable and a TransactionOptions object into the constructor, if appropriate.
6. In the using block (Using block in Visual Basic), open connections to each database that you need to update, and perform update operations as required by your application. If all updates succeed, call the Complete method on the TransactionScope object.
7. Close the using block (Using block in Visual Basic) to dispose the TransactionScope object. If the transaction completed successfully, the transaction manager commits the transaction. Otherwise, the transaction manager rolls back the transaction.

Listing 1: TransactionScope C#:

using System.Transactions;

...

TransactionOptions options = new TransactionOptions();

options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;

options.Timeout = new TimeSpan(0, 2, 0);

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))

{

using (SqlConnection connA = new SqlConnection(connStringA))

{

using (SqlCommand cmdA = new SqlCommand(sqlStmtA, connA))

{

connA.Open();

int rowsAffectedA = cmdA.ExecuteNonQuery();

if (rowsAffectedA > 0)

{

using (SqlConnection connB = new SqlConnection(connStringB))

{

using (SqlCommand cmdB = new SqlCommand(sqlStmtB, connB))

{

connB.Open();

int rowsAffectedB = cmdB.ExecuteNonQuery();

if (rowsAffectedB > 0)

{

scope.Complete();

}

} // Dispose the second command object.

} // Dispose (close) the second connection.

}

} // Dispose the first command object.

} // Dispose (close) the first connection.

} // Dispose TransactionScope object, to commit or rollback transaction.

Listing 2: TransactionScope VB.NET:

Imports System.Transactions
...
Dim options As New TransactionOptions()
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted
options.Timeout = New TimeSpan(0, 2, 0)
Using scope As New TransactionScope(TransactionScopeOption.Required, options)

Using connA As New SqlConnection(connStringA)

Using cmdA As New SqlCommand(sqlStmtA, connA)

connA.Open()

Dim rowsAffectedA As Integer = cmdA.ExecuteNonQuery()

If (rowsAffectedA > 0) Then

Using connB As New SqlConnection(connStringB)

Using cmdB As New SqlCommand(sqlStmtB, connB)

connB.Open()

Dim rowsAffectedB As Integer = cmdB.ExecuteNonQuery()

If (rowsAffectedB > 0) Then

scope.Complete()

End If

End Using ' Dispose the second command object.

End Using ' Dispose (close) the second connection.

End If

End Using ' Dispose the first command object.

End Using ' Dispose (close) the first connection.

End Using ' Dispose TransactionScope object, to commit or rollback transaction.

Conclusion:

A transaction scope defines a block of code that participates in a transaction. If the code block completes successfully, the transaction manager commits the transaction. Otherwise, the transaction manager rolls back the transaction.

Also, you should know that you can use TransactionScope with any data provider, such as Oracle, OleDb, or ODBC.

Multiple Ways to do Multiple Inserts

http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx

Multiple Ways to do Multiple Inserts in Sql

Multiple Ways to do Multiple Inserts
By Neeraj Saluja | 5 Jun 2008 | Unedited contribution Part of The SQL Zone.
Various ways to do Multiple Inserts in SQL Server 2000/2005 or Oracle Database using ADO.NET in single database round trip.

Download Multiple Insert Sample C# 2003.zip - 294.96 KB
Download Multiple Inserts Sample C# 2005.zip - 41.85 KB

Table of Contents

* Introduction and scope of the Article
* Setting up the Environment
* Use Batch Update of ADO.NET 2.0 DataAdapter Object
* Use SQLBulkCopy of ADO.NET 2.0
* Pass Array as Parameter to ODP.NET Command Object
* Pass the data in XML format to SQL Server Stored Procedure
* Form a Single SQL Statement and execute it directly from ADO.NET
  o Append Insert Statements
  o Use Select and Union All for Insert
  o Pass the values in a string with ColumnSeperator and/or RowSeperator and pass it to Stored proc in SQL Server. Split it in proc and Insert
* Other Useful References
* Wrapping up


Introduction and Scope of the Article

Very often we come across a scenario where we need to execute the same Insert or Update command with different values multiple times in one go. Say, for example, we show multiple records in an updatable grid/tabular format, allow the user to update them, and then update them in the database. There are multiple ways to handle this, the simplest being to execute each DML command one after the other. The most resource-consuming part of that is establishing a connection for each DML command. Well, connection pooling helps a bit, but you still need to request a connection from the connection pool. For details about connection pooling, refer to the article ADO.NET Connection Pooling at a Glance. The best solution in such a situation is to establish the connection only once and perform multiple inserts/updates within it, and that is the target of this article. Let us look at a couple of mechanisms for doing this.

Let us start by setting up the Environment.
Setting up the Environment

We are going to use SQL Server 2000/2005 or Oracle in the examples. So let us create a very simple table, Person, in the desired database. Let us keep it very simple by having just two columns, PersonId and PersonName. The syntax would be:

CREATE TABLE Person
(
PersonId INT PRIMARY KEY,
PersonName VARCHAR(100)
)
Use Batch Update of ADO.NET 2.0 DataAdapter Object

Valid for Environment: .NET 2.0 (ADO.NET 2.0), SQL Server 2000 or above

This was quite a cumbersome job until ADO.NET 2.0, but with ADO.NET 2.0 things got very simple, as the DataAdapter now supports multiple Inserts/Updates with a user-defined batch size. We are going to limit our discussion to Insert functionality. For Insert, create the Command object with the usual simple stored proc for Insert and specify it as the InsertCommand of the DataAdapter object. Along with this we need to specify the UpdateBatchSize, which determines the number of Inserts to be processed in one network round trip. Follow the code below for a complete understanding.

First of all create the stored proc in your SQL Server Instance:

CREATE PROCEDURE sp_BatchInsert ( @PersonId INT, @PersonName VARCHAR(100) )
AS
BEGIN
INSERT INTO Person VALUES ( @PersonId, @PersonName);
END

Now refer to the C# code below:

private void btnBatchInsert_Click(object sender, EventArgs e)

{

// Get the DataTable with Rows State as RowState.Added

DataTable dtInsertRows = GetDataTable();



SqlConnection connection = new SqlConnection(connectionString);

SqlCommand command = new SqlCommand("sp_BatchInsert", connection);

command.CommandType = CommandType.StoredProcedure;

command.UpdatedRowSource = UpdateRowSource.None;



// Set the Parameter with appropriate Source Column Name

command.Parameters.Add("@PersonId", SqlDbType.Int, 4, dtInsertRows.Columns[0].ColumnName);

command.Parameters.Add("@PersonName", SqlDbType.VarChar, 100, dtInsertRows.Columns[1].ColumnName);



SqlDataAdapter adpt = new SqlDataAdapter();

adpt.InsertCommand = command;

// Specify the number of records to be Inserted/Updated in one go. Default is 1.

adpt.UpdateBatchSize = 2;



connection.Open();

int recordsInserted = adpt.Update(dtInsertRows);

connection.Close();



MessageBox.Show("Number of records affected : " + recordsInserted.ToString());



}

Well, the first thing we developers do is check using SQL Profiler, and to our surprise it shows 4 different RPC requests sent to SQL Server.

Do not panic. The difference is not in the way your statements are executed at the server; the difference is in how your request(s) are sent to the server. In simple words, with UpdateBatchSize set to 2 (in this case), it just means that the requests for inserting 2 rows are grouped together and sent to the database server in a single network round trip. You can use other tools like "Netmon" to have a closer look, so use UpdateBatchSize appropriately.
The DataAdapter has two update-related events: RowUpdating and RowUpdated. Only one RowUpdated event occurs for each batch, whereas the RowUpdating event occurs for each row processed. You may also want to look at the exception handling side of these; explore them.
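As a rough sketch (the handler names are mine), the events can be wired up to the adapter from the batch-insert example before Update is called:

// Inside btnBatchInsert_Click, before adpt.Update(dtInsertRows):
adpt.RowUpdating += new SqlRowUpdatingEventHandler(adpt_RowUpdating);
adpt.RowUpdated += new SqlRowUpdatedEventHandler(adpt_RowUpdated);

// Class-level handlers:
void adpt_RowUpdating(object sender, SqlRowUpdatingEventArgs e)
{
    // Fires once for every row that is about to be sent
    Console.WriteLine("Sending PersonId " + e.Row["PersonId"]);
}

void adpt_RowUpdated(object sender, SqlRowUpdatedEventArgs e)
{
    // With UpdateBatchSize > 1 this fires once per batch, not once per row
    Console.WriteLine("Batch processed, records affected: " + e.RecordsAffected);
}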
Use SQLBulkCopy of ADO.NET 2.0

Valid for Environment: .NET 2.0 or above on a SQL Server 2005 database or above

With ADO.NET 2.0 we got a programming interface for bulk copy that provides a quite simple and straightforward mechanism to transfer data from one SQL Server instance to another, from one table to another, from a DataTable to a SQL Server 2005 database, from a DataReader to a SQL Server 2005 database, and more.

SqlBulkCopy belongs to the System.Data.SqlClient namespace, and it is as simple to program as the ADO.NET Command object. Let us see it working:



private void btnSQLBulkCopyInsert_Click(object sender, EventArgs e)

{

// Get the DataTable

DataTable dtInsertRows = GetDataTable();



using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString))

{

sbc.DestinationTableName = "Person";



// Number of records to be processed in one go

sbc.BatchSize = 2;



// Map the Source Columns from the DataTable to the Destination Columns in the SQL Server 2005 Person Table

sbc.ColumnMappings.Add("PersonId", "PersonId");

sbc.ColumnMappings.Add("PersonName", "PersonName");



// Number of records after which client has to be notified about its status

sbc.NotifyAfter = dtInsertRows.Rows.Count;



// Event that gets fired when NotifyAfter number of records are processed.

sbc.SqlRowsCopied+=new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);



// Finally write to server

sbc.WriteToServer(dtInsertRows);

sbc.Close();

}

}



void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)

{

MessageBox.Show("Number of records affected : " + e.RowsCopied.ToString());

}





The code above is very simple and quite self explanatory.

Key Notes :

1. BatchSize and NotifyAfter are two different properties. The former specifies the number of records to be processed in one go, while the latter specifies the number of records after which the client needs to be notified.

Reference:

1. No better place than MSDN. Refer to http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx for details on SqlBulkCopy
2. Refer http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy_properties.aspx to get details on all properties on SqlBulkCopy.


Pass Array as Parameter to ODP.NET Command Object

Valid for Environment: ODP.NET 9.2.0.1 or above with NET 1.1 or above on Oracle 9i Database or above

Oracle has been investing a lot in connectivity between the Oracle Database and .NET. Oracle provides ODP.NET (Oracle Data Provider for .NET) for connectivity between the Oracle Database and .NET. Well, I have not gone into its details, but it is believed to be better to use ODP.NET to connect to an Oracle database from the .NET environment. Maybe, since both are Oracle products, they have optimized ODP.NET for better performance, maybe…

Anyway, let us see what it gives us for multiple Inserts in one network round trip. ODP.NET provides the OracleCommand object, which is quite similar to the SqlCommand object. The OracleCommand object supports taking arrays as parameters. The only thing is that while using an array one needs to set ArrayBindCount, which tells ODP.NET the number of records to expect and process from the array. Simply put, the code is exactly the same as if we were calling a stored proc with two simple parameters, except that, rather than providing a simple value for a parameter, we specify an array of values. And along with that we set ArrayBindCount equal to the array length, to enable ODP.NET to do multiple inserts. I am sure the code below will help you understand this better:

Create a simple stored proc sp_InsertByODPNET in the Oracle database, similar to sp_BatchInsert in the code above, and follow the .NET code below:

private void btnOracleODPNET_Click(object sender, System.EventArgs e)

{

int[] arrPersonId = {Convert.ToInt32(txtPersonId1.Text.Trim()), Convert.ToInt32(txtPersonId2.Text.Trim()), Convert.ToInt32(txtPersonId3.Text.Trim()), Convert.ToInt32(txtPersonId4.Text.Trim())};



string[] arrPersonName = {txtPersonName1.Text.Trim(), txtPersonName2.Text.Trim(), txtPersonName3.Text.Trim(), txtPersonName4.Text.Trim()};



// You need to put the Namespace "using Oracle.DataAccess.Client;" to compile this code

OracleConnection connection = new OracleConnection(oracleConnectionString);



OracleCommand command = new OracleCommand("sp_InsertByODPNET", connection);

command.CommandType = CommandType.StoredProcedure;



// We need to tell the ODP.NET the number of rows to process

//and that we can do using "ArrayBindCount" property of OracleCommand Object

command.ArrayBindCount = arrPersonId.Length;



// For specifying Oracle Data Types for Parameters you need to use "Oracle.DataAccess.Types" namespace.

command.Parameters.Add("@PersonId", OracleDbType.Int16);

command.Parameters[0].Value = arrPersonId;



command.Parameters.Add("@PersonName", OracleDbType.Varchar2, 100);

command.Parameters[1].Value = arrPersonName;



connection.Open();

int recordsInserted = command.ExecuteNonQuery();

connection.Close();



MessageBox.Show("Number of records affected : " + recordsInserted.ToString());

}



A couple of people have raised issues with this approach because they encountered memory leaks while using ODP.NET with .NET. However, I have heard that the issues are resolved in recent versions of ODP.NET and .NET and their patches. You may want to do your own research before adopting this approach.

Key Notes:

1. Use the latest possible version of ODP.NET, as it usually has the bug fixes from all the previous releases.

2. Do not forget to set command.ArrayBindCount.



References:

1. For latest on Oracle’s ODP.NET refer to its details on Oracle site at URL: http://www.oracle.com/technology/tech/windows/odpnet/index.html


Pass the data in XML format to SQL Server Stored Procedure

Valid for Environment: .NET 1.1, .NET 2.0, SQL Server 2000 or above

SQL Server 2000 supports XML. SELECT * FROM table FOR XML AUTO. Does the syntax sound familiar? Yes, SQL Server 2000 supports XML: it not only supports returning data in XML format, it also supports reading and parsing an XML string. Before going to the implementation of multiple Inserts using this approach, to understand it a little, copy the code below and execute it in a SQL Query Analyzer window:

DECLARE @intDocHandle int

DECLARE @xmlString varchar(4000)



-- Attribute-centric sample data for the /root/person path used below
SET @xmlString ='
<root>
    <person PersonId="1" PersonName="AA"/>
    <person PersonId="2" PersonName="BB"/>
    <person PersonId="3" PersonName="CC"/>
    <person PersonId="4" PersonName="DD"/>
</root>'



--Create an internal representation of the XML document.

EXEC sp_xml_preparedocument @intDocHandle OUTPUT, @xmlString



SELECT * FROM OPENXML(@intDocHandle,

'/root/person')

WITH

( PersonId INT,

PersonName VARCHAR(100)

)



-- Remove the internal representation.

exec sp_xml_removedocument @intDocHandle



I am leaving further interpretation and understanding up to you. It is easy to see that it revolves around two important stored procs, sp_xml_preparedocument and sp_xml_removedocument, and the keyword OPENXML. Let us now see how we can exploit this for the multiple Insert scenario.

With the Logic mentioned above, the code for desired Stored Procedure looks like:

CREATE PROCEDURE sp_InsertByXML ( @strXML VARCHAR(4000) )

AS

BEGIN

DECLARE @intDocHandle int



--Create an internal representation of the XML document.

EXEC sp_xml_preparedocument @intDocHandle OUTPUT, @strXML



INSERT INTO Person ( PersonId, PersonName )

SELECT * FROM OPENXML(@intDocHandle,

'/PersonData/Person', 2)

WITH

( PersonId INT,

PersonName VARCHAR(100)

)



-- Remove the internal representation.

EXEC sp_xml_removedocument @intDocHandle



END



Now we need to form the XML at the front end, which we can pass to this stored proc. I am sure there are various ways to do it: you can form the XML by concatenating strings, or by using the XmlDocument object of the System.Xml namespace and getting the XML string out of it. Since most of the time we play around with DataSets and DataTables, I chose to get the XML out of the DataSet. First of all, get the desired data into a DataSet object. If you have trouble forming the DataSet at runtime, refer to the “private string GetXml()” method in the sample code attached, and then use the following code to get the string out of it:

System.IO.StringWriter sw = new System.IO.StringWriter ( );

dsPersonData.WriteXml (sw);

string strXml = sw.ToString();

With this I have the desired XML string. Now the only job left is to call the stored procedure from my front-end code, which is as follows:

private void btnInsertByXMLInput_Click(object sender, System.EventArgs e)

{

string strXml = GetXml();



SqlConnection connection = new SqlConnection(connectionString);



SqlCommand command = new SqlCommand("sp_InsertByXML", connection);

command.CommandType = CommandType.StoredProcedure;



command.Parameters.Add("@strXML",SqlDbType.VarChar, 4000);

command.Parameters[0].Value = strXml;



connection.Open();

int recordsInserted = command.ExecuteNonQuery();

connection.Close();



MessageBox.Show("Number of records affected : " + recordsInserted.ToString());



}



It also returns the message “Number of records affected : 4 “. Multiple records inserted, and Mission Accomplished yet again.

Key Notes:

1. XML is case sensitive. For example, in the OPENXML statement ‘/root/person’ is not equal to ‘/ROOT/person’.

2. You can form attribute-centric as well as element-centric XML. The code above uses element-centric XML, hence the “2” in the OPENXML syntax; the default is 1, which is used for attribute-centric XML.

3. In ADO.NET 2.0, you can get the XML out of the DataTable itself (a small sketch follows after these notes). In that case, modify the XPath in OPENXML appropriately.

4. SQL Server 2000 supports XML processing, but in SQL Server 2005 we have xml as a data type, which has a lot more ways to support DML and DDL for XML. Choose as per your need.
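For the third note, a minimal sketch (assuming the dtInsertRows DataTable from the earlier examples) might look like this; a standalone DataTable is written under a DocumentElement root, so the OPENXML path would become '/DocumentElement/Person':

// ADO.NET 2.0: get element-centric XML directly from the DataTable
dtInsertRows.TableName = "Person";
System.IO.StringWriter sw = new System.IO.StringWriter();
dtInsertRows.WriteXml(sw);
string strXml = sw.ToString();   // <DocumentElement><Person>...</Person></DocumentElement>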



References:

1. Books Online for details of sp_xml_preparedocument, sp_xml_removedocument, OPENXML

Form a Single SQL Statement and execute it directly from ADO.NET

It may sound a bit crude, maybe even distasteful, to today's developers and architects to do it this way, but yes, it is also an option. And I agree with them.

If you believe that the latest mechanisms like the ones mentioned above do not suit your requirement, form the query yourself and execute it. Provided your project design permits you to execute queries directly from the .NET data access layer, you can use a Command object with CommandType set to Text.

And to execute multiple insert queries, we can simply append the insert queries separated by semicolons (";") and use the result as the CommandText for the Command object. Use the ExecuteNonQuery() method of the Command object and observe the resulting number of records affected.

Well, there are a couple of ways in which we can form the SQL statement for our need. Let us see each of them one by one.
Append Insert Statements

In this case we are going to append each Insert statement one after the other and execute them as a single command. The syntax that we are trying to achieve here is:

INSERT INTO Person VALUES ('1', 'AA'); INSERT INTO Person VALUES ('2', 'BB');

The .NET 1.1 code for creating a SQL query containing 4 inserts and executing it with the Command object would look like this:

private void btnInsertByJoiningQueries_Click(object sender, System.EventArgs e)

{

string sqlText = "INSERT INTO Person VALUES ( '" + txtPersonId1.Text.Trim() + "',' " + txtPersonName1.Text + "'); ";

sqlText += "INSERT INTO Person VALUES ( '" + txtPersonId2.Text.Trim() + "',' " + txtPersonName2.Text + "'); ";

sqlText += "INSERT INTO Person VALUES ( '" + txtPersonId3.Text.Trim() + "',' " + txtPersonName3.Text + "'); ";

sqlText += "INSERT INTO Person VALUES ( '" + txtPersonId4.Text.Trim() + "',' " + txtPersonName4.Text + "'); ";



SqlConnection connection = new SqlConnection(connectionString);

SqlCommand command = new SqlCommand(sqlText,connection);

command.CommandType = CommandType.Text;

connection.Open();

int recordsInserted = command.ExecuteNonQuery();

connection.Close();



MessageBox.Show("Number of records affected : " + recordsInserted.ToString());



}



In the example above, it returns the message “Number of records affected : 4 “. Multiple records inserted, Mission Accomplished.
Use Select and Union All for Insert

This is quite similar to appending insert statements, but rather than appending each complete Insert statement, we first Select the values and then pass them to a single Insert statement to insert them into the table. The syntax that we are trying to achieve here is:

INSERT INTO TableName (Col1, Col2)
SELECT 1, 'AA'
UNION ALL
SELECT 2, 'BB'
UNION ALL
SELECT 3, 'CC'

Otherwise, the approach is quite similar to what we saw in the "Append Insert Statements" section, so the code for it is left to you (a rough sketch follows below). Still, if you need some assistance, feel free to post a question.
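For completeness, here is a rough sketch of what that code might look like (not from the original article), reusing the text boxes and connectionString from the earlier example and a StringBuilder as suggested in the Key Notes further down; it shares the same injection and maintainability caveats as the previous approach:

// Build one INSERT ... SELECT ... UNION ALL statement and execute it in a single round trip
System.Text.StringBuilder sqlText = new System.Text.StringBuilder();
sqlText.Append("INSERT INTO Person (PersonId, PersonName) ");
sqlText.Append("SELECT '" + txtPersonId1.Text.Trim() + "', '" + txtPersonName1.Text + "' ");
sqlText.Append("UNION ALL SELECT '" + txtPersonId2.Text.Trim() + "', '" + txtPersonName2.Text + "' ");
sqlText.Append("UNION ALL SELECT '" + txtPersonId3.Text.Trim() + "', '" + txtPersonName3.Text + "' ");
sqlText.Append("UNION ALL SELECT '" + txtPersonId4.Text.Trim() + "', '" + txtPersonName4.Text + "'");

SqlConnection connection = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand(sqlText.ToString(), connection);
command.CommandType = CommandType.Text;

connection.Open();
int recordsInserted = command.ExecuteNonQuery();
connection.Close();

MessageBox.Show("Number of records affected : " + recordsInserted.ToString());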
Pass the values in a string with ColumnSeperator and/or RowSeperator and pass it to Stored proc in SQL Server. Split it in proc and Insert

Here, what we can do is create a string at the front end with values separated by a predefined column separator and/or row separator, and then let the stored proc parse it, separate the data, bring it into a useful format, and insert it into the respective table. Well, I am not going to go into its details, as I also believe that this approach should be used only when all your other options are really ruled out. But I can give you a tip for it: create a function which returns the values after splitting the string in the desired format. I found a couple of them, listed below:

Split functions for SQL Server 2000:

* http://www.eggheadcafe.com/community/aspnet/9/10012556/split-function-in-your-da.aspx
* http://vyaskn.tripod.com/passing_arrays_to_stored_procedures.htm

Split functions for Oracle:

* http://www.eggheadcafe.com/community/aspnet/9/10030407/equivalent-split-function.aspx

Maybe you can take permission from the author and modify and use one of them.



Key Notes:

1. If the number of Insert queries is fairly large, use a StringBuilder rather than a simple string object for better performance.

2. Go for clubbing the SQL statements together only if you have no other option left. This is surely an option, but it is not as maintainable or secure as the previously mentioned options.


Other Useful References:

I came across a couple of articles which I found relevant and useful; I advise you to go through them:

* Performing Batch Operations Using DataAdapters (ADO.NET): http://msdn2.microsoft.com/en-us/library/aadf8fk2.aspx
* Data Access Application Block at microsoft.com
* How to: Bind an Array to an ODP.NET Database Command at oracle.com/technology
* Passing an array or DataTable into a stored procedure : http://www.codeproject.com/KB/database/PassingArraysIntoSPs.aspx

Wrapping up

Huff, numerous ways to do the same thing. Now the big question: which one to go for? Trust me, there is no hard and fast rule, but when you choose one of them, ensure that whichever mechanism you choose gives the best runtime performance and is easy to develop, easy to maintain, secure, and reliable. As stated earlier, this article limits the scope of discussion to getting multiple inserts done within one connection instance for better performance and less resource consumption, but I am sure there are various other ways in which performance can be enhanced.

One more important thing that I really want you all to take care of is exception handling. You may get exceptions like primary key violations, foreign key violations, or other constraint violations even when inserting a single record; here we are attempting to insert a lot more than one, so you have to be very careful. I have not covered exception handling in detail here due to the limited scope.
