Proper Error Handling in Transaction when including DDL

  • Question

  • Hi All,

    When creating a new table in SQL Server 2008 R2, I receive the following DDL (edited for readability):

    IF OBJECT_ID('dbo.TestA', 'U') IS NOT NULL
        DROP TABLE [dbo].[TestA]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[TestA]
        (
        [PKID] [varchar](50) NOT NULL,
        [Description] [varchar](100) NULL,
        [Inactive] [bit] NULL,
        CONSTRAINT [DF_TestA_PKID] PRIMARY KEY CLUSTERED ([PKID] ASC)
        WITH
            (
            PAD_INDEX = OFF,
            STATISTICS_NORECOMPUTE = OFF,
            IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON,
            ALLOW_PAGE_LOCKS = ON
            )
        ON [PRIMARY]
        )
    ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[TestA] ADD CONSTRAINT [DF_TestA_Inactive] DEFAULT ((0)) FOR [Inactive]
    GO

    Now assume that I'm creating three different tables, all within the same script.  I want the script to run the DDL inside a transaction, so that if anything fails, it all rolls back.  I'm aware of the traditional method of encapsulating statements inside a transaction, and I use that method for DML scripts: begin the transaction, execute a statement, check @@ERROR, and if it is not equal to 0, GOTO TRAN_ERROR.  The problem is that this all falls apart when the statements above are separated into batches using GO.  I understand that using named transactions in SSMS will get you around the GO separators, but the GOTO still breaks across batches, namely:

    A GOTO statement references the label TRAN_ERROR but the label has not been declared.
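
    For reference, here is a minimal sketch of the pattern I mean (table and label names are placeholders).  Since GOTO and its target label must live in the same batch, the GO in the middle puts TRAN_ERROR out of scope and produces the error above:

    BEGIN TRANSACTION

    CREATE TABLE dbo.Table1 (PKID varchar(50) NOT NULL)
    IF @@ERROR <> 0 GOTO TRAN_ERROR
    -- the GO below ends the batch, so the GOTO above no longer sees TRAN_ERROR
    GO

    CREATE TABLE dbo.Table2 (PKID varchar(50) NOT NULL)
    IF @@ERROR <> 0 GOTO TRAN_ERROR

    COMMIT TRANSACTION
    GOTO DONE

    TRAN_ERROR:
        ROLLBACK TRANSACTION
    DONE:
        PRINT 'Done'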

    So I guess my question is this: when issuing DDL statements, complete with error checking after each one, is it necessary to use GO after each statement?  If I can get rid of the GO separators, I'm assuming my script will function as desired.

    All responses are greatly appreciated!

    Best Regards

    Brad

    Thursday, December 5, 2013 7:49 PM

Answers

  • The short answer is NO - it is not required after every statement.  However, you do need to be careful about batches and how they are compiled.  For example, CREATE PROCEDURE must be the first statement in a query batch, so it cannot be combined with other statements.
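
    To illustrate (the object names here are made up): the second statement below fails to compile because it is not first in its batch, while wrapping it in EXEC gives it a batch of its own:

    -- Fails: CREATE PROCEDURE is not the first statement in the batch
    CREATE TABLE dbo.Demo (id int NOT NULL)
    CREATE PROCEDURE dbo.GetDemo AS SELECT id FROM dbo.Demo
    GO
    -- Works: EXEC compiles the CREATE PROCEDURE as a separate batch
    CREATE TABLE dbo.Demo2 (id int NOT NULL)
    EXEC('CREATE PROCEDURE dbo.GetDemo2 AS SELECT id FROM dbo.Demo2')
    GO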

    You can also make your scripts much more readable by writing them yourself rather than generating them with SSMS.  The SSMS output is overly wordy, overly cautious, primitively formatted, and simply ugly.  In the example above, ANSI_PADDING is set ON to create the table and then set OFF again.  That setting is deprecated and will always be ON in future versions; in addition, the default is ON.  So why bother flipping it on and then off - especially around every CREATE TABLE or ALTER TABLE statement?  Similar thoughts apply to the other connection settings: set them once to the appropriate values - they are connection settings and will apply to every batch in the script.
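
    As a sketch of what a hand-written version of the script above might look like (constraint names adjusted), with the options set once and no ANSI_PADDING gymnastics:

    SET ANSI_NULLS ON;
    SET QUOTED_IDENTIFIER ON;
    GO
    IF OBJECT_ID('dbo.TestA', 'U') IS NOT NULL
        DROP TABLE dbo.TestA;
    GO
    CREATE TABLE dbo.TestA
        (
        PKID        varchar(50)  NOT NULL CONSTRAINT PK_TestA PRIMARY KEY CLUSTERED,
        Description varchar(100) NULL,
        Inactive    bit          NULL CONSTRAINT DF_TestA_Inactive DEFAULT 0
        );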

    I would also like to point out that a table where all of the "real" columns are nullable is not really a good table design.  Having a nullable bit column is equally suspicious - especially given the name.  And having a default does not prevent the column from being set to NULL explicitly.  I realize this may just be a bad example for discussion purposes, but I've seen far more of this in actual production systems than one should ever expect.  Since you are handed these scripts, perhaps you can provide some guidance to those making these poor decisions.
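
    A quick illustration of that last point (the table name is made up): the default applies only when the column is omitted from the INSERT; an explicit NULL goes straight in:

    CREATE TABLE dbo.DefaultDemo
        (
        id       int NOT NULL PRIMARY KEY,
        Inactive bit NULL CONSTRAINT DF_DefaultDemo DEFAULT 0
        );
    INSERT dbo.DefaultDemo (id) VALUES (1);                  -- Inactive = 0 from the default
    INSERT dbo.DefaultDemo (id, Inactive) VALUES (2, NULL);  -- Inactive = NULL, no error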

    As far as error handling goes - I generally don't.  Once you have multiple batches, error handling becomes an issue anyway.  Either a script works correctly without errors, or you restore the database from the backup that you created immediately before you ran the script.  And you take that backup just in case the script crashes for some reason and/or the elaborate error handling in place fails.  Yes, there is some set of errors that you should expect and handle correctly, based on the logic in the script.  But generally speaking, it is a waste of time to craft elaborate error handling within EVERY script to cover all possible issues.  Erland Sommarskog has a long discussion about this on his website - which you should be able to find by simply searching for his name here.
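
    For what it's worth, that pre-deployment backup can be as simple as this (the database name and path are placeholders):

    BACKUP DATABASE MyDB
        TO DISK = 'X:\Backups\MyDB_pre_deploy.bak'
        WITH COPY_ONLY, INIT;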

    • Proposed as answer by Naomi N Thursday, December 5, 2013 9:10 PM
    • Marked as answer by 2012S4 Thursday, December 5, 2013 11:07 PM
    Thursday, December 5, 2013 8:27 PM

All replies

  • I agree with Scott.

    I would add that these are all NEW tables, so a failure has no adverse effects on existing code.  So I would not bother trying to roll back CREATE object statements.

    If you generate the code using the SSDT Schema Compare options, there is an option you can check to wrap the generated script in a transaction.  However, when I did that, it failed terribly, and after playing with it for a few minutes I decided it was not worth the bother to try to fix.

    Thursday, December 5, 2013 9:05 PM
  • >> When creating a new table in SQL Server 2008 R2, I receive the following DDL (edited for readability): <<

    I would edit it for correctness instead.  It is a total mess, full of ISO-11179 violations and absurdity.  Nobody should use bit flags in SQL, most columns should be NOT NULL, and we never put metadata in data element names, etc.

    Oh, and that absurd "active_something_flg" is NULL-able!  Do you know that story?  The first time MS put this thing into their dialect, it was a true BIT {0,1}, but now it is a numeric data type.  All data types in SQL are NULL-able.  When bad programmers wrote code like yours without the NOT NULL, they got a surprise :)

    CREATE TABLE TestA
    (test_id VARCHAR(50) NOT NULL PRIMARY KEY,
     test_description VARCHAR(100) NOT NULL,
     active_something_flg BIT NOT NULL DEFAULT 0);

    >> Now assume that I'm creating three different tables, all within the same script. I want the script to run the DDL inside a transaction, so that if anything fails, it all rolls back. <<

    No, we do not create tables on the fly.  It is done once, and then the schema is used from that point forward.  Back in the 1950s, we did programming like this with file declaration scripts.

    Each table is an entity or a relationship.  We cannot magically create things from thin air; your script is being used like a magic wand!  That is not logic.

    You have just been told by an SQL authority that you do not know how to use the language. You are doing the wrong things with bad code. Stop it. Please think about it.

    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

    Thursday, December 5, 2013 9:39 PM
  • CELKO!

    There is a wise old saying: "One who wants to teach must first be willing to learn."  Please check the responses that you receive; they might suggest something.



    Thursday, December 5, 2013 9:45 PM
  • Erland Sommarskog has a long discussion about this on his website - which you should be able to find by simple searching for his name here.

    Thanks Scott. :-)

    I have an even longer article in the works, and there is a section on error handling with DDL.  The gist of that section is that if you want a deployment script with transaction handling to be run from SSMS, you are barking up the wrong tree, because you have to litter the script with so much control code that it dwarfs the real meat of the script.  If you can guarantee that the script will only be run from SQLCMD, you have a better proposition, because you can terminate SQLCMD by raising an error with state 127.
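
    For example, a check like this (the message text is just an illustration) stops SQLCMD dead when an earlier batch has failed and rolled back the transaction, since SQLCMD terminates when an error is raised with state 127:

    IF @@trancount = 0
    BEGIN
        RAISERROR('Deployment failed - aborting the script.', 16, 127)
    END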

    The script below is from one of the examples in my text.  The gist is this: a batch of DDL may produce an error that aborts the batch and rolls back the transaction.  If you don't watch out, the script will keep running and carry out the rest of its statements outside any transaction.  The script handles this situation: after each batch it checks for errors, reopens the transaction if the old one was rolled back, records the failure in a process-global flag (CONTEXT_INFO), and commits or rolls back at the end depending on that flag.

    As Scott says, restoring a backup is simpler.  Or, as I suggest in the article, run the script from a controlling program that exits on the first error.  (And I completely agree with Scott's remark on ANSI_PADDING.)

    Here is the script; the purpose of it is to alter some existing tables, but the important part is the error handling:

    BEGIN TRANSACTION
    SET XACT_ABORT ON
    SET QUOTED_IDENTIFIER ON
    SET ARITHABORT ON
    SET NUMERIC_ROUNDABORT OFF
    SET CONCAT_NULL_YIELDS_NULL ON
    SET ANSI_NULLS ON
    SET ANSI_PADDING ON
    SET ANSI_WARNINGS ON
    SET CONTEXT_INFO 0x
    GO
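    -- After each batch: if the batch failed while the transaction survived, roll it back.
    -- If the transaction is gone (the error already rolled it back), record the failure
    -- in CONTEXT_INFO and open a new transaction so the remaining batches still run.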
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Main
         DROP CONSTRAINT fk_Main_Lookup
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Lookup SET (LOCK_ESCALATION = TABLE)
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    CREATE TABLE dbo.Tmp_Main
         (
         KeyCol int NOT NULL,
         LookupId int NOT NULL,
         Constrained int NULL,
         DataCol int NULL
         )    ON [PRIMARY]
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Tmp_Main SET (LOCK_ESCALATION = TABLE)
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    IF EXISTS(SELECT * FROM dbo.Main)
            EXEC('INSERT INTO dbo.Tmp_Main (KeyCol, LookupId, Constrained, DataCol)
                SELECT KeyCol, LookupId, Constrained, CONVERT(int, DataCol) FROM dbo.Main WITH (HOLDLOCK TABLOCKX)')
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Sub
         DROP CONSTRAINT fk_Sub_Main
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    DROP TABLE dbo.Main
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    EXECUTE sp_rename N'dbo.Tmp_Main', N'Main', 'OBJECT'
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Main ADD CONSTRAINT
         pk_Main PRIMARY KEY CLUSTERED
         (
         KeyCol
         ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    CREATE NONCLUSTERED INDEX LookupId_ix ON dbo.Main
         (
         LookupId
         ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Main ADD CONSTRAINT
         ck_constrained CHECK (([Constrained]>(0)))
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Main ADD CONSTRAINT
         fk_Main_Lookup FOREIGN KEY
         (
         LookupId
         ) REFERENCES dbo.Lookup
         (
         LookupId
         ) ON UPDATE    NO ACTION
            ON DELETE    NO ACTION
    
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    CREATE TRIGGER Audit_Main ON dbo.Main FOR INSERT, UPDATE AS
         INSERT Audit(TableName, KeyValue1, OtherCols)
                SELECT 'Main', ltrim(str(KeyCol)),
                             (SELECT LookupId, Constrained, DataCol
                                FOR        XML RAW('Data'))
                FROM     inserted
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Sub ADD CONSTRAINT
         fk_Sub_Main FOREIGN KEY
         (
         KeyCol
         ) REFERENCES dbo.Main
         (
         KeyCol
         ) ON UPDATE    NO ACTION
            ON DELETE    NO ACTION
    
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
    ALTER TABLE dbo.Sub SET (LOCK_ESCALATION = TABLE)
    GO
    IF @@error <> 0 AND @@trancount > 0 ROLLBACK TRANSACTION
    IF @@trancount = 0 BEGIN SET CONTEXT_INFO 0x01 BEGIN TRANSACTION END
    GO
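    -- Commit only if no batch ever flagged a failure in CONTEXT_INFO.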
    IF context_info() IS NULL
         COMMIT
    ELSE
         ROLLBACK TRANSACTION

    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Thursday, December 5, 2013 11:09 PM