Oddly, I ran my SSIS package on dev without problems, but when I re-point to my integration database (still using the same source) I am occasionally getting these errors on an Execute SQL Task that simply inserts into a table. As I noted, I don't remember ever seeing this on dev, and I successfully ran 36,000 rows into the dev table. To be sure, I dropped the integration table, re-created it with the dev schema, and re-ran. I can go about 250 rows, but then I start seeing these errors. I thought maybe it was a memory issue, so I closed a bunch of stuff on my machine, but the same thing still happens.
1) Any ideas what might be causing this? I double-checked all the fields, and the db columns should be large enough to handle the data. The first 250 rows do not cause any problems.
2) I have error handling set on the SQL task, but it does not appear to be capturing any information related to this error. Any suggestions on how to capture more information would be appreciated.
The value is too large to fit in the column data area of the buffer.
at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.SetString(Int32 columnIndex, String value)
at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.set_Item(Int32 columnIndex, Object value)
at Microsoft.SqlServer.Dts.Pipeline.ScriptBuffer.set_Item(Int32 ColumnIndex, Object value)
at UserComponent.PrimeOutput(Int32 Outputs, Int32 OutputIDs, PipelineBuffer Buffers)
at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.PrimeOutput(Int32 outputs, Int32 outputIDs, PipelineBuffer buffers)
Running this package again in debug mode, the odd thing is that while the OnError event of the SQL task is triggered (where I am therefore able to see the values of the variables), the row and all the fields are still inserted into the table. I would think that if this were a 'true' error then the insert would fail. I'll check the size of the string in the output column, but I am still flummoxed as to how I can get an error while it actually still works.
Ok. I would say use a Data Flow Task to load the source rows into the destination table. Also, on the destination component, use Redirect Row on the error output to inspect the error records.
Hello Arthur and hsbal,
I unproposed Arthur's reply as the answer. After additional exploration I discovered that the error I noted about the value being too large to fit in the buffer is actually the manifestation of an attempt to insert a duplicate key into a table. It kind of misrepresents the true error, which made it confusing.
In the source database there are some values that were stored as both 01-234-567 and 1234567 (for the same person in the source database). I need to always format them as 1234567 to pass into the stored procedure, so while they were unique in the source database, once I removed hyphens and leading zeros they became duplicates. I added transformations in the data flow to redirect these duplicates so they can be removed from the source database.
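To make the collision concrete, here is a minimal sketch of the normalization described above (the actual package does this inside the data flow; `normalize_id` and the sample values are illustrative, not the real transformation code):

```python
from collections import Counter

def normalize_id(raw):
    """Strip hyphens, then leading zeros, as described in the post."""
    digits = raw.replace("-", "")
    return digits.lstrip("0") or "0"

# Hypothetical sample values: the first two were unique in the source
# database but normalize to the same key.
ids = ["01-234-567", "1234567", "09-876-543"]
normalized = [normalize_id(i) for i in ids]

# Any key appearing more than once is a duplicate to redirect.
dupes = [k for k, n in Counter(normalized).items() if n > 1]
print(dupes)  # → ['1234567']
```

This is why rows that look distinct upstream can still trip a duplicate-key error at the destination: uniqueness has to be checked on the normalized value, not the raw one.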