Content Deployment Job

  • Question

• Hi there, I am hoping someone can help with an issue I have been stuck on for the last few days while pushing content from the Staging farm to the DR farm. Before I dive into the error, let me give you some background information about the environment.

    Authoring farm : MOSS 2007, 32-bit environment

    Staging farm : MOSS 2007, 32-bit environment

    Production : MOSS 2007, 64-bit environment

    DR : MOSS 2007, 32-bit and then upgraded to 64-bit.

Content is pushed from Authoring to Stage, Stage to Prod, and Stage to DR. Everything works well except that some of the Content Deployment jobs from Stage to DR throw the exceptions below. Can someone please help rectify this issue? Thanks in advance.

    <?xml version="1.0" encoding="utf-8" ?>

<ArrayOfReportMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <ReportMessage Title="Exception of type 'System.OutOfMemoryException' was thrown. at System.String.CreateStringFromEncoding(Byte* bytes, Int32 byteLength, Encoding encoding) at System.Text.UnicodeEncoding.GetString(Byte[] bytes, Int32 index, Int32 count) at System.Data.SqlClient.TdsParserStateObject.ReadString(Int32 length) at System.Data.SqlClient.TdsParser.ReadSqlStringValue(SqlBuffer value, Byte type, Int32 length, Encoding encoding, Boolean isPlp, TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.ReadSqlValue(SqlBuffer value, SqlMetaDataPriv md, Int32 length, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ReadColumnData() at System.Data.SqlClient.SqlDataReader.ReadColumn(Int32 i, Boolean setTimeout) at System.Data.SqlClient.SqlDataReader.GetValueInternal(Int32 i) at System.Data.SqlClient.SqlDataReader.GetValues(Object[] values) at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values) at System.Data.ProviderBase.SchemaMapping.LoadDataRow() at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping) at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) at System.Data.Common.LoadAdapter.FillFromReader(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) at System.Data.DataSet.Load(IDataReader reader, LoadOption loadOption, FillErrorEventHandler errorHandler, DataTable[] tables) at System.Data.DataSet.Load(IDataReader reader, LoadOption loadOption, String[] tables) at Microsoft.SharePoint.Deployment.ListObjectHelper.GetNextBatch() at Microsoft.SharePoint.Deployment.ObjectHelper.RetrieveDataFromDatabase(ExportObject exportObject) at Microsoft.SharePoint.Deployment.ListObjectHelper.RetrieveData(ExportObject exportObject) at Microsoft.SharePoint.Deployment.ExportObjectManager.GetObjectData(ExportObject exportObject) at Microsoft.SharePoint.Deployment.ExportObjectManager.MoveNext() at Microsoft.SharePoint.Deployment.ExportObjectManager.ExportObjectEnumerator.MoveNext() at Microsoft.SharePoint.Deployment.SPExport.SerializeObjects() at Microsoft.SharePoint.Deployment.SPExport.Run()" Time="2011-08-21T09:20:11.6259708Z" Severity="Error" Phase="ExportInProgress" />

      <ReportMessage Title="Content deployment job 'XYZ'  failed.The exception thrown was 'System.OutOfMemoryException' : 'Exception of type 'System.OutOfMemoryException' was thrown.'" Time="2011-08-21T09:20:13.172826Z" Severity="Error" Description="" Recommendation="" Phase="Failure" />

      </ArrayOfReportMessage>

    Sunday, August 21, 2011 9:57 AM

Answers

  • Content Deployment Issues and Fix

Our MOSS 2007 based Internet environment is set up as four different farms:

1. Authoring: SQL 2005 DB server and WFE/App server (32-bit OS, Windows Server 2003 Enterprise, 4 GB RAM)

2. Stage: standalone (Windows Server 2003 Enterprise, 32-bit, 4 GB RAM, SQL Server 2005)

3. Production: SQL 2005 DB server and WFE/App server (Windows Server 2008 Standard, 8 GB RAM)

4. Disaster Recovery: SQL 2005 DB server and WFE/App server (Windows Server 2008 Standard, 8 GB RAM)

    All the above SharePoint farms have been patched to the same build version (12.0.0.6548)

Our company uses Content Deployment heavily to transfer content across these farms.

Recently I migrated one of the 32-bit Windows Server 2003 based WFE/APP servers to a 64-bit Windows Server 2008 WFE/APP server.

I was then faced with a new challenge when I created new Content Deployment jobs between the 32-bit Stage server and the new 64-bit Prod server. Below are some of the errors and the steps I took to address them.

During this period I went through almost every article on the net with the keyword “Content Deployment”, but none addressed the problems I was facing directly. In the end I understood why: even though the event IDs and log entries are the same, the root cause can differ depending on your environment. Anyway, I thought I would write it up here in case it helps someone one day.

I created 10 different full and incremental Content Deployment jobs between Stage and the new 64-bit Production server, and 6 of them succeeded. The remaining 4 jobs threw the following exceptions:

    <ReportMessage Title="Content deployment job 'XYZ'  failed.The exception thrown was 'System.OutOfMemoryException' : 'Exception of type 'System.OutOfMemoryException' was thrown.'" Time="2011-08-21T09:20:13.172826Z" Severity="Error" Description="" Recommendation="" Phase="Failure" />

When you check the Event Viewer you may see errors with the following Event IDs:

    Event ID: 4958

    Publishing: Content deployment job failed. Error: 'System.Net.WebException: The underlying connection was closed: Unable to connect to the remote server. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException'

     

     

    Event ID: 5323

    Failed to transfer files to destination server for Content Deployment job ‘XYZ'. Exception was: 'System.Net.WebException: The underlying connection was closed: Unable to connect to the remote server. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.

First things first: from the error it looks like something to do with memory. OK, where can I free up memory?

• Stopped some of the app pools on the Stage server to free up memory

• Restarted the SharePoint Timer service (net stop sptimerv3 / net start sptimerv3)

• Tried the content push again; it failed with the same error

Then I read the following thread, which saved me plenty of time and also gave me hope:

http://social.technet.microsoft.com/Forums/en/sharepointadmin/thread/ba6f8a3c-848b-4107-a2bb-1012ea2bb91b  The fix was simply to restart the Stage (32-bit) server and, unbelievably, after the restart three of the four failing jobs succeeded.

Here the real challenge starts: one of the failed jobs was still throwing an error:

    Export ran out of memory while compressing a very large file. To successfully export, turn compression off by specifying the -nofilecompression parameter. at Microsoft.SharePoint.Deployment.ExportDataFileManager.<>c__DisplayClass2.<Compress>b__0() at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode) at Microsoft.SharePoint.Deployment.ExportDataFileManager.Compress(SPRequest request) at Microsoft.SharePoint.Deployment.SPExport.Run()

     

Next, run the following stsadm command to disable compression (compression is enabled by default). When the job starts compressing, it takes up a lot of memory, and content deployment is memory intensive to begin with. Here’s the command to disable compression:

stsadm -o editcontentdeploymentpath -pathname "pathname" -enablefilecompression no
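If you are unsure of the exact path name to pass to -pathname, you can list the content deployment paths configured on the source farm first (a minimal sketch, assuming stsadm is run from the 12 hive BIN folder; the paths are also listed under Central Administration > Operations > Content deployment paths and jobs):

cd /d "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN"
stsadm -o enumcontentdeploymentpaths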

I also checked the disk space on the drive where the 12 hive lives, because I noticed I was running out of disk space while it was exporting. I cleaned up my C: drive since I was left with only 1.5 GB.

I re-ran the job and, although it failed again, I noticed good progress.

This time the export phase finished, and the job failed when it started transporting the exported objects to the destination server. Here’s the error I got:

     

<ReportMessage Title="Content deployment job 'TEST' failed.The remote upload Web request failed." Time="2011-08-22T14:06:46.5204312Z" Severity="Error" Description="" Recommendation="" Phase="Failure" />

      </ArrayOfReportMessage>

    Troubleshooting

At the beginning I thought it was something to do with establishing a connection to the destination server while transporting the items, but I proved it was not: I could see files arriving in the destination server’s content deployment temporary folder. Great, so it was nothing to do with the network or authentication.

I then thought it could be the upload limit on the destination server, and decided to find out the size of the exported items on the Stage server. Note that when the job fails, by default all the exported files are automatically removed from the content deployment temporary folder on the source farm, and if you get as far as the transporting phase the same happens on the destination farm too. So, technically, you never get a chance to see the size of the exported items. Here’s the command to keep those temporary files so you can check their size:

stsadm -o editcontentdeploymentpath -pathname "pathname" -keeptemporaryfiles

This gave me the chance to see the size of the exported items that would later be transported to the destination farm. The largest file was 65 MB, but IIS 7 has a default upload limit of about 29 MB. Okay, here’s how I fixed it:

Step 1: In the destination farm’s Central Administration, change the default maximum upload size to the largest file you could possibly upload to the farm. In my case I changed it to 100 MB.

Step 2: Change the connection time-out in IIS, in case the upload takes longer than the default time-out (120 seconds). This can be done through IIS Manager → the site’s Advanced Settings.
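If you prefer the command line, the same time-out can be set with appcmd (a sketch only; the site name is a placeholder for the destination web application’s IIS site, and 00:10:00 is an arbitrary ten-minute value):

%windir%\system32\inetsrv\appcmd set site /site.name:"SharePoint - Destination80" /limits.connectionTimeout:00:10:00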

Step 3: Change the web.config file of the destination web application to allow uploads of the maximum size.

Look for maxRequestLength="51200".

Originally:  <httpRuntime maxRequestLength="51200" />

Replace with:  <httpRuntime executionTimeout="999999" maxRequestLength="102400" />

(maxRequestLength is expressed in kilobytes, so 102400 ≈ 100 MB; leaving it at 51200 ≈ 50 MB would still block the 65 MB file.)

Step 4: Look for <system.webServer>.

Then add the following entry immediately after it. Make sure to set maxAllowedContentLength (which is in bytes; 104857600 = 100 MB) to accommodate your maximum upload size, making it a bit bigger than the upload limit set through Central Administration.

<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="104857600" />
  </requestFiltering>
</security>
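The same limit can also be applied from the command line instead of hand-editing web.config (a sketch; the site name is a placeholder and 104857600 bytes = 100 MB):

%windir%\system32\inetsrv\appcmd set config "SharePoint - Destination80" /section:requestFiltering /requestLimits.maxAllowedContentLength:104857600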

     

I then re-ran the job; it failed again, but I could see further progress.

    This time

• The job finished exporting successfully
• The job finished transporting all the exported items successfully
• The job started importing items to the destination farm and then failed

Here’s the error I got this time:

    The specified name is already in use. A list, survey, discussion board, or document library cannot have the same name as another list, survey, discussion board, or document library in this Web site. Use your browser's Back button, and type a new name. at Microsoft.SharePoint.Library.SPRequest.CreateListOnImport(String bstrUrl, Guid& pguidListId, String bstrTitle, String bstrDescription, Int32 lTemplateID, String bstrFeatureId, Guid guidRootFolderId, Int64 llFlags, Int32 iVersion, Int32 iAuthor, String bstrFields, String bstrContentTypes, String bstrImageUrl, String bstrEventSinkAssembly, String bstrEventSinkClass, String bstrEventSinkData, Guid guidDocTemplateId, String bstrViews, String bstrForms, Boolean bCompressedSchema) at Microsoft.SharePoint.Deployment.ListSerializer.CreateList(SPWeb parentWeb, Dictionary`2 listMetaData, Boolean usingPublicSchema) at Microsoft.SharePoint.Deployment.ListSerializer.SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector) at Microsoft.SharePoint.Deployment.XmlFormatter.ParseObject(Type objectType, Boolean isChildObject) at Microsoft.SharePoint.Deployment.XmlFormatter.DeserializeObject(Type objectType, Boolean isChildObject, DeploymentObject envelope) at Microsoft.SharePoint.Deployment.XmlFormatter.Deserialize(Stream serializationStream) at Microsoft.SharePoint.Deployment.ObjectSerializer.Deserialize(Stream serializationStream) at Microsoft.SharePoint.Deployment.ImportObjectManager.ProcessObject(XmlReader xmlReader) at Microsoft.SharePoint.Deployment.SPImport.DeserializeObjects() at Microsoft.SharePoint.Deployment.SPImport.Run()

So I knew for sure there was a list, library, or survey in the destination site collection with the same name as one in the source but a different ID.

I then decided to create a new web application attached to a blank content database and re-create the content deployment job against the new web application.

    Solution:

Create a new web application attached to a new content database, create a new site collection using the Blank Site template, and run a full content deployment job.
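For reference, here is roughly what the site collection creation and the job run look like from the command line (a sketch only, not the exact commands I ran; the URL, owner account, and job name are placeholders, and STS#1 is the Blank Site template). The web application itself and the deployment path/job are still created through Central Administration:

stsadm -o createsite -url http://destination-webapp -ownerlogin DOMAIN\spadmin -owneremail spadmin@contoso.com -sitetemplate STS#1
stsadm -o runcontentdeploymentjob -name "XYZ"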

    Yay, it succeeded.

Note: a full content deployment job only deploys the current state of the source farm; it does not export versions or any history. If you compare the database sizes on the source and destination they will certainly differ, because only the active content is registered, i.e. the history of deleted content is not migrated. For me, since there is no content publishing in the production farm, this works perfectly.

Finally, I applied all custom solutions to the new web application, since we have many in-house customisations, copied the web.config file across from the old web application, and tested the site: all in good shape.

Then I scheduled the incremental jobs, and I was good to go for a walk and breathe some fresh air.


    AfeAU


    • Marked as answer by afuau Thursday, September 1, 2011 2:03 AM
    Wednesday, August 31, 2011 11:44 PM

All replies

• Hi,

This shows that you may need to upgrade the hardware components used in your DR environment.

    I hope this will help you out.


    Thanks, Rahul Rashu
    Sunday, August 21, 2011 11:20 AM
• Thanks Rahul for your response, but this issue arose after we upgraded our DR farm to a 64-bit environment. The content push was working before the upgrade, while the farm was on a 32-bit server. Once we upgraded the farm to 64-bit and ran full / incremental content pushes, it threw the above error for two sites only; all the other jobs finished successfully.

    Thanks,


    AfeAU
    Monday, August 22, 2011 3:16 AM
• Hi,

Then I would suggest you break these two jobs down into smaller jobs and try again.

For example, if job1 replicates 10 sites, break it into two jobs of 5 sites each.

     

    Let me know if this works

     


    Thanks, Rahul Rashu
    Monday, August 22, 2011 3:53 AM
• Thanks Rahul. I rebooted the Stage server and ran the content push, and this time the error changed to the one shown below. I set up the job using the farm account, which definitely has full permission on the destination site. I have tested browsing the site with this account from the Stage server and all seems good. But for some reason it throws the following error and I can't figure out the root cause. Thanks in advance for looking into this issue.

     

Job Name: AUC
Job Description:
Path: AUCorp
Source Server URL: http://sourcesite
Destination Server URL: http://destination site
Time: 23/08/2011 12:06 AM
Title: Content deployment job 'Test' failed.The remote upload Web request failed.
Severity: Error
Description:
Recommendation:

     

    <?xml version="1.0" encoding="utf-8" ?>

<ArrayOfReportMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <ReportMessage Title="Exporting: #1 - 5864" Time="2011-08-22T14:06:22.7238608Z" Severity="Informational" />

      <ReportMessage Title="Content deployment job 'TEST' failed.The remote upload Web request failed." Time="2011-08-22T14:06:46.5204312Z" Severity="Error" Description="" Recommendation="" Phase="Failure" />

      </ArrayOfReportMessage>

     


    AfeAU
    Monday, August 22, 2011 11:44 PM
• Hi,

    There is another thread at http://social.msdn.microsoft.com/Forums/en-US/sharepointecm/thread/b5f93d44-8c14-498a-b795-fd41c9dbcbc0

    There is also a KB article that explains this http://support.microsoft.com/kb/969565

    I hope this will help you out.


    Thanks, Rahul Rashu
    Tuesday, August 23, 2011 8:59 AM
• Hi Rashu, thanks for following up on this issue, but still no luck. I tried a few things over the last few days and also searched many blogs and articles on the net, but still no luck. Here's some of what I did and what I got in return.

Originally, out of 10 Content Deployment jobs, 4 were throwing the above error "The remote upload Web request failed"; the remaining jobs ran as scheduled and succeeded with no issues.

In an effort to fix my problem, I found the following thread http://social.technet.microsoft.com/Forums/en/sharepointadmin/thread/ba6f8a3c-848b-4107-a2bb-1012ea2bb91b which recommends rebooting the server every time you run the content deployment job. Surprisingly, this fixed three of the four failing content deployment jobs. Now I am left with one job, which still throws errors with event IDs 4958 and 5323 even though I reboot the server. Exporting the content finishes successfully, and the error is thrown at the stage when it starts transporting the exported items. I used stsadm -o editcontentdeploymentpath -pathname "" -enablefilecompression no so that the server won't run out of memory when it starts compressing the exported items, and with that I got to the point where the export finished successfully with 0 errors. Now the issue is when it starts transporting the items.

    Event ID : 4958

    Publishing: Content deployment job failed. Error: 'System.Net.WebException: The underlying connection was closed: Unable to connect to the remote server. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException'

    Event ID: 5323

    Failed to transfer files to destination server for Content Deployment job 'Investment Inc'. Exception was: 'System.Net.WebException: The underlying connection was closed: Unable to connect to the remote server. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.

Can anyone please help me get past this last issue I have been stuck on for the last couple of weeks?

Thanks in advance.


    AfeAU
    • Edited by afuau Monday, August 29, 2011 11:26 PM
    Monday, August 29, 2011 4:18 AM
  • Hi,

It seems that the remaining job references something that is causing this issue.

    Follow this:

    1. Break down the remaining job into parts.

    2. Run each part to check if it fails or pass.

In this way you can narrow down what is causing the issue.

    I hope this will help you out.


    Thanks, Rahul Rashu
    Monday, August 29, 2011 7:28 AM
• Rahul, many thanks again, but I did try to break the job down to a particular site within the site collection. When I first created the job I pushed the entire site collection, which has 10 or more subsites, and the size of the items to be pushed reaches around 1.4 GB. Later I tried only a specific site (one site) within the site collection, which brings the size down to 1.3 GB (not much difference), but the result is still the same: 'System.OutOfMemoryException'.

I hope someone can help me get this rectified, as I have almost run out of ideas.

    Cheers

     


    AfeAU
    Monday, August 29, 2011 11:33 PM
  • Hi,

Even though you have broken down the job, it still pushes 1.3 GB, so there is not much difference. Break it down further and start with 100 MB, then 200 MB, etc., to check at what size the exception occurs.

I hope this will help you out.


    Thanks, Rahul Rashu
    Tuesday, August 30, 2011 12:34 AM
  • Hi Rahul,

When you say break it down to 100 MB, how can I achieve that? Through the content deployment settings in SharePoint Central Administration I can only create a job for the entire site collection or for a specific site within the site collection. If there is a way to break it down into smaller sizes, that would be great; could you please provide details on how I can achieve this?

     

    Thanks,


    AfeAU


    Tuesday, August 30, 2011 1:34 AM
  • Hi,

You can split the content of your site. Let me explain this in detail.

    You can follow these steps:

1. Create a site by copying the contents from your source site (you can use the STSADM command or Site Manager; see the sketch after these steps).

2. Now create more subsites in it and spread your content across them in chunks of approximately 100, 200, 300 MB, etc.

3. Now run a job against each of them to find the limit, as I explained earlier.
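If you go the STSADM route for step 1, the copy can be made with export/import (a sketch only; the URLs and file path are placeholders, and the target site must already exist):

stsadm -o export -url http://stage/sites/source -filename C:\Temp\source.cmp -includeusersecurity
stsadm -o import -url http://stage/sites/sourcecopy -filename C:\Temp\source.cmp -includeusersecurity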

     

    Please let me know if anything is not clear


    Thanks, Rahul Rashu
    Tuesday, August 30, 2011 1:48 AM
  • Rahul,

That same site collection is being pushed successfully to the production farm from this Stage server. But since the DR server is new, even though I create an incremental job it still does a full content push; I believe that's because there is no history of content deployment from Stage to the new DR server. That said, recreating a site in the Authoring farm with less content than the current one, and pushing content all the way from Authoring to Stage and from Stage to DR again, won't help rectify the memory issue.

My only concern is that when I run the job it fails when it starts compressing the exported items, and when I switch off compression it fails when it starts transporting the exported items; in all cases it comes down to running out of memory.

    Hope this makes sense.

    Thanks,


    AfeAU


    Tuesday, August 30, 2011 2:21 AM