A very important part of MOSS is that it holds semi-structured information. Semi-structured repositories were born out of the need to bring the database and document worlds together, and they hold information that is organized in a loose fashion. SharePoint is an excellent example of an application that handles semi-structured information, which it has been doing since its first incarnation in 2001.

You can recognize semi-structured data via some common traits, such as:

  • Similar list items are grouped together (in site collections, sites, lists or folders, depending on the similarity).
  • The names of item containers (such as folders) typically describe their contents.
  • Metadata is used extensively to describe list items.
  • List items that are stored in the same item container don’t have to share identical sets of metadata.
  • The order of list item metadata is unimportant.
  • Pieces of list item metadata may not be required.

Semi-structured information is the opposite of unstructured information: data that has no structure and isn't easily readable by a machine, such as audio, video, IM messages, and e-mail messages. Looking through SharePoint glasses, unstructured data is just data that is waiting to get some structure.

Semi-structured information is also the opposite of relational information, where several pieces of information have some kind of connection to each other. Or, as the British mathematician and logician Augustus De Morgan (not your typical English name, now is it?) put it beautifully around 150 years ago: "When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation."

Currently, MOSS doesn't handle relations between list items well, although we hope and expect that this will change in the future, because there are valid reasons why you would need them. For example, you might want to store compound documents, or documents that have relationships to each other (before someone asks: the lookup field mechanism is too basic to implement document relationships, a topic with which we could fill at least a book chapter, but that's another story).

As we've said, MOSS doesn't handle relations between list items well, but it does offer the infrastructure that allows you to implement such a system yourself, the list event system being the foremost member of this infrastructure. The list event system allows you to implement scenarios like these (a sketch of such an event receiver follows the list):

  • Item A has a relationship to Item B, so if a piece of metadata for Item A changes, this affects Item B as well.
  • Item A links to Item B, so if Item B moves to another location, the link to Item B needs to be updated as well.
  • If I remove Item A, Item B needs to be removed as well.
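
As a sketch of that last scenario, the following list event receiver cascades a delete from one item to a related item. This is purely illustrative: the class name and the "RelatedItemID" field that stores the relationship are hypothetical assumptions of ours, not something MOSS provides out of the box.

using System;
using Microsoft.SharePoint;

// Hypothetical event receiver: when an item is deleted, also delete the related
// item whose ID is stored in an (assumed) "RelatedItemID" field.
public class RelatedItemEventReceiver : SPItemEventReceiver
{
    public override void ItemDeleting(SPItemEventProperties properties)
    {
        object relatedId = properties.ListItem["RelatedItemID"];
        if (relatedId != null)
        {
            SPList list = properties.ListItem.ParentList;
            SPListItem relatedItem = list.GetItemById(Convert.ToInt32(relatedId));
            relatedItem.Delete();
        }
    }
}
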
When building a system that supports relations between list items, you'll soon find out that you need transactions, quite a well-known concept, especially in the database world. Since MOSS doesn't support list item transactions, you need to build such a mechanism yourself. We think the best way to do this is to build a System.Transactions resource manager for SharePoint, which is, coincidentally, the topic of this article. To be precise, we'll show how to build a resource manager that's able to support transactions for list item metadata.

Background info

Before we dive into building System.Transactions resource managers it helps if you know a little bit about transactions and related topics in general. As a quick refresher, we’ll discuss some concepts that are important for a complete understanding of the article. These concepts are:

  • Transactions
  • ACID
  • Two-phase commit protocol (2PC) *
  • Single-phase commit
  • MS DTC
  • Redo log

 * For this article, this is the most important of the concepts mentioned above.

Transactions and ACID

A transaction is a unit of work performed against some kind of data repository, treated in a coherent and reliable way, independent of other transactions. This means a transaction must have four traits, often referred to by the acronym ACID (a short conceptual code sketch follows the list). A transaction must be:

  1. Atomic, either all tasks within a transaction are completed, or none of them are.
  2. Consistent, the data repository must be in a consistent state both before the start of the transaction and after the end of the transaction (even if the transaction fails).
  3. Isolated, a transaction should appear isolated from other operations. This means nobody is allowed to see the intermediate state of data during the transaction.
  4. Durable, once the transaction manager notifies the client that the transaction has been successful, the transaction is persisted and cannot be undone (even in the case of a system failure).
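
As a purely conceptual sketch (using the System.Transactions API that we discuss in detail later, with two placeholder methods standing in for work against transactional resources), atomicity looks like this from the client's point of view:

using System.Transactions;

// Conceptual sketch only: both pieces of work either commit together or roll back
// together, provided the resources involved enlist in the ambient transaction.
using (TransactionScope scope = new TransactionScope())
{
    UpdateCustomerRecord(); // placeholder for work against resource 1
    UpdateOrderRecord();    // placeholder for work against resource 2

    // If an exception is thrown before Complete() is called, the scope is disposed
    // without completing and the transaction aborts, rolling back both updates.
    scope.Complete();
}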

Two-phase commit protocol

The 2PC protocol is an algorithm that helps a system coordinate a transaction. It is important to us because we will demonstrate how to build a System.Transactions resource manager for SharePoint that supports this protocol. Tasks (and ultimately resources) within a 2PC transaction are managed by resource managers. All resource managers within a transaction are managed by the transaction coordinator. There are two kinds of resources (and therefore two kinds of resource managers as well):

  • Durable resources, these are resources whose state is expected to exist after the lifetime of a transaction. In case of a failure, these resources need to be recovered.
  • Volatile resources, these are resources whose state is not expected to exist after the lifetime of a transaction (such resources only exist in memory). In case of a failure, these resources won’t be recovered.

The 2PC protocol is mostly used in distributed environments and consists of several phases, each with its own steps. Every transaction that uses the two-phase commit protocol goes through a life cycle of three phases (yes, you might want to re-read that line, because the two-phase commit protocol really does have three phases). The following list shows how the 2PC protocol works (a simplified code sketch follows the list):

  1. Observation phase (also known as phase Zero), the transaction coordinator observes resource managers that are participating in the transaction.
    1. Resource managers that want to participate in a transaction are enlisted via a transaction coordinator.
  2. Voting phase (also known as phase One), the transaction coordinator tries to gather information from the resource managers participating in the transaction and decides if the transaction is successful or not. Basically, during this phase, resource managers are telling the transaction coordinator if they are able to perform the action we want them to perform.
    1. The transaction coordinator asks all resource managers if they’re ready to commit.
    2. Resource managers perform some kind of action.
    3. Resource managers write to an undo log. The undo log contains a list of all changes that will be reversed (the rollback) if the transaction fails.
    4. Resource managers write to a redo log. The redo log contains a list of all changes made to the underlying data store. Redo logs can contain committed, as well as uncommitted transactions. In this article, we won’t show how to implement a redo log. However, at the end of this section we’ll talk about redo logs some more.
    5. Resource managers reply to the transaction coordinator, letting it know whether their part of the transaction was successful. A resource manager's vote can be one of three things:
      1. Read-only vote, the resource manager agrees to commit its part of the transaction and doesn’t need an outcome notification.
      2. Prepared vote, the resource manager agrees to commit its part of the transaction and needs an outcome notification.
      3. Abort vote, the entire transaction needs to be aborted because the resource manager fails to complete its part of the work.
    6. The transaction coordinator waits a given amount of time until it has responses from all resource managers. Based on this information, the transaction coordinator decides if the transaction has been a success or a failure. If the transaction coordinator doesn’t get all the responses in time, the transaction fails.
  3. Final phase success (the final phase is also known as phase Two). The transaction is completed. During this phase, resources should actually perform the action we want them to perform. At this stage, resource managers should always be able to perform the required action, or else the 2PC protocol gets screwed up (the 2PC protocol isn’t flawless and there are scenarios where it can actually cause problems; at the end of this article we’ve included a link to more information).
    1. If all resource managers completed their tasks, and they’ve let the transaction coordinator know so, the transaction coordinator sends a Commit notification to all resource managers that sent a Prepared vote (that was sent in phase One, thus letting the transaction coordinator know they needed an outcome notification from the transaction coordinator).
    2. Each resource manager that sent a Prepared vote completes any remaining actions required to commit their part of the transaction and releases any locks and objects it holds.
    3. Each resource manager that sent a Prepared vote acknowledges to the transaction coordinator that it has finished its job.
    4. The transaction coordinator completes any remaining actions (if necessary) and releases any locks and objects it holds.
  4. Final phase (error). The transaction has failed and any temporary changes need to be reversed.
    1. If at least one of the resource managers failed its tasks, or failed to let the transaction coordinator know in time, the transaction coordinator sends an Abort notification to all resource managers that sent a Prepared vote.
    2. Each resource manager that sent a Prepared vote rolls back any changes it performed, based on its undo log, and releases any locks and objects it holds.
    3. Each resource manager that sent a Prepared vote acknowledges to the transaction coordinator that it has finished its job. If a resource manager loses contact with the transaction coordinator before this happens, the resource manager is said to be "In Doubt". Durable resource managers will try to reconnect to the transaction coordinator later and perform recovery actions. The other way round works too, if a transaction coordinator is still up and running, but loses contact with one or more of the resource managers within the transaction, the coordinator sends an In Doubt notification to the remaining connected resource managers.
    4. The transaction coordinator completes any remaining actions (if necessary) and releases any locks and objects it holds.
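
To make this control flow a bit more tangible, here is a deliberately simplified model of the voting and final phases in code. The interface and types below are ours, purely for illustration, and bear no relation to the real DTC or System.Transactions APIs.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative types only: a toy model of the voting phase and the final phase.
enum Vote { ReadOnly, Prepared, Abort }

interface IToyResourceManager
{
    Vote Prepare();   // phase One: do the work, write the undo/redo logs, cast a vote
    void Commit();    // final phase (success)
    void Rollback();  // final phase (error)
}

static class TwoPhaseCommitSketch
{
    public static void Run(IList<IToyResourceManager> resourceManagers)
    {
        // Voting phase: gather a vote from every enlisted resource manager.
        var votes = resourceManagers.Select(rm => new { rm, vote = rm.Prepare() }).ToList();
        bool commit = votes.All(v => v.vote != Vote.Abort);

        // Final phase: only resource managers that cast a Prepared vote
        // receive an outcome notification.
        foreach (var v in votes.Where(v => v.vote == Vote.Prepared))
        {
            if (commit) v.rm.Commit();
            else v.rm.Rollback();
        }
    }
}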

As you've seen, although the 2PC protocol consists of three phases, it gets its name from the fact that every resource manager typically performs and commits its actions in two steps: first it determines whether it is able to perform an action (and tells the transaction coordinator so), and only after the transaction coordinator gives permission does it permanently commit the changes.

The 2PC describes an algorithm for handling distributed transactions, but it doesn't provide an implementation. For that, take a look at OLE Transactions (or simply OleTx), a distributed transaction protocol that implements the 2PC protocol. If you want to learn more about OleTx, check out http://msdn.microsoft.com/en-us/library/cc229116.aspx. On Windows platforms, OleTx works in conjunction with the Distributed Transaction Coordinator (DTC), which will be discussed in more detail later in this section.

Single-phase commit

Starting a distributed transaction (for instance, a 2PC transaction handled by the DTC) by default results in a lot of overhead if it turns out you don't really need a distributed transaction after all. A transaction doesn't need to be distributed when all resources within the transaction run in the same application domain and the transaction contains either one or more volatile resources (in which case you won't need an advanced transaction mechanism) or at most a single durable resource (in which case it is fine to let the underlying data repository, such as SQL Server, handle the transaction). In such cases, you can use a protocol that is a bit more lightweight than 2PC. The OleTx transaction protocol specifies such a lightweight algorithm, called Single-Phase Commit, which works like this:

  1. The transaction won’t be distributed because it contains one or multiple volatile resources or at most a single durable resource. Because of this, a lightweight transaction coordinator is created that coordinates the transaction (for as long as it stays a non-distributed transaction).
  2. The transaction coordinator doesn’t ask resources if they are ready to commit their tasks, instead it asks them to perform a single-phase commit. In doing this, the transaction coordinator effectively delegates the right to decide the transaction outcome to the resources within the transaction. This lack of explicit coordination enhances runtime performance.
  3. Resources accept the delegation of rights and commit (or rollback) their actions. After doing so, they will notify the transaction coordinator.
  4. Alternatively, a resource can decide to reject the delegation of rights by responding with a Prepared vote. In such a case, the transaction coordinator takes care of deciding the transaction outcome.

By the way, transactions that start out as non-distributed may need to become distributed after all (for instance, if more durable resources are added to the transaction). If this happens, the non-distributed transaction is said to have been promoted to a distributed transaction.

DTC

The 2PC depends on the availability of a transaction coordinator, which you luckily don't have to write yourself. Microsoft has been shipping a transaction coordinator of its own for years that is able to handle distributed transactions (transactions that span multiple application domains, processes, or machines). It is called the Microsoft Distributed Transaction Coordinator (MS DTC) service and uses the OleTx distributed transaction protocol. By the way, since .NET 2.0 you have other (newer) choices of transaction coordinators as well, but we'll discuss those in the section "System.Transactions background". You can program against the DTC directly, or use it indirectly, for example via Enterprise Services. The following procedure explains how to monitor the DTC:

  1. Start > Programs > Administrative Tools > Component Services. This opens the Component Services MMC snap-in.
  2. Open the Component Services node.
  3. Open the Computers node.
  4. Open the My Computer node.
  5. Open the Distributed Transaction Coordinator node.
  6. Click on either the Transaction List or Transaction Statistics node if you want to find out what the DTC is doing.

Redo log

We promised that we'd talk more about the redo log at the end of this section. Well, here's the end of this section, so let's talk about it. Essentially, a redo log records all changes to a resource in order to prevent data loss. Durable resources such as databases typically write both to a local redo log and to standby redo logs on one or more other (standby) databases. If something really bad happens, the redo log is used to recover data.

A row in a redo log is called a redo entry; it contains so-called change vectors that describe what change has been made to a resource. Since a single transaction may cause multiple changes to a resource, a transaction may be described by multiple redo entries. Redo entries contain both committed and uncommitted transactions. Typical change vectors included in a redo entry are:

  • Indicators that specify when a transaction started.
  • A unique transaction identifier.
  • The name of the data object within a resource that was changed (such as the name of a database table).
  • An image of the data that existed before the change.
  • An image of the data that existed after the transaction made its changes.
  • Commit-indicators that indicate whether the transaction has been successful.

A client is only notified by a resource that the transaction has been completed after the system has successfully updated the redo log file. If the resource crashes, the recovery process tries to apply all (committed and uncommitted) transactions to its data, using the information it finds in the redo log. It must redo all transactions that were committed, and undo all transactions that were uncommitted (by applying the before and after data images; the transactions from the past are not actually replayed). Of course, redo logs are only useful in scenarios where you're working with durable resources; otherwise you'll never need to recover data, so you won't need a redo log.
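
To make the shape of a redo entry a bit more concrete, here is an illustrative model of ours (not taken from any particular product) that mirrors the change vectors listed above:

using System;

// Illustrative only: one redo entry describing a single change made by a transaction.
public class RedoEntry
{
    public DateTime TransactionStarted;  // indicator of when the transaction started
    public Guid TransactionId;           // unique transaction identifier
    public string ChangedObjectName;     // e.g. the name of a database table
    public byte[] BeforeImage;           // image of the data before the change
    public byte[] AfterImage;            // image of the data after the change
    public bool Committed;               // commit indicator
}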

What Microsoft Office SharePoint Server 2007 can’t do for you

MOSS doesn't support transactions when working with list items. In our opinion, this is something that definitely needs to be added in future versions. This doesn't mean that MOSS doesn't support any kind of transactions at all. For example, there are:

  • Transactions at the SQL Server database level (which should be regarded as off limits, although under the covers they happen all the same).
  • Transactions in workflows (built on the Windows Workflow Foundation framework).
  • CAML commands wrapped in batches and issued via SharePoint RPC calls.

At the end of this article, we will have created a mechanism that allows you to manipulate metadata of one or more SharePoint list items within one or more transactions. To preview what we're trying to accomplish, and what you can't achieve with out-of-the-box MOSS functionality, take a look at the following attempt to create a transaction that consists of several actions manipulating SharePoint list item metadata:

using (SPSite site = new SPSite("[URL site collection]"))
{
    using (SPWeb web = site.OpenWeb("[URL site]"))
    {
        web.AllowUnsafeUpdates = true;
        using (TransactionScope ts = new TransactionScope())
        {
            SPFile objFile1 = web.GetFile("[URL file]");
            SPFolder objFolder1 = web.GetFolder("[URL folder]");

            objFile1.CheckOut();
            objFile1.Properties[strKey] = strValue;
            objFile1.Update();
            objFile1.CheckIn(strComment);

            objFolder1.Properties[strKey] = strValue;
            objFolder1.Update();
            ts.Complete();
        }
    }
}

Similar code would work great with databases such as SQL Server or Oracle, but since the SharePoint object model does not enlist in any transaction coordinator, nothing happens to SharePoint list items when a transaction is rolled back, which makes creating a transaction scope here quite useless.

System.Transactions background

In this section, we will discuss the basics of what you need to know if you want to build a System.Transactions resource manager in .NET. First of all, if you want to implement some kind of transactional system without resorting to products like Enterprise Services or WCF, it is good to know that .NET 2.0 introduced two new transaction coordinators that you can use:

  • The Lightweight Transaction Manager (LTM) which only handles transactions that contain resources that are located within the same application domain.
  • The OleTx Transaction Manager which can handle transactions that span multiple application domains (including cross-machine calls). Under the covers, this OleTx Transaction Manager handles distributed transactions by leveraging COM+ DTC technology by dynamically configuring a temporary Enterprise Service through Services Without Components (SWC, a COM+ 1.5 feature). This means you can use the DTC management console (discussed in section "DTC") to monitor transactions that are handled by the OleTx Transaction Manager.

Functionality in the System.Transactions namespace takes care of communicating with these transaction coordinators, so you don't need to interact with them yourself. Earlier, when we talked about Single-Phase Commit, we discussed that a transaction can get promoted. This also holds true for these two transaction coordinators: a non-distributed transaction is handled by the LTM until the transaction spans multiple durable resources. At that point, the transaction gets promoted and is handled by the OleTx Transaction Manager instead. A transaction also gets promoted when a transaction object is serialized across an application domain boundary.
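
If you want to check for yourself whether a transaction has been promoted, you can inspect its distributed identifier. Here is a minimal sketch, assuming the System.Transactions namespace is imported and an ambient transaction exists:

Transaction tran = Transaction.Current;
if (tran != null)
{
    // For a transaction still handled by the LTM, the distributed identifier is
    // Guid.Empty; once the transaction is promoted to the OleTx Transaction Manager
    // (and thus the DTC), it gets a real identifier.
    bool isPromoted = tran.TransactionInformation.DistributedIdentifier != Guid.Empty;
    Console.WriteLine("Promoted to a distributed transaction: {0}", isPromoted);
}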

If you want either the LTM or the OleTx Transaction Manager to handle transactions for you, you need to create a System.Transactions resource manager. Both transaction coordinators know how to communicate with System.Transactions resource managers, so if you're creating a custom System.Transactions resource manager, you need to make sure it is able to communicate back to these transaction coordinators and that it is able to handle some kind of resource.

If you want to implement a System.Transactions resource manager that supports the 2PC algorithm, you need to implement the IEnlistmentNotification interface from the System.Transactions namespace (in System.Transactions.dll), which makes implementing the 2PC algorithm quite easy. If you want to do this, you need to take care of the following:

  • You need to enlist the resource manager in the transaction via the transaction coordinator. This is a step that is a part of the observation phase (phase Zero) of the 2PC and is not described in the IEnlistmentNotification interface, so you need to take care of that yourself by calling one of the available Enlist() methods of the current transaction (we’ll show how to do that later). Here you have three choices:
    • Call the EnlistDurable() method if you want to enlist a durable resource manager.
    • Call the EnlistVolatile() method if you want to enlist a volatile resource manager.
    • Call the EnlistPromotableSinglePhase() method if you want to enlist a new transaction coordinator that supports Single Phase commit and transaction promotion. This new transaction coordinator handles the transaction for the resource manager until it decides the transaction should be promoted to another (distributed) transaction coordinator. In System.Transactions terminology, this is called Promotable Single Phase Enlistment or PSPE.
  • The transaction coordinator calls the Prepare() method of the IEnlistmentNotification interface during the Voting phase (phase One) to ask if all System.Transactions resource managers in the transaction are willing and able to perform the action they’re responsible for. So, your System.Transactions resource manager needs to implement the Prepare() method. In this method you should perform some kind of action that is part of the transaction, write to an undo log that allows you to revert changes, write to a redo log (only if you’re managing a durable resource) that contains a list of all changes and send a reply to the transaction coordinator (either a Read-only, Prepared or Abort vote).
  • The transaction coordinator calls the Commit() method of the IEnlistmentNotification interface of every resource manager that sent a Prepared vote (which lets the transaction coordinator know the System.Transactions resource manager wants an outcome notification) if the transaction coordinator decides the transaction should be committed. So, your System.Transactions resource manager needs to implement this Commit() method. In this method you should complete any remaining actions to commit your part of the transaction, release any locks and objects the resource manager is holding and let the transaction coordinator know you’re finished.
  • The transaction coordinator calls the Rollback() method of the IEnlistmentNotification interface of every resource manager that sent a Prepared vote if the transaction coordinator decides the transaction should be aborted. So, your System.Transactions resource manager needs to implement this Rollback() method. In this method you should rollback any changes based on the undo log of your resource manager (which you need to create and update yourself). After that, you should release any locks and objects the resource manager is holding and let the transaction coordinator know you’re finished.
  • The transaction coordinator calls the InDoubt() method of the IEnlistmentNotification interface of every resource manager that sent a Prepared vote if the transaction coordinator loses contact with one or more resource managers. It’s up to you to decide what the System.Transactions resource manager does in this situation.

Enlisting a System.Transactions resource manager

We’ve already discussed that if you want to enlist a System.Transactions resource manager with the current transaction, there are three options available to you. Here, we’ll preview how to enlist a volatile System.Transactions resource manager. Later on, we’ll see extensive examples of all options. The following code listing checks if there is a current transaction, and if so, enlists a volatile resource manager with it:

Transaction tran = Transaction.Current;
if (tran != null)
{
 tran.EnlistVolatile(this, EnlistmentOptions.None);
}

Implementing Prepare()

The transaction coordinator passes a PreparingEnlistment object (called preparingEnlistment) to this method that allows you to communicate with the transaction coordinator. You should call its Prepared() method if the System.Transactions resource manager is able to perform its part of the transaction, like so:

preparingEnlistment.Prepared();

If the System.Transactions resource manager is not able to perform its work, it should force a rollback, like so:

 
preparingEnlistment.ForceRollback();

You can also cast a read-only vote, indicating that the System.Transactions resource manager is committing its part of the transaction but doesn’t need an outcome notification. After casting the read-only vote, the System.Transactions resource manager won’t receive further notifications from the transaction manager. You can cast the read-only vote like so:

 
preparingEnlistment.Done() ;

Committing

The transaction coordinator passes an Enlistment object (called enlistment) to the Commit() method that allows you to communicate with the transaction coordinator. At this stage, the System.Transactions resource manager should always be able to perform the required action and indicate it has done so, or else the 2PC protocol gets screwed up (see http://msdn.microsoft.com/en-us/library/system.transactions.enlistment.done.aspx for more information). Because of this, it is very important to always call the following method:

 
enlistment.Done();

Rollback

The transaction coordinator passes an Enlistment object (called enlistment) to the Rollback() method that allows you to communicate with the transaction coordinator. Here, you should implement your custom rollback mechanism and let the transaction coordinator know that you’re finished by calling:

enlistment.Done();

InDoubt

The transaction coordinator passes an Enlistment object (called enlistment) to the InDoubt() method that allows you to communicate with the transaction coordinator. Here, you should add your own implementation for dealing with the InDoubt state and let the transaction coordinator know that you’re finished by calling:

enlistment.Done();

IPromotableSinglePhaseNotification interface

If you want to implement a custom non-distributed transaction coordinator that supports the Single-Phase Commit algorithm and transaction promotion (so that the custom transaction coordinator is able to escalate the transaction to a distributed transaction coordinator; this is the PSPE mechanism discussed at the beginning of the section "System.Transactions background"), you need to implement the IPromotableSinglePhaseNotification interface, which contains the following methods:

  • Initialize, notifies transaction participants that enlistment has been completed.
  • Rollback, notifies transaction participants that the transaction is aborted.
  • SinglePhaseCommit, notifies transaction participants that the transaction is committed.

The IPromotableSinglePhaseNotification interface inherits from the ITransactionPromoter interface, which defines a single method called Promote(). Once this method is called the custom transaction coordinator needs to produce a propagation token (in the form of a byte array) which will be used by the next transaction coordinator to obtain a clone of the current transaction. We’ll show how to do this later.

If you implement the IPromotableSinglePhaseNotification interface you’ve basically created a custom transaction coordinator that allows a System.Transactions resource manager to say to the custom transaction coordinator: could you please take over the burden of taking care of this transaction for me?

Later on in this article, we'll take this quite literally, as we'll create a custom transaction coordinator that doesn't give its enlisted resource managers much influence over the proceedings of a transaction and makes decisions for all transaction participants that have enlisted on a transaction handled by the custom transaction coordinator.
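
For reference, handing the transaction over to such a custom coordinator is done via the third enlistment option mentioned earlier. A minimal sketch, where myCoordinator is a hypothetical object implementing IPromotableSinglePhaseNotification:

Transaction tran = Transaction.Current;
if (tran != null)
{
    // Returns true if the PSPE enlistment is accepted; it returns false if the
    // transaction has already been promoted or another PSPE enlistment exists,
    // in which case you fall back to EnlistVolatile() or EnlistDurable().
    bool accepted = tran.EnlistPromotableSinglePhase(myCoordinator);
}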

The ISinglePhaseNotification interface

If you want to create a System.Transactions resource manager that supports Single Phase Commit, you’ll need to implement the ISinglePhaseNotification interface. This interface inherits from the IEnlistmentNotification interface, which we’ve already discussed. It makes sense that this is so, because resource managers supporting Single Phase commit and transaction promotion could always end up being a part of a 2PC transaction. For instance, this could happen because several durable System.Transactions resource managers are part of the transaction.

The ISinglePhaseNotification interface defines a new method called SinglePhaseCommit(), which is called if the transaction coordinator chooses to use Single-Phase Commit, thereby delegating the right to decide the transaction outcome to the System.Transactions resource manager. The transaction coordinator passes a SinglePhaseEnlistment object (called singlePhaseEnlistment) to the System.Transactions resource manager that allows you to communicate with the transaction coordinator. The System.Transactions resource manager is responsible for letting the transaction coordinator know which decision it has made, and it has a couple of choices: Abort, Commit, or Done.

If you want to let the transaction coordinator know that you’ve decided to abort the operation, you need to call:

singlePhaseEnlistment.Aborted();

If you want to let the transaction coordinator know that you’ve decided to commit the operation, you need to call:

singlePhaseEnlistment.Committed();

If you want to let the transaction coordinator know that you've decided to commit the operation and don't want to receive any further notifications from the transaction coordinator, you need to call:

singlePhaseEnlistment.Done();

And via the last option you can let the transaction coordinator know that the System.Transactions resource manager thinks the transaction status is in doubt (hopefully it doesn’t think so because it can’t reach the transaction coordinator anymore, otherwise this call is going to get a bit funky), by calling:

singlePhaseEnlistment.InDoubt();
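
Putting these choices together, here is a minimal sketch of ours of a resource manager that supports Single-Phase Commit (it is not the SharePoint resource manager we build later); the 2PC members come along because ISinglePhaseNotification inherits from IEnlistmentNotification:

using System;
using System.Transactions;

// Illustrative skeleton only: a volatile resource manager that supports
// Single-Phase Commit in addition to the regular 2PC notifications.
public class SinglePhaseSketch : ISinglePhaseNotification
{
    // Called when the transaction coordinator delegates the outcome decision to us.
    public void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment)
    {
        try
        {
            // Perform and persist the actual work here...
            singlePhaseEnlistment.Committed();
        }
        catch
        {
            singlePhaseEnlistment.Aborted();
        }
    }

    // Regular 2PC members, used when the coordinator does not choose Single-Phase Commit.
    public void Prepare(PreparingEnlistment preparingEnlistment) { preparingEnlistment.Prepared(); }
    public void Commit(Enlistment enlistment) { enlistment.Done(); }
    public void Rollback(Enlistment enlistment) { enlistment.Done(); }
    public void InDoubt(Enlistment enlistment) { enlistment.Done(); }
}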

Creating a System.Transactions resource manager for SharePoint

Now that we've covered the theory behind all this stuff extensively, let's write some code. To test the code in this article, all you need to do is create a console application that references Microsoft.SharePoint.dll and System.Transactions.dll. We've created our test app using VS.NET 2008, so if you're still using VS.NET 2005, you might have problems with the reference to LINQ (you can simply remove it, since we won't be using LINQ anyway).

First, we will build two volatile resource managers for MOSS. One of them is called VolatileFileCommand and manages metadata for files in SharePoint lists. The other is called VolatileFolderCommand and manages metadata for folders in SharePoint lists. Since the two share common traits, we've created a base class for both of them called VolatileMossResourceManager. So, let's start the discussion with this base class.

Implementing volatile resource managers

First of all, since we're creating a System.Transactions resource manager, you need to import the System.Transactions namespace. As we're creating a resource manager that supports 2PC, you'll also need to implement the IEnlistmentNotification interface. When implementing this interface, you need to make sure that you enlist this resource manager in the current transaction. Since we've decided to create a volatile resource manager, you need to call the EnlistVolatile() method of the current transaction. To support all this, we've created a method called EnlistTransaction() that is called by our resource manager whenever it sees fit to do so (but it has to be during phase Zero). It looks like this:

public void EnlistTransaction()
{
    if (IsTransactionEnlisted) return;

    Transaction tran = Transaction.Current;
    if (tran != null)
    {
        tran.EnlistVolatile(this, EnlistmentOptions.None);
    }

    IsTransactionEnlisted = true;
}

 

 

During phase One, the transaction coordinator calls the Prepare() method of every System.Transactions resource manager. As we'll see later, at this point our file and folder System.Transactions resource managers will already have performed the actions they needed to perform as part of the transaction, and they will also have updated the undo log; since we're not dealing with a durable resource, we don't need a redo log. So all that's left to do is notify the transaction coordinator that we're willing to go ahead with the transaction by sending a Prepared vote. The implementation of our Prepare() method looks like this:

public virtual void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Prepared();
}

 

 

We'll leave the implementations of the Commit() and Rollback() methods to the file and folder child classes, and we won't implement the InDoubt() method, as this is overkill for volatile System.Transactions resource managers and way too much work to implement meaningfully for durable ones (in all the examples we've ever seen of the System.Transactions namespace, nobody ever implemented this method, which is no coincidence).

Finally, we'll use a dictionary object as an Undo log and create a custom method called SaveOrgValue() that saves values in this Undo log. We'll also add two flags. The first is called IsTransactionEnlisted, a flag that keeps track of whether transaction enlistment has already taken place. The other one is called MetadataIsDirty, a flag that tracks whether actual list item metadata has been changed.

The next code listing shows the complete implementation of our resource manager base class called VolatileMossResourceManager.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
 
namespace TxTest.MossTx.Volatile
{
public abstract class VolatileMossResourceManager : IEnlistmentNotification
{
public void EnlistTransaction()
{
if (IsTransactionEnlisted) return;
 
Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistVolatile(this, EnlistmentOptions.None);
}
  
IsTransactionEnlisted = true;
}
 
#region IEnlistmentNotification Members
public abstract void Commit(Enlistment enlistment);
 
public void InDoubt(Enlistment enlistment)
{
// Do nothing.
}
  
public virtual void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Prepared();
}
public abstract void Rollback(Enlistment enlistment);
 
#endregion
 
protected void SaveOrgValue(string strKey, string strOldValue)
{
if (!UndoLog.ContainsKey(strKey))
{
UndoLog.Add(strKey, strOldValue);
}
}
 
#region props
private bool _blnIsTransactionEnlisted;
public bool IsTransactionEnlisted
{
get { return _blnIsTransactionEnlisted; }
set { _blnIsTransactionEnlisted = value; }
}
 
private bool _blnMetadataIsDirty;
public bool MetadataIsDirty
{
get { return _blnMetadataIsDirty; }
set { _blnMetadataIsDirty = value;
}
}
 
private Dictionary<string, string> _objUndoLog = new Dictionary<string, string>();
public Dictionary<string, string> UndoLog
{
get
{
return _objUndoLog;
}
set
{
_objUndoLog = value;
}
}
#endregion
}
}

Implementing a volatile file resource manager

The first concrete resource manager that we’re creating is the volatile File System.Transactions resource manager. It inherits from the base class VolatileMossResourceManager and also implements the IEnlistmentNotification interface. We’ll also make sure it has a reference to an SPFile object so that our resource manager is able to interact with MOSS.

It has a custom method called SetValue() which is called by a client whenever it needs to update file metadata. This method does several things:  

  1. It checks if the resource manager is already enlisted in the transaction. If this is not so, it enlists the System.Transactions resource manager.
  2. It tries to lock the current file so that the System.Transactions resource manager has exclusive access to it. If this fails, the System.Transactions resource manager will indicate that the transaction should abort.
  3. It saves metadata changes to the Undo log.
  4. It updates the file metadata.

The implementation of the SetValue() method looks like this:

public void SetValue(string strKey, string strValue)
{
  EnlistTransaction();
  LockFile();
  SaveOrgValue(strKey, File.Properties[strKey].ToString());
  File.Properties[strKey] = strValue;
}

We won't discuss the methods it calls, as we think they are self-explanatory. We'll show them later on, and we're sure you'll have no trouble figuring out what they do. There are two points of interest left to discuss: the implementations of the Commit() and Rollback() methods.

In the Commit() method, we check in the file we're working with. This should be no problem, since we know we successfully acquired a file lock earlier on. Then, we let the transaction coordinator know we've finished. The Commit() method looks like this:

public override void Commit(Enlistment enlistment)
{
  CheckIn("System.Transaction manager commits transaction");
  enlistment.Done();
}

The Rollback() method is also pretty simple. If we were successful in acquiring a file lock, all we need to do is undo the check-out to roll back our changes. If we weren't successful in acquiring a lock, we haven't made any changes at all, so we're also done with the rollback. Finally, we let the transaction coordinator know we've finished. The Rollback() method looks like this:

public override void Rollback(Enlistment enlistment)
{
  if (LockedFile)
  {
    File.UndoCheckOut();
  }
  enlistment.Done();
}

The next code listing shows the complete implementation of the volatile file resource manager for MOSS:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
 
namespace TxTest.MossTx.Volatile
{
  public class VolatileFileCommand : VolatileMossResourceManager, IEnlistmentNotification
  {
#region ctor
    public VolatileFileCommand(SPFile objFile)
    {
      File = objFile;
    }
#endregion
 
  public void SetValue(string strKey, string strValue)
  {
    EnlistTransaction();
    LockFile();
    SaveOrgValue(strKey, File.Properties[strKey].ToString());
    File.Properties[strKey] = strValue;
  }
 
  private void LockFile()
  {
    if (!MetadataIsDirty)
    {
      if (File.CheckOutStatus != SPFile.SPCheckOutStatus.None) throw new Exception("Can't lock file");
      File.CheckOut();
      MetadataIsDirty = true;
      LockedFile = true;
    }
  }
 
  public void CheckOut()
  {
    File.CheckOut();
  }
 
  public void CheckIn(string strComment)
  {
    File.CheckIn(strComment);
  }
 
  public void Update()
  {
    File.Update();
  }
 
  public override void Commit(Enlistment enlistment)
  {
    CheckIn("System.Transaction manager commits transaction");
    enlistment.Done();
  }
 
  public override void Rollback(Enlistment enlistment)
  {
    if (LockedFile)
    {
      File.UndoCheckOut();
    }
    enlistment.Done();
  }
 
#region props
  private SPFile _objFile;
  public SPFile File
  {
    get
    {
      return _objFile;
    }
    set
    {
      _objFile = value;
    }
  }
 
  private bool _blnLockedFile;
  public bool LockedFile
  {
    get { return _blnLockedFile; }
    set { _blnLockedFile = value; }
  }
#endregion
  }
}

As you may remember, we've put all our code in a console application. In the next code listing, we use it to obtain a valid reference to a file located in a SharePoint list. Then we start a new transaction, update a piece of metadata, and commit the transaction. Please note that you should always explicitly call the Complete() method of the transaction scope; otherwise the transaction will abort. The complete code listing looks like this:

static void Main(string[] args)
{
    VolatileFileCommand objCommand1;

    try
    {
        using (SPSite site = new SPSite("http://jupiter/"))
        {
            using (SPWeb web = site.OpenWeb("/SiteA/SiteB"))
            {
                SPFile objFile1 = web.GetFile("http://myserver/SiteA/SiteB/ListC/DocA.doc");

                using (TransactionScope ts = new TransactionScope())
                {
                    objCommand1 = new VolatileFileCommand(objFile1);
                    objCommand1.SetValue("Dossiernummer", "value " + DateTime.Now);
                    objCommand1.Update();

                    ts.Complete();
                }
            }
        }
        Console.Write("Completed");
    }
    catch (Exception err)
    {
        Console.Write(err.Message);
    }
    Console.ReadLine();
}

This is about the simplest example of using the file System.Transactions resource manager that we can think of. If the transaction is successful, its life cycle looks like this:

  1. The constructor of the VolatileFileCommand object stores a reference to a valid SPFile object.
  2. Before metadata is changed, the System.Transactions resource manager is enlisted in the current transaction.
  3. The file stored in the SharePoint list is checked out.
  4. An entry is added to the Undo log.
  5. A piece of metadata for the file is changed.
  6. The file is updated (and persisted in the SharePoint content database).
  7. The transaction is completed.
  8. The transaction coordinator calls the Prepare() method of the System.Transactions resource manager, that in turn notifies the transaction coordinator that it’s ready to finish the transaction.
  9. The transaction coordinator calls the Commit() method of the System.Transactions resource manager, that performs left-over jobs. This method checks in the file and notifies the transaction coordinator that it has finished.

If the transaction is aborted, the life cycle looks a bit different. We’ll take a look at an example scenario. Let’s suppose the file is already checked out by somebody else. In this case, the life cycle looks like this:

  1. The constructor of the VolatileFileCommand object stores a reference to a valid SPFile object.
  2. Before metadata is changed, the System.Transactions resource manager is enlisted in the current transaction.
  3. The System.Transactions resource manager tries to lock the file, but its attempt to check out the file fails, and an exception is thrown.
  4. The transaction coordinator calls the Rollback() method, which checks if the file was checked out. Since it was not, we know no changes have been made so we notify the transaction coordinator that we've concluded our part of the transaction rollback.

Implementing a volatile folder resource manager

Next, we'll discuss the folder System.Transactions resource manager. We'll take a closer look at the Commit() and Rollback() methods before showing you the entire code listing. Since it's not possible to check out folders, and since we've chosen to apply folder updates at an early stage (the only way to make sure that we're indeed able to commit our part of the transaction), there's not much work left to do in the Commit() method, except letting the transaction coordinator know we're good to go. The next code listing shows our implementation of the Commit() method:

public override void Commit(Enlistment enlistment)
{
enlistment.Done();
}

The Rollback() method is a bit more cumbersome, since this time we can't simply discard a check-out. In this implementation, we use the Undo log to restore the original values. This is shown in the next code listing:

public override void Rollback(Enlistment enlistment)
{
foreach (string strKey in UndoLog.Keys)
{
Folder.Properties[strKey] = UndoLog[strKey];
}
 
Folder.Update();
enlistment.Done();
}

The entire implementation of the folder System.Transactions resource manager is shown in the next code listing:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
 
namespace TxTest.MossTx.Volatile
{
public class VolatileFolderCommand : VolatileMossResourceManager, IEnlistmentNotification
{
#region ctor
public VolatileFolderCommand(SPFolder objFolder)
{
Folder = objFolder;
}
#endregion
 
public void SetValue(string strKey, string strValue)
{
EnlistTransaction();
 
if (Folder.Properties.Contains(strKey))
{
SaveOrgValue(strKey, Folder.Properties[strKey].ToString());
}
else
{
SaveOrgValue(strKey, String.Empty);
}
 
Folder.Properties[strKey] = strValue;
}
 
public void Update()
{
Folder.Update();
}
 
public override void Commit(Enlistment enlistment)
{
enlistment.Done();
}
 
public override void Rollback(Enlistment enlistment)
{
foreach (string strKey in UndoLog.Keys)
{
Folder.Properties[strKey] = UndoLog[strKey];
}
  
Folder.Update();
 
enlistment.Done();
}
 
#region props
private SPFolder _objFolder;
public SPFolder Folder
{
get
{
return _objFolder;
}
set
{
_objFolder = value;
}
}
#endregion
 
}
}

A client can leverage the folder System.Transactions resource manager like so:

VolatileFolderCommand objCommand4;
SPFolder objFolder1 = web.GetFolder("http://myserver/site/doclib/folderA");
 
using (TransactionScope ts = new TransactionScope())
{
objCommand4 = new VolatileFolderCommand(objFolder1);
objCommand4.SetValue("APieceOfMetadata", "value " + DateTime.Now);
objCommand4.Update();
 
ts.Complete();
}

This is as simple as working with the folder System.Transactions resource manager will get. Since this is all very similar to the file System.Transactions resource manager scenario, we won’t discuss the life cycle of this code.

Multiple metadata updates and multiple commands

In the next example, we’ll show how to update multiple pieces of metadata and how to work with multiple commands within a single transaction.

using (TransactionScope ts = new TransactionScope())
{
    objCommand1 = new VolatileFileCommand(objFile1);
    objCommand1.SetValue("PropA", "value " + DateTime.Now);
    objCommand1.SetValue("PropA", "another value" + DateTime.Now);
    objCommand1.SetValue("PropB", "value of related docs" + DateTime.Now);
    objCommand1.Update();

    objCommand2 = new VolatileFileCommand(objFile2);
    objCommand2.SetValue("PropA", "command 2 value " + DateTime.Now);
    objCommand2.Update();

    ts.Complete();
}

If this transaction is successful, the transaction coordinator first calls the Prepare() method of command 1, followed by a call to the Prepare() method of command 2. After that, it calls the Commit() method of the first command, followed by a call to the Commit() method of the second command.

Nested transactions

Every transaction has a scope to which it applies, and it's also possible to nest transactions. The way transactions behave when nested is determined by the transaction scope option you set. There are three possible transaction scope options:

  • Required, this is the default transaction scope option. If a transaction already exists, the TransactionScope object (used extensively in the previous examples) joins that transaction. Otherwise, it creates a new transaction.
  • RequiresNew, this transaction scope option always starts a new transaction.
  • Suppress, the TransactionScope object will never be a part of a transaction. This option should be used for actions that are nice to have when they succeed, but that shouldn't abort the entire transaction when they fail.

Using nested transactions affects the way transactions behave. We'll explore this in the next example:

using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    using (TransactionScope ts1 = new TransactionScope(TransactionScopeOption.Required))
    {
        // If ts fails, ts1 will fail.
        // If ts1 fails, ts will fail.
        // If ts1 succeeds, ts could succeed.

        // If ts1 fails, the command is rolled back immediately.
        // If ts1 succeeds, it is committed after ts has completed.

        // Let file or folder System.Transactions resource managers do work...
        ts1.Complete();
    }

    using (TransactionScope ts2 = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // If ts fails, ts2 will fail.
        // If ts2 fails or succeeds, it won't affect ts.

        // If ts2 fails, the ts2 transaction is aborted immediately.
        // If ts2 succeeds, it is committed immediately.

        // Let file or folder System.Transactions resource managers do work...
        ts2.Complete();
    }

    using (TransactionScope ts3 = new TransactionScope(TransactionScopeOption.Suppress))
    {
        // Doesn't participate in any transaction, therefore it doesn't affect any
        // other transactions, and it doesn't matter whether you complete ts3 or not,
        // since no part of the 2PC will be invoked by the transaction coordinator.
    }

    ts.Complete();
}

Implementing a read-only volatile file command

Please note that the current implementations expect the System.Transactions resource manager to participate fully in the 2PC. Suppose you create a new class called ReadOnlyVolatileFileCommand that inherits from VolatileFileCommand and implements the Prepare() method differently, like so:

public override void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Done();
}

This indicates that the System.Transactions resource manager will commit its changes but no longer participates in the 2PC; as a result, its Commit() and Rollback() methods will never be called by the transaction coordinator. In this particular implementation, that causes problems, because we're using those methods to release the lock we've placed on the file in a SharePoint list. The complete code for such a class is shown in the following code listing:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
 
namespace TxTest.MossTx.Volatile
{
public class ReadOnlyVolatileFileCommand : VolatileFileCommand, IEnlistmentNotification
{
#region ctor
public ReadOnlyVolatileFileCommand(SPFile objFile) : base(objFile)
{
}
#endregion
 
public override void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Done();
}
}
}

The following code listing demonstrates the use of our new class. If you run it, you'll find that the Commit() and Rollback() methods of the ReadOnlyVolatileFileCommand class aren't called by the transaction coordinator, causing file 1 to remain in a checked-out state.

using (TransactionScope ts = new TransactionScope())
{
objCommand5 = new ReadOnlyVolatileFileCommand(objFile1);
objCommand5.SetValue("Dossiernummer", "value " + DateTime.Now);
objCommand5.SetValue("Dossiernummer", "another value" + DateTime.Now);
objCommand5.SetValue("Gerelateerde documenten", "value of related docs" + DateTime.Now);
objCommand5.Update();
 
objCommand2 = new VolatileFileCommand(objFile2);
objCommand2.SetValue("Dossiernummer", "command 2 value " + DateTime.Now);
objCommand2.Update();
 
ts.Complete();
}

Creating a durable resource manager

You can also create durable resource managers. If you want to do so, you need to change the enlistment process a little bit and call the EnlistDurable() method of the current transaction object. This method expects a GUID that uniquely identifies the System.Transactions resource manager, so you'll need to pass such a GUID. The following code shows how to enlist a durable resource manager:

Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistDurable(MyGuid, this, EnlistmentOptions.None);
}

A durable resource manager should be able to recover from failure. That's why you need to pass along a GUID (also known as the resource manager identifier): the durable resource manager can use it in case of an emergency, such as a resource manager failure or a reboot, to retrieve recovery information. Because of this, you'll need to persist and keep track of these GUIDs.

In our implementation of a durable resource manager we won't add recovery support (although we will discuss how to implement it in the section "Recovery support"). Because of this, we'll just generate a new GUID every time we enlist a new durable resource manager, and we won't bother to keep these GUIDs safely stored away somewhere.

Please note that you also need to choose which enlistment options you want to use. By default, you should set this option to None. Only if you need the System.Transactions resource manager to perform additional work during the Prepare phase (phase Zero) should you change this by passing EnlistmentOptions.EnlistDuringPrepareRequired. By setting this parameter, the System.Transactions resource manager indicates that it wants to receive a Prepare notification while new enlistments are still allowed for the transaction.

Apparently, you can do some advanced stuff with this option. For instance, you could create a caching resource manager (which is mentioned briefly at http://blogs.msdn.com/florinlazar/archive/2006/01/29/518956.aspx). The caching resource manager could use the Prepare notification to decide that it needs to transfer its cached contents to a durable resource, such as a database. By doing this, the durable resource enlists in the transaction and also becomes a part of it.

If you don't set enlistment options (EnlistmentOptions.None), you will receive a Prepare notification only once no new enlistments are accepted by the transaction coordinator. The aforementioned caching resource manager would then try to persist its cache, the durable resource would try to enlist in the transaction, and this would result in an exception, since no enlistments are allowed anymore at this stage.

All in all it’s pretty useless to deviate from the default EnlistmentOptions.None mode, unless you have some advanced motives for being a deviant deviator.
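
If you do have such motives, requesting this behavior is simply a matter of passing a different flag when enlisting; a minimal sketch (shown for a volatile enlistment, the same applies to EnlistDurable()):

Transaction tran = Transaction.Current;
if (tran != null)
{
    // The resource manager asks to receive its Prepare notification while
    // new enlistments are still allowed for the transaction.
    tran.EnlistVolatile(this, EnlistmentOptions.EnlistDuringPrepareRequired);
}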

In order to support durable enlistment, we’ve created a new base class called DurableMossResourceManager that has a property called TransactionGuid. The durable resource manager creates a GUID that uniquely identifies itself and the current transaction, and passes it to the transaction coordinator during enlistment. The complete implementation of the DurableMossResourceManager is almost identical to the VolatileMossResourceManager and looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
 
namespace TxTest.MossTx.Durable
{
public abstract class DurableMossResourceManager : IEnlistmentNotification
{
public void EnlistTransaction()
{
if (IsTransactionEnlisted) return;
 
Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistDurable(TransactionGuid, this, EnlistmentOptions.None);
}
 
IsTransactionEnlisted = true;
}
 
#region IEnlistmentNotification Members
public void Commit(Enlistment enlistment)
{
enlistment.Done();
}
 
public void InDoubt(Enlistment enlistment)
{
// Do nothing.
}
  
public virtual void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Prepared();
}
 
public abstract void Rollback(Enlistment enlistment);
#endregion
 
protected void SaveOrgValue(string strKey, string strOldValue)
{
if (!UndoLog.ContainsKey(strKey))
{
UndoLog.Add(strKey, strOldValue);
}
}
 
#region props
private bool _blnIsTransactionEnlisted;
public bool IsTransactionEnlisted
{
get { return _blnIsTransactionEnlisted; }
set { _blnIsTransactionEnlisted = value; }
}
 
private bool _blnMetadataIsDirty;
public bool MetadataIsDirty
{
get { return _blnMetadataIsDirty; }
set { _blnMetadataIsDirty = value;
}
}
 
private Dictionary<string, string> _objUndoLog = new Dictionary<string, string>();
public Dictionary<string, string> UndoLog
{
get { return _objUndoLog; }
set { _objUndoLog = value; }
}
 
private Guid _objTransactionGuid = Guid.NewGuid();
private Guid TransactionGuid
{
get { return _objTransactionGuid; }
set { _objTransactionGuid = value; }
}
#endregion
}
}

If you enlist a resource as durable within a transaction, the transaction will immediately be promoted to a distributed transaction and handled by the DTC. Because the resource is durable, we'll also implement the file resource manager a bit differently, so that it only locks files for a very short time (as opposed to locking the file for the entire transaction, as we did when we created our volatile file resource manager). The next code listing, which closely resembles the previous volatile implementation, shows the code for our durable file System.Transactions resource manager.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
 
namespace TxTest.MossTx.Durable
{
public class DurableFileCommand : DurableMossResourceManager, IEnlistmentNotification
{
#region ctor
public DurableFileCommand(SPFile objFile)
{
File = objFile;
}
#endregion
 
public void SetValue(string strKey, string strValue)
{
EnlistTransaction();
// Record the original value in the undo log (or an empty string if the property doesn't exist yet).
SaveOrgValue(strKey, File.Properties[strKey] == null ? String.Empty : File.Properties[strKey].ToString());
File.Properties[strKey] = strValue;
}
 
public void CheckOut()
{
File.CheckOut();
}
 
public void CheckIn(string strComment)
{
File.CheckIn(strComment);
}
 
public void Update()
{
File.Update();
}
 
public override void Rollback(Enlistment enlistment)
{
if (File.CheckOutStatus == SPFile.SPCheckOutStatus.None)
{
File.CheckOut();
}
 
foreach (string strKey in UndoLog.Keys)
{
File.Properties[strKey] = UndoLog[strKey];
}
 
File.Update();
 
File.CheckIn("rollback because of a failed transaction");
 
enlistment.Done();
}
 
#region props
private SPFile _objFile;
public SPFile File
{
get { return _objFile; }
set { _objFile = value; }
}
#endregion
}
}

Please note that a durable folder System.Transactions resource manager is basically identical to its volatile counterpart. There is only one thing you need to change: it needs to inherit from our custom DurableMossResourceManager base class. The next code listing shows the complete class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;

namespace TxTest.MossTx.Durable
{
public class DurableFolderCommand : DurableMossResourceManager, IEnlistmentNotification
{
#region ctor
public DurableFolderCommand(SPFolder objFolder)
{
Folder = objFolder;
}
#endregion

public void SetValue(string strKey, string strValue)
{
EnlistTransaction();

if (Folder.Properties.Contains(strKey))
{
SaveOrgValue(strKey, Folder.Properties[strKey].ToString());
}
else
{
SaveOrgValue(strKey, String.Empty);
}

Folder.Properties[strKey] = strValue;
}

public void Update()
{
Folder.Update();
}

public override void Rollback(Enlistment enlistment)
{
foreach (string strKey in UndoLog.Keys)
{
Folder.Properties[strKey] = UndoLog[strKey];
}

Folder.Update();

enlistment.Done();
}

#region props
private SPFolder _objFolder;
public SPFolder Folder
{
get { return _objFolder; }
set { _objFolder = value; }
}
#endregion
}
}

As a point of interest, we will also use the DistributedTransactionStarted event of the static TransactionManager class to be notified when the current transaction becomes a distributed transaction. To do this, we need to define an event handler for this event, like so:

TransactionManager.DistributedTransactionStarted += new TransactionStartedEventHandler(TransactionManager_DistributedTransactionStarted);
 
static void TransactionManager_DistributedTransactionStarted(object sender, TransactionEventArgs e)
{
Console.WriteLine("Transaction {0} became distributed, promoted from LTM to DTC",
Transaction.Current.TransactionInformation.DistributedIdentifier);
}

Once you start enlisting durable resources, you can monitor the DTC (see the section "Background info") and you’ll notice that a new transaction appears in the Transaction List. You will also notice that the DistributedTransactionStarted event fires as soon as the first durable resource is enlisted. If you don’t like this behavior, you might consider implementing a resource manager that supports the Single-Phase commit protocol; see the section "Implementing a Single-Phase commit resource manager" for more information.

In the TransactionManager_DistributedTransactionStarted event handler you may have noticed that we’re outputting something called a distributed identifier. This value is filled once the transaction becomes distributed and can be used to map the currently running transaction to the transactions you can monitor in the DTC.

The next code listing shows how the distributed transaction started event handler is hooked up, as well as the creation of a new transaction spanning multiple durable resource managers:

TransactionManager.DistributedTransactionStarted += new
TransactionStartedEventHandler(TransactionManager_DistributedTransactionStarted);
using (TransactionScope ts = new TransactionScope())
{
DurableFileCommand objCommand6 = new DurableFileCommand(objFile1);
objCommand6.CheckOut();
objCommand6.SetValue("ValueA", "value " + DateTime.Now);
objCommand6.SetValue("ValueA", "another value" + DateTime.Now);
objCommand6.SetValue("ValueB", "value of related docs" + DateTime.Now);
objCommand6.Update();
objCommand6.CheckIn("checked in file 1");
 
DurableFileCommand objCommand7 = new DurableFileCommand(objFile2);
objCommand7.CheckOut();
objCommand7.SetValue("ValueA", "command 2 value " + DateTime.Now);
objCommand7.Update();
objCommand7.CheckIn("checked in file 2");
 
ts.Complete();
}

Recovery support

The main difference between volatile and durable System.Transactions resource managers is that a volatile resource manager doesn’t need recovery support at all, while a durable resource manager should be able to recover after a failure. We won’t get too deep into this topic, but if you want to implement recovery support, there are a couple of things you need to do.

First of all, you need to change your implementation of the Prepare() method. In this method, you need to save recovery info to some durable storage system (such as the file system, or a database). You may want to reread the section "Redo log" to check what kind of info you want to put in a redo log.

You will also need to change your implementation of the Commit() method. Since the System.Transactions resource manager has finished its part of the job, there is nothing left to recover and you don’t need the recovery log anymore, so this is a good time to remove the recovery information for this particular transaction.

The Rollback() method needs to change in much the same way: after successfully undoing any changes, you won’t need the recovery log anymore, so this is also a good place to remove the recovery information for this particular transaction.

Also, you need to call the Reenlist() method of the TransactionManager class and pass it a GUID that uniquely identifies a specific System.Transactions resource manager, together with the recovery information saved during the Prepare phase. This GUID must be identical to the GUID you used when enlisting the System.Transactions resource manager in the original transaction, and once every outstanding transaction has been reenlisted you call RecoveryComplete(). In our example, we’ve created a property called TransactionGuid that could be used for this purpose. If you’re planning to support a recovery process, you’ll need to persist such GUIDs (something we didn’t do). If you’re interested in implementing recovery support, the links in the "More information" section at the end of this article are a good starting point.
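
To make these steps a bit more tangible, here is a minimal sketch of the recovery plumbing. This is our own illustration and not part of the sample solution: the RecoverableResourceManager class, the recovery folder path, and the Recover() helper are all assumptions, and the SharePoint-specific undo logic is omitted.

using System;
using System.IO;
using System.Transactions;

namespace TxTest.MossTx.Recovery
{
    // Hypothetical sketch of a recovery-aware durable resource manager.
    public class RecoverableResourceManager : IEnlistmentNotification
    {
        private static readonly string RecoveryFolder = @"C:\MossTx\RecoveryLogs";

        // The same GUID must be used for EnlistDurable, Reenlist and RecoveryComplete.
        public Guid ResourceManagerGuid { get; set; }

        private string RecoveryFile
        {
            get { return Path.Combine(RecoveryFolder, ResourceManagerGuid + ".rmlog"); }
        }

        public void Prepare(PreparingEnlistment preparingEnlistment)
        {
            // Persist the recovery information before voting Prepared; this byte
            // array is exactly what Reenlist() expects after a failure.
            Directory.CreateDirectory(RecoveryFolder);
            File.WriteAllBytes(RecoveryFile, preparingEnlistment.RecoveryInformation());
            preparingEnlistment.Prepared();
        }

        public void Commit(Enlistment enlistment)
        {
            // The transaction committed, so the recovery log is no longer needed.
            File.Delete(RecoveryFile);
            enlistment.Done();
        }

        public void Rollback(Enlistment enlistment)
        {
            // Undo any changes here (omitted), then clean up the recovery log.
            File.Delete(RecoveryFile);
            enlistment.Done();
        }

        public void InDoubt(Enlistment enlistment)
        {
            enlistment.Done();
        }

        // Called during application start-up after a crash.
        public static void Recover(Guid resourceManagerGuid, RecoverableResourceManager resourceManager)
        {
            string path = Path.Combine(RecoveryFolder, resourceManagerGuid + ".rmlog");
            if (File.Exists(path))
            {
                // Hand the persisted recovery information back to the transaction manager.
                TransactionManager.Reenlist(resourceManagerGuid, File.ReadAllBytes(path), resourceManager);
            }
            TransactionManager.RecoveryComplete(resourceManagerGuid);
        }
    }
}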

Implementing a Single-Phase commit resource manager

As you’ve seen in the section "Creating a durable resource manager", the transaction is promoted as soon as the first durable resource manager is enlisted. As discussed in the section "Background info", things don’t have to be like that: a Single-Phase commit resource manager keeps using the LTM transaction coordinator until the lightweight transaction really needs to be promoted to a distributed transaction. So let’s do exactly that...

Every Single-Phase commit resource manager needs to implement the ISinglePhaseNotification interface. This interface extends IEnlistmentNotification with a single extra method, SinglePhaseCommit(), which is called if the transaction commits successfully without having been promoted during its execution.

First off, we’ll create an abstract base class that implements that interface but leaves the implementation of the SinglePhaseCommit() method to its children. The main task of this abstract base class, which we will call SinglePhaseMossResourceManager, is to implement the enlistment process. To demonstrate the difference from the example described in the section "Creating a durable resource manager", we’ll again have participants enlist as durable resource managers. If you recall, in the previous example the transaction became a distributed transaction as soon as the first durable resource manager was enlisted. In this example, the transaction is only promoted when necessary; for instance, this might happen once the second durable resource manager is enlisted. The following code listing shows the enlist implementation of the abstract SinglePhaseMossResourceManager class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using TxTest.MossTx.TM;

namespace TxTest.MossTx.SinglePhase
{
public abstract class SinglePhaseMossResourceManager : ISinglePhaseNotification
{
#region ctor
public SinglePhaseMossResourceManager()
{
}
#endregion

public void EnlistTransaction()
{
if (IsTransactionEnlisted) return;

Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistDurable(TransactionGuid, this, EnlistmentOptions.None);
}

IsTransactionEnlisted = true;
}

#region IEnlistmentNotification Members
public void Commit(Enlistment enlistment)
{
enlistment.Done();
}

public void InDoubt(Enlistment enlistment)
{
// Do nothing.
}

public virtual void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Prepared();
}

public abstract void Rollback(Enlistment enlistment);
public abstract void SinglePhaseCommit(SinglePhaseEnlistment enlistment);
#endregion

protected void SaveOrgValue(string strKey, string strOldValue)
{
if (!UndoLog.ContainsKey(strKey))
{
UndoLog.Add(strKey, strOldValue);
}
}

#region props
private bool _blnIsTransactionEnlisted;
public bool IsTransactionEnlisted
{
get { return _blnIsTransactionEnlisted; }
set { _blnIsTransactionEnlisted = value; }
}

private bool _blnMetadataIsDirty;
public bool MetadataIsDirty
{
get { return _blnMetadataIsDirty; }
set { _blnMetadataIsDirty = value; }
}

private Dictionary<string, string> _objUndoLog = new Dictionary<string, string>();
public Dictionary<string, string> UndoLog
{
get { return _objUndoLog; }
set { _objUndoLog = value; }
}

private Guid _objTransactionGuid = Guid.NewGuid();
public Guid TransactionGuid
{
get { return _objTransactionGuid; }
set { _objTransactionGuid = value; }
}
#endregion
}
}

That leaves the implementation of the SinglePhaseCommit() method to its children, which will let the transaction coordinator know that the commit was successful. The rest of the child class implementations isn’t that different from what we’ve seen before, so we won’t discuss them explicitly. Without further ado, here’s the implementation of the SinglePhaseFileCommand:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
using TxTest.MossTx.TM;

namespace TxTest.MossTx.SinglePhase
{
public class SinglePhaseFileCommand : SinglePhaseMossResourceManager, ISinglePhaseNotification
{
#region ctor
public SinglePhaseFileCommand(SPFile objFile)
{
File = objFile;
}
#endregion

public override void SinglePhaseCommit(SinglePhaseEnlistment enlistment)
{
// Report a successful single-phase commit to the transaction manager.
enlistment.Committed();
}

public void SetValue(string strKey, string strValue)
{
EnlistTransaction();
// Record the original value in the undo log (or an empty string if the property doesn't exist yet).
SaveOrgValue(strKey, File.Properties[strKey] == null ? String.Empty : File.Properties[strKey].ToString());
File.Properties[strKey] = strValue;
}

public void CheckOut()
{
File.CheckOut();
}

public void CheckIn(string strComment)
{
File.CheckIn(strComment);
}

public void Update()
{
File.Update();
}

public override void Rollback(Enlistment enlistment)
{
if (File.CheckOutStatus == SPFile.SPCheckOutStatus.None)
{
File.CheckOut();
}

foreach (string strKey in UndoLog.Keys)
{
File.Properties[strKey] = UndoLog[strKey];
}

File.Update();
File.CheckIn("rollback because of a failed transaction");

enlistment.Done();
}

#region props
private SPFile _objFile;
public SPFile File
{
get { return _objFile; }
set { _objFile = value; }
}
#endregion
}
}

The Single-Phase commit variant of the Folder command looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using Microsoft.SharePoint;
using TxTest.MossTx.TM;

namespace TxTest.MossTx.SinglePhase
{
public class SinglePhaseFolderCommand : SinglePhaseMossResourceManager, ISinglePhaseNotification
{
#region ctor
public SinglePhaseFolderCommand(SPFolder objFolder)
{
Folder = objFolder;
}
#endregion

public override void SinglePhaseCommit(SinglePhaseEnlistment enlistment)
{
enlistment.Committed();
}

public void SetValue(string strKey, string strValue)
{
EnlistTransaction();

if (Folder.Properties.Contains(strKey))
{
SaveOrgValue(strKey, Folder.Properties[strKey].ToString());
}
else
{
SaveOrgValue(strKey, String.Empty);
}

Folder.Properties[strKey] = strValue;
}

public void Update()
{
Folder.Update();
}

public override void Rollback(Enlistment enlistment)
{
foreach (string strKey in UndoLog.Keys)
{
Folder.Properties[strKey] = UndoLog[strKey];
}

Folder.Update();

enlistment.Done();
}

#region props
private SPFolder _objFolder;
public SPFolder Folder
{
get { return _objFolder; }
set { _objFolder = value; }
}
#endregion
}
}

And the client that starts the transaction goes something like this:

using (TransactionScope ts = new TransactionScope())
{
SinglePhaseFileCommand objCommand9 = new SinglePhaseFileCommand(objFile1);
objCommand9.CheckOut();
objCommand9.SetValue("Dossiernummer", "value " + DateTime.Now);
objCommand9.Update();
objCommand9.CheckIn("test");

SinglePhaseFileCommand objCommand10 = new SinglePhaseFileCommand(objFile2);
objCommand10.CheckOut();
objCommand10.SetValue("Dossiernummer", "value " + DateTime.Now);
objCommand10.Update();
objCommand10.CheckIn("test");

ts.Complete();
}

If you check the TransactionManager_DistributedTransactionStarted event handler (discussed in the section "Creating a durable resource manager"), you’ll notice that the enlistment of the first durable resource manager does not result in the creation of a distributed transaction... yet. This happens as soon as the second durable resource manager is enlisted, which causes the transaction to be promoted. If, instead of enlisting two durable resource managers, you had enlisted only one, you would have seen the Single-Phase commit protocol in action: when the transaction completes, the SinglePhaseCommit() method is called. In the current scenario we’re dealing with a distributed transaction, which, when the transaction completes, results in calls to the 2PC methods Prepare() and Commit().
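
To see the Single-Phase commit optimization in action, you could enlist just one resource manager. The snippet below is a sketch based on the client code above; objFile1 is assumed to be an SPFile retrieved earlier.

using (TransactionScope ts = new TransactionScope())
{
    // Only one durable resource manager is enlisted, so the transaction is never
    // promoted and the LTM can use the Single-Phase commit optimization.
    SinglePhaseFileCommand objCommand = new SinglePhaseFileCommand(objFile1);
    objCommand.CheckOut();
    objCommand.SetValue("Dossiernummer", "value " + DateTime.Now);
    objCommand.Update();
    objCommand.CheckIn("single-phase commit test");

    ts.Complete();
} // When the scope completes, SinglePhaseCommit() is called instead of Prepare()/Commit().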

Implementing a custom transaction manager

It’s possible to create a custom transaction manager. In all likelihood you will never need to, since you can already use the LTM and OleTx transaction coordinators, but it’s instructive to take a look at how you would accomplish it. In this section, we’ll discuss a simple custom transaction manager.

First of all, such a custom transaction manager needs to implement the IPromotableSinglePhaseNotification interface (see the section "IPromotableSinglePhaseNotification interface" for further details). In this example, we won’t bother to do anything in the Initialize() method, which is used to notify a transaction participant that enlistment has completed. We’ll implement a very basic version of the Rollback() method that just indicates the transaction is aborted, and the implementation of the SinglePhaseCommit() method is equally simple: it just indicates the transaction is committed. These implementations look like this:

 public void Initialize()
{
//
}

public void Rollback(SinglePhaseEnlistment singlePhaseEnlistment)
{
singlePhaseEnlistment.Aborted();
}

public void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment)
{
singlePhaseEnlistment.Committed();
}

You’ll also need to implement the Promote() method, which returns a propagation token when the transaction handled by our custom transaction manager is promoted. You can call the GetTransmitterPropagationToken() method of the TransactionInterop class to accomplish this, like so:

public byte[] Promote()
{
return TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
}

In case our custom transaction manager needs to communicate with its transaction participants, we’ll also add a collection of such participants. This is shown in the next code listing:

public void AddResourceManager(ISinglePhaseNotification objResourceManager)
{
_objResManagers.Add(objResourceManager);
}

#region prop
private List<ISinglePhaseNotification> _objResManagers = new List<ISinglePhaseNotification>();
public List<ISinglePhaseNotification> ResourceManagers
{
get { return _objResManagers; }
set { _objResManagers = value; }
}
#endregion

The complete implementation of our custom transaction manager looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;

namespace TxTest.MossTx.TM
{
public class CustomTransactionManager : IPromotableSinglePhaseNotification
{
public CustomTransactionManager()
{
}

public void Enlist()
{
}

#region IPromotableSinglePhaseNotification Members
public void Initialize()
{
//
}

public void Rollback(SinglePhaseEnlistment singlePhaseEnlistment)
{
singlePhaseEnlistment.Aborted();
}

public void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment)
{
singlePhaseEnlistment.Committed();
}
#endregion

#region ITransactionPromoter Members
public byte[] Promote()
{
return TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
}
#endregion

public void AddResourceManager(ISinglePhaseNotification objResourceManager)
{
_objResManagers.Add(objResourceManager);
}

#region prop
private List<ISinglePhaseNotification> _objResManagers = new List<ISinglePhaseNotification>();
public List<ISinglePhaseNotification> ResourceManagers
{
get { return _objResManagers; }
set { _objResManagers = value; }
}
#endregion
}
}

We will use the Single-Phase commit System.Transactions resource managers we’ve created earlier in combination with our custom transaction coordinator. To support this, we have to change the enlistment process a little bit. The enlistment process is implemented in the EnlistTransaction() method of the SinglePhaseMossResourceManager class and needs to take care of two things:

  1. Let the current transaction know it will be handled by our custom transaction coordinator.
  2. Add the transaction participant (our Single-Phase commit System.Transactions resource manager) to the list of participant members of our custom transaction coordinator.

The following code listing shows the enlistment process:

public void EnlistTransaction()
{
if (IsTransactionEnlisted) return;

Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistPromotableSinglePhase(CurrentTransactionManager);
CurrentTransactionManager.AddResourceManager(this);
}

IsTransactionEnlisted = true;
}

This results in the following implementation of the SinglePhaseMossResourceManager class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Transactions;
using TxTest.MossTx.TM;

namespace TxTest.MossTx.SinglePhase
{
public abstract class SinglePhaseMossResourceManager : ISinglePhaseNotification
{
#region ctor
public SinglePhaseMossResourceManager(CustomTransactionManager objCurrentTransactionManager, Guid objTransactionGuid)
{
CurrentTransactionManager = objCurrentTransactionManager;
TransactionGuid = objTransactionGuid;
}
#endregion

public void EnlistTransaction()
{
if (IsTransactionEnlisted) return;

Transaction tran = Transaction.Current;
if (tran != null)
{
tran.EnlistPromotableSinglePhase(CurrentTransactionManager);
CurrentTransactionManager.AddResourceManager(this);
}

IsTransactionEnlisted = true;
}

#region IEnlistmentNotification Members
public void Commit(Enlistment enlistment)
{
enlistment.Done();
}

public void InDoubt(Enlistment enlistment)
{
// Do nothing.
}

public virtual void Prepare(PreparingEnlistment preparingEnlistment)
{
preparingEnlistment.Prepared();
}

public abstract void Rollback(Enlistment enlistment);
public abstract void SinglePhaseCommit(SinglePhaseEnlistment enlistment);
#endregion

protected void SaveOrgValue(string strKey, string strOldValue)
{
if (!UndoLog.ContainsKey(strKey))
{
UndoLog.Add(strKey, strOldValue);
}
}

#region props
private bool _blnIsTransactionEnlisted;
public bool IsTransactionEnlisted
{
get { return _blnIsTransactionEnlisted; }
set { _blnIsTransactionEnlisted = value; }
}

private bool _blnMetadataIsDirty;
public bool MetadataIsDirty
{
get { return _blnMetadataIsDirty; }
set { _blnMetadataIsDirty = value; }
}

private Dictionary<string, string> _objUndoLog = new Dictionary<string, string>();
public Dictionary<string, string> UndoLog
{
get { return _objUndoLog; }
set { _objUndoLog = value; }
}

private CustomTransactionManager _objCurrentTransactionManager;
public CustomTransactionManager CurrentTransactionManager
{
get { return _objCurrentTransactionManager; }
set { _objCurrentTransactionManager = value; }
}

private Guid _objTransactionGuid;
public Guid TransactionGuid
{
get { return _objTransactionGuid; }
set { _objTransactionGuid = value; }
}
#endregion
}
}

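Note that the constructor of the base class changed, so the derived commands need matching constructors as well. The snippet below is a sketch of how we assume the SinglePhaseFileCommand constructor is adapted; only the constructor changes, the rest of the class stays as shown earlier.

public class SinglePhaseFileCommand : SinglePhaseMossResourceManager
{
    public SinglePhaseFileCommand(SPFile objFile, CustomTransactionManager objTransactionManager, Guid objTransactionGuid)
        : base(objTransactionManager, objTransactionGuid)
    {
        File = objFile;
    }

    // SetValue(), CheckOut(), CheckIn(), Update(), SinglePhaseCommit() and
    // Rollback() are unchanged from the earlier listing.
}
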
The client code looks like this:

TransactionManager.DistributedTransactionStarted += new
TransactionStartedEventHandler(TransactionManager_DistributedTransactionStarted);
using (TransactionScope ts = new TransactionScope())
{
CustomTransactionManager objTM = new CustomTransactionManager();
// A GUID identifying this transaction, shared by the participating resource managers.
Guid objTransactionGuid = Guid.NewGuid();

SinglePhaseFileCommand objCommand9 = new SinglePhaseFileCommand(objFile1, objTM, objTransactionGuid);
objCommand9.CheckOut();
objCommand9.SetValue("KeyA", "value " + DateTime.Now);
objCommand9.Update();
objCommand9.CheckIn("test");

SinglePhaseFileCommand objCommand10 = new SinglePhaseFileCommand(objFile2, objTM, objTransactionGuid);
objCommand10.CheckOut();
objCommand10.SetValue("KeyA", "value " + DateTime.Now);
objCommand10.Update();
objCommand10.CheckIn("test");

ts.Complete();
}

This concludes our tour of a basic custom transaction coordinator implementation.

Conclusion

The SharePoint 2007 framework lacks transaction support when dealing with list items, in particular a transaction mechanism that can be used in conjunction with the .NET System.Transactions namespace. There are situations where this is very inconvenient, even when dealing with semi-structured information. We hope this is something that will change in the future; in the meantime, you’ll have to create a System.Transactions resource manager yourself to deal with scenarios that require transactions in MOSS. Creating a System.Transactions resource manager turns out to be not that difficult, although understanding all the aspects involved can get pretty complex. If you’re planning on walking the System.Transactions road to SharePoint, this article will get you a long way.

More information

If you want more information about related topics discussed in this article, you could check out the following links: