Tuesday, 8 May 2012

Why the WCF Workflow Application is no good as an Orchestration layer

I've been architecting and implementing a Service Oriented Architecture in which I decided to use the WCF Workflow Service Application as the middle orchestration tier.  This enabled us to connect the many backend services together as single business processes for the UI.  It all looked good on paper, and prototyping raised some small issues, but nothing so serious that we couldn't overcome it.

So it was all hands on deck as we developed using this as our orchestration.  However, much to my annoyance and my team's, we started coming across issues with it, and after this last week of research I have taken the decision to replace the entire middle orchestration with plain WCF services, coding the orchestration process in C#.

Here's a list of reasons why I had to ditch the nice GUI orchestration via Workflow and code it by hand:
  • Transaction handling - although a WCF workflow service can handle transactions via the TransactedReceiveScope activity, this always frustrated me because my original design had the orchestration own the transaction, removing the need for the UI to know about it or to keep longish (> 5 sec) running processes waiting for responses.  Ideally I wanted the orchestration to start a transaction and commit or roll it back based on its underlying method calls.  That would have let me wrap multiple requests from the UI into a single request and have the orchestration commit or roll back internal processes as required.  Because the workflow service cannot start a transaction itself, I instead had to flow it down from the UI and keep it in the UI layer, which I found a bit of a pain.  By rewriting the workflow as a plain WCF service I can instead keep transactions spanning as many different services as and where I need them.
  • XAML development - the ability to visually see the processes while developing is a very cool idea, and I even used it to present business processes back to the client and identify issues.  For example, I had a workflow that calculated taxation and was very complex; the user could visually follow all the steps and quite easily point out problems with the business process.  This in itself will be missed in the transition to the code-centric version.  However, we noticed that once the XAML got to a reasonable size, which it can do when acting as a facade, Visual Studio became steadily unusable.  My development machine has 16GB of RAM, an Intel Core i7 processor and a solid-state drive, yet opening a reasonably sized XAML file took over 5 minutes.  Making any change to the business process - and I mean any change - would then send Visual Studio into hibernation for close to a minute.  The benefit of the visual representation of the business process soon became something close to a nightmare.  Clearly the XAML editor was not meant for editing even medium-sized workflows.
  • Not thread safe - this became the killer for the WCF workflow service.  I started noticing the service throwing some very weird errors, like "item is already in dictionary", on initialisation of a service call.  This raised its head because we wanted to wrap up multiple requests and execute them in parallel to make use of multi-core CPUs.  Running a single unit of work by itself was no issue, but when I ran five in parallel the first would work and the rest would fail, probably because the orchestration had to accept transactions and was trying to process five at the same time.  I would always get errors on at least one of the packets of work.  I tried a few workarounds, like initialising the ChannelFactory first and then firing off the requests, or delaying each request by 100ms.  These improved the chance of getting a packet through the orchestration, but it was not foolproof.  I was also left wondering what would happen if 50 users were on the site and hit the same request at the same time; I am sure errors would be aplenty.  This left me feeling that the WCF workflow service was a ticking time bomb ready for failure.
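To illustrate the parallel dispatch we were attempting in that last point, here is a minimal sketch; the service contract and the "WorkServiceEndpoint" configuration name are hypothetical stand-ins, and this shows the pattern rather than our production code:

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

// Hypothetical contract standing in for one of our backend services.
[ServiceContract]
public interface IWorkService
{
    [OperationContract]
    string Process(string packet);
}

public static class ParallelDispatch
{
    // One of the workarounds we tried: create the ChannelFactory once,
    // up front, before firing any requests in parallel.
    private static readonly ChannelFactory<IWorkService> factory =
        new ChannelFactory<IWorkService>("WorkServiceEndpoint");

    public static void SendAll(string[] packets)
    {
        // Fire the packets off in parallel to use the available cores.
        Parallel.ForEach(packets, packet =>
        {
            IWorkService proxy = factory.CreateChannel();
            try
            {
                proxy.Process(packet);
                ((IClientChannel)proxy).Close();
            }
            catch
            {
                // Abort rather than Close once the channel is faulted.
                ((IClientChannel)proxy).Abort();
                throw;
            }
        });
    }
}
```

Even with the factory warmed up like this, the workflow-hosted service still failed intermittently under parallel load.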
So unfortunately we have taken the hit and started replicating the orchestration functionality as plain WCF services, which after initial testing is working fine with none of the above issues.  I do miss the UI that comes with XAML, but I am a lot more comfortable proceeding without WCF Workflow Services.
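For a flavour of the replacement, here is a minimal sketch of a plain WCF orchestration operation that owns its own transaction.  The backend calls are hypothetical, and in practice the downstream services need transaction-aware bindings (e.g. WS-AtomicTransaction) to enlist in the ambient transaction:

```csharp
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IOrchestration
{
    [OperationContract]
    void PlaceOrder(string orderXml);
}

public class OrchestrationService : IOrchestration
{
    public void PlaceOrder(string orderXml)
    {
        // The orchestration starts the transaction itself, so the UI
        // makes a single call and never holds a transaction open.
        using (TransactionScope scope = new TransactionScope())
        {
            // Hypothetical backend calls; with transactional bindings
            // they all enlist in the ambient transaction.
            // inventoryClient.Reserve(orderXml);
            // billingClient.Charge(orderXml);
            // shippingClient.Schedule(orderXml);

            // Commit everything, or let an exception roll the lot back.
            scope.Complete();
        }
    }
}
```

This is exactly the shape the workflow service couldn't give us: the transaction starts and ends inside the orchestration tier.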

Hope this helps someone deciding on solutions.

Thursday, 3 May 2012

Automatic Login for SharePoint using Claims authentication

The Scenario

I recently struck a scenario where I wanted to integrate my MVC web application with a SharePoint 2010 server and upload documents.  This process ran on our server, inside some parallel processing kicked off by our website.

The SharePoint implementation we have is cloud based and hence is set up with claims authentication.  This can be a bit of a challenge to understand, but there are a lot of good articles on the web about it (although some are really complex).  Basically, claims authentication allows a third party to control the credentials for access to the site.  In this case Microsoft Office365 Live is the claims provider: it issues the authentication token, and the SharePoint site trusts Microsoft Office365 Live to give it a legitimate token.  That's why, when you log in to SharePoint, it redirects you to the Office365 login page and then back to SharePoint afterwards.

This is all good until you want to automate the process, for example when you have the SharePoint username and password encrypted somewhere and want to access SharePoint with those credentials.  This is hard with claims-based authentication, as opposed to Forms or Windows authentication where you can just set the username/password combination on the security token.
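For contrast, with Windows authentication the automation is trivial because the credentials attach directly to the request; the URL and account below are placeholders:

```csharp
using System.Net;

public static class WindowsAuthExample
{
    // With Windows authentication the credentials go straight on the
    // request; there is no redirect to an external claims provider.
    public static HttpWebRequest CreateRequest()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("https://sharepoint.example.com/");
        request.Credentials =
            new NetworkCredential("serviceAccount", "password", "DOMAIN");
        return request;
    }
}
```

With claims, there is no equivalent property to set; the token has to come from the claims provider, which is what the rest of this post automates.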

The Solution

Firstly, I don't by any means claim that I came up with the entire solution; I did, however, do a LOT of investigation and cobbled my solution together from a few different articles.  My main sources of documentation were these two great articles:

Remote Authentication in SharePoint Online Using Claims-Based Authentication - provided by Microsoft, which provides a mechanism for challenging and obtaining credentials from a 3rd party application using a popup Web Browser.

Using the WebBrowser Control in ASP.NET - an awesome CodeProject article by Dianyang Yu which shows how to use any type of Windows Forms control within the context of a web application.  I was very impressed by this and it is well worth the read.

I basically took these and added my own code to make them work for my scenario, since by themselves they didn't fit my needs, for two reasons:
  1. The MSDN article is for a Windows application, where the WebBrowser control pops up and asks for the credentials.  I could grab the cookie values from it and store them somewhere for use within the MVC application, but those credentials expire after 10 hours (I think), so I would constantly have to refresh them.
  2. The CodeProject article didn't automate entering the credentials in the browser, although it was pretty close.

My Tweaks

I started with the MSDN article code, and the first thing I did was create a subclass of the WebBrowser control which posts the credentials to the server automatically.  This was a simple class:
   using System;
   using System.Text;
   using System.Text.RegularExpressions;
   using System.Web;
   using System.Windows.Forms;
   using log4net;

   public class AutoWebBrowser : WebBrowser
   {
     private static readonly ILog log = LogManager.GetLogger("RALPH");

     // Posts the credentials straight to the Office365 login endpoint,
     // mimicking what a user would do on the login page.
     public void SetCredentials(string username, string password)
     {
       username = HttpUtility.HtmlEncode(username);
       password = HttpUtility.HtmlEncode(password);

       // The PPFT flow token changes on every login, so scrape it out
       // of the loaded login page each time.
       Match match = Regex.Match(this.DocumentText, @".*PPFT.*value=""([^""]*)"".*");
       string ppft = "";
       if (match.Success)
       {
         ppft = match.Groups[1].Value;
       }
       else
       {
         throw new Exception(@"Could not find PPFT element in document:
 " + this.DocumentText);
       }
       string postDataFormat = @"login={0}&passwd={1}&type=11&LoginOptions=2&MEST=&PPSX=Pass&PPFT={2}&PwdPad=&sso=&i1=1&i2=2&i3=4296&i4=&i8=&i9=&i10=&i12=1";
       string postData = string.Format(postDataFormat, username, password, ppft);
       byte[] data = new UTF8Encoding().GetBytes(postData);

       // The post URL is hard coded; it could also be scraped from the page.
       string postURL = "https://login.microsoftonline.com/ppsecure/post.srf" + this.Url.Query;
       this.Navigate(postURL, "", data, "Content-Type: application/x-www-form-urlencoded\r\n");
     }
   }

In order to post the credentials from the browser I had to set the content type and obtain the correct flow token and URL.  The post data contains the flow token (PPFT) attribute, which changes every time you log in, so I had to pull it out of the login screen and put it into the post data.  The URL to post to is likewise included in the document data, but I elected to hard code it instead of pulling it out of the document text.  Fiddler came in very handy in figuring all this out.

Next I pulled in the IEBrowserContext from the CodeProject article and created a new class for it which runs the ClaimsWebAuth (from the MSDN article) in its context:
   public class IEBrowserContext : ApplicationContext, IDisposable  
   {  
     private static readonly ILog log = LogManager.GetLogger("RALPH");  
     private Thread thread;  
     private ClaimsWebAuth claims;  
     private AutoResetEvent parentNotify;  
     public CookieCollection AuthenticationCookies;  
     public IEBrowserContext(string url, string username, string password, AutoResetEvent resultEvent)  
     {  
       parentNotify = resultEvent;  
       string lurl;  
       Uri nurl;  
       this.GetClaimParams(url, out lurl, out nurl);  
       thread = new Thread(new ThreadStart(  
       delegate  
       {  
         Init(lurl, nurl, username, password);  
         System.Windows.Forms.Application.Run(this);  
       }));  
       // set thread to STA state before starting  
       thread.SetApartmentState(ApartmentState.STA);  
       thread.Start();  
     }  
     private void Init(string loginUrl, Uri navigationEndUrl, string username, string password)  
     {  
       claims = new ClaimsWebAuth(loginUrl, navigationEndUrl, this);  
       claims.UserName = username;  
       claims.Password = password;  
     }  
     public void Complete()  
     {  
       parentNotify.Set();  
     }  
     protected override void Dispose(bool disposing)  
     {  
       // Stop the browser message-pump thread, then release the claims helper.  
       if (thread != null)  
       {  
         thread.Abort();  
         thread = null;  
       }  
       if (claims != null)  
       {  
         claims.Dispose();  
       }  
       base.Dispose(disposing);  
     }  
     private void GetClaimParams(string targetUrl, out string loginUrl, out Uri navigationEndUrl)  
     {  
       HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(targetUrl);  
       webRequest.Method = Constants.WR_METHOD_OPTIONS;  
 #if DEBUG  
       ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(IgnoreCertificateErrorHandler);  
 #endif  
       try  
       {  
         WebResponse response = (WebResponse)webRequest.GetResponse();  
         ExtraHeadersFromResponse(response, out loginUrl, out navigationEndUrl);  
       }  
       catch (WebException webEx)  
       {  
         ExtraHeadersFromResponse(webEx.Response, out loginUrl, out navigationEndUrl);  
       }  
     }  
     private bool ExtraHeadersFromResponse(WebResponse response, out string loginUrl, out Uri navigationEndUrl)  
     {  
       loginUrl = null;  
       navigationEndUrl = null;  
       try  
       {  
         navigationEndUrl = new Uri(response.Headers[Constants.CLAIM_HEADER_RETURN_URL]);  
         loginUrl = (response.Headers[Constants.CLAIM_HEADER_AUTH_REQUIRED]);  
         return true;  
       }  
       catch  
       {  
         return false;  
       }  
     }  
     private bool IgnoreCertificateErrorHandler  
       (object sender,  
       System.Security.Cryptography.X509Certificates.X509Certificate certificate,  
       System.Security.Cryptography.X509Certificates.X509Chain chain,  
       System.Net.Security.SslPolicyErrors sslPolicyErrors)  
     {  
       return true;  
     }  
   }  

The main difference here is that we raise notification events so the calling application can block and wait for the claims authentication to complete.  I also used the MSDN article code as the basis for the authentication itself.

Finally it was just a case of modifying the ClaimsWebAuth.cs file so that it automatically logs in.

I altered the Constructor as follows:
     public ClaimsWebAuth(string loginUrl, Uri navigationUrl, IEBrowserContext ctx)  
     {  
       Owner = ctx;  
       LoginPageUrl = loginUrl;  
       NavigationEndUrl = navigationUrl;  
       log.Debug("Constructing ClaimsWebAuth for URL: " + loginUrl);  
       this.webBrowser = new AutoWebBrowser();  
       this.webBrowser.Navigated += new WebBrowserNavigatedEventHandler(ClaimsWebBrowser_Navigated);  
       this.webBrowser.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(webBrowser_DocumentCompleted);  
       this.webBrowser.ScriptErrorsSuppressed = true;  
       if (string.IsNullOrEmpty(this.LoginPageUrl)) throw new ApplicationException(Constants.MSG_NOT_CLAIM_SITE);  
       // navigate to the login page url.  
       this.webBrowser.Navigate(this.LoginPageUrl);  
     }  

The constructor registers a handler for the DocumentCompleted event and navigates to the login page.
The DocumentCompleted handler then sets the credentials on the AutoWebBrowser, and I altered the Navigated handler to notify the IEBrowserContext that we are all finished (credentialsSet below is a private bool field added to ClaimsWebAuth):
     private void ClaimsWebBrowser_Navigated(object sender, WebBrowserNavigatedEventArgs e)  
     {  
       // check whether the url is same as the navigationEndUrl.  
       if (NavigationEndUrl != null && NavigationEndUrl.Equals(e.Url))  
       {  
         Owner.AuthenticationCookies = ExtractAuthCookiesFromUrl(this.LoginPageUrl);  
         Owner.Complete();  
       }  
     }  
     void webBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)  
     {  
       if (!credentialsSet)  
       {  
         this.webBrowser.SetCredentials(UserName, Password);  
         credentialsSet = true;  
       }  
     }  

Then it was just a matter of remembering to dispose of the AutoWebBrowser when the ClaimsWebAuth is disposed.

Usage

To use the new automatic login I created a simple singleton object which loads and stores the cookies as follows:
     public void LoadCredentials(string url, string username, string password)  
     {  
       resultEvent = new AutoResetEvent(false);  
       // Safety net: release the wait if the login never completes.  
       System.Timers.Timer timer = new System.Timers.Timer(30000);  
       timer.AutoReset = false;  
       timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);  
       using (IEBrowserContext webAuth = new IEBrowserContext(url, username, password, resultEvent))  
       {  
         timer.Start();  
         // Block until either the login completes or the timer elapses.  
         resultEvent.WaitOne();  
         timer.Stop();  
         AuthenticationCookies = webAuth.AuthenticationCookies;  
       }  
       CredentialsLoaded = AuthenticationCookies != null;  
       timer.Dispose();  
     }  
     void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)  
     {  
       resultEvent.Set();  
     }  
     public CookieCollection AuthenticationCookies {get;set;}  

The timer means that if no response comes back within 30 seconds the wait is released anyway; in that case no cookies are captured and CredentialsLoaded ends up false.

So that's how I did it.  I did notice some problems when installing on the server.  Firstly, make sure the server itself can browse to the login page and log in (i.e. JavaScript is enabled, etc.).  I also had some security issues around the web identity used to run the WebBrowser control; in the end I set new credentials on the application pool, which let the server access the login page fine.

Once you have the authentication cookies, just follow the MSDN article to attach them to your SharePoint requests.
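As a sketch of what that looks like in practice (the URL is a placeholder), the captured cookies just go into a CookieContainer on the outgoing request:

```csharp
using System.Net;

public static class SharePointRequestHelper
{
    // Attach the authentication cookies captured by the automatic login
    // to an outgoing request so SharePoint treats it as authenticated.
    public static HttpWebRequest CreateAuthenticatedRequest(
        string url, CookieCollection authCookies)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(authCookies);
        return request;
    }
}
```

A typical call would be CreateAuthenticatedRequest("https://yourtenant.sharepoint.com/...", AuthenticationCookies) using the cookies stored on the singleton above.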

Good luck!

Wednesday, 4 April 2012

Welcome

About my blog

Welcome to my technical blog. I'm a .NET architect and developer, and have been for more years than I care to remember.  I really enjoy the challenge of greenfield development: designing and creating new systems to meet clients' requirements.

When new projects come along I tend to upskill myself in the latest technologies, and as part of that process I usually come across a lot of undocumented and raw features that need investigation and understanding.

So I've decided to keep track of the issues I come across...hence this blog!

I have been involved in many diverse projects, with the main focus being core IT solutions for clients; I would say that 90% of the projects I have worked on involved replacing a company's core infrastructure.

I expect my blog will be mainly focused on .NET technologies, since that is predominantly the area I work in, but I can see myself delving into other areas too, since I also have to deal with older technologies.