ASP.NET: Understanding OWIN, Katana, and the Middleware Pipeline

Posted on January 4 2015 02:12 PM by jatten in C#, ASP.Net, ASP.NET MVC, CodeProject

The tools and architectural patterns we use to build and evolve web applications have undergone dramatic change over the past few years. Modern web application development is a fast-paced, dynamic activity, reliant to an ever greater degree on modular, loosely-coupled application components, rapidly-evolving frameworks, and shorter development cycles.

Historically, the ASP.NET ecosystem (including Web Forms, MVC, Web Api, SignalR, and others) sat on top of System.Web, and was tightly coupled to the underlying .NET framework as a whole. Further, ASP.NET web applications have been reliant on Microsoft Internet Information Services (IIS) to provide the hosting environment necessary to run in production.

Image by Sunova Surfboards  |  Some Rights Reserved

We expand upon the middleware concepts discussed here, and tie things together with ASP.NET Web API, in the next few posts.

In the past two years, the ASP.NET team has been evolving the .NET web development ecosystem away from this approach, and instead creating a growing set of pluggable components. Beginning with ASP.NET 4.5.1, we have seen the introduction of more and more pluggable application components which are not dependent on System.Web, and which can be configured to run outside the constraints of IIS using custom hosts.

My understanding is that ASP.NET 5 ("vNext") will be moving way, way further in this direction.

Understanding the relationship between the hosting process, the web server, and our application components is going to become increasingly important as the ASP.NET ecosystem becomes more and more modular. More and more, this relationship, and the pluggable architecture upon which our .NET web applications will depend, will be defined by the Open Web Interface for .NET (OWIN) specification.

And we need to understand how it works in order to take full advantage of the evolving .NET Web Stack.

UPDATE 1/5/2015: ASP.NET 5 is indeed moving further in this direction. Katana itself will apparently be fully integrated into ASP.NET 5. OWIN will be available through an interop, but greenfield projects will be best off using the integrated middleware pipeline. However, most of what we discuss here will still apply, either directly, or conceptually (thanks to Rick Anderson and the ASP.NET team for the clarification!).

We will examine the ASP.NET 5 middleware pipeline in an upcoming post.

What is OWIN, and Why Do I Care?

OWIN (the Open Web Interface for .NET) is an open-source specification describing an abstraction layer between web servers and application components. The goal of the OWIN specification is to facilitate a simple, pluggable architecture for .NET-based Web applications and the servers upon which they rely, encouraging development of small, focused application components (known as "middlewares" in the OWIN parlance) which can be assembled into a processing pipeline through which the server can then route incoming HTTP requests.

From the Owin.org About page:

OWIN defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application, encourage the development of simple modules for .NET web development, and, by being an open standard, stimulate the open source ecosystem of .NET web development tools.

OWIN is a specification, not an implementation. As such, OWIN describes a minimal set of types, and a single application delegate through which interactions between an application component and the server will occur.

Note that the OWIN specification is an open source community effort independent of Microsoft.

OWIN Definitions

OWIN provides the following general definitions for software elements in an OWIN-based application:

Server – The HTTP server that directly communicates with the client and then uses OWIN semantics to process requests.  Servers may require an adapter layer that converts to OWIN semantics.

Web Framework – A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing.  Web Frameworks may require an adapter layer that converts from OWIN semantics.

Web Application – A specific application, possibly built on top of a Web Framework, which is run using OWIN compatible Servers.

Middleware – Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.

Host – The process an application and server execute inside of, primarily responsible for application startup. Some Servers are also Hosts.

The OWIN Environment Dictionary

The host in an OWIN-compatible application provides an Environment Dictionary in the form of IDictionary<string, object> containing all of the relevant information about the request, response, and server/host information. The OWIN specification defines a minimum set of keys and values which must be present in the dictionary. However, servers, host environments, middleware, and application code may add additional data, or modify the data in the dictionary as a result of processing.
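
For example, a middleware might read a few of the required keys straight out of the dictionary. The sketch below is illustrative only; the key names (owin.RequestMethod, owin.RequestPath, owin.ResponseBody) are among the required keys defined by the OWIN specification:

Reading Well-Known Keys from the Environment Dictionary (sketch):
public async Task Invoke(IDictionary<string, object> environment)
{
    // Required keys defined by the OWIN specification:
    var method = (string)environment["owin.RequestMethod"];     // e.g. "GET"
    var path = (string)environment["owin.RequestPath"];         // e.g. "/"
    var responseBody = (Stream)environment["owin.ResponseBody"];

    using (var writer = new StreamWriter(responseBody))
    {
        await writer.WriteAsync(string.Format("{0} {1}", method, path));
    }
}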

The OWIN Application Delegate Function

As mentioned above, OWIN also specifies a primary interface, the Application Delegate (also known as AppFunc). Interactions between the server and application components occur through calls to the AppFunc delegate.

An Application Delegate function will take the Environment Dictionary (IDictionary<string, object>) as an argument, use the request/response information contained in the environment dictionary to perform whatever processing is required to meet its responsibilities, and return a Task when processing is complete.

The Application Delegate Function Signature specified by OWIN:
Func<IDictionary<string, object>, Task>

 

In our code, we will often use the alias AppFunc for this delegate to improve readability:

Alias the Application Delegate for use in code:
using AppFunc = Func<IDictionary<string, object>, Task>;

 

The OWIN Middleware Pipeline

In keeping with the above, individual application components (middleware) perform their processing duties when the AppFunc delegate is called. However, in order to maintain the pipeline, each middleware component is also responsible for invoking the next component in the chain (or intentionally NOT calling the next component, short-circuiting the chain when appropriate).

[Diagram: the OWIN middleware pipeline. Each middleware component passes the request along to the next component in the chain, and the response flows back out in reverse order.]

In light of this, each middleware component needs to provide an AppFunc delegate to be called in order to do its own work in the pipeline, and also needs to receive a reference to the next AppFunc delegate, to be called (in most cases) once the current component has completed processing.

In other words, a middleware can be expressed with a signature which accepts an AppFunc delegate as an argument (which is retained and called as the next step in the pipeline), and which returns an AppFunc delegate (which is used to perform the current middleware processing):

Middleware Delegate Signature:
Func<AppFunc, AppFunc>

 

In code, this might look something like this:

Example Middleware as Function:
public AppFunc SomeMiddleware(AppFunc next)
{
    // Create an AppFunc using a Lambda expression:
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        // Do some processing ("inbound")...
        // Invoke and await the next middleware component:
        await next.Invoke(environment);
 
        // Do additional processing ("outbound")
    };
    return appFunc;
}

 

We'll see how all this works shortly.

What is Katana?

Katana is a set of open source components for building and hosting OWIN-based web applications, maintained by the Microsoft Open Technologies Group.

Katana provides an implementation of the OWIN specification, and is in fact used in an increasing number of ASP.NET project templates. Additionally, Katana provides a wide variety of ready-to-use middleware components for OWIN-based applications.
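
For example, assuming the Microsoft.Owin.Diagnostics package is installed (an assumption here; it is not used elsewhere in this post), a ready-made middleware can be dropped into the pipeline with a single call:

Using a Ready-Made Katana Middleware Component (sketch):
public void Configuration(IAppBuilder app)
{
    // Both extension methods come from the Microsoft.Owin.Diagnostics package:
    app.UseErrorPage();      // rich error pages during development
    app.UseWelcomePage("/"); // a canned welcome page at the root URL
}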

For our purposes, we will use some basic components from Katana to demonstrate and understand:

  • How an OWIN-based middleware pipeline is configured
  • How to construct a basic middleware component
  • How OWIN and the middleware pipeline fit into a web application generally

How all this comes together into the middleware pipeline, and the manner in which your application configures and interacts with it, can be confusing at first. For one thing, we are dealing with a lot of delegate functions and generic types. Also, there are still some things happening behind the scenes that are not obvious at first.

The best way to understand how OWIN, Katana, and the middleware pipeline works is, well, to jump in and mess about.

Console Application Example: Creating a Barebones Katana Application

An example of the simplicity available with an OWIN-based application is the fact that we can create a simple Console application, pull in a few Nuget packages, and spin up a self-hosted web server. To get started, create a new console application in Visual Studio. Then, add the following Nuget packages using the Package Manager Console:

Install Microsoft.Owin.Hosting via Nuget Package Manager Console:
PM> Install-Package Microsoft.Owin.Hosting

 

The Microsoft.Owin.Hosting package adds a few library references to our project, notably Owin, Microsoft.Owin, and of course, Microsoft.Owin.Hosting.

Next, we need to add a means for our console application to listen for HTTP requests:

 
Install Microsoft.Owin.Host.HttpListener via Nuget Package Manager Console:
PM> Install-Package Microsoft.Owin.Host.HttpListener

 

With that, we have everything we need to put together a simple self-hosted web application.

Most of the examples we examine in this post will be overly (some might say "stupidly") simple. Bear in mind that we are focusing on how basic middleware is constructed and how the middleware pipeline works in general, rather than on how to write specific middleware components or how to use all of the Katana features. I stick with silly examples here so that we stay focused on the core middleware structure and the pipeline, and not on complex middleware implementation details.

First, add the following code to the Program.cs file in your application:

The basic KatanaConsole Application:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
 
// Add the Owin Usings:
using Owin;
using Microsoft.Owin.Hosting;
using Microsoft.Owin;
 
namespace KatanaConsole
{
    // use an alias for the OWIN AppFunc:
    using AppFunc = Func<IDictionary<string, object>, Task>;
 
    class Program
    {
        static void Main(string[] args)
        {
            WebApp.Start<Startup>("http://localhost:8080");
            Console.WriteLine("Server Started; Press enter to Quit");
            Console.ReadLine();
        }
    }
 
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var middleware = new Func<AppFunc, AppFunc>(MyMiddleWare);
            app.Use(middleware);
        }
 
        public AppFunc MyMiddleWare(AppFunc next)
        {
            AppFunc appFunc = async (IDictionary<string, object> environment) =>
            {
                // Do something with the incoming request:
                var response = environment["owin.ResponseBody"] as Stream;
                using (var writer = new StreamWriter(response))
                {
                    await writer.WriteAsync("<h1>Hello from My First Middleware</h1>");
                }
                // Call the next Middleware in the chain:
                await next.Invoke(environment);
            };
            return appFunc;
        }
    }
}

 

Now, let's take a look at a few items of note here. First off, we have added an alias for the Application Delegate, so that in our code we can refer to Func<IDictionary<string, object>, Task> by the name AppFunc.

Next, we have added a method to the Startup class, MyMiddleWare(), which accepts an argument of type AppFunc named next, and returns an AppFunc. If we look closely, we see that the anonymous function returned by the MyMiddleWare() method, when invoked by the host against an incoming HTTP request, will perform some basic processing against the incoming request (actually, writing to the response body), and will then invoke the AppFunc next passed in as an argument, passing to it the environment dictionary, and thereby continuing the pipeline processing of the request.

Bear in mind that the MyMiddleWare() method simply returns the anonymous function to the caller, but does not invoke it. The function will be added to the request processing pipeline, and will be invoked when an incoming HTTP request needs to be processed.

Most importantly, let's take a look at the Startup class.

In the Katana implementation of the OWIN specification, the host will look for a startup entry point to build the middleware pipeline in one of four ways (in order as listed below):

  • The Startup class is specified as a command line argument, or a type argument (where applicable) when the host is initialized (usually when using OwinHost, or the Owin.Hosting API, which is what we did in our code above).
  • The host will look in the relevant app.Config or web.Config file for an appSettings entry with the key "owin:AppStartup"
  • The host will scan the loaded assemblies for the OwinStartup attribute and use the type specified in the attribute.
  • If all of the preceding methods fail, then the host will use reflection and scan the loaded assemblies for a type named Startup with a method with the name and signature void Configuration(IAppBuilder).

The Startup class must provide a public Configuration() method, as mentioned above, with the signature void Configuration(IAppBuilder app).
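
For example, the attribute-based approach (the third option above) looks like the sketch below, which assumes the KatanaConsole namespace from our example; the config-file approach (the second option) is shown as a comment:

Specifying the Startup Class via Attribute or Config File (sketch):
// Assembly-level attribute identifying the Startup class:
using Microsoft.Owin;
[assembly: OwinStartup(typeof(KatanaConsole.Startup))]

// Alternatively, an appSettings entry in app.Config/web.Config:
// <add key="owin:AppStartup" value="KatanaConsole.Startup" />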

Katana and the IAppBuilder Interface

The IAppBuilder interface is NOT a part of the OWIN specification. It is, however, a required component for a Katana host. The IAppBuilder interface provides a core set of methods required to implement the OWIN standard, and serves as a base for additional extension methods for implementing middleware.

When the Katana host initializes the Startup class and calls Configuration(), a concrete instance of IAppBuilder is passed as the argument. We then use IAppBuilder to configure and add the application middleware components we need for our application, assembling the pipeline through which incoming HTTP requests will be processed.

The most common way to add middleware is by passing components to the Use() method. Middleware components will be added to the pipeline in the order they are passed to Use(). This is important to bear in mind as we configure our pipeline, as this will determine the order in which processing is applied to incoming requests (and in reverse, to outgoing responses).

In our code, we grab a reference to our middleware function by calling MyMiddleware(), and then add it to the pipeline by passing it to app.Use().

Running the Application

If we run the application, we see that our server has started:

Console output from first run of the Application:


And, if we open a web browser and navigate to our URL, we see the expected output:

Navigate to URL in Browser:


Presto! We have created a bare-bones, self-hosted web application using only a console application, and a handful of small Katana components.

More importantly, we have created our first OWIN middleware.

Now, let's see how the whole pipeline/chaining thing works.

Chaining Multiple Middlewares

Thus far, we have created an application with a single middleware component in the pipeline. While we properly included a parameter in our middleware function for a "next" AppFunc to be invoked, there is nothing for it to call at the moment.

Let's create a second component to add to the pipeline, and see how that works. Add another method to the Startup class in our code below the MyMiddleware() method:

Add another Middleware function to the Startup class:
public AppFunc MyOtherMiddleWare(AppFunc next)
{
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        // Do something with the incoming request:
        var response = environment["owin.ResponseBody"] as Stream;
        using (var writer = new StreamWriter(response))
        {
            await writer.WriteAsync("<h1>Hello from My Second Middleware</h1>");
        }
        // Call the next Middleware in the chain:
        await next.Invoke(environment);
    };
    return appFunc;
}

 

Next, update the Configuration() method to add this new middleware:

Add the New Middleware to the Processing Pipeline in the Configuration() Method:
public void Configuration(IAppBuilder app)
{
    var middleware = new Func<AppFunc, AppFunc>(MyMiddleWare);
    var otherMiddleware = new Func<AppFunc, AppFunc>(MyOtherMiddleWare);
 
    app.Use(middleware);
    app.Use(otherMiddleware);
}

 

Now, all we have done is create another middleware and add it to the pipeline by passing it to app.Use(), similar to the first. However, if we run our application again, we see that both middlewares are executed:

Running the Application with Multiple Middlewares in the Pipeline:


Now, it would be easy to think that maybe both functions are just executing anyway, but let's see what happens when we comment out the bit where we invoke the "next" AppFunc in our first middleware:

Comment Out Call to Invoke Next AppFunc:
public AppFunc MyMiddleWare(AppFunc next)
{
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        // Do something with the incoming request:
        var response = environment["owin.ResponseBody"] as Stream;
        using (var writer = new StreamWriter(response))
        {
            await writer.WriteAsync("<h1>Hello from My First Middleware</h1>");
        }
        // Call the next Middleware in the chain:
        // await next.Invoke(environment);
    };
    return appFunc;
}

 

Refreshing our browser, we see that the second middleware never executes, even though it has been added to the pipeline:

Next Middleware Fails if Next is not Invoked:


Clearly, if next is not invoked, the pipeline is short-circuited. Also, if we change the order in which we add the middlewares to the pipeline, the processing order is affected:

Change the order Middlewares are added (and uncomment call to next):
public void Configuration(IAppBuilder app)
{
    var middleware = new Func<AppFunc, AppFunc>(MyMiddleWare);
    var otherMiddleware = new Func<AppFunc, AppFunc>(MyOtherMiddleWare);
 
    // Swap the order here:
    app.Use(otherMiddleware);
    app.Use(middleware);
}

 

Refreshing the view in our browser, we should not be surprised:

Changing the Order in Which Middleware is Added to the Pipeline:



Thus far we have implemented a very basic OWIN-based processing pipeline, using the raw types expected by the OWIN specification. Now let's see if we can make life a little easier, using some tools provided by Katana, and by layering some abstraction onto our middlewares to make them easier to think about.

Using Katana Abstractions: IOwinContext

In our previous example, we worked with the raw Environment Dictionary as specified by OWIN. This provides a flexible, low-level mechanism, but is less than handy when we want to work with strongly-typed objects, and perhaps raise our abstractions up a level in our pipeline implementation code.

Katana provides us with a handy interface, IOwinContext, and a concrete wrapper for the Environment Dictionary, OwinContext. We can use IOwinContext to access some of the information in the Environment Dictionary in a more convenient, strongly typed manner. For example, we could modify our code like so:

Modify Middleware Code to use IOwinContext:
public AppFunc MyMiddleWare(AppFunc next)
{
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        IOwinContext context = new OwinContext(environment);
        await context.Response.WriteAsync("<h1>Hello from My First Middleware</h1>");
        await next.Invoke(environment);
    };
    return appFunc;
}
 
public AppFunc MyOtherMiddleWare(AppFunc next)
{
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        IOwinContext context = new OwinContext(environment);
        await context.Response.WriteAsync("<h1>Hello from My Second Middleware</h1>");
        await next.Invoke(environment);
    };
    return appFunc;
}

 

In fact, the IOwinContext object, and similar interfaces provided by Katana such as IOwinRequest and IOwinResponse provide a large number of useful, strongly-typed abstractions which simplify our interaction with the environment.  These interfaces are, in fact, quite similar to the familiar HttpContext, HttpRequest, and HttpResponse objects we are accustomed to using in a standard MVC or Web Api application.
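
To make this concrete, here is a sketch of a middleware using a few of those strongly-typed members (the method name InspectingMiddleware is just illustrative):

Strongly-Typed Access via IOwinContext (sketch):
public AppFunc InspectingMiddleware(AppFunc next)
{
    AppFunc appFunc = async (IDictionary<string, object> environment) =>
    {
        IOwinContext context = new OwinContext(environment);

        // Strongly-typed properties instead of raw dictionary lookups:
        string method = context.Request.Method;     // maps to owin.RequestMethod
        string path = context.Request.Path.Value;   // maps to owin.RequestPath
        context.Response.ContentType = "text/html";

        await context.Response.WriteAsync(
            string.Format("<p>Handling {0} {1}</p>", method, path));
        await next.Invoke(environment);
    };
    return appFunc;
}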

Creating Middleware Components as Stand-Alone Classes

So far, we've taken the raw, bare-bones approach to creating middleware for our application, by using a method with the signature Func<AppFunc, AppFunc> and pushing it into our pipeline. However, a more modular approach would be to create our middleware as individual classes.

We can do this, so long as the class we create adheres to some specific requirements.

The class must have a constructor which accepts an argument of (wait for it…) AppFunc, and must provide a method named Invoke which accepts an argument of IDictionary<string, object> and returns Task.

To continue our trivial example, we can take our two methods, MyMiddleWare() and MyOtherMiddleWare(), and create classes instead:

Create Stand-Alone Middleware as Separate Classes:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Use<MyMiddlewareComponent>();
        app.Use<MyOtherMiddlewareComponent>();
    }
}
 
public class MyMiddlewareComponent
{
    AppFunc _next;
    public MyMiddlewareComponent(AppFunc next)
    {
        _next = next;
    }
    public async Task Invoke(IDictionary<string, object> environment)
    {
        IOwinContext context = new OwinContext(environment);
        await context.Response.WriteAsync("<h1>Hello from My First Middleware</h1>");
        await _next.Invoke(environment);
    }
}
 
public class MyOtherMiddlewareComponent
{
    AppFunc _next;
    public MyOtherMiddlewareComponent(AppFunc next)
    {
        _next = next;
    }
    public async Task Invoke(IDictionary<string, object> environment)
    {
        IOwinContext context = new OwinContext(environment);
        await context.Response.WriteAsync("<h1>Hello from My Second Middleware</h1>");
        await _next.Invoke(environment);
    }
}

 

Note, we have pulled our Func<AppFunc, AppFunc> methods out and refactored them into classes. Also, we have modified our Configuration() method on the Startup class to use the overloaded Use<T>() method, which allows us to specify the type which represents our middleware as a generic type argument.

Once again, we can run our application as-is, and all should work as expected.

Add Custom Extensions for IAppBuilder

Middleware implementations often utilize extension methods to extend the IAppBuilder interface, making it easier for the developer to add middleware into the pipeline.

For example, we can add a static class for our extension methods like so:

Add Extension Methods to IAppBuilder for Our Custom Middleware:
public static class AppBuilderExtensions
{
    public static void UseMyMiddleware(this IAppBuilder app)
    {
        app.Use<MyMiddlewareComponent>();
    }
 
    public static void UseMyOtherMiddleware(this IAppBuilder app)
    {
        app.Use<MyOtherMiddlewareComponent>();
    }
}

 

Then we can update our Configuration() method again, and we see that our new extension methods are available:

Update Configuration() to Use Middleware Extension Methods:
public void Configuration(IAppBuilder app)
{
    app.UseMyMiddleware();
    app.UseMyOtherMiddleware();
}

 

Once again, running our application, and refreshing the browser, we see everything still works as expected.

Adding Middleware Configuration Options

Often we want the ability to pass in some configuration options for our middleware as it is added to the pipeline. For example, suppose we wanted some control over the text to be displayed when MyMiddleware is invoked. Let's set things up so we can pass in the message to be displayed during the call to Configuration(), instead of hard-coding it into the middleware itself:

Add a String Configuration Parameter to MyMiddleware:
public class MyMiddlewareComponent
{
    AppFunc _next;
 
    // Add a member to hold the greeting:
    string _greeting;
 
    public MyMiddlewareComponent(AppFunc next, string greeting)
    {
        _next = next;
        _greeting = greeting;
    }
 
    public async Task Invoke(IDictionary<string, object> environment)
    {
        IOwinContext context = new OwinContext(environment);
 
        // Insert the _greeting into the display text:
        await context.Response.WriteAsync(string.Format("<h1>{0}</h1>", _greeting));
        await _next.Invoke(environment);
    }
}

 

Of course, now the compiler tells us we also need to modify the extension method we use to add MyMiddlewareComponent to the pipeline, because we need to provide for the new constructor argument:

Modify the Extension Method to Accept and Pass the New Configuration Argument:
public static class AppBuilderExtensions
{
    public static void UseMyMiddleware(this IAppBuilder app, string greetingOption)
    {
        app.Use<MyMiddlewareComponent>(greetingOption);
    }
 
    public static void UseMyOtherMiddleware(this IAppBuilder app)
    {
        app.Use<MyOtherMiddlewareComponent>();
    }
}

 

And last, of course, we need to modify the code in Configuration() in the Startup class to pass in an acceptable argument:

Modify the Configuration() Method to Pass an Appropriate Configuration Argument:
public void Configuration(IAppBuilder app)
{
    app.UseMyMiddleware("This is the new greeting for MyMiddleware!");
    app.UseMyOtherMiddleware();
}

 

In our simplistic example here, we were able to add a string argument to our middleware constructor, and everything worked out just fine. More commonly though, middleware will likely require more configuration options. Also, this does not represent a very modular design approach. We might be better off passing a configuration class to the constructor instead.

Use Configuration Objects for Configuring Middleware

To take our contrived example to the limit, let's rethink how we have implemented the configuration options for our middleware. Instead of passing in an arbitrary string when we add the middleware to the pipeline, let's create a configuration class which will use some pre-defined elements to construct a message.

First, let's create a (very contrived) configuration options class:

Configuration Options Class for MyMiddleware:
public class MyMiddlewareConfigOptions
{
    string _greetingTextFormat = "{0} from {1}{2}";
    public MyMiddlewareConfigOptions(string greeting, string greeter)
    {
        GreetingText = greeting;
        Greeter = greeter;
        Date = DateTime.Now;
    }
 
    public string GreetingText { get; set; }
    public string Greeter { get; set; }
    public DateTime Date { get; set; }
 
    public bool IncludeDate { get; set; }
 
    public string GetGreeting()
    {
        string DateText = "";
        if(IncludeDate)
        {
            DateText = string.Format(" on {0}", Date.ToShortDateString());
        }
        return string.Format(_greetingTextFormat, GreetingText, Greeter, DateText);
    }
}

 

Now, we will once again need to update our extension methods:

Modify Extension Methods to Pass Configuration Options:
public static class AppBuilderExtensions
{
    public static void UseMyMiddleware(this IAppBuilder app, 
        MyMiddlewareConfigOptions configOptions)
    {
        app.Use<MyMiddlewareComponent>(configOptions);
    }
 
    public static void UseMyOtherMiddleware(this IAppBuilder app)
    {
        app.Use<MyOtherMiddlewareComponent>();
    }
}

 

And finally, we now need to prepare our configuration during the Configuration() method of the Startup class (which actually makes a lot of sense, no?):

Perform Middleware Configuration During Call to Configuration() Method:
public void Configuration(IAppBuilder app)
{
    // Set up the configuration options:
    var options = new MyMiddlewareConfigOptions("Greetings!", "John");
    options.IncludeDate = true;
 
    // Pass options along in call to extension method:
    app.UseMyMiddleware(options);
    app.UseMyOtherMiddleware();
}

 

Running the application, and refreshing the browser, we see the impact of our configuration options:

Refresh Browser to View Effect of Configuration Options:


Ok, we have just about exhausted the usefulness of these two example middleware components. Let's take a look at some (still silly and contrived) mocked up components that represent something we might actually find in a pipeline.

Create Mock Components for Logging and Authentication

As before, we are going to use some overly simple, contrived examples here. Katana actually provides for the addition of both logging and authentication components, and we aren't going to get bogged down in the complexities of writing code to actually perform either of these functions beyond mocking their effects on pipeline flow. Each of those topics could be (and probably will be) a post unto itself.

For now, let's add two new classes to our project. This time, though, let's add these as individual class files. This means we will need to specify our AppFunc alias in each class, as well as make sure the using statements at the top of the file include Microsoft.Owin.

Add a Mock Authentication Middleware Class as a Separate Code File:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Owin;
using Microsoft.Owin;
 
namespace KatanaConsole
{
    // use an alias for the OWIN AppFunc:
    using AppFunc = Func<IDictionary<string, object>, Task>;
 
    public class SillyAuthenticationComponent
    {
        AppFunc _next;
        public SillyAuthenticationComponent(AppFunc next)
        {
            _next = next;
        }
 
        public async Task Invoke(IDictionary<string, object> environment)
        {
            IOwinContext context = new OwinContext(environment);
 
            // In the real world we would do REAL auth processing here...
 
            var isAuthorized = context.Request.QueryString.Value == "john";
            if(!isAuthorized)
            {
                context.Response.StatusCode = 401;
                context.Response.ReasonPhrase = "Not Authorized";
 
                // Send back a really silly error page:
                await context.Response.WriteAsync(string.Format("<h1>Error {0}-{1}", 
                    context.Response.StatusCode, 
                    context.Response.ReasonPhrase));
            }
            else
            {
                // _next is only invoked if authentication succeeds:
                context.Response.StatusCode = 200;
                context.Response.ReasonPhrase = "OK";
                await _next.Invoke(environment);
            }
        }
    }
}

 

In the above code, note that we totally fake an authorization request. Instead of grabbing an auth token from the request header or some other secure way of doing things, we are cheating, and simply passing in a query string to check.

Also notice that if authorization fails, _next is never invoked. This matters in a moment.

Now let's add a hokey logging middleware:

Add a Mock Logging Middleware Class as a Separate Code File:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Owin;
 
namespace KatanaConsole
{
    // use an alias for the OWIN AppFunc:
    using AppFunc = Func<IDictionary<string, object>, Task>;
 
    public class SillyLoggingComponent
    {
        AppFunc _next;
        public SillyLoggingComponent(AppFunc next)
        {
            _next = next;
        }
 
        public async Task Invoke(IDictionary<string, object> environment)
        {
            // Pass everything up through the pipeline first:
            await _next.Invoke(environment);
 
            // Do the logging on the way out:
            IOwinContext context = new OwinContext(environment);
            Console.WriteLine("URI: {0} Status Code: {1}", 
                context.Request.Uri, context.Response.StatusCode);
        }
    }
}

 

Here, we are logging the incoming URI, and the status code of each request. Since we want to know the status code AFTER the request has been processed, we are going to place this component first in the pipeline, but do no processing until after the call to _next.Invoke() returns. In other words, we want to log status after all subsequent processing happens.

With this done, let's go ahead and add Extension methods for both of these components for ease of use with IAppBuilder:

Add Extension Methods for Auth and Logging Components:
public static class AppBuilderExtensions
{
    public static void UseMyMiddleware(this IAppBuilder app, 
        MyMiddlewareConfigOptions configOptions)
    {
        app.Use<MyMiddlewareComponent>(configOptions);
    }
 
    public static void UseMyOtherMiddleware(this IAppBuilder app)
    {
        app.Use<MyOtherMiddlewareComponent>();
    }
 
    public static void UseSillyAuthentication(this IAppBuilder app)
    {
        app.Use<SillyAuthenticationComponent>();
    }
 
    public static void UseSillyLogging(this IAppBuilder app)
    {
        app.Use<SillyLoggingComponent>();
    }
}

 

Now let's see how we might use these examples in modeling some "real-world" application behavior.

Requests, Responses, and Short-Circuiting the Middleware Pipeline

Recall our diagram of the middleware pipeline. We have a basic idea of how the request/response flow is supposed to occur under normal circumstances. Let's use our two new middlewares, and re-configure our application somewhat.

First, we want to log the URL for each incoming request, and the status code of the response for each. Since we can't know the final status code until all of the pipeline processing has completed, we will put this middleware in the pipeline first. In other words, the logging middleware will be the first component to see each incoming request, and the last component to see the outgoing response.

Next, we will add our Authentication component. We want to test authentication early in the pipeline, and prevent unauthorized users from proceeding any further than necessary into our application.

Finally, we will add our MyMiddleware component, which displays a helpful greeting in the browser window.

We set all this up by modifying the Configuration() method of the Startup class as follows:

Configure Application with Mock Logging and Authentication Middleware:
public void Configuration(IAppBuilder app)
{
    app.UseSillyLogging();
    app.UseSillyAuthentication();
    // Set up the configuration options:
    var options = new MyMiddlewareConfigOptions("Greetings!", "John");
    options.IncludeDate = true;
    app.UseMyMiddleware(options);
}

 

Recall that the way we set up our Authentication middleware, the only valid login will be a URL with a query string value of "john":

The "Authenticated User" Login URL:
http://localhost:8080/?john

 

So now, we can run our re-configured application and check out the refreshed view in the browser:

Browser View with "Authenticated" Request:


Looks like everything worked as expected. Now let's take a look at our console window, and see how our logging middleware did:

Console Output from Logging Middleware:


Well THAT'S interesting… even though everything seems to have worked, we are getting a 404 ("Not Found") status code.

This is because the last middleware in our pipeline is calling _next.Invoke(), but there is no AppFunc available to call. In a real middleware, this would likely need some proper handling.

In our case, the MyMiddleWareComponent actually appears to be designed to be a final component in a chain (the one writing to the response body and returning to the client), so we could actually place the work of the component after the call to invoke _next, knowing that unless some really special circumstances arose, there will not likely be any additional components.

Modify MyMiddleWareComponent to process after call to next:
public class MyMiddlewareComponent
{
    AppFunc _next;
 
    // Add a member to hold the greeting:
    //string _greeting;
    MyMiddlewareConfigOptions _configOptions;
 
    public MyMiddlewareComponent(AppFunc next, MyMiddlewareConfigOptions configOptions)
    {
        _next = next;
        _configOptions = configOptions;
    }
 
    public async Task Invoke(IDictionary<string, object> environment)
    {
        // If there is no next component, a 404 Not Found will be written as 
        // the response code here:
        await _next.Invoke(environment);
 
        IOwinContext context = new OwinContext(environment);          
 
        // Insert the _greeting into the display text:
        await context.Response.WriteAsync(string.Format("<h1>{0}</h1>", _configOptions.GetGreeting()));
 
        // Update the response code to 200 OK:
        context.Response.StatusCode = 200;
        context.Response.ReasonPhrase = "OK";
    }
}

 

If we run things again with our modified code, we should see the expected 200 OK response status in the console output.

Now, let's try reloading the browser with a different URI/query string:

An "Invalid" User URL:
http://localhost:8080/?bill

 

If we type this new, "invalid" user URL into the address bar of the browser, we see our poor-man's Error page:

Load Browser with "Invalid" User URL:


We can also see that our logging middleware properly logged the invalid attempt out to the console:

Console Output after "Invalid" login:


So what happened here in terms of our pipeline?

As you may have reasoned, the SillyAuthenticationComponent intentionally short-circuited the pipeline by not invoking the next component in the chain once user authentication failed. In this case, our pipeline flow looked something like this instead of the previous diagram:

Flow in the Short-Circuited Pipeline Due to Failed Authentication:


Create Middleware Components as Stand-Alone Assemblies

Unlike what we have done here, real-world OWIN middleware tends to be composed as a stand-alone assembly in its own class library. Most likely, the middleware itself will take some dependencies on other libraries, but it would not tend to be part of the application assembly itself.

To carry our most recent examples to their logical conclusion, we might extract each of our middleware components into its own project, which, when ready for deployment, we might even host on Nuget as packages to be added to client projects.

Looking at our SillyAuthentication middleware as an example, let's add a new project to our solution, named "SillyAuthentication." The project type should be "Class Library."

Once we have done that, we can use Manage Nuget Packages for Solution to add the Microsoft.Owin package to our new class library.

Now, we want to add two classes to the project. First, add the SillyAuthentication class itself:

Add the SillyAuthentication Class to the New Project:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Owin;
using Microsoft.Owin;
 
namespace SillyAuthentication
{
    // use an alias for the OWIN AppFunc:
    using AppFunc = Func<IDictionary<string, object>, Task>;
 
    public class SillyAuthentication
    {
        AppFunc _next;
        public SillyAuthentication(AppFunc next)
        {
            _next = next;
        }
 
        public async Task Invoke(IDictionary<string, object> environment)
        {
            IOwinContext context = new OwinContext(environment);
 
            // In the real world we would do REAL auth processing here...
 
            var isAuthorized = context.Request.QueryString.Value == "john";
            if (!isAuthorized)
            {
                context.Response.StatusCode = 401;
                context.Response.ReasonPhrase = "Not Authorized";
 
                // Send back a really silly error page:
                await context.Response.WriteAsync(string.Format("<h1>Error {0}-{1}",
                    context.Response.StatusCode,
                    context.Response.ReasonPhrase));
            }
            else
            {
                // _next is only invoked if authentication succeeds:
                context.Response.StatusCode = 200;
                context.Response.ReasonPhrase = "OK";
                await _next.Invoke(environment);
            }
        }
    }
}

 

Note in the above that we have changed the name of the class from SillyAuthenticationComponent to simply SillyAuthentication. Secondly, if we copied the code from the original project, we need to change the namespace from KatanaConsole to SillyAuthentication.

Also, the AppFunc alias must be specified in each code file where it will be used, so we need to do that here as well.

Next, we will need to add a new AppBuilderExtensions class, so that when we reference our component within another project, the extension method is there and ready to use:

Add a new AppBuilderExtensions Class to the new Project:
// Add reference to Owin:
using Owin;
 
namespace SillyAuthentication
{
    public static class AppBuilderExtensions
    {
        public static void UseSillyAuthentication(this IAppBuilder app)
        {
            app.Use<SillyAuthentication>();
        }
    }
}

 

Obviously, since this assembly is specific to our SillyAuthentication component, we don't need the other extension methods we defined in our original project.

We can do the same thing for our other components, and we should have separate assemblies for the authentication component, the logging component, and our MyMiddleware component. In each case, we will probably want to rename the classes, dropping the "Component" from each class name. Also, we need to use Manage Nuget Packages for Solution and bring Microsoft.Owin into each project.

Make sure to specify the AppFunc alias in each file.

Finally, for the MyMiddleware project, we will make sure to bring the MyMiddlewareConfigOptions class into the project as well.

Consuming the Stand-Alone Components from the Sample Project

Now, we can remove the SillyAuthenticationComponent class from our example project, as well as delete the extension method we created related to the SillyAuthenticationComponent class.

If we go to Solution Explorer => Project Dependencies and indicate that KatanaConsole depends on all three of our new class library assemblies, and then also add a reference to each assembly using References => Add Reference, we are ready to clean up and simplify our Program.cs file.

At this point, we can ditch all of the previous middleware class files we were using within the KatanaConsole project itself. All we need is our main method, and our Startup class, like so:

Simplified KatanaConsole Project:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
 
// Add the Owin Usings:
using Owin;
using Microsoft.Owin.Hosting;
using Microsoft.Owin;
 
// Add references to separate assemblies:
using SillyAuthentication;
using SillyLogging;
using MyMiddleware;
 
namespace KatanaConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            WebApp.Start<Startup>("http://localhost:8080");
            Console.WriteLine("Server Started; Press enter to Quit");
            Console.ReadLine();
        }
    }
 
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseSillyLogging();
            app.UseSillyAuthentication();
 
            // Set up the configuration options:
            var options = new MyMiddlewareConfigOptions("Greetings!", "John");
            options.IncludeDate = true;
 
            app.UseMyMiddleware(options);
        }
    }
}

 

If we have done everything correctly, and didn't miss a namespace change or anything like that, our KatanaConsole application should run exactly as before. However, this time, we have pulled in our custom middleware as references, which could just as easily come from Nuget.

In fact, the code in our Startup class may look fairly familiar to you. If you take a look at the Startup.auth.cs file from the Web Api project template, you will see similar goings on, as ASP.NET Identity components are added to the OWIN pipeline.
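
From memory, the relevant portion of that file looks roughly like the following sketch (the types come from the Microsoft.Owin.Security.Cookies and Microsoft.AspNet.Identity packages; details vary by template version):

Rough Sketch of Startup.auth.cs from the Template:
public void ConfigureAuth(IAppBuilder app)
{
    // Add cookie-based authentication middleware to the OWIN pipeline:
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
        LoginPath = new PathString("/Account/Login")
    });
}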

Why Do I Care?

In this post we have taken a look at how the OWIN/Katana pipeline works, seen some of the basics of how middleware is created and added to the pipeline, and developed an understanding of how our application interacts with the server in an OWIN-based environment.

Why do you care?

For one thing, more and more of the .NET web development ecosystem is moving in this direction. At present, ASP.NET Web Api can be hosted directly in the OWIN/Katana pipeline (although in the template projects available in Visual Studio, the Web Api is added to the ASP.NET/System.Web pipeline instead), and the ASP.NET Identity Framework IS added to the Katana pipeline.

My understanding is, going forward ASP.NET 5 ("vNext") is going to go all the way in this direction, with the various bits and pieces we want to add to our project added as pipeline components.

UPDATE 1/5/2015: ASP.NET 5 is indeed moving further in this direction. Katana itself will apparently be fully integrated into ASP.NET 5. OWIN will be available through an interop, but greenfield projects will be best off using the integrated middleware pipeline. However, most of what we discuss here will still apply, either directly, or conceptually (thanks to Rick Anderson and the ASP.NET team for the clarification!).

Understanding the hosting and server environment, and being able to dig down into the abstractions, will allow us to better leverage the tools at our disposal, and write better, leaner, meaner applications.

Are we going to be writing a bunch of custom middleware components ourselves? Likely not. But understanding how the pieces fit is important.



Biggy Available as Pre-Release on Nuget

Posted on December 22 2014 10:41 AM by jatten in Biggy, C#, Database

Biggy is a very fast, synchronized, in-memory document/relational query tool for .NET.

If you have been following the Biggy project over the past year, you are aware that there has been a rapid evolution of ideas, changes in structure, and changes in stewardship.

Biggy was originally an experiment by Rob Conery, a man who likes to challenge convention, break rules, and in general stir things up in a way which causes folks to re-think some closely held conventions. All of the above are why I follow Rob's blog, Twit-stream, and Github repo.

Image by fortfan  |  Some Rights Reserved

Plus, he's just a damn fine fellow.

Rob recently withdrew from active development on Biggy, and I am now doing my best to shepherd this interesting project. We'll see how this goes. My goal is to try to maintain the spirit of the original project, and find some happy medium between what I think Rob would have done, what I would like to see, and what the community wants (if anything) from this unique data access and query tool. I'll have more to say on that in another post, but I had a fantastic time working on this with Rob, and as you might imagine, I learned a lot.

On that note, I am pleased to announce that there is now a pre-release version of Biggy available on Nuget.

Getting to Know Biggy

Biggy works by loading all of the data in your store (be it a flat JSON file or a relational database) into memory, which makes querying super fast. Also, because your data is deserialized into POCO objects behind an ICollection<T> interface, it is all queryable using LINQ.

This works with relational data in the way you might expect, but it really shines when you work with JSON documents. Complex, nested JSON documents are now queryable at blazing speed.

In its current form, Biggy is comprised of a core package, and data extension packages specific to each supported backing store. Biggy directly supports simple file-based JSON as a store, as well as SQLite and Postgresql relational databases. However, Biggy is designed to be extensible, so alternative stores can be supported by implementing the IDataStore interface found in Biggy.Core.


The store-specific packages all take a dependency on Biggy.Core, and it is not necessary to pull down Biggy.Core as a separate package, unless you intend to implement your own store against the interfaces and base classes therein.

Inject Stores into Lists

At present, the underpinning of how Biggy works is based on the idea of synchronizing an in-memory list with a backing store. An implementation of IDataStore<T> knows how to talk to a database of some sort (even if it may be a flat JSON file). The BiggyList<T>, an in-memory implementation of ICollection<T>, doesn't know or care what specific store it is working with; it just knows how to implement the ICollection interface (plus a few additional methods) and present a LINQ-queryable API to the world.

The relationship between the two is achieved by injecting an instance of IDataStore<T> into an instance of BiggyList<T> upon instantiation. There are a number of ways to do this, depending upon the needs of your application.
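
In its simplest form, the injection looks like the following sketch (using the JsonStore<T> and BiggyList<T> types we will work with below):

Injecting a Store into a BiggyList (sketch):
// Create a store, then hand it to a BiggyList<T>. The list loads the
// store's data into memory and keeps the two in sync from then on:
var store = new JsonStore<ArtistDocument>();
var artists = new BiggyList<ArtistDocument>(store);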

A Simple How-To: File-Based JSON Data

We'll take a quick look at the very basics of using Biggy using a Visual Studio Console Application. To get started, you'll need to create a new Console Application, and pull down the Biggy.Data.Json package:

Get the Biggy.Data.Json Package from Nuget:
PM> Install-Package Biggy.Data.Json

 

Now we can work against flat-file JSON data.

The IDataStore<T> interface offers a fairly simple CRUD interface to the world. We can work directly with an implementation of IDataStore<T> to Add, Update, and Delete records, and to read the entirety of the store into an IEnumerable<T> for use in our code.

Since we have started with the JSON implementation, let's take a look at how the JsonStore<T> works on its own.

We will need to add the following namespaces to the usings at the top of our code file:

Add Biggy References to the Program.cs Code File:
using Biggy.Core;
using Biggy.Extensions;
using Biggy.Data.Json;

 

Then, we might do the following:

Add a Document to the Json Store:
public class ArtistDocument
{
    public ArtistDocument()
    {
        this.Albums = new List<AlbumDocument>();
    }
 
    [PrimaryKey(Auto: false)]
    public int ArtistDocumentId { get; set; }
    public string Name { get; set; }
    public List<AlbumDocument> Albums;
}
 
 
public partial class AlbumDocument
{
    public AlbumDocument()
    {
        this.Tracks = new List<Track>();
    }
 
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }
    public virtual List<Track> Tracks { get; set; }
}
 
 
class Program
{
    static void Main(string[] args)
    {
        var jsonArtistStore = new JsonStore<ArtistDocument>();
        var newArtist = new ArtistDocument
        {
            ArtistDocumentId = 1,
            Name = "Nirvana"
        };
        newArtist.Albums.Add(new AlbumDocument
        {
            AlbumId = 1,
            ArtistId = 1,
            Title = "Bleach"
        });
        newArtist.Albums.Add(new AlbumDocument
        {
            AlbumId = 2,
            ArtistId = 1,
            Title = "Incesticide"
        });
        jsonArtistStore.Add(newArtist);
    }
}

 

If we run this code, the following happens behind the scenes:

  • By default, a new directory is created in our project root named ~\Data\Json\BiggyDemo1
  • A file is created in that directory, named artistdocuments.json
  • The new artist data is persisted as raw JSON in the artistdocuments.json file

If we crack open the file, we find:

Contents of Artist Documents JSON File:
[
  {
    "Albums": [
      { "AlbumId": 1, "Title": "Bleach", "ArtistId": 1, "Tracks": [ ] },
      { "AlbumId": 2, "Title": "Incesticide", "ArtistId": 1, "Tracks": [ ] }
    ],
    "ArtistDocumentId": 1,
    "Name": "Nirvana"
  }
]

 

Yup. The POCO objects we used in our .NET code have been serialized to JSON and saved in the file.
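
Updates and deletes against the store work the same way. Here is a sketch continuing the example above (Update and Delete being the other CRUD operations on the store described earlier):

Update and Delete Against the Json Store (sketch):
// Modify the record and persist the change to the JSON file:
newArtist.Name = "Nirvana (Remastered)";
jsonArtistStore.Update(newArtist);

// Remove the record from the store entirely:
jsonArtistStore.Delete(newArtist);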

We can use the JsonStore<T> this way to our heart's content if we like, but that's not really the point of Biggy. Instead, let's use our store in conjunction with a BiggyList<T> instance, and get some real work done.

Inject IDataStore into BiggyList

Biggy is intended to present an in-memory representation of the data in your store, so you can query away using LINQ, make additions, updates, and deletions, and the data in memory will remain in sync with your backing store.

Let's expand on what we did above, and inject a store into a BiggyList<T>. In this case, we will use some data from the Chinook Database, which I have used to create a full set of JSON artist documents, including albums and tracks for each artist. There are 275 artists in the JSON data set, with even more albums nested under each artist, and several thousand tracks nested under the appropriate albums.

For example, the JSON data for a single artist in the modified Chinook sample data looks like this:

Single Sample JSON Artist Document Record Using Modified Chinook Data:
[
  {
    "Albums": [
      {
        "AlbumId": 1,
        "Title": "For Those About To Rock We Salute You",
        "ArtistId": 1,
        "Tracks": [
          { "TrackId": 1, "AlbumId": 1, "Name": "For Those About To Rock (We Salute You)" },
          { "TrackId": 6, "AlbumId": 1, "Name": "Put The Finger On You" },
          { "TrackId": 7, "AlbumId": 1, "Name": "Let's Get It Up" },
          { "TrackId": 8, "AlbumId": 1, "Name": "Inject The Venom" },
          { "TrackId": 9, "AlbumId": 1, "Name": "Snowballed" },
          { "TrackId": 10, "AlbumId": 1, "Name": "Evil Walks" },
          { "TrackId": 11, "AlbumId": 1, "Name": "C.O.D." },
          { "TrackId": 12, "AlbumId": 1, "Name": "Breaking The Rules" },
          { "TrackId": 13, "AlbumId": 1, "Name": "Night Of The Long Knives" },
          { "TrackId": 14, "AlbumId": 1, "Name": "Spellbound" }
        ]
      },
      {
        "AlbumId": 4,
        "Title": "Let There Be Rock",
        "ArtistId": 1,
        "Tracks": [
          { "TrackId": 15, "AlbumId": 4, "Name": "Go Down" },
          { "TrackId": 16, "AlbumId": 4, "Name": "Dog Eat Dog" },
          { "TrackId": 17, "AlbumId": 4, "Name": "Let There Be Rock" },
          { "TrackId": 18, "AlbumId": 4, "Name": "Bad Boy Boogie" },
          { "TrackId": 19, "AlbumId": 4, "Name": "Problem Child" },
          { "TrackId": 20, "AlbumId": 4, "Name": "Overdose" },
          { "TrackId": 21, "AlbumId": 4, "Name": "Hell Ain't A Bad Place To Be" },
          { "TrackId": 22, "AlbumId": 4, "Name": "Whole Lotta Rosie" }
        ]
      }
    ],
    "ArtistDocumentId": 1,
    "Name": "AC/DC"
  }
]

 

I went ahead and overwrote artistdocuments.json in our Data\Json directory with this Chinook file. We could change our Main() method to look like this:

Query Chinook Document Data:
static void Main(string[] args)
{
    var artistDocumentStore = new JsonStore<ArtistDocument>();
    var artistDocuments = new BiggyList<ArtistDocument>(artistDocumentStore);
 
    // Select artists with names beginning with 'M' and write to the console, 
    // along with a count of albums for each:
    var selected = from a in artistDocuments where a.Name.StartsWith("M") select a;
 
    foreach(var artistDoc in selected)
    {
        Console.WriteLine("{0}: {1} Albums", artistDoc.Name, artistDoc.Albums.Count);
    }
    Console.Read();
}

 

Notice how we simply initialized a store, injected it into the BiggyList<ArtistDocument>, and presto, our document data was loaded and queryable using LINQ?

Again, by default, Biggy is going to look in the ~\Data\Json directory for a folder with the current project name, then look for a file with a name matching the POCO class represented by T. If one is found, the data is loaded. If not, a new file is created the first time data is added.

In this example, the artistdocuments.json file (now containing the Chinook document data) already exists in the default directory, so the data is loaded up during initialization, and is ready for querying.

Output from the above would resemble the following:

Console Output from LINQ Query Against Chinook Artist Documents:

[Screenshot: console output listing the artists beginning with 'M', with an album count for each]

Or, we might want to query up a specific artist and examine the albums and tracks on file for that artist. If we change our Main() method again:

Query a Specific Artist and Output the Albums and Tracks to the Console:
static void Main(string[] args)
{
    var artistDocumentStore = new JsonStore<ArtistDocument>();
    var artistDocuments = new BiggyList<ArtistDocument>(artistDocumentStore);
 
    // Select a single artist, and list the albums/tracks for that artist:
    var selected = artistDocuments.FirstOrDefault(a => a.Name == "Metallica");
    Console.WriteLine("Albums by {0}:", selected.Name);
    foreach(var albumDoc in selected.Albums)
    {
        Console.WriteLine(albumDoc.Title);
        foreach(var track in albumDoc.Tracks)
        {
            Console.WriteLine("\t{0}", track.Name);
        }
    }
    Console.Read();
}

 

Output here is as expected, and makes clear that the maintainers of Chinook Database are pretty big Metallica fans.

Biggy and Relational Database Stores

There are limitations to using a flat-file JSON store. For one, concurrency can become an issue. When your application's needs call for a more robust persistence mechanism, you can use a relational database, both in the conventional sense and for persisting document data.

SQLite is a nice, simple solution when a file-based relational data store is a good fit. SQLite requires no configuration or administration, and is easily added to your project. Also, like Postgres, SQLite is free, cross-platform, and open-source. In terms of scaling up, SQLite makes a logical next step from flat-file JSON as your application grows.

Postgresql is also free, cross-platform, and open-source, and is our default, large-scale client-server database of choice.

We've designed Biggy to minimize the pain associated with moving between different backing stores. While file-based JSON stores, SQLite, and Postgresql all have different capabilities, advantages, and disadvantages, the IDataStore interface and the BiggyList don't care. Concrete implementations of IDataStore can capitalize on the various strengths of each storage format, but you can also pass them into an existing BiggyList and everything should "just work."
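To illustrate - a minimal sketch, using the store names we will meet in the following sections (PgDocumentStore<T> is assumed here by analogy with the SQLite document store) - swapping backing stores amounts to changing a single line:

Swap Backing Stores Without Touching the Query Code:
// Only the store construction changes from one backing store to the next:
IDataStore<ArtistDocument> store = new JsonStore<ArtistDocument>();
// IDataStore<ArtistDocument> store = new SqliteDocumentStore<ArtistDocument>("Chinook");
// IDataStore<ArtistDocument> store = new PgDocumentStore<ArtistDocument>("chinook");
 
// Everything downstream is identical:
var artistDocs = new BiggyList<ArtistDocument>(store);
var selected = artistDocs.Where(a => a.Name.StartsWith("M"));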

RelationalStore and DocumentStore

When working with relational data in the traditional sense, use a concrete implementation of RelationalStore<T>. In this case, Biggy will expect to find a table and schema which match the POCO class <T> specified as a type argument (or appropriately mapped using some extensions found in Biggy.Core).

When working with Document data, even when stored in a relational database, Biggy will serialize/de-serialize the POCO class <T> into JSON, nesting any child collections or contained objects.

Also, Biggy will create Document tables on the fly, as any document table used by Biggy will have the same schema: id, body, and created_at.
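The generated table looks roughly like the following - a sketch only, since the actual DDL is generated inside Biggy, but the three-column shape is the point:

Approximate Shape of a Biggy Document Table:
// Roughly the DDL Biggy generates for a SQLite document table (sketch only):
const string createArtistDocuments =
    @"CREATE TABLE artistdocuments (
          id INTEGER PRIMARY KEY NOT NULL,
          body TEXT NOT NULL,
          created_at DATETIME
      );";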

Document Storage in a Relational Data Store

Once again, one of the primary use-cases for Biggy is to work with JSON-format document data. Postgres, because it is awesome, implements its own JSON data type (and now, with the release of version 9.4, additionally supports binary JSON via the jsonb type). Persisting documents in Postgres takes full advantage of the JSON data type.

For SQLite (or any other concrete implementation of IDataStore<T> you choose to make), JSON is persisted as simple string data.

For working with Relational data stores, the Chinook Database once again provides a handy, ready-to-use data set for both SQLite and Postgresql. We will use Chinook in the following examples.

Using Biggy with SQLite

To use Biggy with a SQLite backing store, just pull Biggy.Data.SQLite down from Nuget:

Use Nuget Package Manager Console to get Biggy.Data.SQLite:
PM> Install-Package Biggy.Data.Sqlite

 

If we drop the Chinook Database file into our project Data directory (if needed, change the file extension to .db), we can get right to work querying and using the relational data present in Chinook out of the box.

When using SQLite with Biggy, the primary constructor argument will at a minimum specify the name of the database file to look for. If no other arguments are provided, Biggy will look in the ~\Data directory at the root of our project directory, and try to match the string value with a file by the same name, with a .db extension.

If no database with that name is found, Biggy will create one.
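For example - using a hypothetical Widget class, and the document store we will look at in a moment - pointing Biggy at a database name that doesn't exist yet should create the file on first use:

SQLite Database File Created on First Use:
public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}
 
// No Inventory.db exists yet, so Biggy creates ~\Data\Inventory.db for us:
var widgetStore = new SqliteDocumentStore<Widget>("Inventory");
var widgets = new BiggyList<Widget>(widgetStore);
widgets.Add(new Widget { Name = "Sprocket" });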

If we change the using statement to Biggy.Data.Sqlite instead of Biggy.Data.Json, we can do the following, working with some slightly different POCO models, and the basic relational tables in the Chinook Database just as they are. Notice here, we specify SqliteRelationalStore<T>, and not SqliteDocumentStore<T>, because we will be working with a relational data set.

Note: Biggy will create DocumentStore<T> tables on the fly. However, Biggy cannot, at present, create standard relational tables - to work against relational data, the tables need to already exist.

Connect BiggyList<T> to a SQLite Database:
public partial class Artist
{
    public int ArtistId { get; set; }
    public string Name { get; set; }
}
 
public partial class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }
}
 
 
static void Main(string[] args)
{
    // Pass the name of the database file as a constructor argument:
    var artistStore = new SqliteRelationalStore<Artist>("Chinook");
    var albumStore = new SqliteRelationalStore<Album>("Chinook");
 
    // Pass the store into the list:
    var artists = new BiggyList<Artist>(artistStore);
    var albums = new BiggyList<Album>(albumStore);
 
    var someArtist = artists.FirstOrDefault(a => a.Name == "AC/DC");
    var artistAlbums = albums.Where(a => a.ArtistId == someArtist.ArtistId);
 
    Console.WriteLine("Albums by {0}:", someArtist.Name);
    foreach(var album in artistAlbums)
    {
        Console.WriteLine(album.Title);
    }
}

 

Creating a Document Store Using SQLite

We can use SQLite to persist documents as well as work with traditional relational data, simply by using SqliteDocumentStore<T>, similar to the way we did with the JSON store. The following code will create a new table in the Chinook database we are using for our back-end, and add a single artist document record:

Use SQLite to Create a Document Store in the Chinook Database:
var artistDocumentStore = new SqliteDocumentStore<ArtistDocument>("Chinook");
var artistDocuments = new BiggyList<ArtistDocument>(artistDocumentStore);
 
var newArtist = new ArtistDocument
{
    ArtistDocumentId = 1,
    Name = "Nirvana"
};
 
newArtist.Albums.Add(new AlbumDocument
{
    AlbumId = 1,
    ArtistId = 1,
    Title = "Bleach"
});
 
newArtist.Albums.Add(new AlbumDocument
{
    AlbumId = 2,
    ArtistId = 1,
    Title = "Incesticide"
});
 
artistDocuments.Add(newArtist);

 

In this case, the artistdocuments table has three simple fields: id, body, and created_at. The id will be the same as the primary key for each artist object, the body field contains the JSON document itself, and created_at is simply a date-time stamp.
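If you are curious, you can peek at the raw table with plain ADO.NET - a quick sketch using System.Data.SQLite, and assuming the Data\Chinook.db file from the setup above:

Peek at the Raw Document Table with ADO.NET:
using (var cn = new SQLiteConnection(@"Data Source = Data\Chinook.db"))
using (var cmd = new SQLiteCommand("SELECT id, body, created_at FROM artistdocuments;", cn))
{
    cn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // The body column holds the serialized JSON document:
            Console.WriteLine("{0}: {1} ({2})",
                reader["id"], reader["body"], reader["created_at"]);
        }
    }
}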

We could, of course, decide we want to pull all of the artist/album/track data together, and compose it all into artist documents similar to the JSON file I used in the previous section, and then push it out into a new SQLite database.

Aggregate Artists, Albums, and Tracks into a Document Store Using SQLite:
// Pass the name of the database file as a constructor argument:
var artistStore = new SqliteRelationalStore<Artist>("Chinook");
var albumStore = new SqliteRelationalStore<Album>("Chinook");
var trackStore = new SqliteRelationalStore<Track>("Chinook");
 
// Pass the store into the list:
var artists = new BiggyList<Artist>(artistStore);
var albums = new BiggyList<Album>(albumStore);
var tracks = new BiggyList<Track>(trackStore);
 
// Use a list, and do a bulk add when all the artist documents have been created:
var newArtistDocs = new List<ArtistDocument>();
foreach (var artist in artists)
{
    // Copy the artist fields onto the document (including the Name):
    var artistDoc = new ArtistDocument
    {
        ArtistDocumentId = artist.ArtistId,
        Name = artist.Name
    };
    var artistAlbums = albums.Where(a => a.ArtistId == artist.ArtistId);
 
    foreach (var album in artistAlbums)
    {
        // Likewise, copy the album fields onto the album document:
        var albumDoc = new AlbumDocument
        {
            AlbumId = album.AlbumId,
            ArtistId = album.ArtistId,
            Title = album.Title
        };
        var albumTracks = tracks.Where(t => t.AlbumId == album.AlbumId);
        albumDoc.Tracks.AddRange(albumTracks);
        artistDoc.Albums.Add(albumDoc);
    }
    newArtistDocs.Add(artistDoc);
}
 
// Now let's new up a NEW SQLite-based database:
var artistDocumentStore = new SqliteDocumentStore<ArtistDocument>("ChinookDocuments");
var artistDocumentsList = new BiggyList<ArtistDocument>(artistDocumentStore);
 
// Push our new nested artist documents into the new database:
artistDocumentsList.Add(newArtistDocs);

 

That's a large mess of code, but you get the idea. In the above, we materialized some relational data, composed it into document form, and persisted it into a brand-new SQLite database (this time named "ChinookDocuments.db" and again, in our ~\Data\ directory) as JSON string values in a table named "artistdocuments".

Using Biggy with Postgres

Unlike simple JSON files and SQLite, Postgres is a full-fledged client-server database. This means we need to work with real connection strings and connect to a real, existing database - Biggy cannot create a Postgres database for us.

However, Postgres is an awesome database, and if you are not familiar, I recommend you go check it out.

With the Json store, and the SQLite store, we were able to provide some minimal initialization arguments to the concrete implementation of IDataStore<T> to get up and moving. With Postgres, we still don't need much, but we do need to specify a connection string in our App.config or Web.config file.

You can pull down the Biggy.Data.Postgres package from Nuget using the Package Manager Console:

Get Biggy.Data.Postgres from Nuget:
PM> Install-Package Biggy.Data.Postgres

 

Once again, we can get started by pulling down the Chinook Database for Postgres, running the CREATE script, and Voila.

Specifics may vary, but your App.Config should look something like this:

Example App.config File with Postgres Connection:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
  <connectionStrings>
    <add name="chinook" connectionString="server=localhost;user id=biggy;password=password;database=chinook" />
  </connectionStrings>
</configuration>

 

Note in the above, our connection string is named "chinook" and we are using a local PG instance.

With that, we can once again start with some code that should be looking familiar by now:

Query Chinook Database Using Biggy with Postgres:
// Pass the name of the connection string as a constructor argument:
var artistStore = new PgRelationalStore<Artist>("chinook");
var albumStore = new PgRelationalStore<Album>("chinook");
 
// Pass the store into the list:
var artists = new BiggyList<Artist>(artistStore);
var albums = new BiggyList<Album>(albumStore);
 
var someArtist = artists.FirstOrDefault(a => a.Name == "AC/DC");
var artistAlbums = albums.Where(a => a.ArtistId == someArtist.ArtistId);
 
Console.WriteLine("Albums by {0}:", someArtist.Name);
foreach (var album in artistAlbums)
{
    Console.WriteLine(album.Title);
}
Console.Read();

 

Here again, we are able to directly query the relational data loaded into the BiggyList<T> from Chinook Database. Biggy looks in the App.config file for a connection string matching the one provided as a constructor argument, and then does its thing.

From here, working with document data in Postgres is much the same as we saw from SQLite. Remember though, that Biggy cannot create a new database on the fly with Postgres as we did using SQLite.
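For example - hedged, since PgDocumentStore<T> is assumed here by analogy with SqliteDocumentStore<T> - creating and populating a document table in Postgres looks just like the SQLite version:

Create a Document Store in Postgres:
var artistDocumentStore = new PgDocumentStore<ArtistDocument>("chinook");
var artistDocuments = new BiggyList<ArtistDocument>(artistDocumentStore);
 
// Same API as with SQLite - but in Postgres, the body column can take
// advantage of the native json type:
var newArtist = new ArtistDocument { ArtistDocumentId = 1, Name = "Nirvana" };
artistDocuments.Add(newArtist);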

What Next for Biggy?

As mentioned, the packages available on Nuget at the moment are definitely "pre-release," in that we expect to be making changes. I don't feel the product is all the way there yet in terms of its public-facing API and overall usefulness.

At this point, I would love to hear some feedback from those who have taken some time to play with it. What would make it more useful? Is there a critical feature missing that would also be widely used (as opposed to special-case features)?

My interest now is to arrive at a stable, minimal feature set, and a maximally useful API. I'm less interested in adding additional store options or features, and more in refining what is already there.

Please do check it out, and please do open an issue on the Github Repo for any bugs, suggestions, or other comments you may have.

Want to Contribute?

Again, please do! I will happily accept Pull Requests for bug fixes and features, so long as they fit in the general scheme of things. A word of caution, though: creeping feature-itis was one of the problems we had with the "version 1" repo of Biggy. We really want to make sure any new feature is needed, and I really would like to keep things as simple as possible until we have a ready-for-prime-time release version.

What's with the Strange Code Formatting in the Repo?

The code in the repo is very much not in idiomatic C# format. We are using two spaces for indentation, and same-line braces. This is something I picked up from Rob, who I think picked it up during his years in the Ruby/JS wilderness. However, after struggling with it for a bit, I came to really like the compact style.

For the moment, I plan to maintain the repo using this format. However, any PRs you send my way do not need to match - I'll fix them.

At some point, I may cave and revert back to idiomatic C#. But I challenge you to check it out, try it out, and see if you don't find yourself thinking most C# code looks a little spread out after a while…

Where can I Find More Info and Better Docs?

A more fully developed documentation page is coming. There is a lot more to Biggy than you saw here - this was a quick and dirty intro, since the code has changed so much in the past year. There are better ways to do almost everything I discussed in this article, but the examples here were kept purposefully simple.

We are also gratefully accepting contributions to documentation :-)

Additional Resources and Items of Interest

Some History on Biggy:

 

Posted on December 22 2014 10:41 AM by jatten     

Comments (0)

C#: Avoiding Performance Issues with Inserts in SQLite

Posted on December 15 2014 08:49 PM by jatten in SQLite, Database, C#, CodeProject   ||   Comments (2)

[Image: Coronado Island parking meter]

If you are new to SQLite, you may well run across one of the most confounding of its implementation details the moment you attempt to do some sort of bulk or batch processing of inserts or updates.

What you will discover is that unless properly implemented, inserting or updating multiple records in a SQLite database can seem abysmally slow. Slow to the point of unsuitability in certain cases.

Not to fear, this has to do with some default (and not entirely improper) design choices in SQLite, for which there is an easy workaround.

Image by Lance McCord  |  Some Rights Reserved

SQLite is a wonderfully simple to use, cross-platform/open source database with terrific performance specs. It is a mature product, and, if we are to believe the estimates of SQLite.org, is the most widely deployed SQL database in the world.

SQLite manages to cram a host of mature, well-developed features into a compact and well-documented package, including full transaction support.

This transaction support, and the way it is implemented, has a significant impact on certain performance characteristics of SQLite.

Transactions by Default in SQLite

As stated previously, one of the selling points of SQLite, despite it being a simple, file-based database, is that it is fully transactional. What does this mean?

From Wikipedia:

A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:

  1. To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
  2. To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.

A database transaction, by definition, must be atomic, consistent, isolated and durable.[1] Database practitioners often refer to these properties of database transactions using the acronym ACID.

Transactions provide an "all-or-nothing" proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever. Further, the system must isolate each transaction from other transactions, results must conform to existing constraints in the database, and transactions that complete successfully must get written to durable storage.

SQLite is not alone, of course, in implementing transactions - in fact, transactions are a core concept in database design. However, SQLite's implementation specifies that, unless you indicate otherwise, each individual write action against your database (any action through which you modify a record) is treated as an individual transaction.

In other words, if you perform multiple INSERTs (or UPDATEs, or DELETEs) in a "batch," each INSERT will be treated as a separate transaction by SQLite.

The trouble is, transactions carry processing overhead - by default, SQLite waits for each commit to be physically written to disk before moving on. When we need to perform multiple INSERTs in a batch, this per-statement overhead adds up to a troubling performance bottleneck.

Batch Processing in SQLite - A Console Example

If we are using SQLite from the SQLite Console, we can see exactly what I am talking about by running an easy insert script, and seeing how things go. For this example, I borrowed a few lines from the Chinook Database to create and populate a table of Artists. If you don't have the SQLite Command Line Console on your machine, install it now (see Installing and Using SQLite on Windows for details). Then copy the SQL script from my Gist on Github, paste it into a text file, and save the file in your user folder as create-insert-artists.sql.

The script should look like this in the text file before you save:

Paste the SQL Script Into a Text File and Save:
DROP TABLE IF EXISTS [Artist];
 
CREATE TABLE [Artist]
(
    [ArtistId] INTEGER  NOT NULL,
    [Name] NVARCHAR(120),
    CONSTRAINT [PK_Artist] PRIMARY KEY  ([ArtistId])
);
 
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (1, 'AC/DC');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (2, 'Accept');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (3, 'Aerosmith');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (4, 'Alanis Morissette');
 
-- . . . A bunch more artist records . . . 
 
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (273, 'C. Monteverdi, Nigel Rogers - Chiaroscuro; London Baroque; London Cornett & Sackbut Ensemble');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (274, 'Nash Ensemble');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (275, 'Philip Glass Ensemble');

 

If we open a new database in the SQLite Console (navigate to your User folder to do this for our purposes here) and read the script, we can see how long it takes. There are 275 Artist records in the script to be INSERTED.

Run SQLite3, Open a New Database, and Read the Artists Script:
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.
C:\Users\John>sqlite3
SQLite version 3.8.7.3 2014-12-05 22:29:24
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open txdemo.db
sqlite> .read create-insert-artists.sql

 

We can see that (depending on your machine - your mileage may vary) executing the script takes roughly 10 seconds. Inserting 275 records should NOT take 10 seconds. Ever.

Console Output from Running Script (Took Way Too Long!):

[Screenshot: console output - the script takes roughly 10 seconds to complete]

As mentioned previously, unless we tell it otherwise, SQLite will treat each of those INSERT commands as an individual transaction, which slows things WAAAYYYY DOOOOWWWWN. We can do better. We tell SQLite to override this behavior by explicitly specifying our own transaction, beginning before the INSERT batch, and committing after each INSERT batch.

Add Transactions to SQLite Scripts Using BEGIN and COMMIT

When we are executing batches of INSERTs, UPDATEs, or DELETEs in a script, wrap all of the writes against each table in a transaction using the BEGIN and COMMIT SQLite keywords. Modify the create-insert-artists.sql script in our text file by adding a BEGIN before the table INSERTs, and a COMMIT after the table INSERTs (for scripts involving more than one table, do this for the INSERTs for each table):

Modified Script Wraps INSERTs in single transaction:
DROP TABLE IF EXISTS [Artist];
 
CREATE TABLE [Artist]
(
    [ArtistId] INTEGER  NOT NULL,
    [Name] NVARCHAR(120),
    CONSTRAINT [PK_Artist] PRIMARY KEY  ([ArtistId])
);
 
BEGIN;
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (1, 'AC/DC');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (2, 'Accept');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (3, 'Aerosmith');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (4, 'Alanis Morissette');
 
-- . . . A bunch more artist records . . . 
 
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (273, 'C. Monteverdi, Nigel Rogers - Chiaroscuro; London Baroque; London Cornett & Sackbut Ensemble');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (274, 'Nash Ensemble');
INSERT INTO [Artist] ([ArtistId], [Name]) VALUES (275, 'Philip Glass Ensemble');
COMMIT;

 

If we re-run our script now, we see a significant performance boost. In fact, the script execution is nearly immediate.

Re-Run the Script in the SQLite Console (this time, with a Transaction):

[Screenshot: console output - the script completes almost immediately]

The above will apply to all INSERTs, UPDATEs, and DELETEs when you execute scripts in the SQLite console.

Improve SQLite Performance in Your .NET Application Using Transactions

We see a similar problem when we use SQLite in a .NET application, and the solution is conceptually the same, although the implementation is necessarily a little different. If you are new to using SQLite (and many .NET developers are, at some point), this is exactly the type of confounding quirk that can have you running back to yet another "integrated" Microsoft database solution before giving this great database a chance. "I tried SQLite, but the inserts and updates were too damn slow . . ."

Sample .NET Application - The Slow, Hard Way

Consider the following Console application example. It is a small, simplistic example, and has no exception handling, but you get the idea. The Main() method performs some basic set-up, then builds a List<User> which is passed to the AddUsers() method.

Program to Insert a List of Users Using System.Data.SQLite:
class Program
{
    static string _connectionString;
    static void Main(string[] args)
    {
        // Put the database file in the current directory ( ..\bin\Debug\ ):
        string dbDirectory = Environment.CurrentDirectory;
        string dbName = "test.db";
 
        // Add System.IO to the using statements at the top of your code file:
        string dbPath = Path.Combine(dbDirectory, dbName);
        _connectionString = string.Format("Data Source = {0}", dbPath);
 
        CreateDbIfNotExists(dbPath);
        CreateUsersTable();
 
        int qtyToAdd = 100;
 
        // Load some users into a list...
        var usersToAdd = new List<User>();
        for(int i = 0; i < qtyToAdd; i++)
        {
            usersToAdd.Add(new User { Name = "User #" + i });
        }
 
        // And THEN add them:
        var sw = new System.Diagnostics.Stopwatch();
        sw.Start();
        int qtyAdded = AddUsers(usersToAdd);
        sw.Stop();
 
        Console.WriteLine("Added {0} Users successfully in {1} ms", 
        	qtyAdded, sw.ElapsedMilliseconds);
 
        var allUsers = ReadUsers();
 
        Console.WriteLine("Read {0} Users from SQLite", allUsers.Count());
        Console.Read();
    }
 
 
    static void CreateDbIfNotExists(string dbPath)
    {
        string directory = Path.GetDirectoryName(dbPath);
        if (!File.Exists(dbPath))
        {
            // Creates directory if it doesn't already exist:
            Directory.CreateDirectory(directory);
 
            // Creates file if it doesn't already exist:
            SQLiteConnection.CreateFile(dbPath);
        }
    }
 
 
    static SQLiteConnection CreateConnection()
    {
        return new SQLiteConnection(_connectionString);
    }
 
 
    static void CreateUsersTable()
    {
        string sqlTestTable =
            @"CREATE TABLE IF NOT EXISTS Users 
            ( 
                Id INTEGER PRIMARY KEY AUTOINCREMENT, 
                Name TEXT NOT NULL 
            )";
 
        using (var cn = new SQLiteConnection(_connectionString))
        {
            using (var cmd = new SQLiteCommand(sqlTestTable, cn))
            {
                cn.Open();
                cmd.ExecuteNonQuery();
                cn.Close();
            }
        }
    }
 
 
    class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
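 
    // A minimal sketch of a ReadUsers() helper to match the call in Main()
    // (assumed here; any equivalent SELECT against the Users table works):
    static IEnumerable<User> ReadUsers()
    {
        var users = new List<User>();
        string sqlSelectUsers = "SELECT Id, Name FROM Users;";
 
        using (var cn = new SQLiteConnection(_connectionString))
        {
            using (var cmd = new SQLiteCommand(sqlSelectUsers, cn))
            {
                cn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        users.Add(new User
                        {
                            Id = Convert.ToInt32(reader["Id"]),
                            Name = Convert.ToString(reader["Name"])
                        });
                    }
                }
                cn.Close();
            }
        }
        return users;
    }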
 
 
    static int AddUsers(IEnumerable<User> users)
    {
        var results = new List<int>();
        string sqlInsertUsers =
            @"INSERT INTO Users (Name) VALUES (@0);";
 
        using (var cn = new SQLiteConnection(_connectionString))
        {
            // Open the connection once, up front:
            cn.Open();
            foreach (var user in users)
            {
                using (var cmd = cn.CreateCommand())
                {
                    cmd.CommandText = sqlInsertUsers;
                    cmd.Parameters.AddWithValue("@0", user.Name);
 
                    // With no explicit transaction, SQLite wraps EACH of these
                    // inserts in its own implicit transaction:
                    results.Add(cmd.ExecuteNonQuery());
                }
            }
            cn.Close();
        }
        return results.Sum();
    }
}

 

The AddUsers() method creates a connection and a command, opens the connection, and then iterates over the IEnumerable<User>, successively inserting the user data for each into the SQLite database. We are using a System.Diagnostics.Stopwatch to time the execution of the call to AddUsers() from Main().

It looks like we've done everything right here - we set up the connection only once, open it only once (opening and closing connections for each loop iteration causes its own performance hit). However, it still takes upwards of four seconds to insert only 100 users. We can see the results in our console output.

Console Output from Example Program Inserting 100 Users:

[Screenshot: console output - inserting 100 Users takes over 4,000 ms]

Pretty lame, but not surprising, given what we have learned about transactionality defaults in SQLite. But, once again, we can do better.

Wrap SQLite Batch Operations in an ADO Transaction in Your .NET Application

Similar to using the SQLite console, the solution here is also to use a transaction. We can modify the code in the AddUsers() method as follows:

Modified Code for AddUsers() Method Wrapping Command Execution in a Transaction:
static int AddUsers(IEnumerable<User> users)
{
    var results = new List<int>();
    string sqlInsertUsers =
        @"INSERT INTO Users (Name) VALUES (@0);";
 
    using (var cn = new SQLiteConnection(_connectionString))
    {
        // Open the connection, and also a transaction:
        cn.Open();
        using(var transaction = cn.BeginTransaction())
        {
            foreach (var user in users)
            {
                using (var cmd = cn.CreateCommand())
                {
                    cmd.CommandText = sqlInsertUsers;
                    cmd.Parameters.AddWithValue("@0", user.Name);
                    results.Add(cmd.ExecuteNonQuery());
                }
            }
            transaction.Commit();
        }
        cn.Close();
    }
    return results.Sum();
}

 

With that, if we run our application again, we see an order of magnitude performance improvement:

Improved SQLite Insert Performance Using Transaction in .NET:

[Screenshot: console output - 100 Users inserted in 52 ms]

Yep. 52 milliseconds, down from over 4,000 milliseconds.

Be Cautious with Your Transactions in SQLite

We've seen how we can realize some serious performance wins in SQLite by using transactions to wrap up bulk operations. However, let's not put the cart before the horse without thinking it through. Sometimes, you actually need a more granular level of transaction to ensure data integrity.

It simply would not do to maximize performance of a banking application by implementing transactions only at the top level of a batch operation. After all, transactions in the world of relational databases are first and foremost about the assurance that an operation succeeds in its entirety, or not at all.
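As a sketch of what I mean - Debit(), Credit(), and the transfers collection here are hypothetical - sometimes each unit of work needs its own transaction, even within a larger batch:

Scope Transactions to the Unit of Work, Not Just the Batch:
foreach (var transfer in transfers)
{
    using (var transaction = cn.BeginTransaction())
    {
        try
        {
            // Hypothetical helpers - each performs an UPDATE against cn:
            Debit(cn, transfer.FromAccountId, transfer.Amount);
            Credit(cn, transfer.ToAccountId, transfer.Amount);
 
            // Each transfer succeeds as a whole...
            transaction.Commit();
        }
        catch
        {
            // ...or not at all, without invalidating the rest of the batch:
            transaction.Rollback();
        }
    }
}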

Additional Resources and Items of Interest

 

Posted on December 15 2014 08:49 PM by jatten     

Comments (2)

About the author

My name is John Atten, and my "handle" on many of my online accounts is xivSolutions. I am fascinated by all things technology and software development. I work mostly with C#, JavaScript/Node, and databases of many flavors. Actively learning, always. I dig web development. I am always looking for new information, and value your feedback (especially where I got something wrong!). You can email me at:

johnatten at typecastexception dot com
