Logging with Log4Net on the Azure platform

Log4Net on Azure

I have always been a big fan of the Log4Net framework for all of my logging needs in applications, and I would like to use this framework in my Azure projects as well. To be honest, it isn't even that difficult: just configure log4net to use the TraceAppender, as everything written to the trace log will be transferred by the diagnostics monitor to Azure table storage.
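
For reference, pointing log4net at the TraceAppender in code looks roughly like this (a minimal sketch; the class name and pattern layout are just illustrations, not anything prescribed by the platform):

using log4net.Appender;
using log4net.Config;
using log4net.Layout;

public static class TraceLoggingBootstrap
{
    public static void Configure()
    {
        // anything log4net writes now goes to System.Diagnostics.Trace,
        // where the Azure diagnostics trace listener can pick it up
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        var traceAppender = new TraceAppender { Layout = layout };
        traceAppender.ActivateOptions();

        BasicConfigurator.Configure(traceAppender);
    }
}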

But there are two problems, or inconveniences, with this approach. First of all, you have to configure both log4net and the diagnostics monitor, a repetitive task that you don't want to perform over and over again for every application you create. Secondly, log4net is configured using the application's configuration file, which is rather inconvenient on Azure as it isn't editable.

AzureAppender

To solve both of these problems we can instead create a custom appender that sets up the diagnostics monitor and reads the log4net configuration from the service configuration file instead of the application configuration file.

In order to do so, you derive a class from AppenderSkeleton.

public sealed class AzureAppender : AppenderSkeleton

The first thing that needs to be done is to ensure that the configuration values are read from the service configuration file, if they are present. This is a bit clumsy on Azure as you cannot check for the presence of a configuration key; all you can do is act on the exception thrown when the key is not present. Make sure to set all values before the ActivateOptions method is called.

The following example shows you how to read the log level from the configuration, falling back to Error when the setting is absent, and apply it to the log4net environment.

private static string GetLevel()
{
    try
    {
        return RoleEnvironment.GetConfigurationSettingValue(LevelKey);
    }
    catch (Exception)
    {
        return "Error";
    }
}

private void ConfigureThreshold()
{
    var rootRepository = (Hierarchy)log4net.LogManager.GetRepository();
    Threshold = rootRepository.LevelMap[GetLevel()];
}

The appender for this article supports the following configuration settings:

  • Diagnostics.ConnectionString: Sets the connection string to be used when transferring the log entries to table storage
  • Diagnostics.Level: Sets the threshold that log4net will use to filter the log output
  • Diagnostics.Layout: Defines the layout and content that log4net will use to create the log entries
  • Diagnostics.ScheduledTransferPeriod: Specifies the interval, in minutes, that the diagnostics monitor will use to transfer logs to Azure table storage
  • Diagnostics.EventLogs: Configures which of the Windows event logs will be transferred from the Azure instance to Azure table storage
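
The remaining settings can be read following the same try/catch pattern as GetLevel above. Here is a sketch; the key names mirror the list, but the *Key constants and the fallback values are assumptions rather than the exact defaults used in the NServiceBus source:

private static string GetConnectionString()
{
    try
    {
        return RoleEnvironment.GetConfigurationSettingValue(ConnectionStringKey);
    }
    catch (Exception)
    {
        return "UseDevelopmentStorage=true"; // assumed fallback for local development
    }
}

private static int GetScheduledTransferPeriod()
{
    try
    {
        return int.Parse(RoleEnvironment.GetConfigurationSettingValue(ScheduledTransferPeriodKey));
    }
    catch (Exception)
    {
        return 5; // assumed default: transfer every five minutes
    }
}

private static string GetEventLogs()
{
    try
    {
        return RoleEnvironment.GetConfigurationSettingValue(EventLogsKey);
    }
    catch (Exception)
    {
        return "Application!*;System!*"; // assumed default event log sources
    }
}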

When the options have been set and activated, the log4net environment has been completely configured to make proper use of our custom appender, and we can start the Azure diagnostics monitor. Note that the diagnostics monitor also has a threshold that allows you to filter the logs written to storage, but as log4net is already filtering we don't need to do it here anymore, so we set that filter to Verbose.

private void ConfigureAzureDiagnostics()
{
    var traceListener = new DiagnosticMonitorTraceListener();
    Trace.Listeners.Add(traceListener);

    var dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // set the threshold to Verbose; what gets logged is controlled by the log4net level
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

    ScheduleTransfer(dmc);

    ConfigureWindowsEventLogsToBeTransferred(dmc);

    DiagnosticMonitor.Start(ConnectionStringKey, dmc);
}

private void ScheduleTransfer(DiagnosticMonitorConfiguration dmc)
{
    var transferPeriod = TimeSpan.FromMinutes(ScheduledTransferPeriod);
    dmc.Logs.ScheduledTransferPeriod = transferPeriod;
    dmc.WindowsEventLog.ScheduledTransferPeriod = transferPeriod;
}

private static void ConfigureWindowsEventLogsToBeTransferred(DiagnosticMonitorConfiguration dmc)
{
    var eventLogs = GetEventLogs().Split(';');
    foreach (var log in eventLogs)
    {
        dmc.WindowsEventLog.DataSources.Add(log);
    }
}
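
Tying these pieces together, the appender's ActivateOptions override calls the configuration methods above, and Append forwards each rendered event to the trace infrastructure so the DiagnosticMonitorTraceListener picks it up. A rough sketch of those two overrides (the actual NServiceBus source may differ in detail):

public override void ActivateOptions()
{
    base.ActivateOptions();

    // read the settings from the service configuration and start the monitor
    ConfigureThreshold();
    ConfigureAzureDiagnostics();
}

protected override void Append(LoggingEvent loggingEvent)
{
    // everything written to Trace gets picked up by the DiagnosticMonitorTraceListener
    Trace.WriteLine(RenderLoggingEvent(loggingEvent));
}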

That's all there is to it, basically; the only thing we need to do now is apply the appender to the environment. This is done by creating an instance of the appender, configuring it either in code or through the settings in the service configuration file, and finally configuring the log4net environment.

var appender = new AzureAppender();
appender.ActivateOptions();
BasicConfigurator.Configure(appender);
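
Once configured, logging works just like in any other log4net application, and the entries end up in the WADLogsTable of the configured storage account after the scheduled transfer period. For example (the WorkerRole type is just a placeholder for whatever class is doing the logging):

var log = LogManager.GetLogger(typeof(WorkerRole));
log.Info("Worker role started");
log.Warn("Storage latency is higher than expected");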

You can find the source for this appender in the NServiceBus project, so feel free to use it in your own projects. I also want to give special credit to Andreas Ohlund for creating the first version of this appender.

Use Case: Cross-organizational collaboration

The general idea


Every organization has to collaborate with others (suppliers, vendors, partners, customers, …) in order to be successful in today's global economy. This leads to the emergence of cross-organizational business processes that need to be implemented by the different participants in the value chain. Several problems arise when trying to automate these business processes.

One option is that every participant automates its part of the process and the different pieces are integrated with one another afterwards. This leads to significant implementation time and associated costs, as every participant will do this on a different schedule. Furthermore, there is a serious loss in business agility, as for every change to the process multiple participants need to change their part of the automation.

Alternatively, one of the participants could be chosen to automate the entire process, but this leads to another slew of problems. Who will pay whom? Who is dependent on whom? What kind of guarantees does your company get? Now think about it: would you like your company to be directly dependent on a specific supplier or customer? Guess not…

Why is cloud a good solution?

Cloud computing is a good solution for this problem because it is 'neutral ground': none of the participants is solely responsible for implementing or hosting the process, while at the same time every participant can provide its own details, or customizations, to the implementation.

What cloud offering do you need?

For ad-hoc collaborations or small automated processes, the partners could use an online collaboration platform like Office 365 (formerly BPOS) to quickly set up an environment. This is especially useful when occasionally collaborating on tenders and the like.

For more complicated scenarios, where different systems from various partners need to be brought together to implement end-to-end business processes across the entire value chain, one should look at BPMaaS vendors (like cordys.com or IBM Blueworks). Microsoft used to have such a BPM engine as well in the Azure AppFabric during the beta timeframe, but I'm not sure what its status is today.

And obviously a platform as a service, like Azure, will prove its value here as well for implementing custom business logic that is not specific to any of the partners.

Getting started with NHibernate on the Azure Table Storage service

In order to get NHibernate running on top of Azure table storage, you obviously first need an Azure account, or at least have the development fabric installed. I assume you've got that covered before attempting this tutorial.

Next up is to download and compile the NHibernate Azure table storage driver from http://nhazuredriver.codeplex.com/. It already includes a test project that shows you how to get started; if you want to try it out, I suggest you continue from there.

Setting up the driver and its connection

The first thing to do is to set up a SessionFactory that is configured to use the driver and has a connection to your Azure storage account. The easiest way to do this is to use the Fluent NHibernate API, for which a configuration is included in the driver. This configuration connects to development storage by default, but you can pass it a connection string in any of the formats specified on MSDN:

var fluentConfiguration = Fluently.Configure().Database(
    AzureConfiguration.TableStorage
        .ProxyFactoryFactory(typeof(ProxyFactoryFactory).AssemblyQualifiedName)
        .ShowSql());

fluentConfiguration.Mappings(cfg => cfg.HbmMappings.AddFromAssemblyOf<NewsItem>());

sessionFactory = fluentConfiguration
    .ExposeConfiguration(cfg => nHibernateConfiguration = cfg)
    .BuildSessionFactory();

Note that I've exposed the internal NHibernate configuration; I will use it to tell NHibernate to create the schema in the table storage service. In reality, the underlying store doesn't have a concept of a schema; only the table name is registered.

using (var session = sessionFactory.OpenSession())
{
    var export = new SchemaExport(nHibernateConfiguration);
    export.Execute(true, true, false, session.Connection, null);
    session.Flush();
}

Mapping files

The Azure storage environment does pose some restrictions on what you can specify in a mapping file as well:

  • The identifier must be a composite key that includes the columns RowKey and PartitionKey, and both must be of type string (no exceptions)
  • All references between entities in different tables must be lazy loaded; join fetching (or any other relational setting, for that matter) is not supported

A simple mapping file would look like:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="NHibernate.Drivers.Azure.TableStorage.Tests"
                   namespace="NHibernate.Drivers.Azure.TableStorage.Tests.Domain">

  <class name="NewsItem" table="NewsItems">
    <composite-id>
      <key-property name="Id" column="RowKey" />
      <key-property name="Category" column="PartitionKey"/>
    </composite-id>
    <property name="Title" type="String" />
  </class>

</hibernate-mapping>
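
The matching entity class isn't shown here, but given this mapping it would look roughly like the following (a sketch; the actual test domain class in the driver may differ). Because the composite-id is mapped without a separate key class, the entity acts as its own identifier, so NHibernate requires Equals and GetHashCode to be based on the key properties; members are virtual so the class can be proxied:

public class NewsItem
{
    public virtual string Id { get; set; }        // mapped to RowKey
    public virtual string Category { get; set; }  // mapped to PartitionKey
    public virtual string Title { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as NewsItem;
        return other != null && Id == other.Id && Category == other.Category;
    }

    public override int GetHashCode()
    {
        return (Id ?? string.Empty).GetHashCode() ^ (Category ?? string.Empty).GetHashCode();
    }
}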

Persisting instances

Now we're ready to go: save, update, get, load, list, delete, etc. are all operational. To test this quickly, you can run a PersistenceSpecification from the Fluent NHibernate library.

using (var session = SessionFactory.OpenSession())
{
    new PersistenceSpecification<NewsItem>(session)
        .CheckProperty(c => c.Id, "1")
        .CheckProperty(c => c.Title, "Test Title")
        .CheckProperty(c => c.Category, "Test Category")
        .VerifyTheMappings();
}

Some remarks

Please note that Azure table storage DOES NOT support transactions, so all data you put in the store during testing must be removed before executing the next test. PersistenceSpecification does this by default, but in other tests you might have to do it yourself.
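
A minimal cleanup sketch for tests that don't go through PersistenceSpecification, assuming a simple criteria query over the whole table works against the driver: list whatever was saved and delete it again before the next test runs.

using (var session = SessionFactory.OpenSession())
{
    // remove everything the previous test left behind
    foreach (var item in session.CreateCriteria<NewsItem>().List<NewsItem>())
    {
        session.Delete(item);
    }
    session.Flush();
}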

Also, most NHibernate settings that rely on relational storage features, such as joins, batches, complex queries, etc., don't work (yet). There is a lot of room for improvement, so any contributions are welcome…

Happy coding.