Bluetooth Command Queuing for Android

Bluetooth 4 (aka BLE – Bluetooth Low Energy) on Android has some important limitations that need to be addressed in apps requiring a high degree of user interactivity with a Bluetooth device. Very quickly when you set out to write such an app, you’ll come to realize that you need some sort of Bluetooth command queuing system.

First, a little background. Recently I had to develop a client app that would act as a control panel to a prototype hardware device using BLE. This app had to allow commands to be sent to the device on-demand by the user through a UI with a lot of “moving parts”. I’ve worked with synchronizing fitness wearables, but this was a whole different ball game! Fitness wearables tend to be a simple sync-and-download mechanism, but for this new device, users could tap around a set of controls, with each interaction issuing a command (or even a set of commands) to the device. Add to this the complexity that the app would also need to receive ad-hoc notifications at any time from the device, and would also have to send periodic timed commands in the background!

Since Android 4.3 the android.bluetooth classes support BLE, but I came to realize that they would not be enough this time to deal with the complexity alone, because Android’s underlying BLE implementation is a bit quirky. The infamous Google issue 58381 illustrates the point. Other helpful nuggets of information can be gleaned from this Stack Overflow post. The critical piece of information from user OneWorld in that post is: “Gatt always can process one command at a time. If several commands get called short after another, the first one gets cancelled due to the synchronous nature of the gatt implementation.”

I’ve come up against this problem myself. You can issue multiple BLE commands ad-hoc from a multi-threaded app, but BLE won’t respond very well. If you issue a second command before it has had a chance to deal with the previous one, the BLE stack can (particularly on Android 4.3) get into a state where it stops responding altogether and may even require a manual user restart of Bluetooth! Not ideal. You could delay before sending a second command, but to cover all possibilities in hardware and BLE conditions, this introduces an unnecessary delay into every command. Fortunately, in my app, I could assume (like most BLE applications) that the device responds to each command. So I could send the next command immediately after receiving the command response back (via a BLE notification). To create a robust, stable experience for the user, I had to implement a queuing layer on top of Android’s bluetooth classes so that the Android BLE stack was assured of only having to deal with one command (or event) at a time.

Let’s look at the details of implementing the command queue.
(The code for a sample app showing the concept is on my GitHub.)

The first thing we need is an object to represent a command. This can be extended in subclasses to do any kind of fancy thing you want, e.g. perform multiple writes to different characteristics.
For this example, we’ll just have a simple command that reads the device’s serial number. To see the queue in action, the DelayCommand will also pause briefly before doing the read, so that you can see commands backing up in the queue.

public class DelayCommand extends BluetoothCommand {
    @Override
    public void execute(BluetoothGatt gatt){
        //Pause briefly so you can watch commands backing up in the queue
        try {
            synchronized (Thread.currentThread()) {
                Thread.currentThread().wait(2000);
            }
        }catch(InterruptedException e){
            //Interrupted - carry on and do the read anyway
        }

        //As an example, read from serial number characteristic
        //(mSerialNumCharacteristic is assumed to have been discovered earlier)
        gatt.readCharacteristic(mSerialNumCharacteristic);
    }
}
The main activity can create a command object and add it to the queue by calling the service’s queueCommand method, and that’s where we really see the queue in action:

synchronized (mCommandQueue) {
    mCommandQueue.add(command);  //Add to end of the queue (FIFO)
    ExecuteCommandRunnable runnable = new ExecuteCommandRunnable(command);
    mCommandExecutor.execute(runnable); //mCommandExecutor is a single-thread executor
}

The first thing to note is the mCommandQueue is simply a Java LinkedList, which works as a FIFO queue.
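For reference, LinkedList gives you that FIFO behaviour through the Queue interface it implements (a small illustrative snippet, not from the sample app):

```java
import java.util.LinkedList;
import java.util.Queue;

public class FifoDemo {
    //LinkedList implements the Queue interface: add() enqueues at the tail,
    //poll() dequeues from the head - first in, first out
    public static String firstOut(String first, String second) {
        Queue<String> commandQueue = new LinkedList<String>();
        commandQueue.add(first);
        commandQueue.add(second);
        return commandQueue.poll();
    }

    public static void main(String[] args) {
        System.out.println(firstOut("read-serial", "set-led")); //prints "read-serial"
    }
}
```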

When the command is added to the queue a runnable is also created that actually executes the command. These runnables are executed on a Single Thread Executor to ensure only one is run at a time. Here’s the Runnable code that actually does the work of executing the command:

    class ExecuteCommandRunnable implements Runnable{

        BluetoothCommand mCommand;

        public ExecuteCommandRunnable(BluetoothCommand command) {
            mCommand = command;
        }

        @Override
        public void run() {
            //Acquire semaphore lock to ensure no other operations can run until this one completes
            mCommandLock.acquireUninterruptibly();
            //Tell the command to start itself (mGatt is the connected BluetoothGatt instance).
            mCommand.execute(mGatt);
        }
    }

You might think it’s enough to just run a queue of runnables off a single thread to implement the queue, right? Wrong. Here’s why. When mCommand.execute is called, it starts the bluetooth read and returns immediately. This would cause the next runnable to run its command. Remember what I said about Android BLE not liking when you issue a new command if the first hasn’t responded yet? So here’s the trick: we also need a lock that will prevent the next runnable from executing its command until the previous command has a response. This is done in the BLE GattCallback, where the characteristic read response actually comes back to the caller:

    @Override
    public void onCharacteristicRead(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) {
        super.onCharacteristicRead(gatt, characteristic, status);
        //... Send string response to listener here...
        //The command now has its response, so let the next command run
        dequeueCommand();
    }

    protected void dequeueCommand(){
        //Release the lock so the next queued runnable can execute its command
        mCommandLock.release();
    }

DequeueCommand releases the lock, and the next runnable waiting to acquire the lock can then run its own command.
The lock is just a semaphore with a limit of 1 permit:

Semaphore mCommandLock = new Semaphore(1,true);

And there you have it – the beginnings of a BLE command queue.
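Putting the pieces together, the serialising behaviour can be demonstrated in plain Java, independent of the Android classes (a minimal sketch; the class and method names here are illustrative, and the logging exists purely to show the ordering):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

//Minimal model of the queue: commands run one at a time on a single-thread
//executor, and each must wait for the previous command's "response" before
//its own execute can begin.
public class CommandQueueSketch {
    private final ExecutorService mExecutor = Executors.newSingleThreadExecutor();
    private final Semaphore mCommandLock = new Semaphore(1, true);
    private final StringBuffer mLog = new StringBuffer(); //Thread-safe, for demonstration only

    public void queueCommand(final String command) {
        mExecutor.execute(new Runnable() {
            @Override
            public void run() {
                mCommandLock.acquireUninterruptibly();
                mLog.append("start:" + command + " ");
                //In the real app this is where gatt.readCharacteristic() etc. is called
            }
        });
    }

    //Stands in for the BLE notification callback carrying the device's response
    public void onResponse(String command) {
        mLog.append("done:" + command + " ");
        mCommandLock.release();
    }

    public String log() { return mLog.toString(); }
}
```

Queue two commands and the second will not start until the first’s response arrives, exactly as the Android BLE stack requires.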

A more complete implementation has to allow for other important cases, such as the device failing to respond to the command at all (quick answer: implement a time-out that releases the command lock and continues to the next command), but that’s for another post!
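As a hint of what that time-out might look like, the waiting runnable can use tryAcquire with a deadline instead of a blocking acquire (a sketch only; the time limit and helper name are arbitrary choices of mine, not from the sample app):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class CommandTimeoutSketch {
    //Wait up to the given time for the previous command's response; if it never
    //comes, reclaim the lock so the rest of the queue isn't blocked forever.
    public static boolean acquireWithTimeout(Semaphore lock, long millis)
            throws InterruptedException {
        boolean acquired = lock.tryAcquire(millis, TimeUnit.MILLISECONDS);
        if (!acquired) {
            //Previous command timed out: log the failure, release the stuck
            //permit, and take it for the next command
            lock.release();
            acquired = lock.tryAcquire();
        }
        return acquired;
    }
}
```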

LED Strip Dimming

I’m using LED strip lights for a model I’m working on. The trouble is, they are too bright. You can buy dimmers for LED strips fairly cheaply, but they are overkill for my purposes – I’m only using a portion of the strip therefore the current is low and I want a small, non-bulky circuit that I can pack in neatly with the rest of the electronics.

LEDs are always either on or off, so dimming is done by switching them on and off very fast – PWM, or Pulse Width Modulation. I’ve noticed I’m very sensitive to the flicker rate on dimmed LEDs (low refresh rates can give me headaches), so another benefit of building my own dimming circuit would be to get a higher cycle rate than a purchased dimmer might offer.

The first step was to use the trusty Arduino to get the parameters for the pulse width and duty cycle, i.e. how often to refresh, and how long each pulse needs to be low to achieve the right amount of dimming. A quick bit of breadboard work and a little code told me that I needed a pulse width of 2ms or less and a duty cycle of <10%.

Armed with that, I designed a hardware circuit to mimic those outputs using a 555 timer IC in astable operation. Low duty cycles with a 555 need some special allowances in the circuit (a Schottky diode). LTSpice is great for mocking up circuits, to confirm the calculated component values and test the outputs by tweaking them:
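For a quick sanity check on component values: with the Schottky diode bypassing R2 during the charge phase, the astable timing reduces to tHigh ≈ 0.693·R1·C and tLow ≈ 0.693·R2·C (ignoring the diode drop). A small calculation sketch (the R1/R2/C values below are illustrative assumptions, not the exact parts used in the build):

```java
public class AstableCalc {
    //555 astable equations when a diode bypasses R2 during the charge phase
    //(the standard trick for duty cycles below 50%):
    //  tHigh ~= 0.693 * R1 * C, tLow ~= 0.693 * R2 * C
    static double tHigh(double r1, double c) { return 0.693 * r1 * c; }
    static double tLow(double r2, double c)  { return 0.693 * r2 * c; }

    public static void main(String[] args) {
        //Example values (assumptions for illustration):
        double r1 = 1000;    //1k ohm
        double r2 = 22000;   //22k ohm (trimmer setting)
        double c  = 100e-9;  //100nF

        double high = tHigh(r1, c);
        double low = tLow(r2, c);
        double period = high + low;          //Target: 2ms or less
        double duty = high / period;         //Fraction of time output is high; target <10%

        System.out.printf("period=%.2fms duty=%.1f%%%n", period * 1000, duty * 100);
    }
}
```

With these values the period comes out under 2ms and the duty cycle around 4%, comfortably inside the targets measured on the Arduino.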


R2 is the key component that controls the duty cycle (hence the amount of dimming). To allow for a bit of adjustment and leeway, I used a trimmer potentiometer for this so I could adjust it in-place on the model to account for varying ambient light conditions.

Finally, I added a current amplifier to the output to ensure the 555 was not drawing excessive current. A 2N2222 transistor can supply 800mA – comfortably more than the 400mA needed for the meter length of LED strip.

A little on-line shopping and soldering on veroboard, and here’s the final circuit in action, ready to go:

The Truth about Remote Working

There’s a lot of buzz about remote working, especially today. It’s the new big thing, allegedly. Thought influencers like Tim Ferriss and 37signals are making great efforts to “disrupt” the geographical bias of work with their books. However, I’ve found the reality out there is very different. Don’t get me wrong – I’m personally invested in the notion of telecommuting. In my case, it’s not from any lifestyle preference, or lofty notions of reinventing work, but because I have to.

So having had more reason than most to work remotely, and having had to go job hunting again recently, I’ve discovered a lot about the true reality of remote work opportunities out there. So I thought I’d share with you the real view of remote working from the trenches. I’ve found that companies’ attitudes to employing remote workers always fall into one of four categories.

Employers’ attitudes to Remote Work

1. Those who won’t

This is by far the commonest attitude, even among software companies who know and build internet-enabled tools. You might expect them to be more forward-thinking about such things, but I’ve found it is commonly not so. Yahoo’s recent stance on remote working shows that the traditional thinking is still alive and well even among the IT set. Attitudes can differ with the size of the company too. Company policies can be hostile towards telecommuting. Most large companies are still stuck in the bums-on-seats mode of thinking: if staff are sitting at their computer, where we can see them, then they’re working; if they’re not, then they’re not! Large companies invest in expensive offices, so they have to use them – let’s fill the seats. Who can blame them?
It’s an old mode of thinking, you might feel, but it’s still dominant. There is still a lot of suspicion out there about remote workers. Culturally, Ireland is far less progressive than the US and UK about considering remote workers, especially if they are actually living locally anyway! Some companies have even tried remote work experiments, and failed. The few remote workers who swing the lead can ruin the reputation for those of us who don’t slack and who actually need to telecommute. The reality is that remote workers need to prove they are productive to earn the trust. This is hard enough if you are an established worker in a company already; it is an uphill marathon if you are a new hire. And trust me, it is rare to find companies that are progressively minded about such working arrangements with new hires – anywhere.

2. To get the best talent

Opening the recruitment gates to remote workers means you can potentially choose from a list of candidates around the globe. The pool of talent is larger. That’s tempting for some companies, especially small startup tech businesses. If you’re looking for the resources who are the very best at their job, or very specifically skilled, then broadening the field of candidates beyond those who can commute to you daily is seen as the smart thing to do.

3. To get any talent

Some companies are based in areas that can be starved of local resources skilled in the tools or the experience needed for the job. Looking beyond the commute pool of talent means you might actually find that skilled Blackberry, COBOL, or Delphi developer that you need, even if it means working remotely with them.

4. To get the cheapest talent.

Unfortunately, some companies do embrace the new wave of telecommuting only to reduce costs. I’m not saying it isn’t a valid reason to do it, but it is by no means the best reason. There’s no denying that for cash-strapped small businesses or bootstrapped startups it’s a bonus not having to pay rent for an office. Sourcing your developers from countries where expected salaries might be a lot lower can be attractive too.

Yeah, so we’re down with that remote work thing, but only if ….

Even among those companies who are remote worker friendly, it is seldom all-inclusive. Most tend to have limiting criteria – either explicitly stated, or implicit – about which remote workers they will take on. These criteria further limit the opportunities available for us remote working candidates.

1. Proven track-record of remote work

Remote working is not for everyone. I know – I’ve been doing it for nearly a decade now, so I know what it takes. Some people can’t hack the isolation, even with regular skype chats. Some people can’t get a quiet working environment free of distractions (although the most distracting environments I’ve had to code in are offices). A dedicated home office space is pretty essential, speaking from personal experience. And the big one – some people just aren’t self-motivated or self-disciplined enough. It takes a certain kind of person to be able to do it effectively and consistently.

2. “Timezone relative”

This is a big implicit assumption in most remote work jobs. I’m particularly aware of it because I’m in the GMT zone. The US is a huge adopter of remote working, and it is also a big labor market, so it can choose to only take on remote workers whose working hours overlap with main office hours. To compensate for the lack of on-site availability, some companies need to substitute frequent on-line interaction. Some jobs actually require frequent, or constant, interactivity with co-workers or customers. As such, it is very valid to want workers who are either in the same timezone, or willing to work the same office hours. Customer support operatives are one obvious example.

However, not all companies look for timezone relative workers for the same reason. For some it is to constantly monitor you, to compensate (or over-compensate) for the loss of on-site oversight. Some freelance sites even go to the extent of taking regular screenshots to prove you are “on-task”. In the end, this is another of those intrusive fallacies about ensuring productivity – all it ensures is that an employee has adapted their way of working to meet the demands of the monitoring process, not that the work is any more productive. Sometimes it can even be counter-productive. Such an employer is simply not comfortable with, and hence not invested in, remote working. Give the worker a small task, let them off to do it their own way – the way they themselves know is their most productive way of working – and then examine the work that’s returned to decide if you want to use them.

3. Commutable distance

For some jobs again, occasionally an in-person appearance may be necessary. If it is, then you’ll want to know that the worker can travel to the office, how long that might take, and what it might cost. Visiting an office in Australia from Europe for a weekly pow-wow can be a non-runner!

4. Employment legal zones

Finally, there are genuine concerns relating to employment or tax laws, and they will differ from country to country. Can you legally employ someone in that country? Do you pay tax for them; if not, do they pay tax in their jurisdiction? Is there a double-taxation agreement between the two countries? Are you sure the remote worker is paying taxes in their country, and if they aren’t, will those tax authorities come looking for you? It can be sticky. Again, having worked remotely cross-border with a verifiable track record of paying taxes is a benefit for both employer and remote worker.


When I started looking for remote work employers I thought it would be simple: I needed to do it; I had a track record of doing it; it would open up opportunities to work with telecommuting-friendly employers outside of my local area. It was the new big thing so surely everyone would want to do it – surely a win-win for all. I was wrong. The truth about getting remote work is that it is a lot harder than getting a standard on-site desk job. In the end, telecommuting does not work unless both employee and employer are genuinely invested in it. Employers with such vision are out there – but they are still a rarity.

Remote: Office Not Required.
TEDx Talk: Jason Fried – Why work doesn’t happen at work.

Scaling Subdomains with Redis and ZeroMQ

You know those nice web apps that give each customer their own customised subdomain? Nice to have, sure, but there are challenges in keeping this feature performant in a scalable server farm and it’s non-trivial, as I discovered.

To determine which customer organization a web request is for based on the subdomain you could make a call to the service layer to determine the organization ID for the subdomain, but that is massively inefficient – it is a guaranteed database round-trip per web request. To avoid this a cache server is the answer (such as Redis or memcached). Here is the configuration I use:


You might wonder why a Redis instance per web server instead of just having one shared Redis instance. There are two main reasons for this:

  1. Since this is a per-request cache fetch I want it to be as fast as possible to increase throughput, so accessing Redis on the same server will be faster than a call over the network.
  2. If one Redis instance is used per server then you are more likely to get a cache hit and so minimize the need to go to the database. Note that the configuration is important for this to work as intended (e.g. use one of the LRU options for Redis maxmemory-policy, and configure the load balancer to use the Source IP algorithm or better.)

The web app on each server is configured to handle the wildcard subdomain, and a request handler is used to query the subdomain from the Host header and determine if there is a cache entry for it. If there is no cache entry, a call to the service layer/database can then be made and the result stored in the cache. Once it is available, the web server can use this for whatever it likes. A common example is to use different or custom UI themes per organization, and so the web server would return different HTML or CSS links depending on the customer organization. With the necessary details cached on the web server, it can do its UI rendering tasks without having to call the service layer.
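That per-request lookup is a classic cache-aside pattern. A sketch of the handler logic (in Java for brevity, with a plain Map standing in for the Redis client and a stubbed-out service-layer call – the names are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class SubdomainResolver {
    private final Map<String, String> cache = new HashMap<String, String>(); //Stand-in for Redis

    //Pull the customer subdomain off the front of the Host header
    public static String extractSubdomain(String host) {
        int dot = host.indexOf('.');
        return dot < 0 ? host : host.substring(0, dot);
    }

    public String resolveOrganization(String hostHeader) {
        String subdomain = extractSubdomain(hostHeader);
        String orgId = cache.get(subdomain);         //1. Try the local cache first
        if (orgId == null) {
            orgId = lookupInServiceLayer(subdomain); //2. Cache miss: one service/database round-trip
            cache.put(subdomain, orgId);             //3. Store for subsequent requests
        }
        return orgId;
    }

    //Stubbed: the real version calls the service layer / database
    String lookupInServiceLayer(String subdomain) {
        return "org-for-" + subdomain;
    }
}
```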

Sounds good, right? Yes – except for one tiny problem. Anytime a cache is introduced to an architecture it raises the issue of keeping the cache in-sync with the authoritative data store (i.e. the database). To put it simply, if a customer changes the theme they want to use and saves that, the web server cache does not yet reflect the change and so will continue to deliver the old theme until the customer data is evicted from the Redis cache. The customer, having changed theme, will probably then keep making requests, refreshing the page, wondering why the theme hasn’t changed – because each request is keeping the old data in the cache! The problem is magnified because a stale cache entry for the customer could, potentially, now exist on more than one Redis instance – the downside of using multiple Redis instances.

But there is a solution. This is the kind of problem messaging is great at solving. Enter ZeroMQ.

The task of saving the customer record update is done by the service (or data) layer, so it has the responsibility of informing any component that needs to know the customer data is now potentially out-of-date. But how does the service layer know what components to send the message to? (There could be multiple web servers on-line caching the data.) The answer is that the service layer doesn’t need to know. It’s not its responsibility. It just needs to send the message – fire and forget.

ZeroMQ is a lightweight messaging option. We could use something like RabbitMQ, which can be configured for guaranteed end-to-end messaging, etc., but if the message being sent isn’t mission critical you can decide to trade reliability for performance. ZeroMQ is blazingly fast. MSMQ is also slower, and configuring and testing it is a bit more of a pain than using lightweight, embedded messaging components like ZeroMQ.

To handle the notification to multiple web servers I use the pub-sub messaging model. Basically, one web server instance (the primary server) can be set-up as the messaging hub. Yes, it is just one point of failure, but again, these messages aren’t mission critical. You could use a more elaborate message broker set-up with redundancy and message storage but that means trading performance. Let’s look at the ZeroMQ pub-sub implementation in practice.


We’ll use the Pub-Sub Proxy Pattern to handle the registration of web servers and forwarding of messages to them. As a web server comes on-line, it registers as a message subscriber on the XPUB socket on the primary web server (which is configured to listen). When a service tier server publishes a change message the NetMQ proxy (or hub) sends the message on to all subscribers. Each subscriber simply checks the contents of the message to see if the customer id is one it is holding in its Redis cache. If so, it refreshes the entry immediately.

ZeroMQ’s core is a native (unmanaged) implementation, so you’ve got (at present) two choices for using it in .NET. You can use clrzmq, which is a managed DLL wrapper around the unmanaged ZeroMQ library, or you can use NetMQ, which is a native C# implementation of the ZeroMQ functionality. At the time of writing NetMQ is not yet considered production ready, so it’s your call which to use – .NET code not production ready but easier to debug, or native code that will be harder to debug and is potentially open to memory leaks.

Thankfully, NetMQ has an implementation of the Proxy pattern ready built for us.

Here is a sample of the proxy code. Typically this would be run as a separate process or service on the primary web server, or you could run it as a Task or Thread in the main web app (but there are startup/shutdown issues involved which I won’t go into here.)

private void MessagingProxyTaskFunc()
{
    //Use the common context to create the mq sockets - created earlier and stored on the AppDomain
    NetMQContext cxt = (NetMQContext)AppDomain.CurrentDomain.GetData("WB_NetMQContext");

    using (NetMQSocket frontend = cxt.CreateXSubscriberSocket(), backend = cxt.CreateXPublisherSocket())
    {
        frontend.Bind("tcp://*:9100"); //Receive published messages on this server, port 9100
        backend.Bind("tcp://*:9101");  //Subscribers connect to this server, port 9101, listening for forwarded messages

        //Begin listening for published messages
        while (true)
        {
            if (taskCancelToken.IsCancellationRequested) break;

            //Blocks until message received or interrupted
            NetMQMessage message = frontend.ReceiveMessage();

            //Forward message to the subscribers to this proxy
            backend.SendMessage(message);
        }
    }
}
Next we need the business service to publish the message when the customer data changes:

public Organization SaveOrganization(Organization org)
{
    //Do data store logic here

    //Get the publisher socket, created when the business service was created using:
    //NetMQSocket socket = cxt.CreatePublisherSocket();
    //socket.Connect("tcp://<Primary Web Server IP Address>:9100");
    NetMQSocket socket = (NetMQSocket)AppDomain.CurrentDomain.GetData("WB_PubSocket");
    NetMQMessage msg = new NetMQMessage();
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes("ORG")));
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes(org.PublicID)));
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes(org.Serialize())));
    socket.SendMessage(msg); //Fire and forget - publish the change notification

    return org;
}

Finally, the code for the Message Listener on each individual web server. Again, this function needs to run as its own process/thread to avoid blocking and ensure timely response to messages:

private void MessagingTaskFunc()
{
    NetMQContext cxt = (NetMQContext)AppDomain.CurrentDomain.GetData("WB_NetMQContext");

    using (NetMQSocket socket = cxt.CreateSubscriberSocket())
    {
        socket.Connect("tcp://<Primary Web Server IP Address>:9101");
        socket.Subscribe(Encoding.UTF8.GetBytes("ORG")); //Subscriber only listens for certain message header

        while (true)
        {
            if (taskCancelToken.IsCancellationRequested) break;

            NetMQMessage data = null;
            try
            {
                data = socket.ReceiveMessage(); //This blocks until message received. data is null if interrupted.
                if (data == null) break;

                data.Pop(); //Pop first message frame - will always be "ORG"
                //Get the next message frames which should contain the ID of organization, and the data
                NetMQFrame frame = data.Pop();
                string orgID = Encoding.UTF8.GetString(frame.Buffer);

                //Check that the organization's ID is one cached in Redis. If so, refresh Redis data using
                //last message frame data.
            }
            catch (Exception)
            {
                //Handle subscription receive error gracefully - ensure listener loop keeps running
            }
        }
    }
}

There you have it – an architecture for scalable, synchronized, custom subdomains.

Localized String Templating in .NET

I’ve been building a mustache-style string template system for my SaaS app. It will mainly be used for e-mail notifications sent to users via Amazon’s SES. The idea is simple: you have a text template where you want to substitute the {{…}} tokens with send-time specific data:

Here's a sample template for {{Person.Firstname}}! 
Generated on {{CreationDate:d}}

There’s a couple of important features to note here. Firstly, you can reference nested properties in the tokens – handy for passing existing business entities. Secondly, you can add format strings to determine how the token value should be formatted. This is a nice-to-have which means that if you have a locale associated with the user, you can format dates in e-mails to the user’s locale, not the sending server’s locale (i.e. mm/dd/yy or dd/mm/yy)
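The locale effect is the same one the platform date formatters give you (a Java illustration of the principle; in the .NET code below it is the CultureInfo passed to string.Format that does this job):

```java
import java.text.DateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

public class LocaleDateDemo {
    //Format the same date with the "short" pattern of a given locale
    public static String shortDate(Date date, Locale locale) {
        return DateFormat.getDateInstance(DateFormat.SHORT, locale).format(date);
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.set(2013, Calendar.AUGUST, 23); //23 August 2013
        Date date = cal.getTime();

        //Irish English puts the day first; US English puts the month first
        System.out.println(shortDate(date, new Locale("en", "IE")));
        System.out.println(shortDate(date, Locale.US));
    }
}
```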

Here’s a simple example of how it would be called:

String template = @"{{Title}}
Here's a sample template for {{Person.Firstname}}! 
Generated on {{CreationDate:d}}";

PersonEntity Person = new PersonEntity();
Person.Firstname = "Brendan";
Person.Surname = "Whelan";
Person.Locale = "en-IE";

String localizedConcreteString = template.Inject(new {
                                                      Title = "Sample Injected Title",
                                                      CreationDate = DateTime.UtcNow,
                                                      Person = Person},
                                                 new CultureInfo(Person.Locale));
This generates localizedConcreteString as follows:

Sample Injected Title
Here's a sample template for Brendan!
Generated on 23/08/2013 

All the work for this is done by the Inject extension method, which means that it can be used generally, on any String where templating might be needed.

public static class StringInjectExtension
{
    public static string Inject(this string TemplateString, object InjectionObject)
    {
        return Inject(TemplateString, InjectionObject, CultureInfo.InvariantCulture);
    }

    public static string Inject(this string TemplateString, object InjectionObject, CultureInfo Culture)
    {
        return Inject(TemplateString, GetPropertyHash(InjectionObject), Culture);
    }

    public static string Inject(this string TemplateString, Hashtable values, CultureInfo Culture)
    {
        string result = TemplateString;

        //Assemble all tokens to replace
        Regex tokenRegex = new Regex("{{((?<noprops>\\w+(?:}}|(?<hasformat>:(.[^}]*))}}))|(?<hasprops>(\\w|\\.)+(?:}}|(?<hasformat>:(.[^}]*))}})))",
                                        RegexOptions.IgnoreCase | RegexOptions.Compiled);
        foreach (Match match in tokenRegex.Matches(TemplateString))
        {
            string replacement = match.ToString();

            //Get token version without mustache braces
            string shavenToken = match.ToString();
            shavenToken = shavenToken.Substring(2, shavenToken.Length - 4);

            string format = null;
            if (match.Groups["hasformat"].Length > 0)
            {
                format = match.Groups["hasformat"].ToString();
                shavenToken = shavenToken.Replace(format, null);
                format = format.Substring(1);
            }

            if (match.Groups["noprops"].Length > 0) //matched {{foo}}
            {
                replacement = FormatValue(values, shavenToken, format, Culture);
            }
            else //matched {{foo.bar}}
            {
                //Get the value of the nested property from the token and
                //store it in the value hashtable to avoid having to get it again (in case reused in current template)
                if (!values.ContainsKey(shavenToken))
                {
                    string[] properties = shavenToken.Split(new char[] { '.' });
                    object propertyObject = values[properties[0]];
                    for (int propIdx = 1; propIdx < properties.Length; propIdx++)
                    {
                        if (propertyObject == null) break;
                        propertyObject = GetPropValue(propertyObject, properties[propIdx]);
                    }
                    values.Add(shavenToken, propertyObject);
                }
                replacement = FormatValue(values, shavenToken, format, Culture);
            }
            result = result.Replace(match.ToString(), replacement);
        }
        return result;
    }

    private static string FormatValue(Hashtable values, string key, string format, CultureInfo culture)
    {
        var value = values[key];

        if (format != null)
        {
            //do a double string.Format - first to build the proper format string, and then to format the replacement value
            string attributeFormatString = string.Format(culture, "{{0:{0}}}", format);
            return string.Format(culture, attributeFormatString, value);
        }
        return (value ?? String.Empty).ToString();
    }

    private static object GetPropValue(object PropertyObject, string PropertyName)
    {
        PropertyDescriptorCollection props = TypeDescriptor.GetProperties(PropertyObject);
        PropertyDescriptor prop = props.Find(PropertyName, true);

        return prop == null ? null : prop.GetValue(PropertyObject);
    }

    private static Hashtable GetPropertyHash(object properties)
    {
        Hashtable values = new Hashtable();
        if (properties != null)
        {
            PropertyDescriptorCollection props = TypeDescriptor.GetProperties(properties);
            foreach (PropertyDescriptor prop in props)
                values.Add(prop.Name, prop.GetValue(properties));
        }
        return values;
    }
}

.NET WCF Custom Headers

I use server-side error logging to trap and record any exceptions an end-user might be receiving. It’s handy for pro-active debugging and it’s also useful for tracking any potential intrusion attempts. To that end, I need to have the end user’s IP address to see if the intrusion attempts are all coming from an IP address or range that can potentially be blocked. In an N-tiered SOA app though, the service call that logs the exception will be in a different tier (and potentially on a different server) to the end-user. That means that the caller IP address for the service’s Log function will actually be the web server’s IP address, rather than the end-user’s browser IP address.

WCF allows for custom headers and it provides an ideal way to pass the end-user’s IP address (or any metadata) to the service layer from the web layer.

Firstly, we need to add the user’s IP address to every WCF call. This is done using a custom IClientMessageInspector to add a message header.

public class ClientMessageInspector : IClientMessageInspector
{
        private const string HEADER_URI_NAMESPACE = "";
        private const string HEADER_SOURCE_ADDRESS = "SOURCE_ADDRESS";

        public ClientMessageInspector() {}

        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState) {}

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
        {
            if (HttpContext.Current != null)
            {
                MessageHeader header = null;
                try
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, HttpContext.Current.Request.UserHostAddress);
                }
                catch (Exception)
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, null);
                }
                request.Headers.Add(header);
            }
            else if (OperationContext.Current != null)
            {
                //If service layer does a nested call to another service layer method, ensure that original web caller IP is passed through also
                MessageHeader header = null;
                int index = OperationContext.Current.IncomingMessageHeaders.FindHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE);
                if (index > -1)
                {
                    string remoteAddress = OperationContext.Current.IncomingMessageHeaders.GetHeader<string>(index);
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, remoteAddress);
                }
                else
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, null);
                }
                request.Headers.Add(header);
            }

            return null;
        }
}


To make WCF service calls use this inspector, a behavior and behavior extension is needed:

public class EndpointBehavior : IEndpointBehavior
{
        public EndpointBehavior() { }

        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
        {
            //Attach the inspector so every outgoing client message gets the header
            clientRuntime.MessageInspectors.Add(new ClientMessageInspector());
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher) { }

        public void Validate(ServiceEndpoint endpoint) { }
}

public class BehaviorExtension : BehaviorExtensionElement
{
        public override Type BehaviorType
        {
            get { return typeof(EndpointBehavior); }
        }

        protected override object CreateBehavior()
        {
            return new EndpointBehavior();
        }
}

Now we can use the extension in the config file for the client endpoints.

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="CustomExtension" type="Example.Service.BehaviorExtension, Example.Service, Version=, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="ClientEndpointBehavior">
        <CustomExtension />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <bindings>
    <netTcpBinding>
      <binding name="ExampleServiceClientBinding" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint address="net.tcp://localhost:8091/CustomExample/DataService" binding="netTcpBinding" bindingConfiguration="ExampleServiceClientBinding" contract="Example.Service.Contract.IDataService" name="ExampleDataServiceClientEndpoint" behaviorConfiguration="ClientEndpointBehavior" />
  </client>
</system.serviceModel>

Using SvcTraceViewer we can see the new header being passed on the SOAP call:

<s:Envelope xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Header>
    <SOURCE_ADDRESS>…</SOURCE_ADDRESS>
    …
  </s:Header>
  …
</s:Envelope>



Finally, to access this in the service code, I add a helper method to the service base class. A call to GetServiceCallerRemoteAddress() anywhere in service code will always give the IP address of the end-user caller of the service method.

    public abstract class BaseDataService
    {
        protected string GetServiceCallerRemoteAddress()
        {
            int index = OperationContext.Current.IncomingMessageHeaders.FindHeader("SOURCE_ADDRESS", "");
            string remoteAddress = null;
            if (index > -1)
                remoteAddress = OperationContext.Current.IncomingMessageHeaders.GetHeader<string>(index);
            return remoteAddress;
        }
    }

Android OpenGLES Water Caustics

I like water caustics – those sinuous reflections you get from the surface of water. I’ve always wanted a good “water pool” type animated background. The apps on the store don’t quite do what I’ve been looking for, so I wrote one.

Initially, I dipped back into 3D programming to see if the caustics could be rendered in real-time on the device. Water caustics are notorious for being computationally intensive to get looking right, so I was sceptical it could be done on a mobile device, and I was right to be! Even using native C++ with the Android NDK, fast Fourier transforms to smooth the wave surface, and OpenGL for sunlight ray rendering (using the fastest caustics algorithm I could find), the result was just too disappointing at 25 fps.


Mobile processors are good, but they are not that good yet. Getting a jitter-free real-time effect meant sacrificing so much detail on my HTC Desire that it just looked like a smoke filter effect. Not to mention the drain on the battery and other processes. Still, it was fun to get back to OpenGL 3D rendering again.

In the end, the best solution was to let a desktop app do the hard work of rendering a set of tileable, loopable animation frames. I still use OpenGL and the NDK for the final live wallpaper app, but this time just to blit the frames onto the surface. It’s just about fast enough on my device and OS and looks pretty cool in motion. You’ll just have to believe me! (or else download it from my apps page).


.NET Scalable Server Push Notifications with SignalR and Redis

Modern web applications sometimes need to notify a logged-in user of an event that occurs on the server. Doing so involves sending data to the browser when the event happens, which is not easily achieved with the standard request-response model used by the HTTP protocol. A notification to the browser needs what’s known as “server push” technology: the server cannot “push” a notification unless there is an open, dedicated connection to the client. HTML5-capable client browsers provide the WebSocket mechanism for this, but it is not widely available yet. Most browsers need to mimic push behavior, for example by using a long-polling technique in JavaScript, which simply means making frequent, lightweight requests to the server, similar to AJAX.

To reduce the complexity of coding for the different browser capabilities, the excellent SignalR library is available for .NET projects – it supports the transport mechanisms mentioned, and some others. It automatically selects the best (read: most performant) transport for the capabilities of the given browser and server combination. Crucially, it provides a means to configure itself so the developer can optimize it for performance and scalability. Using it for server-initiated notifications is a “no-brainer”.

Here’s an example of how to set up such a notification mechanism.

To begin with, install the required libraries into the project using NuGet.

PM> Install-Package Microsoft.AspNet.SignalR
PM> Install-Package ServiceStack.Redis
PM> Install-Package Microsoft.AspNet.SignalR.Redis

You can see that Redis is used too. This is to allow for web-farm scaling. Redis stores the SignalR connections so they will always be available and synchronized no matter which web server the SignalR polling request arrives at. This can be achieved (depending on architectural demands) using just one Redis server instance, or by running multiple replicated Redis server instances (that’s outside the scope of this example, but it’s easy to set up).

Next, configure SignalR to use Redis as the backing store and map the SignalR route. This is done as part of RegisterRoutes (Global.asax.cs).

public static void RegisterRoutes(RouteCollection routes)
{
      //Use redis for signalr connections - set redis server connection details here
      GlobalHost.DependencyResolver.UseRedis("localhost", 6379, null, "WBSignalR");

      // Register the default SignalR hubs route: ~/signalr
      // Has to be defined before the default route, or it is overridden
      RouteTable.Routes.MapHubs(new HubConfiguration { EnableDetailedErrors = true });

      //All other mvc routes are defined here
}

A SignalR Hub subclass is needed to contain the server side code that both the SignalR client and server will use.

public class NotificationHub : Hub

We also use this class to keep the server aware of the open SignalR connections and – more importantly – which connections relate to which user. The events on the Hub class allow us to keep this up-to-date connection list.

There’s a lot to consider in the code for this class. The full code can be downloaded – NotificationHub.cs. Let’s look at it piece-by-piece.

The first thing is the nested ConnectionDetail class that is used to store the details of the connection in Redis.

[ProtoContract]
public class ConnectionDetail
{
    public ConnectionDetail() { }

    [ProtoMember(1)]
    public string ConnectionId { get; set; }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;
        if (obj.GetType() != this.GetType()) return false;

        return (obj as ConnectionDetail).ConnectionId.Equals(this.ConnectionId);
    }

    public override int GetHashCode()
    {
        return ConnectionId != null ? ConnectionId.GetHashCode() : 0;
    }
}
This class only has one property – the SignalR ConnectionId string. It is better to use a class instead of just the connection id string because we can extend it to store other details about the connection that might later affect what message we send, or how it should be treated on the client. For example, we could record and store the type of browser associated with the connection (mobile, etc.)

The Equals implementation is needed to check if the connection object is already part of the user’s connection collection or not.
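As a quick illustration (hypothetical values, using the ConnectionDetail class above): List&lt;T&gt;.Contains calls the overridden Equals on each element, so a second instance carrying the same ConnectionId is treated as the same connection:

```csharp
List<ConnectionDetail> list = new List<ConnectionDetail>();
list.Add(new ConnectionDetail() { ConnectionId = "abc123" });

//A different object instance, but the same ConnectionId
ConnectionDetail duplicate = new ConnectionDetail() { ConnectionId = "abc123" };

//Contains uses our Equals override, so this is true and the
//duplicate won't be stored again by AddNotificationConnection
bool alreadyStored = list.Contains(duplicate);   // true
```

Without the override, Contains would fall back to reference equality and every reconnect would add a duplicate entry.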

To store the connection detail object in Redis, it is serialized to a byte array using protocol buffers – hence the ProtoBuf attributes. Protocol buffers are a highly performant way of serializing/deserializing data. If you’re not familiar with them, you really should check them out.
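The round trip itself is only a few lines. Here’s a sketch of a pair of hypothetical helper methods (assuming the protobuf-net package, and that ConnectionDetail is decorated with [ProtoContract]/[ProtoMember] attributes):

```csharp
using System.Collections.Generic;
using System.IO;

//Serialize the connection list to the byte[] that gets stored in Redis
private static byte[] SerializeConnections(List<ConnectionDetail> list)
{
    using (MemoryStream stream = new MemoryStream())
    {
        ProtoBuf.Serializer.Serialize<List<ConnectionDetail>>(stream, list);
        return stream.ToArray();   //Tidier than seeking and reading the buffer manually
    }
}

//Deserialize the byte[] read back from Redis
private static List<ConnectionDetail> DeserializeConnections(byte[] data)
{
    using (MemoryStream stream = new MemoryStream(data))
    {
        return ProtoBuf.Serializer.Deserialize<List<ConnectionDetail>>(stream);
    }
}
```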

Next, we use the ServiceStack.Redis client to make all calls to Redis to store the list of connections per user. This is fairly trivial to set up.

private RedisClient client;

public NotificationHub()
{
    client = new RedisClient();   //Default connection - localhost:6379
}

The connection to Redis is made when we want to add or remove a connection from the user’s connection list. Two methods provide that functionality – AddNotificationConnection and RemoveNotificationConnection. They are very similar, so I’ll just explain the first one.

public void AddNotificationConnection(string username, string connectionid)
{
    string key = String.Format("{0}:{1}", REDIS_NOTIF_PREFIX, username);

    List<ConnectionDetail> list = new List<ConnectionDetail>();
    byte[] data = client.Get(key);
    MemoryStream stream;
    if (data != null)
    {
        stream = new MemoryStream(data);
        list = ProtoBuf.Serializer.Deserialize<List<ConnectionDetail>>(stream);
    }

    ConnectionDetail cdetail = new ConnectionDetail() { ConnectionId = connectionid };
    if (!list.Contains(cdetail))
        list.Add(cdetail);

    stream = new MemoryStream();
    ProtoBuf.Serializer.Serialize<List<ConnectionDetail>>(stream, list);
    stream.Seek(0, SeekOrigin.Begin);
    data = new byte[stream.Length];
    stream.Read(data, 0, data.Length);

    using (var t = client.CreateTransaction())
    {
        t.QueueCommand(c => c.Set(key, data));
        t.Commit();
    }
}
The code looks for data in Redis under a unique key that is a combination of a constant prefix and the username. It’s keyed this way because we can do a fast key lookup, retrieve and lock a small block of data, and so keep the operation atomic – maintaining the integrity of the user’s connection list in an environment where the user could open a new connection via a different web server at any time. Keying it per user, rather than storing a list of connections for all users under one key, also avoids creating locking bottlenecks at scale.

Next, we use the connection events of the Hub class to maintain the user’s list, e.g.:

public override Task OnConnected()
{
    string username = GetConnectionUser();

    if (username != null)
        AddNotificationConnection(username, Context.ConnectionId);

    return base.OnConnected();
}

It’s fairly simple – the ConnectionId is taken from the Hub Context object and stored. The main issue here is how to get the user name associated with the connection. The usual HttpContext.User is not available in the SignalR Hub implementation. SignalR uses OWIN for its HTTP pipeline, not the usual MVC pipeline, and one of the consequences of this is that SignalR does not load the session (based on the session cookie). However, the browser cookies are sent with the SignalR request. In this case, I use FormsAuthentication in the web application, so the user’s name is stored encrypted in the ticket when the user logs in. GetConnectionUser gets this data from the FormsAuthentication cookie.

private string GetConnectionUser()
{
    if (Context.RequestCookies.ContainsKey(FormsAuthentication.FormsCookieName))
    {
        string cookie = Context.RequestCookies[FormsAuthentication.FormsCookieName].Value;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie);
        return ticket.UserData;
    }

    return null;
}

The final piece of the Hub code is the function that actually sends the message to the user’s client browser sessions. It will invoke the corresponding ReceiveNotification function in JavaScript on the client.

public bool SendNotificationToUser(string username, string message)
{
    List<ConnectionDetail> list = GetNotificationConnections(username);
    foreach (ConnectionDetail detail in list)
    {
        //Invoke the ReceiveNotification handler on each of the user's browser connections
        Clients.Client(detail.ConnectionId).receiveNotification(message);
    }

    return list.Count > 0;
}
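GetNotificationConnections isn’t shown above – as a rough sketch, assuming it lives on the same hub class and uses the same key scheme and protobuf serialization as AddNotificationConnection, it could look like this:

```csharp
public List<ConnectionDetail> GetNotificationConnections(string username)
{
    //Same per-user key scheme as AddNotificationConnection: "<prefix>:<username>"
    string key = String.Format("{0}:{1}", REDIS_NOTIF_PREFIX, username);

    byte[] data = client.Get(key);
    if (data == null)
        return new List<ConnectionDetail>();   //User has no open connections

    using (MemoryStream stream = new MemoryStream(data))
    {
        return ProtoBuf.Serializer.Deserialize<List<ConnectionDetail>>(stream);
    }
}
```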

To test this, we will call it from a controller action from a test page.


public ActionResult NotfTest(string touser, string message)
{
    var hubConnection = new HubConnection("http://localhost/SignalR.Notification.Sample");
    IHubProxy hubProxy = hubConnection.CreateHubProxy("NotificationHub");

    hubConnection.Start().Wait(2000); //Async call, 2s wait - should use await in C# 5

    hubProxy.Invoke("SendNotificationToUser", new object[] { touser, message });

    return View("NotfTestSent");
}

The call is made by the server creating a SignalR hub connection of its own and then sending a request to the Hub’s SendNotificationToUser function (similar to an RPC call).

That’s all the server side code, now for the client side.

To use the client-side features of SignalR, we need to include the SignalR JavaScript file and the server-generated hubs JavaScript.

How you want to display the notification in the browser is application-dependent, and so up to you. For this, I use the jQuery qtip plugin to show it as a tooltip pop-up.

    <!-- Add Script includes -->
    <script src="" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.signalR-1.1.2.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Near the end of html page (or near the end of the template page html), some javascript makes the connection to the hub once the page is loaded. Finally, define the client-side implementation ReceiveNotification to handle the display of the message.

<script type="text/javascript">
    $(function () {

        // Declare a proxy to reference the server-side signalr hub class. 
        var notfHub = $.connection.notificationHub;

        //Link a client-side function to the server hub event
        notfHub.client.receiveNotification = function (message) {

            //Use qtip library to show a tooltip message (on an assumed target element)
            $('#notification').qtip({
                content: {
                    text: message,
                    title: 'Notification',
                    button: true
                },
                position: {
                    at: 'top right',
                    my: 'bottom left'
                },
                show: {
                    delay: 0,
                    ready: true
                }
            });
        };

        //Make a connection to the server hubs
        $.connection.hub.start();
    });
</script>
Voila. Server-side push notifications to any number of users, no matter how many places each is logged in, and whatever browser they use.


TeraServer 2

I need an upgraded server. I want something that can handle replication of a SaaS database with on-disc encryption. Keeping with my habit of building my own machines, it would require more space than my current TeraServer could provide, and frankly the disk access times are a little slow now on the old one (I suspect the RAID backplane I sourced wasn’t capable of delivering the SATA-2 speeds the drives could manage). It’s being used for far more applications than I had planned, too.

5 years is a long time in hardware and I’m amazed to see how storage costs have dropped further. This time I could get two disks providing full-drive RAID redundancy at a capacity of 3TB, at SATA 6Gb/s speeds. That meant I could go for a 1U rack-mount case (the C2-RACK-V3). This case also supports a PCI-E card – so I can add a video capture card for some security monitoring using ZoneMinder too.


The drives are fast and surprisingly quiet too (Seagate 3.5″ 3TB Barracuda 7200RPM). To ramp up the system performance of this one even further I used a Crucial M4 SSD for the Linux system partition. That was a new experience! The thing boots in seconds.

With the spinning drives mounted using rubber washers to reduce vibration and noise, the three drives fitted in comfortably, with room to spare.


For the mainboard, I wanted something with a bit more processor power (it would need to handle Linux software RAID and video capture analysis for ZoneMinder). It also needed to fit the case, and I didn’t want any cooling fan that would add to the height and noise. Eventually, I found an Asus C60-M1 at a very good price from an on-line store in Germany. It was a compromise – anything with a higher clock speed than the previous server board wouldn’t come fanless, so I settled for the same clock speed (but twice the cores helps!). It fitted OK in height, but there was a bit too much of a gap between the board and the chassis back-plate. I don’t think any of the provided blanking plates would cover the entire back-plate either. That wasn’t important for me anyway.

4GB of 1333MHz DDR3 RAM would be plenty to handle the extra Linux apps I have been (and will be) running on the server. With that amount of RAM, I could even mount temporary file systems on RAM-backed tmpfs to make it even faster.


I installed CentOS 6, configured the drives, and I’m amazed at how fast this little mini-ITX powered beast flies compared to my previous creation. 5 years is a long time in hardware.