Bluetooth Command Queuing for Android

Bluetooth 4 (aka BLE – Bluetooth Low Energy) on Android has some important limitations that need to be addressed in apps requiring a high degree of user interactivity with a Bluetooth device. When you set out to write such an app, you’ll quickly realize that you need some sort of Bluetooth command queuing system.

First, a little background. Recently I had to develop a client app that would act as a control panel for a prototype hardware device using BLE. This app had to allow commands to be sent to the device on-demand from the user through a UI with a lot of “moving parts”. I’ve worked with synchronizing fitness wearables, but this was a whole different ball game! Fitness wearables tend to be a simple sync-and-download mechanism, but for this new device, users could tap around a set of controls, with each interaction issuing a command (or even a set of commands) to the device. Add to this the complexity that the app would also need to receive ad-hoc notifications at any time from the device, and would also have to send periodic timed commands in the background!

Since Android 4.3 the android.bluetooth classes support BLE, but I came to realize that they would not be enough this time to deal with the complexity alone, because Android’s underlying BLE implementation is a bit quirky. The infamous Google issue 58381 illustrates the point. Other helpful nuggets of information can be gleaned from this Stack Overflow post. The critical piece of information from user OneWorld in that posting is: “Gatt always can process one command at a time. If several commands get called short after another, the first one gets cancelled due to the synchronous nature of the gatt implementation.”

I’ve come up against this problem myself. You can issue multiple BLE commands ad-hoc from a multi-threaded app, but BLE won’t respond well. If you issue a second command before the stack has had a chance to deal with the previous one, the BLE stack can (particularly on Android 4.3) get into a state where it won’t respond at all anymore, even requiring a manual user restart of Bluetooth! Not ideal. You could insert a delay before sending each command, but a delay long enough to cover all hardware and BLE conditions adds unnecessary latency to every command. Fortunately, in my app, I could assume (like most BLE applications) that the device responds to a command, so I could send the next command immediately after receiving the command response back (via a BLE notification). To create a robust, stable experience for the user, I had to implement a queuing layer on top of Android’s bluetooth classes so that the Android BLE stack was assured of only having to deal with one command (or event) at a time.

Let’s look at the details of implementing the command queue.
(The code for a sample app showing the concept is in my GitHub.)

The first thing we need is an object to represent a command. This can be extended in subclasses to do any kind of fancy thing you want, e.g. perform multiple writes to different characteristics.
For this example, we’ll just have a simple command that reads the device’s serial number. To see the queue in action, the DelayCommand also pauses briefly before doing the read, so that you can see commands backing up in the queue.

public class DelayCommand extends BluetoothCommand {
    public void execute(BluetoothGatt gatt){
        try {
            synchronized (Thread.currentThread()) {
                Thread.currentThread().wait(2000); //Brief pause so commands visibly back up in the queue
            }
        }catch(InterruptedException e){
            Thread.currentThread().interrupt();
        }
        //As an example, read from serial number characteristic
        //(the UUID constants here are placeholders for your device's actual UUIDs)
        gatt.readCharacteristic(gatt.getService(SERVICE_UUID).getCharacteristic(SERIAL_NUMBER_UUID));
    }
}

The main activity can create a command object and add it to the queue by calling the service’s queueCommand method, and that’s where we really see the queue in action:

synchronized (mCommandQueue) {
    mCommandQueue.add(command);  //Add to end of queue
    ExecuteCommandRunnable runnable = new ExecuteCommandRunnable(command);
    mCommandExecutor.execute(runnable); //mCommandExecutor: the single-thread executor described below
}

The first thing to note is that mCommandQueue is simply a Java LinkedList, which works as a FIFO queue.

When the command is added to the queue a runnable is also created that actually executes the command. These runnables are executed on a Single Thread Executor to ensure only one is run at a time. Here’s the Runnable code that actually does the work of executing the command:

    class ExecuteCommandRunnable implements Runnable{

        BluetoothCommand mCommand;

        public ExecuteCommandRunnable(BluetoothCommand command) {
            mCommand = command;
        }

        @Override
        public void run() {
            //Acquire semaphore lock to ensure no other operations can run until this one completes
            mCommandLock.acquireUninterruptibly();
            //Tell the command to start itself.
            mCommand.execute(mGatt); //mGatt: the service's BluetoothGatt instance
        }
    }

You might think it’s enough to just run a queue of runnables off a single thread to implement the queue, right? Wrong. Here’s why. When mCommand.execute is called, it starts the Bluetooth read and returns immediately, which would let the next runnable run its command. Remember what I said about Android BLE not liking a new command being issued before the previous one has responded? So here’s the trick: we also need a lock that prevents the next runnable from executing its command until the previous command has a response. This is done in the BLE GattCallback, where the characteristic read response actually comes back to the caller:

    public void onCharacteristicRead(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) {
        super.onCharacteristicRead(gatt, characteristic, status);
        //... Send string response to listener here..
        dequeueCommand(); //Response received - release the lock so the next command can run
    }

    protected void dequeueCommand(){
        mCommandLock.release();
    }

DequeueCommand releases the lock, and the next runnable waiting to acquire the lock can then run its own command.
The lock is just a semaphore with a limit of 1 permit:

Semaphore mCommandLock = new Semaphore(1,true);
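Stripped of the Android specifics, the whole arrangement – a single-thread executor feeding commands through a one-permit semaphore, with the release happening on the asynchronous “response” – can be sketched in plain, runnable Java (the fake device-response thread and all names here are illustrative, not from the sample app):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class CommandQueueSketch {
    static final Semaphore commandLock = new Semaphore(1, true);
    static final ExecutorService executor = Executors.newSingleThreadExecutor();
    static final List<String> log = new CopyOnWriteArrayList<>();

    // Queue a command: the single-thread executor keeps submissions ordered,
    // and the semaphore stops the next command starting before the previous
    // command's "response" has arrived.
    static void queueCommand(final String name) {
        executor.execute(new Runnable() {
            public void run() {
                commandLock.acquireUninterruptibly(); // wait for the previous response
                log.add(name + ":sent");
                // Simulate the device responding asynchronously via a callback
                new Thread(new Runnable() {
                    public void run() {
                        try { Thread.sleep(50); } catch (InterruptedException e) { }
                        log.add(name + ":response");
                        commandLock.release(); // the dequeueCommand() equivalent
                    }
                }).start();
            }
        });
    }

    public static List<String> runDemo() {
        queueCommand("A");
        queueCommand("B");
        try { Thread.sleep(300); } catch (InterruptedException e) { }
        executor.shutdown();
        return log;
    }

    public static void main(String[] args) {
        // Commands never interleave: A:sent, A:response, B:sent, B:response
        System.out.println(runDemo());
    }
}
```

Remove either piece and the guarantee breaks: the executor alone serializes *submission* of commands, but only the semaphore serializes command *completion*.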

And there you have it – the beginnings of a BLE command queue.

A more complete implementation has to allow for other important cases, such as the device failing to respond to the command at all (quick answer: implement a time-out that releases the command lock and moves on to the next command), but that’s for another post!
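As a taste of that, the lock acquisition could use a timed wait instead of blocking forever – a minimal sketch (the method names are my own, not from the sample app):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TimeoutLockSketch {
    private final Semaphore commandLock = new Semaphore(1, true);

    // Try to take the lock before running the next command; if the previous
    // command never produced a response, give up after the timeout instead of
    // stalling the queue forever. Returns whether the lock was obtained.
    public boolean acquireForCommand(long timeoutMillis) {
        try {
            return commandLock.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Called from the Gatt callback when the command's response arrives
    public void onCommandResponse() {
        commandLock.release();
    }
}
```

On timeout, a real implementation would log the unresponsive command and release or re-create the lock before continuing.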

Scaling Subdomains with Redis and ZeroMQ

You know those nice web apps that allow customised subdomains per customer. Nice to have, sure, but keeping this feature performant in a scalable server farm is non-trivial, as I discovered.

To determine which customer organization a web request is for based on the subdomain, you could make a call to the service layer to look up the organization ID for the subdomain, but that is massively inefficient – a guaranteed database round-trip per web request. To avoid this, a cache server (such as Redis or memcached) is the answer. Here is the configuration I use:


You might wonder why a Redis instance per web server instead of just one shared Redis instance. There are two main reasons for this:

  1. Since this is a per-request cache fetch I want it to be as fast as possible to increase throughput, so accessing Redis on the same server will be faster than a call over the network.
  2. If one Redis instance is used per server then you are more likely to get a cache hit and so minimize the need to go to the database. Note that the configuration is important for this to work as intended (e.g. use one of the LRU options for Redis maxmemory-policy, and configure the load balancer to use the Source IP algorithm or better.)

The web app on each server is configured to handle the wildcard domain (*), and a request handler is used to inspect the subdomain in the Host header and determine if there is a cache entry for it. If there is no cache entry, a call to the service layer/database can be made and the result stored in the cache. Once it is available, the web server can use this data for whatever it likes. A common example is using different or custom UI themes per organization: the web server returns different HTML or CSS links depending on the customer organization. With the necessary details cached on the web server, it can do its UI rendering tasks without having to call the service layer.
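That read-through handler logic can be sketched in a few lines (plain Java, with a Map standing in for the local Redis instance; all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class SubdomainCacheSketch {
    private final Map<String, String> cache = new HashMap<>(); // stands in for local Redis
    int databaseCalls = 0; // exposed only so the demo can show the cache working

    // Pretend service-layer/database lookup of subdomain -> organization id
    private String lookupOrganization(String subdomain) {
        databaseCalls++;
        return "org-for-" + subdomain;
    }

    // Per-request handler: only the first request for a subdomain hits the database
    public String resolve(String hostHeader) {
        String subdomain = hostHeader.split("\\.")[0];
        String orgId = cache.get(subdomain);
        if (orgId == null) {
            orgId = lookupOrganization(subdomain);
            cache.put(subdomain, orgId);
        }
        return orgId;
    }
}
```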

Sounds good, right? Yes – except for one tiny problem. Anytime a cache is introduced to an architecture it raises the issue of keeping the cache in-sync with the authoritative data store (i.e. the database). To put it simply, if a customer changes the theme they want to use and saves that, the web server cache does not yet reflect the change and so will continue to deliver the old theme until the customer data is evicted from the Redis cache. The customer, having changed theme, will probably then keep making requests, refreshing the page, wondering why the theme hasn’t changed – because each request is keeping the old data in the cache! The problem is magnified because a stale cache entry for the customer could, potentially, now exist on more than one Redis instance – the downside of using multiple Redis instances.

But there is a solution. This is the kind of problem messaging is great at solving. Enter ZeroMQ.

The task of saving the customer record update is done by the service (or data) layer, so it has the responsibility of informing any component that needs to know that the customer data is now potentially out-of-date. But how does the service layer know which components to send the message to? (There could be multiple web servers on-line caching the data.) The answer is that the service layer server does not need to know. It’s not its responsibility. It just needs to send the message – fire and forget.

ZeroMQ is a lightweight messaging option. We could use something like RabbitMQ, which can be configured for guaranteed end-to-end messaging, but if the message being sent isn’t mission critical you can trade reliability for performance, and ZeroMQ is blazingly fast. MSMQ is also slower, and configuring and testing it is more of a pain than using a lightweight, embedded messaging component like ZeroMQ.

To handle the notification to multiple web servers I use the pub-sub messaging model. Basically, one web server instance (the primary server) can be set up as the messaging hub. Yes, it is a single point of failure, but again, these messages aren’t mission critical. You could use a more elaborate message broker set-up with redundancy and message storage, but that means trading away performance. Let’s look at the ZeroMQ pub-sub implementation in practice.


We’ll use the Pub-Sub Proxy Pattern to handle the registration of web servers and the forwarding of messages to them. As a web server comes on-line, it registers as a message subscriber on the XPUB socket of the primary web server (which is configured to listen). When a service tier server publishes a change message, the NetMQ proxy (or hub) forwards the message to all subscribers. Each subscriber simply checks the contents of the message to see if the customer ID is one it is holding in its Redis cache. If so, it refreshes the entry immediately.
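Ignoring the sockets for a moment, the moving parts behave like this minimal in-process sketch (Java is used here purely for illustration; the real implementation below is NetMQ/C#):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PubSubProxySketch {

    // A subscriber is one web server: it refreshes its local cache entry
    // only if it is actually caching that customer.
    public static class WebServerSubscriber {
        public final Map<String, String> localCache = new HashMap<>();
        void onMessage(String orgId, String newData) {
            if (localCache.containsKey(orgId)) {
                localCache.put(orgId, newData); // refresh the stale entry immediately
            }
        }
    }

    // The proxy/hub simply forwards every published message to all subscribers.
    public static class Proxy {
        public final List<WebServerSubscriber> subscribers = new ArrayList<>();
        public void publish(String orgId, String newData) {
            for (WebServerSubscriber s : subscribers) s.onMessage(orgId, newData);
        }
    }

    public static void main(String[] args) {
        Proxy proxy = new Proxy();
        WebServerSubscriber serverA = new WebServerSubscriber();
        WebServerSubscriber serverB = new WebServerSubscriber();
        proxy.subscribers.add(serverA);
        proxy.subscribers.add(serverB);

        serverA.localCache.put("org1", "old-theme"); // only server A caches org1
        proxy.publish("org1", "new-theme");          // service layer fires and forgets

        System.out.println(serverA.localCache.get("org1"));       // refreshed
        System.out.println(serverB.localCache.containsKey("org1")); // untouched
    }
}
```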

ZeroMQ’s core is native code, so you’ve got (at present) two choices for using it in .NET. You can use clrzmq, which is a managed wrapper around the unmanaged ZeroMQ library, or you can use NetMQ, which is a pure C# implementation of the ZeroMQ functionality. At the time of writing NetMQ is not yet considered production ready, so it’s your call which to use – .NET code that isn’t production ready but is easier to debug, or native code that is harder to debug and potentially open to memory leaks.

Thankfully, NetMQ has an implementation of the Proxy pattern ready-built for us.

Here is a sample of the proxy code. Typically this would be run as a separate process or service on the primary web server, or you could run it as a Task or Thread in the main web app (but there are startup/shutdown issues involved which I won’t go into here.)

private void MessagingProxyTaskFunc()
{
    //Use the common context to create the mq sockets - created earlier and stored on the AppDomain
    NetMQContext cxt = (NetMQContext)AppDomain.CurrentDomain.GetData("WB_NetMQContext");

    using (NetMQSocket frontend = cxt.CreateXSubscriberSocket(), backend = cxt.CreateXPublisherSocket())
    {
        frontend.Bind("tcp://*:9100"); //Receive published messages on this server, port 9100
        backend.Bind("tcp://*:9101");  //Subscribers connect to this server, port 9101, listening for forwarded messages

        //NetMQ's ready-built proxy could do the forwarding for us:
        //new NetMQ.Proxy(frontend, backend, null).Start();
        //Here we run the loop ourselves so we can honour the cancellation token
        while (true)
        {
            if (taskCancelToken.IsCancellationRequested) break;

            //Blocks until message received or interrupted
            NetMQMessage message = frontend.ReceiveMessage();

            //Forward message to the subscribers to this proxy
            backend.SendMessage(message);
        }
    }
}
Next we need the business service to publish the message when the customer data changes:

public Organization SaveOrganization(Organization org)
{
    //Do data store logic here

    //Get the publisher socket, created when the business service was created using:
    //NetMQSocket socket = cxt.CreatePublisherSocket();
    //socket.Connect("tcp://<Primary Web Server IP Address>:9100");

    NetMQSocket socket = (NetMQSocket)AppDomain.CurrentDomain.GetData("WB_PubSocket");
    NetMQMessage msg = new NetMQMessage();
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes("ORG")));
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes(org.PublicID)));
    msg.Append(new NetMQFrame(Encoding.UTF8.GetBytes(org.Serialize())));
    socket.SendMessage(msg); //Fire and forget

    return org;
}

Finally, the code for the Message Listener on each individual web server. Again, this function needs to run as its own process/thread to avoid blocking and ensure timely response to messages:

private void MessagingTaskFunc()
{
    NetMQContext cxt = (NetMQContext)AppDomain.CurrentDomain.GetData("WB_NetMQContext");

    using (NetMQSocket socket = cxt.CreateSubscriberSocket())
    {
        socket.Connect("tcp://<Primary Web Server IP Address>:9101");
        socket.Subscribe(Encoding.UTF8.GetBytes("ORG")); //Subscriber only listens for certain message header

        while (true)
        {
            if (taskCancelToken.IsCancellationRequested) break;

            NetMQMessage data = null;
            try
            {
                data = socket.ReceiveMessage(); //This blocks until message received. data is null if interrupted.
                if (data == null) break;

                data.Pop(); //Pop first message frame - will always be "ORG"
                //Get the next message frames which should contain the ID of organization, and the data
                NetMQFrame frame = data.Pop();
                string orgID = Encoding.UTF8.GetString(frame.Buffer);

                //Check that the organization's ID is one cached in Redis. If so, refresh Redis data using
                //last message frame data.
            }
            catch (Exception)
            {
                //Handle subscription receive error gracefully - ensure listener loop keeps running
            }
        }
    }
}

There you have it – an architecture for scalable, synchronized, custom subdomains.

Localized String Templating in .NET

I’ve been building a mustache-style string template system for my SaaS app. It will mainly be used for e-mail notifications sent to users via Amazon’s SES. The idea is simple: you have a text template where you want to substitute the {{…}} tokens with send-time specific data:

Here's a sample template for {{Person.Firstname}}! 
Generated on {{CreationDate:d}}

There are a couple of important features to note here. Firstly, you can reference nested properties in the tokens – handy for passing existing business entities. Secondly, you can add format strings to control how the token value should be formatted. This is a nice-to-have which means that if you have a locale associated with the user, you can format dates in e-mails to the user’s locale, not the sending server’s locale (i.e. mm/dd/yy vs dd/mm/yy).
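To see the locale point concretely, here is the same date rendered with the short date format in two locales (illustrated in Java; the template system itself uses .NET’s CultureInfo):

```java
import java.text.DateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class LocaleFormatSketch {
    // Format 23 August 2013 with the locale's SHORT date style
    public static String shortDate(Locale locale) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(2013, Calendar.AUGUST, 23);
        DateFormat df = DateFormat.getDateInstance(DateFormat.SHORT, locale);
        df.setTimeZone(TimeZone.getTimeZone("UTC"));
        return df.format(cal.getTime());
    }

    public static void main(String[] args) {
        System.out.println(shortDate(Locale.US)); // month first, e.g. 8/23/13
        System.out.println(shortDate(Locale.UK)); // day first, e.g. 23/08/13
    }
}
```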

Here’s a simple example of how it would be called:

String template = @"{{Title}}
Here's a sample template for {{Person.Firstname}}! 
Generated on {{CreationDate:d}}";

PersonEntity Person = new PersonEntity();
Person.Firstname = "Brendan";
Person.Surname = "Whelan";
Person.Locale = "en-IE";

String localizedConcreteString = template.Inject(new {
                                                      Title = "Sample Injected Title", 
                                                      CreationDate = DateTime.UtcNow,
                                                      Person = Person}, 
                                                 new CultureInfo(Person.Locale));

This generates localizedConcreteString as follows:

Sample Injected Title
Here's a sample template for Brendan!
Generated on 23/08/2013 

All the work for this is done by the Inject extension method, which means that it can be used generally, on any String where templating might be needed.

public static class StringInjectExtension
{
    public static string Inject(this string TemplateString, object InjectionObject)
    {
        return Inject(TemplateString, InjectionObject, CultureInfo.InvariantCulture);
    }

    public static string Inject(this string TemplateString, object InjectionObject, CultureInfo Culture)
    {
        return Inject(TemplateString, GetPropertyHash(InjectionObject), Culture);
    }

    public static string Inject(this string TemplateString, Hashtable values, CultureInfo Culture)
    {
        string result = TemplateString;

        //Assemble all tokens to replace
        Regex tokenRegex = new Regex("{{((?<noprops>\\w+(?:}}|(?<hasformat>:(.[^}]*))}}))|(?<hasprops>(\\w|\\.)+(?:}}|(?<hasformat>:(.[^}]*))}})))",
                                     RegexOptions.IgnoreCase | RegexOptions.Compiled);
        foreach (Match match in tokenRegex.Matches(TemplateString))
        {
            string replacement = match.ToString();

            //Get token version without mustache braces
            string shavenToken = match.ToString();
            shavenToken = shavenToken.Substring(2, shavenToken.Length - 4);

            string format = null;
            if (match.Groups["hasformat"].Length > 0)
            {
                format = match.Groups["hasformat"].ToString();
                shavenToken = shavenToken.Replace(format, null);
                format = format.Substring(1);
            }

            if (match.Groups["noprops"].Length > 0) //matched {{foo}}
            {
                replacement = FormatValue(values, shavenToken, format, Culture);
            }
            else //matched {{foo.bar}} etc.
            {
                //Get the value of the nested property from the token and
                //store it in the value hashtable to avoid having to get it again (in case reused in current template)
                if (!values.ContainsKey(shavenToken))
                {
                    string[] properties = shavenToken.Split(new char[] { '.' });
                    object propertyObject = values[properties[0]];
                    for (int propIdx = 1; propIdx < properties.Length; propIdx++)
                    {
                        if (propertyObject == null) break;
                        propertyObject = GetPropValue(propertyObject, properties[propIdx]);
                    }
                    values.Add(shavenToken, propertyObject);
                }
                replacement = FormatValue(values, shavenToken, format, Culture);
            }
            result = result.Replace(match.ToString(), replacement);
        }
        return result;
    }

    private static string FormatValue(Hashtable values, string key, string format, CultureInfo culture)
    {
        var value = values[key];

        if (format != null)
        {
            //do a double string.Format - first to build the proper format string, and then to format the replacement value
            string attributeFormatString = string.Format(culture, "{{0:{0}}}", format);
            return string.Format(culture, attributeFormatString, value);
        }
        return (value ?? String.Empty).ToString();
    }

    private static object GetPropValue(object PropertyObject, string PropertyName)
    {
        PropertyDescriptorCollection props = TypeDescriptor.GetProperties(PropertyObject);
        PropertyDescriptor prop = props.Find(PropertyName, true);
        return prop == null ? null : prop.GetValue(PropertyObject);
    }

    private static Hashtable GetPropertyHash(object properties)
    {
        Hashtable values = new Hashtable();
        if (properties != null)
        {
            PropertyDescriptorCollection props = TypeDescriptor.GetProperties(properties);
            foreach (PropertyDescriptor prop in props)
                values.Add(prop.Name, prop.GetValue(properties));
        }
        return values;
    }
}
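The heart of GetPropValue – walking a dotted path like Person.Firstname one property at a time via reflection – translates to any language; here is an equivalent sketch in Java using bean-style getters (the classes are made up for the example):

```java
import java.lang.reflect.Method;

public class NestedPropertySketch {

    public static class Person {
        private final String firstname;
        public Person(String firstname) { this.firstname = firstname; }
        public String getFirstname() { return firstname; }
    }

    public static class Model {
        private final Person person = new Person("Brendan");
        public Person getPerson() { return person; }
    }

    // Walk a dotted path like "Person.Firstname" one getter at a time,
    // mirroring the loop over properties[] in the C# version.
    public static Object resolve(Object root, String path) {
        Object current = root;
        try {
            for (String prop : path.split("\\.")) {
                if (current == null) break;
                Method getter = current.getClass().getMethod("get" + prop);
                current = getter.invoke(current);
            }
        } catch (Exception e) {
            return null; // property not found - stay null-tolerant like the C# walk
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println(resolve(new Model(), "Person.Firstname")); // Brendan
    }
}
```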


.NET WCF Custom Headers

I use server-side error logging to trap and record any exceptions an end-user might be receiving. It’s handy for pro-active debugging and it’s also useful for tracking potential intrusion attempts. To that end, I need the end user’s IP address to see if the intrusion attempts are all coming from an IP address or range that can potentially be blocked. In an N-tiered SOA app, though, the service call that logs the exception will be in a different tier (and potentially on a different server) from the end-user. That means that the caller IP address seen by the service’s Log function will actually be the web server’s IP address, rather than the end-user’s browser IP address.

WCF allows for custom headers and it provides an ideal way to pass the end-user’s IP address (or any metadata) to the service layer from the web layer.

Firstly, we need to add the user’s IP address to every WCF call. This is done using a custom IClientMessageInspector to add a message header.

public class ClientMessageInspector : IClientMessageInspector
{
        private const string HEADER_URI_NAMESPACE = "";
        private const string HEADER_SOURCE_ADDRESS = "SOURCE_ADDRESS";

        public ClientMessageInspector() { }

        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState) { }

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
        {
            if (HttpContext.Current != null)
            {
                MessageHeader header = null;
                try
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, HttpContext.Current.Request.UserHostAddress);
                }
                catch (Exception e)
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, null);
                }
                request.Headers.Add(header);
            }
            else if (OperationContext.Current != null)
            {
                //If service layer does a nested call to another service layer method, ensure that original web caller IP is passed through also 
                MessageHeader header = null;
                int index = OperationContext.Current.IncomingMessageHeaders.FindHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE);
                if (index > -1)
                {
                    string remoteAddress = OperationContext.Current.IncomingMessageHeaders.GetHeader<string>(index);
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, remoteAddress);
                }
                else
                {
                    header = MessageHeader.CreateHeader(HEADER_SOURCE_ADDRESS, HEADER_URI_NAMESPACE, null);
                }
                request.Headers.Add(header);
            }

            return null;
        }
}


To make WCF service calls use this inspector, a behavior and behavior extension is needed:

 public class EndpointBehavior : IEndpointBehavior
 {
        public EndpointBehavior() {}

        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) {}

        public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
        {
            ClientMessageInspector inspector = new ClientMessageInspector();
            clientRuntime.MessageInspectors.Add(inspector);
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher) {}

        public void Validate(ServiceEndpoint endpoint) {}
 }

 public class BehaviorExtension : BehaviorExtensionElement
 {
        public override Type BehaviorType
        {
            get { return typeof(EndpointBehavior); }
        }

        protected override object CreateBehavior()
        {
            return new EndpointBehavior();
        }
 }

Now we can use the extension in the config file for the client endpoints.

      <add name="CustomExtension" type="Example.Service.BehaviorExtension, Example.Service, Version=, Culture=neutral, PublicKeyToken=null" />

      <behavior name="ClientEndpointBehavior">
        <CustomExtension />
      </behavior>

      <binding name="ExampleServiceClientBinding" />

      <endpoint address="net.tcp://localhost:8091/CustomExample/DataService" binding="netTcpBinding" bindingConfiguration="ExampleServiceClientBinding" contract="Example.Service.Contract.IDataService" name="ExampleDataServiceClientEndpoint" behaviorConfiguration="ClientEndpointBehavior" />

Using SvcTraceViewer we can see the new header being passed on the SOAP call:

<s:Envelope xmlns:a="..." xmlns:s="...">
  <s:Header>
    ...
    <SOURCE_ADDRESS xmlns="">...</SOURCE_ADDRESS>
    ...
  </s:Header>
  ...
</s:Envelope>



Finally, to access this in the service code, I add a helper method to the service base class. A call to GetServiceCallerRemoteAddress() anywhere in service code will always give the IP address of the end-user caller of the service method.

    public abstract class BaseDataService 
    {
        protected string GetServiceCallerRemoteAddress()
        {
            int index = OperationContext.Current.IncomingMessageHeaders.FindHeader("SOURCE_ADDRESS", "");
            string remoteAddress = null;
            if (index > -1)
                remoteAddress = OperationContext.Current.IncomingMessageHeaders.GetHeader<string>(index);
            return remoteAddress;
        }
    }

Android OpenGLES Water Caustics

I like water caustics – those sinuous reflections you get from the surface of water. I’ve always wanted a good “water pool” type animated background. The apps on the store don’t quite do what I’ve been looking for, so I wrote one.

Initially, I dipped back into 3D programming to see if the caustics could be done in real-time on the device. Water caustics are notorious for being computationally intensive to get looking right, so I was sceptical it could be done on a mobile device – and I was right to be! Even using native C++ and the Android NDK, fast Fourier transforms for smoothing the wave surface, and OpenGL for rendering the sunlight rays (using the fastest caustics algorithm I could find), the result at 25 fps was just too disappointing.


Mobile processors are good, but they are not that good yet. Getting a real-time effect without jitters meant sacrificing detail, and on my HTC Desire it just looked like a smoke filter effect – not to mention the drain on the battery and other processes. Still, it was fun to get back to OpenGL 3D rendering again.

In the end, the best solution was to let a desktop app do the hard work of rendering a set of tileable, loopable animation frames. I still use OpenGL and the NDK for the final live wallpaper app, but this time just to bit-blit the frames onto the surface. It’s just about fast enough on my device and OS, and looks pretty cool in motion. You’ll just have to believe me! (Or else download it from my apps page.)


.NET Scalable Server Push Notifications with SignalR and Redis

Modern web applications sometimes need to notify a logged-in user of an event that occurs on the server. Doing so involves sending data to the browser when the event happens, which is not easily achieved with the standard request-response model used by the HTTP protocol. A notification to the browser needs what’s known as “server push” technology: the server cannot “push” a notification unless there is an open, dedicated connection to the client. HTML5-capable client browsers provide the WebSocket mechanism for this, but it is not widely available yet. Most browsers need to mimic push behavior, for example with a long-polling technique in JavaScript, which simply means making frequent, lightweight requests to the server, similar to AJAX.

To reduce the complexity of coding for the different browser capabilities, the excellent SignalR library is available for .NET projects – it supports the transport mechanisms mentioned, and some others, and automatically selects the best (read: most performant) transport for the capabilities of the given browser & server combination. Crucially, it can be configured so the developer can optimize it for performance and scalability. Using it for server-initiated notifications is a no-brainer.

Here’s an example of how to set up such a notification mechanism.

To begin with, install required libraries into the project using NuGet.

PM> Install-Package Microsoft.AspNet.SignalR
PM> Install-Package ServiceStack.Redis
PM> Install-Package Microsoft.AspNet.SignalR.Redis

You can see that Redis is used too. This is to allow for web farm scaling. Redis is used to store the SignalR connections so they will always be available and synchronized no matter which web server the SignalR polling request arrives at. This can be achieved (depending on architectural demands) using just one Redis server instance, or by running multiple replicated Redis server instances (this is outside the scope of this example, but it’s easy to set-up).

Next, configure SignalR to use Redis as the backing store and map the SignalR route. This is done as part of RegisterRoutes (Global.asax.cs).

public static void RegisterRoutes(RouteCollection routes)
{
      //Use redis for signalr connections - set redis server connection details here
      GlobalHost.DependencyResolver.UseRedis("localhost", 6379, null, "WBSignalR");

      // Register the default SignalR hubs route: ~/signalr
      // Has to be defined before the default route, or it is overridden 
      RouteTable.Routes.MapHubs(new HubConfiguration { EnableDetailedErrors = true });

      //All other mvc routes are defined here            
}

A SignalR Hub subclass is needed to contain the server side code that both the SignalR client and server will use.

public class NotificationHub : Hub

We also use this class to keep the server aware of the open SignalR connections and – more importantly – which connections relate to which user. The events on the Hub class allow us to keep this connection list up-to-date.

There’s a lot to consider in the code for this class. The full code can be downloaded – NotificationHub.cs. Let’s look at it piece-by-piece.

The first thing is the nested ConnectionDetail class that is used to store the details of the connection in Redis.

[ProtoContract]
public class ConnectionDetail
{
    public ConnectionDetail() { }

    [ProtoMember(1)]
    public string ConnectionId { get; set; }

    public override bool Equals(object obj)
    {
        if (obj == null || obj.GetType() != this.GetType()) return false;
        return (obj as ConnectionDetail).ConnectionId.Equals(this.ConnectionId);
    }

    public override int GetHashCode() { return ConnectionId == null ? 0 : ConnectionId.GetHashCode(); }
}

This class only has one property – the SignalR ConnectionId string. It is better to use a class instead of just the connection id string because we can extend it to store other detail about the connection that later on might affect what message we send, or how it should be treated on the client. For example we could record and store the type of browser associated with the connection (mobile, etc.)

The Equals implementation is needed to check if the connection object is already part of the user’s connection collection or not.
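The same consideration applies in any language: a collection membership test relies on value equality. A Java equivalent of the check looks like this (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectionDetailSketch {

    public static class ConnectionDetail {
        public final String connectionId;
        public ConnectionDetail(String connectionId) { this.connectionId = connectionId; }

        // Without this override, contains() would compare object references
        // and every "duplicate" connection would look new.
        @Override public boolean equals(Object obj) {
            if (!(obj instanceof ConnectionDetail)) return false;
            return ((ConnectionDetail) obj).connectionId.equals(this.connectionId);
        }
        @Override public int hashCode() { return connectionId.hashCode(); }
    }

    public static boolean alreadyTracked(List<ConnectionDetail> list, String id) {
        return list.contains(new ConnectionDetail(id));
    }

    public static void main(String[] args) {
        List<ConnectionDetail> list = new ArrayList<>();
        list.add(new ConnectionDetail("abc"));
        System.out.println(alreadyTracked(list, "abc")); // true
        System.out.println(alreadyTracked(list, "xyz")); // false
    }
}
```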

To store the connection detail object in Redis it will be serialized to a byte array using protocol buffers – hence the ProtoBuf attributes. Protocol buffers are a highly performant way of serializing/deserializing data. If you’re not familiar with them, you really should check them out.

Next, we use the ServiceStack.Redis client to make all calls to Redis to store the list of connections per user. This is fairly trivial to set-up.

private RedisClient client;

public NotificationHub()
{
    client = new RedisClient();   //Default connection - localhost:6379
}

The connection to Redis is made when we want to add or remove a connection from the user’s connection list. Two methods provide that functionality – AddNotificationConnection and RemoveNotificationConnection. They are very similar, so I’ll just explain the first one.

public void AddNotificationConnection(string username, string connectionid)
{
    string key = String.Format("{0}:{1}", REDIS_NOTIF_PREFIX, username);

    List<ConnectionDetail> list = new List<ConnectionDetail>();
    byte[] data = client.Get(key);
    MemoryStream stream;
    if (data != null)
    {
        stream = new MemoryStream(data);
        list = ProtoBuf.Serializer.Deserialize<List<ConnectionDetail>>(stream);
    }

    ConnectionDetail cdetail = new ConnectionDetail() { ConnectionId = connectionid };
    if (!list.Contains(cdetail))
        list.Add(cdetail);

    stream = new MemoryStream();
    ProtoBuf.Serializer.Serialize<List<ConnectionDetail>>(stream, list);
    stream.Seek(0, SeekOrigin.Begin);
    data = new byte[stream.Length];
    stream.Read(data, 0, data.Length);

    using (var t = client.CreateTransaction())
    {
        t.QueueCommand(c => c.Set(key, data));
        t.Commit();
    }
}

The code looks for data in Redis under a unique key which is a combination of the constant prefix and the username. It’s keyed this way so that we can do a fast key lookup, retrieve and lock a small block of data, and keep the operation atomic – maintaining the integrity of the user’s connection list in an environment where the user could open a new connection via a different web server at any time. Keying it per user, rather than storing the connections for all users under one key, also avoids creating locking bottlenecks at scale.
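This read-modify-write pattern isn’t specific to ServiceStack or C#. As a hypothetical illustration (the names and the in-memory map standing in for Redis are my own), the same per-user-key shape looks like this in Java:

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectionStore {
    private static final String REDIS_NOTIF_PREFIX = "notif";

    // Stands in for Redis: one small value per user key.
    private final Map<String, List<String>> store = new ConcurrentHashMap<>();

    // Same shape as AddNotificationConnection: look up the user's key,
    // add the connection if it isn't already there, write the list back.
    public void addConnection(String username, String connectionId) {
        String key = REDIS_NOTIF_PREFIX + ":" + username;
        store.compute(key, (k, list) -> {
            if (list == null) list = new ArrayList<>();
            if (!list.contains(connectionId)) list.add(connectionId);
            return list;
        });
    }

    public List<String> getConnections(String username) {
        return store.getOrDefault(REDIS_NOTIF_PREFIX + ":" + username,
                                  Collections.emptyList());
    }
}
```

Here `ConcurrentHashMap.compute` plays the role the Redis transaction plays in the real code: the whole read-check-write of one user’s key happens as a single atomic step.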

Next, we use the connection events of the Hub class to maintain the user’s list, e.g.:

public override Task OnConnected()
{
    string Username = GetConnectionUser();

    if (Username != null)
        AddNotificationConnection(Username, Context.ConnectionId);

    return base.OnConnected();
}

It’s fairly simple – the ConnectionId is taken from the Hub Context object and stored. The main issue here is how to get the user name associated with the connection. The usual HttpContext.User is not available in the SignalR Hub implementation. SignalR uses OWIN for its HTTP pipeline, not the usual MVC pipeline, and one consequence of this is that SignalR does not load the session (based on the session cookie). However, the browser cookies are sent with the SignalR request. In this case, I use FormsAuthentication in the web application, so the user’s name is stored encrypted in the ticket when the user logs in. GetConnectionUser gets this data from the FormsAuthentication cookie.

private string GetConnectionUser()
{
    if (Context.RequestCookies.ContainsKey(FormsAuthentication.FormsCookieName))
    {
        string cookie = Context.RequestCookies[FormsAuthentication.FormsCookieName].Value;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie);
        return ticket.UserData;
    }

    return null;
}

The final piece of the Hub code is the function that actually sends the message to the user’s client browser sessions. It will invoke the corresponding ReceiveNotification function in JavaScript on the client.

public bool SendNotificationToUser(string username, string message)
{
    List<ConnectionDetail> list = GetNotificationConnections(username);
    foreach (ConnectionDetail detail in list)
    {
        //Invoke the ReceiveNotification handler on this client connection
        Clients.Client(detail.ConnectionId).receiveNotification(message);
    }

    return list.Count > 0;
}

To test this, we’ll call it from a controller action behind a test page.


public ActionResult NotfTest(string touser, string message)
{
    var hubConnection = new HubConnection("http://localhost/SignalR.Notification.Sample");
    IHubProxy hubProxy = hubConnection.CreateHubProxy("NotificationHub");
    hubConnection.Start().Wait(2000); //Async call, 2s wait - should use await in C# 5

    hubProxy.Invoke("SendNotificationToUser", new object[] { touser, message });
    return View("NotfTestSent");
}

The call is made by the server creating a SignalR hub connection of its own and then sending a request to the Hub’s SendNotificationToUser function (similar to an RPC call).

That’s all the server side code, now for the client side.

To use the client-side features of SignalR, we need to include the SignalR JavaScript file and the server-generated hubs JavaScript.

How you want to display the notification in the browser is application dependent, and so up to you. For this sample, I use the jQuery qTip plugin to show it as a tooltip pop-up.

    <!-- Add Script includes -->
    <script src="" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.signalR-1.1.2.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Near the end of the HTML page (or near the end of the template page’s HTML), some JavaScript makes the connection to the hub once the page is loaded. Finally, we define the client-side implementation of ReceiveNotification to handle the display of the message.

<script type="text/javascript">
    $(function () {
        //Make a connection to the server hubs

        // Declare a proxy to reference the server-side signalr hub class. 
        var notfHub = $.connection.notificationHub;

        //Link a client-side function to the server hub event
        notfHub.client.receiveNotification = function (message) {

            //Use qtip library to show a tooltip message
            //(attached to the page body here - adjust the target element to suit)
            $(document.body).qtip({
                content: {
                    text: message,
                    title: 'Notification',
                    button: true
                },
                position: {
                    at: 'top right',
                    my: 'bottom left'
                },
                show: {
                    delay: 0,
                    ready: true,
                    effect: function (offset) {
                        $(this).show(); //show effect elided in the original
                    }
                }
            });
        };

        //Start the connection once the handlers are wired up
        $.connection.hub.start();
    });
</script>

Voila. Server-side push notification to any number of users, no matter how many places each is logged in, and whatever browser they use.


Loading EJBs in JRuby on Torquebox

Here’s one of those obscure little problems you come across in Torquebox that might help someone wrestling with the same problem; it sure would have saved me some time!

One of the things I like about using the Java flavour of Ruby (JRuby) is that it’s pretty easy
to invoke and access EJBs. Here’s an example:

require 'java'
require '~/Applications/torquebox-1.0.1/jboss/lib/jboss-common-core.jar'
require '~/Applications/torquebox-1.0.1/jboss/client/jbossall-client.jar'
require '~/ExampleEAR/ExampleEAR-ejb/dist/ExampleEAR-ejb.jar'
require '~/Applications/netbeans/enterprise/modules/ext/javaee-api-6.0.jar'

include_class 'java.util.Properties'
include_class 'javax.naming.Context'
include_class 'javax.naming.InitialContext'
include_class 'com.hypermatix.session.ExampleSessionBean'
include_class 'com.hypermatix.session.ExampleSessionBeanRemote'

class EJBExampleController < ApplicationController
  def index
    properties = Properties.new
    properties.put(Context::PROVIDER_URL, "jnp://localhost:1099")
    properties.put(Context::URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces")

    context = InitialContext.new(properties)

    sessionBean = context.lookup("ExampleEAR/ExampleSessionBean/remote")

    result = sessionBean.someBusinessMethod(parameters)
  end
end

Torquebox is a platform that combines JBoss AS and JRuby into one server, and adds a lot of very powerful integration additions too. However, when I deployed this JRuby test app and the EJB on the same Torquebox JBoss instance, I got a "name not bound" message with the JNDI context.lookup for the EJB. Looking further down the stack trace (getting a stack trace from a JRuby NativeException needs a helper!):

Can not find interface declared by Proxy in our CL + org.jboss.web.tomcat.serice.WebCtxLoader

Further down the stack trace again, it turned out that a ClassNotFoundException was causing this: my own ExampleSessionBean EJB class was not being located by the Torquebox JBoss instance, even though invoking it from a standalone Java app worked fine. I couldn't find any similar problem reported for JRuby; however, a similar error can occur in Apache web apps. It comes down to the way the JBoss class loading mechanism works. The solution is to ensure that the web app uses the same class loading domain as the EJB. To do this, just add a jboss-classloading.xml file into the config directory of the JRuby app and into the EAR you use to deploy the EJBs:

<?xml version="1.0" encoding="UTF-8"?>
<classloading xmlns="urn:jboss:classloading:1.0"
              domain="SharedDomain"
              import-all="true" />

(The domain name – "SharedDomain" here – is arbitrary, but it must be the same in both files so the two deployments end up in the same class loading domain.)

Android HttpClient Proxy Errors

Coding HTTP interaction is a common task in mobile app development, and it can be one of the trickiest if not done right. There are a lot of edge cases to consider in the HTTP protocol – different protocol versions, different server implementations, firewalls, routers, etc.

Blackberry developers were hamstrung for years by the platform’s more rudimentary networking package, or even plain old Java URLConnection, and had to code more of the HTTP protocol implementation themselves or else bloat the size of their apps by including a third-party HTTP library. (There are also the issues of routing through BES/BIS and MDS to contend with!)

The Apache HttpComponents suite is a great toolset for handling all the common HTTP interactions in Java apps, and a lot of the more specialized ones too. It can even tackle WebDAV, authentication extensions, and HTTPS connections. Thankfully, Android has had the Apache HttpComponents included in the SDK since day one, so there’s no need to bloat out the size of your app with extra jars if you want the convenience of HttpClient.

The classic tradeoff in using any library is that, while it may save time and cost, it can also be a ‘black box’ that obscures some of the details and makes it harder to track down and fix problems. No such problems with HttpClient though because it is well-designed (i.e. extensible) and is open source.

Here’s an example. I had an Android app interacting perfectly well with a test server, then switched it over to access Yahoo’s web server on the internet. Instant problem: HTTP 502 Proxy Error.

So why didn’t it work? The solution turned out to be simple, but finding it was a little tricky, and this is where the design of the HttpComponents library really helped. HttpClient allows logging of all HTTP chatter through the standard Java logging API. Here’s an example of setting up logging to the standard Java error console.

System.setProperty("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.SimpleLog");
System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true"); 
System.setProperty("org.apache.commons.logging.simplelog.log.httpclient.wire.header", "trace"); 
System.setProperty("", "trace");

Note: I had to do this with a java console app because commons logging won’t configure in Android via the system properties – that’s a job for another day!
Here’s the trace it produced:

[DEBUG] headers - >> REPORT /dav/user/Calendar/MyCal/ HTTP/1.1
[DEBUG] headers - >> Depth: 1
[DEBUG] headers - >> Authorization: Basic [redacted]
[DEBUG] headers - >> Content-Type: application/xml; charset="UTF-8"
[DEBUG] headers - >> Content-Length: 296
[DEBUG] headers - >> Host:
[DEBUG] headers - >> Connection: Keep-Alive
[DEBUG] headers - >> Expect: 100-Continue
[DEBUG] wire - >> "<C:calendar-query xmlns:D="DAV:" ...
[DEBUG] wire - << HTTP/1.1 502 Proxy Error[EOL]"

HttpClient was sending the request ok, but the server was immediately returning the 502 Proxy Error and no data. The request headers show that HttpClient was sending an Expect: 100-Continue header. (The Expect-Continue mechanism was introduced for HTTP 1.1 [RFC 2616] to make HTTP interaction more efficient).

Proxy servers are very common on the internet – they buffer and insulate hosted environments from attack, balance loads, and so on. The thing is, a proxy server will often speak the lowest common denominator of the HTTP protocols (HTTP 1.0), because it can never assume the client understands HTTP 1.1. Since Expect-Continue is a mechanism of the HTTP 1.1 protocol, such a proxy rejects it, even if the web server behind it can handle HTTP 1.1. It’s partly to do with authentication (which I won’t get into here). For this problem, it was enough to realise that the third-party server must have been sitting behind a proxy server.
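To make the failure mode concrete, here’s a small hypothetical Java sketch (my own, not HttpClient code) of the client-side decision in the Expect-Continue handshake: after sending `Expect: 100-Continue`, the request body only goes on the wire if the first response line is an interim `100 Continue` – which an HTTP 1.0 proxy will never send:

```java
public class ExpectContinueDemo {
    // Decide whether the client may transmit the request body, given the
    // first response line received for a request that carried
    // "Expect: 100-Continue".
    public static boolean maySendBody(String statusLine) {
        // Only an explicit interim "100 Continue" green-lights the body.
        return statusLine.startsWith("HTTP/1.1 100");
    }

    public static void main(String[] args) {
        // An HTTP 1.1 server acknowledges, and the body follows.
        System.out.println(maySendBody("HTTP/1.1 100 Continue"));   // true
        // A proxy stuck at HTTP 1.0 semantics fails the request outright,
        // so the body is never sent - exactly the 502 seen in the trace.
        System.out.println(maySendBody("HTTP/1.1 502 Proxy Error")); // false
    }
}
```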

One solution to this is to force HttpClient to use HTTP 1.0, but then you lose all the performance benefits of HTTP 1.1, which are especially important on a mobile device.

Yet again, the excellent design of HttpClient comes to the rescue. There is a configuration parameter you can set to prevent HttpClient from sending the Expect-Continue header over an HTTP 1.1 connection:

DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getParams().setParameter("http.protocol.expect-continue", Boolean.FALSE);

It worked! HttpClient is great.