
Internet-of-Things And Akka Actors

IoT (Internet of Things) has recently been one of the most popular buzzwords. Despite the hype, we’re indeed heading toward a foreseeable world in which all sorts of things are interconnected. Before IoT became a hot acronym, I was heavily involved in building a Home-Area-Network SaaS platform over the course of five years at a previous startup I cofounded, so the space is no stranger to me.

At the low-level device network layer, there used to be platform service companies providing gateway hardware along with proprietary APIs for IoT devices running on sensor network protocols (such as ZigBee and Z-Wave). The landscape has been evolving over the past couple of years. As more and more companies throw their weight behind building products in the IoT ecosystem, open standards for device connectivity have emerged. One of them is MQTT (Message Queue Telemetry Transport).

Message Queue Telemetry Transport

MQTT had been relatively little-known until it was standardized at OASIS a couple of years ago. The lightweight publish-subscribe messaging protocol has since been increasingly adopted by major players, including Amazon, as the underlying connectivity protocol for IoT devices. It’s TCP/IP-based, but its variant, MQTT-SN (MQTT for Sensor Networks), covers sensor network communication protocols such as ZigBee. There are also quite a few MQTT message brokers, including HiveMQ, Mosquitto and RabbitMQ.

IoT makes a great use case for Akka actor systems, which come with lightweight, loosely coupled actors in decentralized clusters with robust routing, sharding and pub-sub features, as mentioned in a previous blog post. The actor model can rather easily be structured to emulate the operations of a typical IoT network that scales in device volume. In addition, the availability of MQTT clients for Akka, such as Paho-Akka, makes it easy to communicate with MQTT brokers.
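
To make the MQTT side concrete, here is a minimal, self-contained sketch of publishing and subscribing from Scala. It uses the Eclipse Paho Java client directly rather than the Paho-Akka wrapper used in the application below, and the topic name is just a placeholder (the broker is the public test broker referenced later in this post):

```scala
import org.eclipse.paho.client.mqttv3.{IMqttDeliveryToken, MqttCallback, MqttClient, MqttMessage}
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

object MqttQuickstart extends App {
  val brokerUrl = "tcp://test.mosquitto.org:1883" // public test broker; use a local broker in production
  val topic     = "akka-iot/example"              // placeholder topic name

  val client = new MqttClient(brokerUrl, MqttClient.generateClientId, new MemoryPersistence)

  // Register the callback before connecting so no inbound message is missed.
  client.setCallback(new MqttCallback {
    def connectionLost(cause: Throwable): Unit = println(s"Connection lost: $cause")
    def messageArrived(t: String, msg: MqttMessage): Unit =
      println(s"[$t] ${new String(msg.getPayload, "UTF-8")}")
    def deliveryComplete(token: IMqttDeliveryToken): Unit = ()
  })

  client.connect()
  client.subscribe(topic)
  client.publish(topic, new MqttMessage("hello from a simulated device".getBytes("UTF-8")))
}
```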

A Scala-based IoT application

UPDATE: An expanded version of this application, in which each IoT device is represented by its own actor that maintains its own internal state and settings, is now available. Please see the Akka Actors IoT v.2 blog post for details.

In this blog post, I’m going to illustrate how to build a scalable distributed worker system using Akka actors to service requests from an MQTT-based IoT system. A good portion of the Akka clustering setup is derived from Lightbend’s Akka distributed workers template. Below is a diagram of the application:

IoT with MQTT and Akka Actor Systems

As shown in the diagram, the application consists of the following components:

1. IoT

  • A DeviceRequest actor which:
    • simulates work requests from IoT devices
    • publishes requests to an MQTT pub-sub topic
    • re-publishes requests upon receiving failure messages from a topic subscriber
  • An IotAgent actor which:
    • subscribes to the mqtt-topic for the work requests
    • sends received work requests via ClusterClient to the master cluster
    • sends DeviceRequest actor a failure message upon receiving failure messages from Master actor
  • An MQTT pub-sub client, MqttPubSub, for communicating with an MQTT broker
  • A configuration helper object, MqttConfig, consisting of:
    • MQTT pub-sub topic
    • URL for the MQTT broker
    • Serialization methods to convert objects to byte arrays, and vice versa (a simplified sketch appears after this component list)

2. Master Cluster

  • A fault-tolerant decentralized cluster which:
    • manages a singleton actor instance among the cluster nodes (with a specified role); a minimal wiring sketch follows the component list
    • delegates ClusterClientReceptionist on every node to answer external connection requests
    • provides fail-over of the singleton actor to the next-oldest node in the cluster
  • A Master singleton actor which:
    • registers Workers and distributes work to available Workers
    • acknowledges work request reception with IotAgent
    • publishes work results to a work-results topic via Akka distributed pub-sub
    • maintains work states using persistence journal
  • A PostProcessor actor in the master cluster which:
    • simulates post-processing of the work results
    • subscribes to the work-results topic

3. Workers

  • An actor system of Workers each of which:
    • communicates via ClusterClient with the master cluster
    • registers with, and pulls work from, the Master actor
    • reports work status to the Master actor
    • instantiates a WorkProcessor actor to perform the actual work
  • WorkProcessor actors which process the work requests

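As a rough illustration of the MqttConfig helper described in the component list above, here is a simplified sketch using plain Java serialization; the topic name shown is a placeholder and the real object in the repository may differ in detail:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Simplified stand-in for the MqttConfig helper described above.
object MqttConfig {
  val topic     = "akka-iot-mqtt-topic"            // placeholder topic name
  val brokerUrl = "tcp://test.mosquitto.org:1883"  // public test broker mentioned in the notes below

  // Serialize any Serializable object (e.g. a Work wrapping a Device) into a byte array
  // suitable for an MQTT payload.
  def writeToByteArray(obj: Any): Array[Byte] = {
    val baos = new ByteArrayOutputStream
    val oos  = new ObjectOutputStream(baos)
    try { oos.writeObject(obj); oos.flush(); baos.toByteArray }
    finally oos.close()
  }

  // Deserialize the byte array back into the original object.
  def readFromByteArray[T](bytes: Array[Byte]): T = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
    try ois.readObject().asInstanceOf[T]
    finally ois.close()
  }
}
```
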
Source code is available at GitHub.
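
For the master cluster, below is a minimal wiring sketch of the singleton Master and the ClusterClientReceptionist. It assumes the Akka 2.4+ akka-cluster-tools module; the Master body, actor names and cluster configuration (seed nodes, roles) are simplified stand-ins rather than the repository’s exact code:

```scala
import akka.actor.{Actor, ActorSystem, PoisonPill, Props}
import akka.cluster.client.ClusterClientReceptionist
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings,
                               ClusterSingletonProxy, ClusterSingletonProxySettings}

// Placeholder Master; the real one registers workers, distributes work and persists work states.
class Master extends Actor {
  def receive = { case msg => println(s"Master received: $msg") }
}

object MasterClusterSetup {
  def startSingleton(system: ActorSystem): Unit = {
    // Run exactly one Master instance on the oldest node carrying the "master" role;
    // it fails over to the next-oldest node if that node leaves the cluster.
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps     = Props[Master],
        terminationMessage = PoisonPill,
        settings           = ClusterSingletonManagerSettings(system).withRole("master")),
      name = "master")

    // A proxy that always routes to wherever the singleton currently lives.
    val masterProxy = system.actorOf(
      ClusterSingletonProxy.props(
        singletonManagerPath = "/user/master",
        settings             = ClusterSingletonProxySettings(system).withRole("master")),
      name = "masterProxy")

    // Expose the proxy so external actor systems (IotAgent, Workers) can reach it via ClusterClient.
    ClusterClientReceptionist(system).registerService(masterProxy)
  }
}
```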

A few notes:

  1. Neither the IotAgent nor the Worker actor system is part of the master cluster; hence both need to communicate with the Master via ClusterClient.
  2. Rather than having the Master actor spawn child Workers and push work over, the Workers are set up to register with the Master and pull work from it – a model similar to what Derek Wyatt advocated in his post.
  3. Paho-Akka is used as the MQTT pub-sub client with configuration information held within the helper object, MqttConfig.
  4. The helper object MqttConfig consists of MQTT pub-sub topic/broker information and methods to serialize/deserialize the Work objects which, in turn, contain Device objects. The explicit serialization is necessary since multiple JVMs will be at play when the master cluster, IoT and worker actor systems are launched on separate JVMs.
  5. The test Mosquitto broker at tcp://test.mosquitto.org:1883 serves as the MQTT broker. An alternative is to install an MQTT broker (Mosquitto, HiveMQ, etc.) local to the IoT network.
  6. The IotAgent uses the Actor’s ask method (?), instead of the fire-and-forget tell method (!), to confirm message receipt by the Master via a returned Future. If receipt confirmation isn’t important, the tell method would be the preferred choice for performance (see the sketch after these notes).
  7. This is primarily a proof-of-concept application of IoT using Akka actors, hence code performance optimization isn’t a priority. In addition, for production systems, a production-grade persistence journal (e.g. Redis, Cassandra) should be used and multiple-Master via sharding could be considered.
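
To illustrate the ask-versus-tell trade-off in note 6, here is a small sketch; the message types and the Master reference are placeholders, not the application’s actual protocol:

```scala
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

// Placeholder messages standing in for the real work-request protocol.
case class WorkRequest(deviceId: String, payload: String)
case class WorkAccepted(deviceId: String)

object AskVsTell {
  implicit val timeout: Timeout = Timeout(5.seconds)

  // Ask: returns a Future, so the sender can react if the Master never acknowledges.
  def submitWithAck(master: ActorRef, req: WorkRequest): Unit =
    (master ? req).mapTo[WorkAccepted].onComplete {
      case Success(ack) => println(s"Master accepted work for ${ack.deviceId}")
      case Failure(ex)  => println(s"No acknowledgement, will re-publish: $ex")
    }

  // Tell: fire-and-forget, cheaper when delivery confirmation doesn't matter.
  def submitFireAndForget(master: ActorRef, req: WorkRequest): Unit =
    master ! req
}
```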

Test-running

Similar to how you would test-run Lightbend’s distributed workers template, you may open up separate command-line terminals and run the different components on separate JVMs, adding and killing the launched components to observe how the systems scale out, fail over, persist work states, etc. Here’s an example test-run sequence:

Below is some sample console output.

Console Output: Master seed node with persistence journal:

Console Output: IotAgent-DeviceRequest node:

Console Output: Worker node:

Challenges Of Big Data + SaaS + HAN

This is part two of a previous post about building and operating a Big Data SaaS for Home Area Network devices during my 5-year tenure with EcoFactor. Simply put, our main goal was to add “smarts” to residential heating and cooling systems (i.e. heaters and air conditioners, a.k.a. HVAC) via ordinary thermostats. That focus led some people to the superficial perception that we were a smart thermostat device company. In actuality, we were always a software service, virtually agnostic to both hardware and communications protocol. It’s more of an IoT version of the “Intel Inside” business model.

Challenges from all fronts

As with building any startup company, a wide spectrum of challenges confronted us, which is what this post is going to talk about. The funding environment was pretty hellish, as we started just shortly before the financial crisis of 2007-2008. And the failure of some high-profile solar companies in subsequent years certainly didn’t help make the once-hyped cleantech a favorable sector for investors.

The ever-growing, fierce competition for software engineering talent was and has been a big challenge for pretty much every startup in Silicon Valley. On the technology front, production-grade open-source Big Data technologies weren’t there yet, which meant a lot of internal R&D effort by individual companies, which in turn required domain experts in both development and operations who were scarce back then, completing a vicious cycle that starts with the hiring difficulty.

Operational processes

On the operational front, there was a long list of processes that needed to be carefully established and managed, from user acquisition, on-boarding, device installer training, scheduling coordination for on-site device installation and technical support for installers, to customer service. Getting into the details of how all that was done would warrant a book of its own. Scott Hublou, a co-founder of the company in charge of product and marketing, owned that “horrendous” list.

Many of the items in the list are correlated. For instance, getting HVAC technicians to create a HAN network and pair up thermostats with the HAN gateway during an on-site installation required not only a custom-built software tool with a well-thought-out workflow and easy UI, but also thorough training and a knowledgeable support team to back them up for ad-hoc troubleshooting.

Back on the engineering side of the world, a key piece of operations was the technology infrastructure, which needed to cope with future business growth. That included systems hosting, network and data architecture, server clusters for distributed computing, load-balancing systems, fail-over and monitoring mechanisms, firewalls, etc. As a startup company, we started with something simple but expandable to conserve cash, and scaled up as quickly as necessary. That’s also a practical approach from the design point of view to avoid over-engineering.

State of WPAN

On the hardware side, applicable HAN communications protocols and HAN device hardware were far from ready for mass deployment at the time we started exploring that space. That’s a non-trivial challenge for anybody who wants to get into the space. On the other hand, if done right, it represents an opportunity to pioneer in a relatively new arena.

ZigBee, an IEEE 802.15.4-based WPAN (Wireless Personal Area Network) protocol, was our selected communications protocol for scaled deployment. While it’s a robust protocol compared with others such as Z-Wave, its specification was still undergoing changes and few real-world implementations had ever exploited its full features.

The protocol comes with a few predefined application profiles, including the Energy Efficiency and Home Automation profiles. Part of our core business is about translating HVAC operations data from thermostats into actionable business intelligence, hence the ability to acquire key attributes from these devices was crucial. We quickly discovered that some attributes as basic as HVAC state were missing from certain application profiles, and we had to not only utilize multiple profiles but also extend to using custom attributes in the ZCL (ZigBee Cluster Library).

Working with technology partners

Working with hardware technology partners presented some other challenges. HAN device firmware and embedded software development is a totally different beast from SaaS/server application development. Python on Linux is a prominent embedded software platform. While that’s also a popular combo for server software development, the two worlds bear little resemblance to each other. Building a system that bridges the two worlds takes learning and collaborative effort from both camps.

Some of our HAN device partners were quick to realize the need to back their gateway devices with a scalable PaaS infrastructure, and invested significant effort in M2M (Machine-to-Machine) capabilities through acquisitions and internal development. But coming from a hardware background, there was inevitably a non-trivial learning curve for our hardware partners to get things right in areas such as software service scalability. Leveraging our internal scalable SaaS development experience and our partners’ embedded software engineering expertise, we managed to bring the best ingredients from both worlds into the cooperative work.

OTA firmware update

OTA (Over-the-Air) firmware update generally refers to wireless firmware update. Our devices run on a WPAN protocol and the firmware is OTA-able. It’s probably one of the operations that create the most anxiety, as an update failure may result in “bricking” the devices in volume, leading to the worst user experience. A bricked thermostat that results in an inoperable HVAC (i.e. heater / air conditioner) would be the last thing the home occupant wants to deal with on a 105F Summer day, or worse, a potentially life-threatening hazard on a 10F Winter night.

This critical task is all about making sure the entire update procedure is foolproof from end to end. The important thing is to go through lots of rehearsals in advance. In addition, the capability to roll back the firmware version is as critical as the forward update, so that the update can be undone should unforeseen issues arise afterwards. Startups typically work at such a cut-throat pace that it’s tempting to circumvent pre-production tests whenever possible. But this is one of those operations where even a minor compromise on stringent testing could mean the end of the business.

Pull vs Push

The around-the-clock time-series data acquisition from a growing volume of primitive HAN devices is a capacity-intensive requirement. Understanding that it was going to be a temporary method for smaller-scale deployments, we started out using a simplistic pull model to mechanically acquire data from the HAN gateway devices. These devices gather data serially from their associated thermostat devices, making a single trip to a gateway-connected thermostat device take anywhere from a few seconds to tens of seconds. To come up with a data acquisition method that could scale, we needed something at least an order of magnitude faster.

With larger-scale deployments in the pipeline, we didn’t waste any time and worked collaboratively with all involved parties early on to build a scalable solution. We went back to the drawing board to scrutinize the various data communication methods supported by the WPAN specifications and laid out a few architectural changes. First, we switched the data acquisition model from pull to push. Such a change affected not only data communications within our internal SaaS applications but also the end-to-end data flow spanning our partners’ PaaS systems.

One of the key changes was to come up with standards-compliant methods that minimize necessary data retrievals under the push model via previously unexploited features such as attribute grouping and differential reporting. Attribute grouping allows selected attributes to be bundled into a single packet for delivery instead of sending individual attributes serially in multiple deliveries. Differential reporting helps minimize data deliveries by triggering a data transfer only when at least one of the selected attributes has changed. All that meant lots of extra work for everybody in the short term, in exchange for a scalable solution in the long run.
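
Purely as a conceptual illustration of those two features (not the actual ZigBee/ZCL implementation), the sketch below bundles the selected attributes into a single report and emits it only when something has changed:

```scala
// Conceptual sketch only: bundle the selected attributes into a single report (attribute grouping)
// and emit it only when at least one of them has changed since the last delivery (differential reporting).
case class ThermostatReport(deviceId: String, temperature: Double, setpoint: Double, hvacState: String)

class DifferentialReporter {
  private var lastSent = Map.empty[String, ThermostatReport]

  /** Returns the grouped report if anything changed for this device, otherwise None (no transmission). */
  def maybeReport(current: ThermostatReport): Option[ThermostatReport] =
    if (lastSent.get(current.deviceId).contains(current)) None
    else {
      lastSent += current.deviceId -> current
      Some(current)
    }
}

// Under the push model, the device-side agent would call maybeReport on every sample and transmit
// only the Some(...) results, instead of the server polling each attribute serially.
```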

Collaborative work pays off

The challenges mentioned above wouldn’t have been resolvable had there not been a cross-functional team of technologists working diligently and creatively to make it happen. Performance was boosted by orders of magnitude after the new data acquisition method was implemented. More importantly, the collective work in some ways set a standard for large-scale data acquisition from SaaS-managed HAN devices. It was an invaluable experience to be a part of the endeavor.

Big Data SaaS For HAN Devices

At one of the startups I co-founded in recent years, I was responsible for building a SaaS (Software-as-a-Service) platform to manage a rapidly growing set of HAN (Home Area Network) devices across multiple geographies. It’s an interesting product belonging to the world of IoT (Internet of Things), a buzzword that wasn’t popular at all back in 2007. Building such a product required a lot of adventurous exploration and R&D effort from me and my team, especially back then when SaaS and HAN were generally perceived as two completely segregated worlds. The company is EcoFactor and is in the energy/cleantech space.

Our goal was to optimize residential home energy use, particularly in the largely overlooked HVAC (heating, ventilation, and air conditioning) area. We were after the consumer market and chose to leverage channel partners in various verticals (HVAC service companies, broadband service providers, energy retailers) to reach mass customers. Our main focus was twofold: energy efficiency and energy load shaping. Energy efficiency is all about saving energy without significantly compromising comfort, while energy load shaping primarily targets utility companies, which have a strong interest in reducing spikes in energy load during peak usage times.

Home energy efficiency

Energy efficiency implementation requires intelligence derived from mass real-world data and delivered by optimization algorithms. Proper execution of such optimization isn’t trivial. It involves in-depth learning of the HVAC usage pattern in a given home, analysis of the building envelope (i.e. how well-insulated the building is), the users’ thermostat control activities, etc. All that information is deduced from the raw thermal data spit out by the thermostats, without needing to ask the users a single question. Execution takes the form of programmatic refinement through learning over time, as well as interactive adjustment in accordance with feedback from ad-hoc activities.

Obviously, local weather conditions and forecast information are another crucial input data source for executing the energy efficiency strategy. Besides temperature, other parameters such as solar radiation and humidity are also important. There are quite a lot of commercial weather data-feed services available for subscription, though one can also acquire raw data for the U.S. directly from NCDC (National Climatic Data Center).

Energy load shaping

Many utilities offer demand response programs, often with incentives, to curtail energy consumption during usage peak-time (e.g. late afternoon on a hot Summer day). Load reduction in a conventional demand response program inevitably causes discomfort for the home occupants, leading to high opt-out rates that defeat the very purpose of the program. Since the “thermal signature” of each individual home was readily available from the vast amount of thermal data being collected around the clock, it didn’t take much effort to come up with a suitable load-shaping strategy, including pre-conditioning, for each home to take care of the comfort factor while conducting a demand response program. Utility companies loved the result.

HAN devices

The product functionality described so far might seem to suggest that: a) some complicated device communications protocol that didn’t yet exist would be needed, and b) in-house hardware/firmware engineering effort would be needed. Fortunately, there were already some WPAN (Wireless Personal Area Network) protocols, such as ZigBee/IEEE 802.15.4, Z-Wave and 6LoWPAN (and other wireless protocols such as WiFi/IEEE 802.11.x), although implementations were still experimental at the time I started researching that space.

We wanted to stay in the software space (more specifically, SaaS) and focus on delivering business intelligence out of the collected data, hence we would do everything we could to keep our product hardware- and protocol-agnostic. Instead of delving into the hardware engineering world ourselves, we sought strategic partnerships with suitable hardware vendors and worked collaboratively with them to build quality hardware to match our functionality requirements.

Back in 2007, the WPAN-based devices available on the market were too immature to be used even for field trials, so we started out with some IP-based thermostats, each of which was equipped with a stripped-down HTTP server. Along with the manufacturer’s REST-based device access service, we had our first-generation two-way communicating thermostats for proof-of-concept work. Field trials were conducted simultaneously in both Texas and Australia so as to capture both Summer and Winter data at the same time. The trials were a success. In particular, the trial results validated the few key hypotheses that were the backbone of our value proposition.

WPAN vs WiFi

To prepare ourselves for large-scale deployment, we went after a low-cost, barebones programmable thermostat of the kind one can find in a local hardware store like Home Depot as the base hardware. The remaining requirement was to equip it with a low-cost chip that could communicate over some industry-standard protocol. An IP-based thermostat, which requires running ethernet cable inside a house, was out of the question for both deployment-cost and cosmetic reasons, something we learned a great deal about from our field trials. In essence, we only considered thermostats communicating over wireless protocols such as WPAN or WiFi.

Next, WPAN won over WiFi because it required less meddling with the broadband network in individual homes, and its low-power specs work better for battery-powered thermostats. Finally, ZigBee became our choice for the first mass deployment because of its relatively robust application profiles tailored for energy efficiency and home automation. Another reason was that it was going to be the protocol SmartMeters would use, and communicating with SmartMeters for energy consumption information was on our product roadmap.

ZigBee forms a low-power wireless mesh network in which nodes relay communications. At 250 kbit/s it isn’t a high-speed protocol, and it can operate in the 2.4GHz frequency band. It’s built on top of IEEE 802.15.4 and is equipped with industry-standard public-key cryptography security. Within a ZigBee network, a ZigBee gateway device typically serves as the network coordinator, responsible for enforcing the network’s security policy and enrolling joining devices. It connects via ethernet cable or WiFi to a broadband router on one end and communicates wirelessly with the ZigBee devices in the home on the other. The gateway device is, in essence, the conduit to the associated HAN devices, and broadband internet connectivity is how these HAN devices communicate with our SaaS platform in the cloud. This means we only target homes with broadband internet service.

The SaaS platform

Our very first SaaS prototype system, built prior to VC funding, ran on a LAMP platform using first-generation algorithms co-developed by a small group of physicists from academia. We later rebuilt the production version on the Java platform using a suite of open-source application servers and frameworks, supplemented with algorithms written in Python, along with tools for development, build automation, source control, integration and QA. Heavy R&D on optimization strategies and machine learning algorithms was performed by a dedicated taskforce and integrated into the “brain” of the SaaS platform.

Relational databases were set up initially to persist the acquired data from the HAN devices in homes across the nation (and beyond). The data acquisition architecture was later revamped to use HBase as a fast data-dumping persistent store to accommodate the rapidly growing around-the-clock data stream. Only selected data sets were funneled to the relational databases for application logic requiring more complex CRUD (create, read, update and delete) operations. Demanding Big Data mining, aggregation and analytics tasks were performed on Hadoop/HDFS clusters.
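
As a rough sketch of that fast data-dumping write path (illustrative only; the actual schema, table names and client version differed), here is how a time-series reading might be appended to HBase using the modern HBase Java client from Scala:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object ReadingSink {
  // Illustrative table and column-family names; connection/table should be closed on shutdown.
  private val conn  = ConnectionFactory.createConnection(HBaseConfiguration.create())
  private val table = conn.getTable(TableName.valueOf("thermostat_readings"))

  /** Append one reading; row key = deviceId plus a reverse-ordered timestamp keeps a device's data contiguous. */
  def write(deviceId: String, timestampMs: Long, temperature: Double, hvacState: String): Unit = {
    val rowKey = Bytes.toBytes(s"$deviceId:${Long.MaxValue - timestampMs}")
    val put    = new Put(rowKey)
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("temp"), Bytes.toBytes(temperature))
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("hvac"), Bytes.toBytes(hvacState))
    table.put(put)
  }
}
```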

Under the software-focused principle, our SaaS applications do not directly handle low-level communications with the gateway and thermostat devices. The selected gateway vendor provides its own PaaS (Platform-as-a-Service), which takes care of M2M (machine-to-machine) hardware communications and exposes a set of APIs for basic device operations. The platform also maintains bidirectional communications with the gateway devices by means of periodic phone-home from the devices and UDP source-port keep-alive (a.k.a. hole punching, for inbound communications through the firewall in a typical broadband router). Such separation of work allows us to focus on the high-level application logic and business intelligence. It also allows us to more easily extend our service to multiple hardware vendors.

Algorithms

Obviously I can’t get into any specifics of the algorithms, which represent collective intellectual work developed and scrutinized by domain experts since the very beginning of the startup venture. Suffice it to say that they constitute the brain of the SaaS application. Besides information garnered from historical data, the execution also takes into account interactive feedback from the users (e.g. ad-hoc manual overrides of temperature settings on the thermostat via the up/down buttons or a mobile app for thermostat control) and modifies the existing optimization strategy accordingly.

Lots of modeling and in-depth learning of real-world data were performed in the areas of thermal energy exchange in a building, HVAC run-time, thermostat temperature cycles, etc. A team of quants with strong backgrounds in Physics and numerical analysis was assembled to focus on just that work. Besides custom optimization algorithms, machine learning algorithms including clustering analysis (e.g. k-Means Clustering) were employed for various kinds of tasks such as fault detection.
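
For a flavor of the clustering technique mentioned (and nothing more; the production algorithms were far more elaborate and domain-specific), here is a toy k-means (Lloyd’s algorithm) in Scala:

```scala
import scala.util.Random

object KMeansToy {
  type Point = Vector[Double]

  // Squared Euclidean distance between two points.
  private def dist2(a: Point, b: Point): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  // Component-wise mean of a non-empty group of points.
  private def mean(points: Seq[Point]): Point =
    points.transpose.map(xs => xs.sum / xs.size).toVector

  /** Plain Lloyd's algorithm: assign each point to its nearest centroid, recompute centroids, repeat. */
  def cluster(points: Seq[Point], k: Int, iterations: Int = 20, seed: Long = 42L): Seq[Point] = {
    val rnd = new Random(seed)
    var centroids: Seq[Point] = rnd.shuffle(points).take(k)
    for (_ <- 1 to iterations) {
      val assigned = points.groupBy(p => centroids.minBy(c => dist2(p, c)))
      centroids = centroids.map(c => assigned.get(c).map(mean).getOrElse(c))
    }
    centroids
  }
}

// e.g. clustering daily (HVAC run-time, temperature swing) feature pairs to flag outlier homes:
// val centroids = KMeansToy.cluster(dailyFeatures, k = 3)
```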

A good portion of the algorithmic programming work was done on the Python platform, primarily for its abundance of contemporary math libraries (SciPy, NumPy, etc.). Other useful tools include R for programmatic statistical analysis and Matlab/Octave for modeling. For good reason, the quant team is the group demanding the most computing resources from the Hadoop platform, and Hadoop’s streaming API makes it possible to maintain a hybrid of Java and Python applications. A Hadoop/HDFS cluster was used to accommodate all the massive data aggregation operations. Meanwhile, a relational database with its schema optimized for the quant programs was built to handle real-time data processing while a long-term solution using HBase was being developed.

Putting everything together

Although elastic cloud services such as Amazon’s EC2 have been hot and great for marketing, our around-the-clock data acquisition model consists of a predictable volume and a steady stream rate, so the cloud’s elasticity wouldn’t benefit us much. It is useful, though, for development work and benchmarking.

Another factor is security, which is one of the most critical requirements in operating an energy management business. A malicious attack that simultaneously switches on 100,000 A/Cs in a metropolitan region on a hot Summer day could easily bring down the entire grid. Cloud computing services tend to come with less flexible security measures, whereas one can more easily implement enhanced security in a conventional hosting environment, and co-located hosting offers the highest flexibility in that regard. That settled the decision in favor of co-located hosting.

That pretty much covers all the key ingredients of this interesting product that brings together the disparate SaaS and HAN worlds at a Big Data scale. All in all, it was a rather ambitious endeavor on both the business and technology fronts, certainly not without challenges – which I’ll talk about perhaps in a separate post some other time.