

Engineering Deep Dive: The Art of On-Premise Development

Roee Drucker
/ July 21st, 2022

There's nothing like working with cloud managed services. Everything is a breeze: quick to set up and easy to monitor. But what happens when the customer requires your application to run on-premise, perhaps even in an air-gapped network?
At Claroty, we've been building a solution that must support both on-premise and cloud deployment, and we've witnessed first-hand the issues that can arise.

There are obvious things that need to be done when building an on-premise solution, like replacing managed cloud services with self-managed ones. In this article, I'll review three important considerations that may initially be less obvious when deploying an on-premise application, and share insights on how to mitigate them.

The state of on-premise development in 2022

There are many reasons we see fewer and fewer on-premise deployments in today's market, mostly related to the growing ubiquity of cloud-based SaaS solutions. So why do we even need to talk about it?

A 2021 report from Dimensional Research showed that 92% of surveyed companies indicated on-premise software sales are growing. Another report, published in September 2021 by Nutanix, concluded that the use of private cloud solutions rose through 2021, with 25% of respondents naming it their most commonly used deployment model.

In trying to understand this trend, our initial assumption was that we'd see these kinds of deployments less and less, but that's not necessarily the case.

Many organizations today choose a middle ground, favoring hybrid cloud solutions that combine on-premise resources with a public/private cloud. They understand that they don't necessarily have to choose: they can enjoy the benefits of both, and change what is deployed or stored wherever and whenever they like.

So what are the key (and perhaps less obvious) principles to remember when developing software that will be deployed on-premise?


Upgrades

  • The Issue: You're no longer in charge of upgrading the customer to the newest release, and there is no guarantee that the customer will upgrade for a long time.

  • The Mitigation: Upgrading your software should be as easy and fast as possible. Sure, there might be some pushback, with comments like "Why bother with the upgrade?", but if the upgrade file is small and the process is fast, there will be less resistance.

Sometimes, your on-premise deployment includes a database whose migration can take more than a few seconds, slowing down the upgrade. There are several ways to approach this, but one I especially like, and have seen used in a few places, is to replicate the database and leverage a change data capture (CDC) tool, such as Debezium, to keep migrating changes while the system is still running. Once all the existing data, plus whatever was created during the upgrade, has finished migrating, replace the primary database with the secondary database, resulting in very short downtime.
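The replicate-and-cut-over idea above can be sketched in a few lines. This is a minimal in-memory illustration, not a Debezium deployment (Debezium itself runs as a Kafka Connect service); the `Database` class, change-log shape, and all names here are hypothetical stand-ins for the CDC feed.

```python
# Minimal sketch of a replicate-then-cut-over upgrade. In production a
# CDC tool like Debezium would stream change events; here a simple
# change log stands in for that feed. All names are hypothetical.

class Database:
    def __init__(self, rows=None):
        self.rows = dict(rows or {})

def initial_copy(primary, secondary):
    # Bulk-copy existing data while the system keeps serving traffic.
    secondary.rows.update(primary.rows)

def apply_changes(secondary, change_log):
    # Replay changes captured (CDC-style) during the bulk copy.
    for op, key, value in change_log:
        if op == "upsert":
            secondary.rows[key] = value
        elif op == "delete":
            secondary.rows.pop(key, None)

primary = Database({1: "a", 2: "b"})
secondary = Database()

initial_copy(primary, secondary)

# Writes that arrived while the copy was running, captured as changes.
change_log = [("upsert", 3, "c"), ("delete", 1, None)]
apply_changes(secondary, change_log)

# Cut over: the secondary becomes the new primary. Downtime lasts only
# as long as replaying the final backlog takes.
primary = secondary
print(sorted(primary.rows.items()))  # [(2, 'b'), (3, 'c')]
```

The key property is that the expensive bulk copy happens while the old system is still serving traffic, so the actual downtime window shrinks to the final backlog replay plus the switch itself.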


Monitoring

  • The Issue: How can we monitor an environment we have no access to? How can we even retrieve crash logs?

  • The Mitigation: You can't know whether you have issues in a totally isolated environment, so monitoring has to be part of the solution you provide to your customer, with clear indications when something goes wrong. An on-prem monitoring solution must be in place, one that covers everything from the machine's free disk space to the metrics you'd normally monitor in the cloud. It should be easily customizable, with scripts you can add to increase visibility.

Once again, there are many solutions for this. At Claroty, we use Datadog for one of our cloud solutions, with plans to utilize its on-premise capabilities as well. Just make sure critical alerts are clearly shown to users so they can contact the relevant support. It's a lot nicer to get a big warning that the system is using 80% of its storage than to watch it suddenly crash at 100% usage.
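As a concrete example, a disk-space check like the one behind that "80%" warning can be done with Python's standard library alone. This is a minimal sketch under assumptions of my own: the threshold value and message wording are illustrative, and a real deployment would wire this into the monitoring solution rather than print to stdout.

```python
import shutil

ALERT_THRESHOLD = 0.80  # warn at 80% usage, well before a crash at 100%

def disk_usage_alert(path="/", threshold=ALERT_THRESHOLD):
    """Return a warning string if disk usage crosses the threshold, else None."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= threshold:
        return (f"WARNING: {path} is at {used_fraction:.0%} of its storage; "
                "contact support before the system runs out of space.")
    return None

# Usage: run periodically (e.g. from cron) and surface the message to the user.
message = disk_usage_alert("/")
if message:
    print(message)
```

Running this on a schedule and surfacing the result prominently in the UI is what turns a silent 100%-full crash into an actionable early warning.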

Let's say there is an actual bug in your code. Debugging remotely can be slow and troublesome, and often the system's logs give you only half of the picture. At Claroty, we have a script that dumps all the relevant application data, then encrypts and packs it. We then unpack it on a VM and get a sort of replica of the customer's application at the moment of the dump (logs, database, etc.).

While not the cheapest option, cloud providers today actually make this possible: you can easily copy the VM/files and use the cloud to build a customer replica. This process is most useful when you want to safely and easily debug the full state of your application, avoiding the latency of remote debugging.
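The "dump, pack, unpack on a VM" flow described above can be sketched with the standard library. This is an illustrative skeleton, not Claroty's actual script: the paths are hypothetical, and the encryption step (which a real script would apply before the archive leaves the customer's machine, e.g. with a support public key) is deliberately left out.

```python
# Sketch of a support-dump packer: collect relevant application state
# (logs, database snapshot, config) into one archive that can later be
# unpacked on a VM to replicate the customer's environment.
import tarfile
import tempfile
from pathlib import Path

def pack_support_dump(paths, out_dir):
    """Pack the given files/directories into a gzipped tarball."""
    archive = Path(out_dir) / "support_dump.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for p in paths:
            tar.add(p, arcname=Path(p).name)
    return archive

# Usage: pack a (stand-in) log file and inspect the archive's contents.
with tempfile.TemporaryDirectory() as tmp:
    logs = Path(tmp) / "app.log"
    logs.write_text("boot ok\n")
    archive = pack_support_dump([logs], tmp)
    with tarfile.open(archive) as tar:
        print(tar.getnames())  # ['app.log']
```

On the support side, unpacking the same archive into a VM gives you a frozen snapshot of the application's state to debug at leisure, instead of poking at a live remote system.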

Machine size

  • The Issue: Limited on-premise resources don't really allow you to scale. You might deploy on a big machine, but it's never enough, and you definitely can't tell customers to keep swapping out the machine the solution is deployed on!

  • The Mitigation: The system has to fit its data to the spec of the machine it's deployed on. Limits and retention are your best friends for ensuring the system won't be overwhelmed. This becomes especially important if we remember our first issue: because we can't control upgrades, database information may become stale or irrelevant, so you'll need to set proper retention and limit policies.

There are several areas where you'd most likely want to enforce this:

  • Logs (logrotate)

  • Files that might be created by the application

  • Database tables (limit the number of rows)

  • Number of processes (match the process count to the machine's specs)
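For the database-table case, a row-count retention policy can be as simple as periodically deleting everything older than the newest N rows. The sketch below uses an in-memory SQLite database as a stand-in for the application's real database; the `events` table, column names, and `MAX_ROWS` value are all illustrative.

```python
# Sketch of a row-count retention policy: keep only the newest rows so
# the table can never outgrow the machine it's deployed on.
import sqlite3

MAX_ROWS = 3  # tune to the machine's disk/memory budget

def enforce_retention(conn, max_rows=MAX_ROWS):
    # Delete every row except the max_rows newest (highest ids).
    conn.execute(
        "DELETE FROM events WHERE id NOT IN "
        "(SELECT id FROM events ORDER BY id DESC LIMIT ?)",
        (max_rows,),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO events (msg) VALUES (?)",
                 [(f"event {i}",) for i in range(5)])

enforce_retention(conn)
remaining = [r[0] for r in conn.execute("SELECT id FROM events ORDER BY id")]
print(remaining)  # [3, 4, 5]
```

Run on a schedule (alongside logrotate for logs and a cleanup job for application files), this keeps storage bounded no matter how long the customer postpones an upgrade.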


Add to the monitoring solution indications that the system is about to hit one of its limits, and watch whether it is maxing out machine resources. That way you can tune the limits for different system parameters and make sure the system always runs in the best possible way.

Wrapping up

Developing on-premise applications is a difficult task. But going forward, software developers will have to understand the need to write hybrid applications that run both in the cloud and on-premise.

The good news is that, beyond the tips above, cloud providers are creating more and more solutions to keep cloud and on-premise environments as similar as possible, making deployments easier.

Having said that, developing applications like these still presents a unique challenge, forcing you to rethink the way you've built your applications so far.

