How to Make a Sales Tool?

The market is full of workflow engines that are perfect for specific systems, either because they are tightly integrated into those systems or because they are designed for one specific use case. There is no universal solution that fits all needs, and building something from scratch is not always an option either. Here is how we overcame this challenge at The IT Solutions.

Business Problem: How to Manage Data Efficiently?

One of our clients needed a way to manage their sales process efficiently. What exactly does this mean?

First and foremost, they needed a platform that let them enter data into a form, with the option to pause and continue later. Second, there are numerous external data sources that the agents used to look up customer data manually. Integrating these could help them work faster, increasing productivity. These integrations rely on the data entered by the user, so data consistency throughout the process is a must-have.

Goals and Requirements

The business problem made it clear that we needed a sales workflow. Each step had to be executed in a specific order: if a previous step is edited, the data entered afterward can no longer be trusted. But how do we store this?

Mixing partially completed and consistent data in persistent storage is a bad idea because it is hard to tell them apart. Managing and maintaining such a mix, from both the data and the code perspective, is cumbersome and unreliable.

We had to look at what the industry had to offer. The main issue with the workflow engines currently on the market is their inability to adapt to changing software: even a minor change in the flow itself can cause data loss, technical debt, or inefficient development in the longer term.

The requirements were collected:

  • The solution needs to be resilient and easy to scale to match the load that is on the system.
  • The sales process has to be flexible: if a flow is interrupted, it can be picked up where it was left off.
  • If an external data source is not available, any automated step needs to have the option to be retried at a later point without data loss.
  • Changes to the flow need to be supported without breaking existing executions.

Solution: Uber's Cadence

This is how Uber’s solution, Cadence, came into the picture. As Uber’s food delivery service relies on it, it has proven to work at large scale.

The engine is implemented in Go, which is famous for its speed and reliability, and it does not require a lot of resources. During mealtimes, Uber experiences usage peaks similar to what we expected in our application. The company was kind enough to make the code publicly available and free to use, even commercially, and it offers SDKs in Go and Java. Cadence quickly turned out to be a great candidate.

But I hear you ask: “How does this work?”

The core concept is how your application communicates with the workflow. As part of your workflow implementation, you can create custom signals, which can change the state of the workflow and process any data that can be represented in JSON format. Stored data is only useful if you can access it; this is what queries are for. Queries can access the workflow state, then read, transform, and return the data stored in it. So how do you get started with Cadence? The SDK provides the following building blocks to implement and interact with your workflow:
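To make the signal/query split concrete, here is a plain-Java sketch of the idea: a signal handler mutates workflow state, while a query handler only reads and transforms it. This is a conceptual model with hypothetical names, not actual Cadence SDK code (the real SDK marks these with annotations on a workflow interface).

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual model of signals and queries -- not the Cadence SDK itself.
// A signal mutates workflow state; a query reads it without changing it.
class SalesWorkflowState {
    private final List<String> completedSteps = new ArrayList<>();

    // Signal handler: changes the workflow state.
    void signalStepCompleted(String stepName) {
        completedSteps.add(stepName);
    }

    // Query handler: reads and transforms state, never mutates it.
    String queryCurrentStep() {
        return completedSteps.isEmpty()
                ? "NOT_STARTED"
                : completedSteps.get(completedSteps.size() - 1);
    }
}
```

The separation matters because queries must be side-effect free: they can be called at any time without being recorded in the workflow history.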


**Worker.** A Cadence worker is responsible for executing the workflow and storing its state in memory. It does not have to be publicly available; it only needs access to Cadence. If you register multiple workers, the workflow engine acts as a broker and delegates tasks to a preferred worker host.


**Workflow client.** A workflow client exposes the workflow to your application and can send signals and queries to it.


**Activity.** A chunk of code that wraps the integration of external data sources, rapidly changing data, or any part of the system where deterministic execution cannot be guaranteed.

Once a workflow is started, its state is stored in the worker process’s memory, so it is quick to access. Any active workflow can be kept in memory and its data retrieved from there. To achieve this, the workflow engine needs a sticky execution policy, tying a workflow instance to a worker host.
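One of the requirements above was that an automated step must be retryable when an external source is unavailable. Here is a minimal plain-Java sketch of that retry idea; the helper name is hypothetical, and in practice Cadence lets you configure activity retries declaratively rather than writing this loop yourself.

```java
import java.util.function.Supplier;

// Minimal retry wrapper for an activity-like call to a flaky external
// data source. Hypothetical sketch -- Cadence handles retries for you.
class RetryingActivity {
    static <T> T callWithRetry(Supplier<T> externalCall, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return externalCall.get();   // success: return immediately
            } catch (RuntimeException e) {
                last = e;                    // source unavailable -- try again
            }
        }
        throw last;                          // give up after maxAttempts
    }
}
```

Because the retry lives inside the activity boundary, the workflow state is untouched while the call keeps failing; no data entered by the agent is lost.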

But what happens when a usage peak ends and the worker process is terminated?

As mentioned previously, Cadence can cope with any data that can be represented as a JSON document. The workflow engine notices that the worker has gone offline and looks for another available worker. For every so-called **side effect** (a collective phrase for signals and activities that can alter the workflow state), the parameters are serialized and stored in the Cadence workflow history. When a worker receives an existing but unknown workflow execution, it takes the whole history and replays it, re-applying every signal that happened previously. To make this possible, the workflow must be deterministic: every external integration needs to be persisted in the workflow history so a replay can restore the workflow state. The parameters and the result of every activity are stored as well, and the activity implementation is not invoked during replay. The same goes for every side effect.
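The replay mechanism described above can be sketched in a few lines of self-contained Java. This is a simulation of the principle, not Cadence internals: on the first run, each activity result is recorded in the history; during replay, the recorded result is returned and the activity implementation is never invoked.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Simulation of history-based replay (not Cadence internals).
// First execution records activity results; replay reuses them.
class WorkflowHistory {
    private final List<String> recordedResults = new ArrayList<>();
    private int cursor = 0;
    private boolean replaying = false;

    String executeActivity(Supplier<String> activity) {
        if (replaying && cursor < recordedResults.size()) {
            return recordedResults.get(cursor++); // replay: skip the real call
        }
        String result = activity.get();           // first run: invoke and record
        recordedResults.add(result);
        return result;
    }

    void startReplay() {
        replaying = true;
        cursor = 0;
    }
}
```

This is also why determinism matters: the replay only produces the same state as the original run if the workflow code takes the same path through the history every time.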

Our Solution

We created a Spring Boot-based application integrated with Cadence. When a sales flow is started by an agent, a snapshot is made of the prices stored in the application; this way, no data change in the system will affect the running workflows. Every field the agent is required to fill in ends up in a signal, which can trigger activities in the background if needed. The result is presented to the user, indicating a successful operation.
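The price snapshot mentioned above can be illustrated with a short plain-Java sketch (class and field names are hypothetical): the flow copies the prices into its own immutable state at start, so later changes in the application cannot affect a running workflow.

```java
import java.util.Map;

// Hypothetical sketch of the price-snapshot idea: copy prices into the
// workflow state at start so later price changes cannot affect the flow.
class SalesFlow {
    private final Map<String, Integer> priceSnapshot;

    SalesFlow(Map<String, Integer> currentPrices) {
        this.priceSnapshot = Map.copyOf(currentPrices); // immutable copy
    }

    int priceOf(String item) {
        return priceSnapshot.get(item);
    }
}
```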


There is no perfect solution, and Cadence is no exception. As the base concept of Cadence is that its workflows need to be deterministic and reproducible, every change made in the implementation needs to support this concept. Cadence has a tool to create a version flag and run a different code branch when a new version is executed. Once the old executions are terminated or completed, these branches can be removed. This creates technical debt: the legacy code needs to be maintained and removed later. The developer also needs to keep in mind whether a change is backward compatible, as it may break existing flows.
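The branching pattern behind that version flag looks roughly like this plain-Java sketch. In the real Java SDK the version comes from the workflow history (old executions replay with the version recorded at their first run; new executions get the latest); here the version is simply passed in, and the method names are hypothetical.

```java
// Sketch of the version-branching pattern (simplified, not the SDK itself).
// Old executions keep running the legacy branch until they finish; new
// executions take the new branch. The legacy branch is the technical debt.
class VersionedWorkflow {
    static final int DEFAULT_VERSION = -1; // executions started pre-change

    static String computeDiscount(int version) {
        if (version == DEFAULT_VERSION) {
            return "legacy-discount"; // old branch, kept until old runs end
        }
        return "new-discount";        // branch for the new version
    }
}
```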

The Future of Cadence

There is an improved version of Cadence, maintained as a separate project, called Temporal. The concept is the same, although there are some major internal differences. Here is one of Temporal’s co-founders’ thoughts on this topic: upgrading from Cadence to Temporal is almost like a version upgrade of Cadence for your codebase. Changing the engine itself is a more radical step, as there is no way to migrate data from Cadence. If you need to support existing flows, you need to consider your options.


Despite the flaws Cadence brings to the table, working with it turned out to be effective both for the development team and the end customer. Automating the collection of data, so the agents can see it all at once, increased their productivity. The time to create one sale with this tool has been reduced from almost an hour to ten minutes.
