The IT world held its breath as the first LLMs (Large Language Models) started writing code, creating graphics and introducing artificial intelligence to people who had never programmed. As a result, more and more companies started to bet on AI and automation.
Unfortunately, the changes we see in companies are mostly driven by the fear that competitors will start using AI and overtake them. Large companies are better able to withstand this pressure because they often have their own AI specialists or can hire enough of them in a relatively short time.
What is Process Discovery
Process discovery is a method of studying how business processes flow through an organisation. In practice, the term usually refers to process mapping with notations such as BPMN (Business Process Model and Notation).
So far, drawing maps in this way has allowed ‘non-technical’ people to communicate better with the different departments of developers and analysts involved in, for example, digitising business processes.
Although these notations are still used, there is a growing trend to simplify the whole map and draw it in forms such as a service blueprint.
Understanding these methods will allow you to better understand the rest of this article, which focuses on the concepts of task mining and process mining, methods directly related to AI and automation.
How can we identify business processes with potential for the use of AI or automation tools?
This question has come to the fore from 2022 onwards, and the concepts mentioned above, namely process mining and task mining, are becoming more and more common.
Both terms refer to methods to find out (based on quantitative data) where we are potentially “losing” efficiency. Before we look at a specific example, let’s decipher the two terms.
Task mining
This is the process of collecting information about the actions performed by an individual person or employee in the organisation. In practice, it involves running a system in the background that analyses and, if necessary, monitors the actions performed by a particular user. This makes it possible to build an overall pattern of behaviour for an individual or a group of people, and to analyse both the efficiency of task execution and the logic of their workflow.
As an example, we can give a simple acceptance of an invoice:
- the user searches for the invoice in the system (~5 min),
- opens the invoice folder (<1 min),
- analyses the details on the invoice (~10 min),
- clicks to accept the invoice (<1 min).
Such a simple process can take a few minutes or a few hours, depending on what happens between the steps recorded in the system. Such information, correlated with individual data across the organisation, can give us an idea of the average time it takes to accept an invoice.
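A minimal sketch of how such step durations can be derived from a task-mining log. The log structure and timestamps below are hypothetical, chosen to match the invoice example above:

```python
from datetime import datetime

# Hypothetical task-mining log for one invoice: each recorded user
# action with its timestamp, as a background monitoring agent might capture it.
log = [
    ("search for invoice", "2024-03-01 09:00"),
    ("open invoice folder", "2024-03-01 09:05"),
    ("review invoice details", "2024-03-01 09:06"),
    ("accept invoice", "2024-03-01 09:16"),
]

stamps = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for _, ts in log]

# Duration of each step = gap between consecutive recorded events.
durations = [int((end - start).total_seconds() // 60)
             for start, end in zip(stamps, stamps[1:])]
for (action, _), minutes in zip(log, durations):
    print(f"{action}: {minutes} min")

total = sum(durations)
print(f"total: {total} min")
```

Aggregating such totals across many users and invoices is what yields the organisation-wide average acceptance time.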
By multiplying this type of information by the number of invoices to be accepted on a given day (scale), we can better assess the potential benefits of automating the process.
With automation, the same process could look like this:
- the user receives a notification of a new invoice in the folder (automation),
- receives a summary of the items on the invoice with an AI recommendation along with the notification (AI action),
- reviews the invoice (~5 min),
- clicks to accept (<1min).
In this simple example, we save 11 minutes per task performed. Translating this to, say, 30 such tasks performed by one user, the savings are as follows:
| | Without automation – task completion | With automation – task completion |
|---|---|---|
| 1 invoice | ~17 minutes | ~6 minutes |
| 10 invoices | ~170 minutes (approx. 2 h 50 min) | ~60 minutes (1 h) |
| 100 invoices | ~1,700 minutes (approx. 28 h 20 min) | ~600 minutes (10 h) |
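The arithmetic behind the table is simple scaling; a small sketch using the illustrative per-invoice times from the example above:

```python
# Illustrative per-invoice times from the example above.
MANUAL_MIN = 17      # search + open + review + accept
AUTOMATED_MIN = 6    # notification + AI summary + review + accept

def savings(invoices: int) -> tuple[int, int, int]:
    """Return (manual total, automated total, minutes saved) for a volume."""
    manual = MANUAL_MIN * invoices
    automated = AUTOMATED_MIN * invoices
    return manual, automated, manual - automated

for n in (1, 10, 100):
    manual, automated, saved = savings(n)
    print(f"{n:>3} invoices: {manual} min -> {automated} min (saves {saved} min)")
```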
Of course, this simple example is only here to illustrate how the element of scale/repetition of a task affects the tangible results in terms of time savings.
Seeing the effects of task mining, it is natural to move on to process mining.
Process mining
In a nutshell, process mining is the analysis of events recorded throughout a system. These events are usually stored in a database and analysed by specialised algorithms. Here we are no longer analysing the actions of individual employees, but rather company-wide activities, such as the turnaround time for delivering a courier package to a customer. This is where we encounter the first major problem.
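To make this concrete, here is a toy version of the core process-mining computation: deriving per-case cycle times from an event log. The log rows, activity names and timestamps are invented for the parcel example; real tools work on exports of thousands of such rows:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, timestamp) rows, as a
# delivery system might export them for process mining.
events = [
    ("P1", "parcel registered",     "2024-03-01 09:00"),
    ("P1", "picked up by courier",  "2024-03-01 14:30"),
    ("P1", "delivered",             "2024-03-02 10:15"),
    ("P2", "parcel registered",     "2024-03-01 11:00"),
    ("P2", "picked up by courier",  "2024-03-02 16:00"),
    ("P2", "delivered",             "2024-03-04 09:45"),
]

# Group timestamps by case, then measure first-to-last event per case.
cases = defaultdict(list)
for case_id, activity, ts in events:
    cases[case_id].append(datetime.strptime(ts, "%Y-%m-%d %H:%M"))

for case_id, stamps in sorted(cases.items()):
    hours = (max(stamps) - min(stamps)).total_seconds() / 3600
    print(f"{case_id}: {hours} h from registration to delivery")
```

Outliers in these cycle times (such as the warehouse example later in this article) are exactly the points worth investigating further.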
Although the principle is similar to task mining, transferring all the data from all the systems requires ensuring that it does not violate company rules (compliance), and adds the tedious work of reconciling data if, for example, date formats are not consistent between systems (although AI is increasingly helping us with this).
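The date-format reconciliation mentioned above can be sketched as follows. The list of known formats is hypothetical; a real project would build it from the actual source systems, and would also have to handle genuinely ambiguous cases (e.g. 05/03 read as day/month versus month/day):

```python
from datetime import datetime

# Formats assumed for three hypothetical source systems.
KNOWN_FORMATS = ["%Y-%m-%d", "%d.%m.%Y", "%m/%d/%Y"]

def normalise(raw: str) -> str:
    """Return an ISO date, trying each known source format in turn."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {raw!r}")

print(normalise("2024-03-05"))   # ISO input passes through
print(normalise("05.03.2024"))   # European dotted format
print(normalise("03/05/2024"))   # US slashed format
```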
The result is not always satisfactory either, because dry data gives us no context. It shows us the journey of, for example, the parcel mentioned above, through successive ‘gateways’ such as dispatch and collection by the courier, but it does not answer the ‘why’ questions. Alongside the purely digital process there are more organic interactions, e.g. a courier receiving the optimal route to the customer from the pilot, or simply a parcel being handed over during a lunch break.
Of course, you can try to account for all of this, but the amount of data required to reconstruct a workable process is often beyond the capabilities of both humans and AI. The idea of process mining remains a great way to identify points of downtime or even potential compliance breaches, but this data alone is usually not enough to start an informed and effective automation or AI implementation.
Even at this early stage, we can see that this approach will only work well if we monitor every aspect of our business and have experienced data analysts.
Value between 0 and 1
One problem that organisations run into is modelling their process maps based only on numbers, or worse, their own beliefs. While the use of numbers seems logical and elegant at first glance, real business processes are more complex than ‘0’ and ‘1’. Discovering the value between 0 and 1 is what we do at EDISONDA.
For example, if we are in the business of selling products and we track the delivery of parcels from a warehouse, we may find, based on hard data, that parcels are being held 2 days longer in one of our warehouses than in others. The data alone may not give us enough information to understand why this is happening and what we can do about it.
To put this data into context, we need to take a closer look at the warehouse. After initial interviews with staff and an analysis of the journey of such a parcel, we may be surprised to find that we do not need to spend a lot of money on autonomous robots, and that all we need to do is ditch paper forms in favour of an efficient IT system, or plan a small part of the process a little differently.
Stories like this come up almost every day in our work. Companies looking to save money get caught up in higher costs by investing in something that is riding a wave of popularity. The result is often a failure to achieve the intended results, and it is much harder to convince the team to automate processes the second time around when the first attempt has failed.
Process Discovery Workshops by EDISONDA
The Process Discovery Workshop is something that we believe combines the concept of process mining with a layer of qualitative research. Our aim is to quickly understand and test the need for automation, and only then move into full-scale project mode.
It is all about the values between zero and one. Real business processes require talking to teams, conducting interviews and observing what work looks like. After all, we wouldn’t want to automate a process that doesn’t benefit anyone and costs more than expected.
During the process discovery workshops, we focus on combining quantitative and qualitative data. The whole process consists of 4 milestones, broken down into smaller tasks:
Step 1: Gathering information
Before we start the workshop, we want to understand the business environment and what part of the process we want to address. This usually takes the form of a thesis, e.g. ‘We believe that automated invoice summarisation will allow us to speed up the whole process of accepting expense invoices’.
We collect several such theses within a given process slice. This allows us to prepare a workshop in which we try to clarify the requirements and the purpose of the changes. For more complex problems, we also ask which systems are currently in use and which should be integrated for potential process automation.
Since the workshop is only about ‘sketching’ a first process map, this knowledge is already sufficient for us to move on to the workshop.
Step 2: Workshops
The first workshop usually lasts about 4 hours, during which we focus on a number of key points:
- we describe the purpose(s) of the automation,
- list the potential risks,
- list the process actors (people or systems currently doing the work we want to automate),
- define the project’s key performance indicators (KPIs),
- write down the current slice of the process.
Only after this workshop can we begin to assess which potential solution (e.g. classical ML or an LLM) will work best.
Step 3: Questions after the workshop
After the workshop we try to get the necessary details. Here there is room for more detailed questions, e.g. about data structure, potential information noise, tools to be included in the automation, etc.
Common questions include:
- Does the company have sufficient implementation skills?
- Are there any legal issues we need to be aware of?
- What is our budget?
All this leads us to the next step, which is…
Step 4: Recommendations
Recommendations depend mainly on the capabilities, constraints and results of the workshop. They usually include information on the potential costs of implementing and maintaining the solution, whether we are talking about external services or building our own IT architecture adapted to the use of AI.
The recommendation is accompanied by a solution blueprint, which can take many forms. From something as simple as a service blueprint to an IT architecture and links between different systems.
Implementation process and proof of concept
Depending on the size of the project, implementation can take anywhere from a few weeks to a few months.
One of the first milestones is the human-factors research and user interviews mentioned above. This stage allows us to check that assumptions about how the work should be done match reality. In our experience, we often have to pause work at this stage to better understand the real business processes.
Another important element of implementation is planning a proof of concept, which allows us to test in practice how the target solution might work. It allows us to test the validity of a new implementation relatively quickly. At this stage, we pre-select trained AI models or plan the automation flow.
In the case of AI, this is a critical stage where we must already have access to the data we will use to adapt the model or to connect an appropriate RAG (Retrieval-Augmented Generation) system, i.e. one that retrieves supporting information from our chosen database.
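For illustration, here is the retrieval step of a RAG pipeline in miniature. Real systems rank documents with vector embeddings; a simple word-overlap score stands in here, and the documents and query are invented for the invoice example:

```python
# Hypothetical document base the model would draw on for grounding.
DOCUMENTS = [
    "Invoices above 10,000 PLN require approval by two managers.",
    "Expense invoices must be submitted within 14 days of purchase.",
    "Courier parcels are dispatched from the warehouse every weekday.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(DOCUMENTS,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

context = retrieve("When must expense invoices be submitted")
print(context)
# The retrieved context would then be passed to the LLM alongside
# the user's question, grounding the answer in company data.
```

This is why data access must be sorted out before the proof of concept: without the document base, there is nothing for the retrieval step to work on.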
The last major milestone is the implementation of the proof of concept. At this stage we test how the solution performs on test data and, if all goes well, implement the solution more widely within the organisation.
What to consider when implementing AI or automation in a business?
Technology alone often does not solve organisational problems. No machine will be able to catalogue invoices correctly, for example, unless the organisation has clear reference points we can rely on.
As well as the obvious issues, it’s also important to consider the potential barriers to using AI:
- Business barriers – how AI affects the way our business operates and how it is perceived by employees and customers.
- Regulatory barriers – depending on the region in which we plan to implement automation or AI systems. This includes AI legislation.
- Technological barriers – the risks associated with the fact that some solutions, particularly those based on LLMs such as ChatGPT, have shortcomings that cannot, for now, be fully avoided by technical means.
- Environmental barriers – the use of AI has a significant environmental impact through the use of electricity and resources to generate it. For example, generating 1,000 images with an advanced AI model such as Stable Diffusion XL is responsible for about as much carbon dioxide as driving 6.5km in an average petrol car.
As you can see, using AI and automation for commercial purposes can be challenging at times. However, the Process Discovery Workshop minimises the risk of unsuccessful investments and focuses on achieving specific goals, even if they are only the first step towards greater business process automation.