Iain Levy, Senior Technical Lead, Seequent.

When discussing Leapfrog EDGE with clients I’m often asked, “What about macros?” or, “Will Leapfrog EDGE be getting macros in the future?” In a similar vein, a common request is for a swish multidomain tool that lets you quickly carbon-copy parameters from one domain and apply them to several others.

This has led me to reflect on why macros are so ingrained in our estimation workflows, and on whether this has shifted the focus onto the processes of resource estimation to the detriment of the geology and the reasoning behind it. As Dr Jacqui Coombes, General Manager – Innovation at METS Ignited, recently commented: ‘The future of resource estimation will not focus so much on the process of resource estimation but on the reasoning of it’.

This blog is part one of a two-part series which considers the standard approach undertaken for resource estimation, and the value of a new approach where reasoning and validation play a more integrated role throughout the estimate.

Why Macros?

So why are macros so widespread within our industry? Is it because they are the most efficient and effective way possible to create an estimate? Is it because it’s just the way it has always been done? Or is it because this is all that has been offered to date?

Most estimation packages offer a comprehensive toolkit of independent features, none of which are linked. To use them in any meaningful way, the user must first build a workflow around them – in other words, write macros – before an estimate can be completed. Even quickly examining a specific aspect, domain, or outcome involves a significant amount of clicking, exporting, importing, and recoding.
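To make that concrete, here is a minimal sketch in Python of what such a user-written macro typically looks like: a rigid script that chains otherwise unconnected features together for every domain at once. The function names and the trivially simple maths are entirely hypothetical stand-ins, not any real estimation package’s API.

```python
# Hypothetical stand-ins for two independent "features" that the
# software itself does not link together.

def composite(samples, length):
    """Stand-in for a compositing step: average samples into fixed runs."""
    return [sum(samples[i:i + length]) / length
            for i in range(0, len(samples) - length + 1, length)]

def krige(composites, mean_weight=0.5):
    """Stand-in for an estimation step: a crude shrink towards the mean."""
    m = sum(composites) / len(composites)
    return [mean_weight * c + (1 - mean_weight) * m for c in composites]

# The "macro": a linear pipeline the user must write to glue the
# features into a workflow, run over all domains in one pass.
domains = {"oxide": [1.2, 0.8, 1.5, 1.1], "fresh": [2.4, 2.1, 2.6, 2.3]}
composited = {d: composite(s, 2) for d, s in domains.items()}
estimated = {d: krige(c) for d, c in composited.items()}
```

Changing one parameter for one domain means editing and rerunning the script, which is exactly why these glue scripts end up owning the whole workflow.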

Due to this limitation, macros have become cemented as the primary approach to completing an estimate and now perform three functions for most people:
1. Linking independent features into a functional estimation workflow
2. Making the estimate repeatable
3. Making the estimate auditable

We will first discuss macros from a workflow perspective, look at their shortcomings, and ask whether there is a better way. We’ll then go on to discuss the repeatability and auditability aspects.

Feature focus

Most estimation software has a heavy focus on feature development rather than on workflow, which has led to estimation becoming heavily macrotised just to make it functional. There is no intuitive way to immerse ourselves in the estimation journey; instead we have to follow a rigid, linear process. While we could work through each domain individually, from domaining through to validation, doing so would be very tedious, so generally each step is completed for all domains before moving on to the next. This shackles the estimator: it is hard to refer back and rerun a previous step based on insights gained further down the track. Treating all domains at once, step by step, also makes it difficult to build a detailed picture of each domain until the end of the entire process – often when it is too late to make the changes needed.

It also means that to get a meaningful estimate you must be able to write code. The test of a good resource geologist then rests not just on their geological and statistical ability and understanding, but also on their programming skills. People who can’t write code but have a strong understanding of the geological framework are left out.

A different approach – Making it immersive

Would a better approach be a fully immersive process? Instead of taking a linear, stepwise approach focused on progressing through a set of features, we focus on each domain through the full estimation process. This gives us the opportunity to really investigate each domain, to understand the distribution and relationships of grades, and to see how they relate to our geological interpretation. We can then quickly test the sensitivity of our important domains to various parameters and learn what truly impacts the grades, distributions, and quality of our estimate and what does not. Every decision can be followed forward to see how it impacts the block estimate – not just the composite data or a couple of random blocks. Finally, it means that validation can occur while we are creating the estimate, so we can alter things based on that validation – all the way back to refining our domains if needed.
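One way to see the difference is as a simple loop inversion. The sketch below is plain Python with illustrative step and domain names (not a Leapfrog EDGE API): under the stepwise ordering no domain is validated until every domain has passed through every earlier step, whereas under the domain-focused ordering the first domain is validated almost immediately, early enough to act on what we learn.

```python
# Two orderings of the same work; step and domain names are illustrative.
steps = ["domain", "composite", "variography", "estimate", "validate"]
domains = ["oxide", "transition", "fresh"]

# Stepwise (macro-style): complete each step for all domains before
# moving on, so validation only arrives at the very end.
stepwise = [(step, dom) for step in steps for dom in domains]

# Immersive (domain-focused): carry each domain through the whole
# process, so the first domain is validated while there is still
# time to refine it.
immersive = [(step, dom) for dom in domains for step in steps]

print(stepwise.index(("validate", "oxide")))   # task 12 of 15
print(immersive.index(("validate", "oxide")))  # task 4 of 15
```

The total work is identical in both orderings; only the point at which feedback becomes available changes.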


Coupled with a flexible workflow, the ability to visualise what we are doing at each step is essential. This means relating everything – our 2D graphs, variography, and estimation parameters – back to our geological interpretation and visualising it in 3D, with the steps linked so that we can flow easily from one to the next, quickly visualise the outcome, and validate its effect.

This is a much more logical approach, providing a deeper understanding of how the result relates to the input data, and this in itself gives the practitioner confidence. By the end of the estimation journey we have a better understanding of the individual domains that contributed to the whole.

Repeatability & Auditability

Traditionally, macros have formed the cornerstone of auditability, and it could be argued that their absence is a weakness, as the estimate could be changed more easily. However, much as with a traditional estimate, all that is needed to avoid these issues with the immersive approach is a robust model management system. Leapfrog offers this added protection and control through its model management platform, Central, which can centrally store models, manage version control and user permissions, and track and audit decision making. Looking at repeatability and auditability in more detail:

Repeatability ensures that the estimate can be reproduced from the macros. However, it says nothing about whether the macros were written correctly in the first place; to verify that, we must carefully check every line of code.

Auditability is focused on being able to verify that the parameters reported were actually the ones used in the estimate. While macros do provide a system for this verification – reading through the lines of code and pulling out the required parts – it is not easy to confirm that those parameters were the best choices in the first place.

The immersive approach possible with Leapfrog EDGE means that repeatability and auditability are, in fact, simplified. We can easily verify that the parameters used for each domain are correct within the estimate, and quickly validate them. Repeatability becomes a moot point: if the parameters are correct, you already know that the block model is correct without having to rerun any processes.

Blog 2 – Achieving repeatability and auditability without macros

In the second part of this blog series I will apply this flexible, integrated, domain-based workflow approach to a real example using Leapfrog EDGE.

As a final note for this part: since Leapfrog EDGE isn’t macrotised, you might think that applying parameters from one domain to another isn’t easy or intuitive. Don’t worry – with a domain-focused approach it is easy to copy the estimation product from one domain and apply it to another. This will be covered in Part 2.

Ready to experience Leapfrog EDGE yourself? Watch this six-minute demo video.

2 thoughts on “Part 1: Where have the macros gone? An immersive approach to estimation”

  1. A very well reasoned blog. I particularly like the reference to “The test of a good resource geologist then becomes not just around their geological and statistical ability and understanding, but their programming skills. Therefore, people who can’t write code but have a strong understanding of the geological framework, are left out.” Unfortunately those with strong programming skills are often not intuitive geologists!

  2. I think the first step is to make sure of the reliability of the data before you write macros; after that, the macros should form a workflow, and obviously the macros will be checked by a senior or your boss.
