Iain Levy, Senior Technical Lead, Seequent.

In the first blog, we talked about a new, more immersive approach to resource estimation. In this second part, we use real-world data to show how it works and how this deeper understanding benefits the user. We will start by setting up a domain for the immersive approach, and then apply it to investigating the search parameters.

Before we get started though, here’s a quick outline of the mineralisation:

There are two major pods of massive sulfide mineralisation running in a NE-SW trend and plunging to the NE (Figure 1). There is also a small pod of mineralisation to the north with limited drilling. We will be looking at the northern (red) pod to start. While it is a base metal deposit, we are going to be estimating gold, as it has more interesting statistics and trends than zinc or lead. In terms of drilling, we are looking at a grade control situation with high-density drill spacing.

 

Figure 1: (Top): Mineralisation model of the deposit looking west. There are two major ore bodies plunging to the NE. (Bottom): Composites of Au within the domain of interest (red domain) looking west.

Getting set up

The central nervous system of the immersive approach is the domain-focused workflow and, within that, the domained estimation function, which is where all the parameters and controls are housed (Figure 2). Because everything is linked, any change we make flows downstream. This means we can set up a block model (BM) from the outset and spatially validate as we go, gaining a much greater understanding of the spatial sensitivity of the estimate to parameter changes. We can easily gather information about the spatial distribution of grades and sampling at the start, and identify any areas of concern for the estimate.

Figure 2: Example of a domained estimation function within Leapfrog showing the process.

Figure 3 shows how we can create validation steps from the outset; in this case we use swath plots to test the assumptions and sensitivity of our estimate as we go. Here they indicate that we do not have stationarity within the domain. Since the domains were created for zinc, not gold, this lack of stationarity is not too surprising. We could, at this point, go back and investigate further domain control, but for the purposes of this blog we will continue with these domains.


Figure 3: Swath plots of the initial estimate in the easting (left) and northing (right) directions. Input composites (light blue), declustered composites (dark blue), and the initial grade estimate (red).
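To make the swath-plot check concrete outside of the software, here is a minimal sketch in Python/pandas of how such a comparison can be computed. The table and column names (composites, blocks, au_ppm, declust_wt and so on) are hypothetical placeholders, and this is not Leapfrog's implementation – just the underlying idea of binning composites and block estimates along a coordinate and comparing their mean grades.

```python
# Minimal swath-plot sketch (hypothetical column names; not Leapfrog's implementation).
# Composites and block estimates are binned into swaths along one coordinate and their
# mean grades compared; a systematic offset between the lines suggests non-stationarity
# or a conditional bias in that part of the domain.
import numpy as np
import pandas as pd

def swath_means(df, coord="easting", grade="au_ppm", width=20.0, weights=None):
    """Mean grade per swath of `width` metres along the chosen coordinate."""
    bins = np.arange(df[coord].min(), df[coord].max() + width, width)
    mids = bins[:-1] + width / 2.0
    swath = pd.cut(df[coord], bins=bins, labels=mids, include_lowest=True)
    if weights is None:
        return df.groupby(swath, observed=True)[grade].mean()
    # Weighted mean, e.g. using declustering weights for the composites.
    return df.groupby(swath, observed=True).apply(
        lambda g: np.average(g[grade], weights=g[weights])
    )

# composites: one row per composite; blocks: one row per estimated block.
# comp = swath_means(composites)                        # raw composites
# decl = swath_means(composites, weights="declust_wt")  # declustered composites
# est  = swath_means(blocks, grade="au_est_ppm")        # block estimate
# pd.concat({"composites": comp, "declustered": decl, "estimate": est}, axis=1).plot()
```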

Search ranges

Now, although top cutting, variography, min/max samples and so on can all benefit from the same process, we are going to jump a couple of steps and focus on search ranges. Some might say that search ranges are well covered by quantitative kriging neighbourhood analysis (QKNA) and handled quite nicely by macros. However, how many blocks is the QKNA being run on, and where are they located? Often it is only a couple of blocks in the centre of the domain, or the results are grouped into a single range, with spatial trends not considered.
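To illustrate the point, below is a rough sketch of sweeping a search range over every block centroid and mapping basic neighbourhood statistics, rather than relying on one or two representative blocks. It assumes hypothetical block and composite coordinate arrays and a simplified isotropic search rather than a true anisotropic ellipsoid, and it is not QKNA itself (which also examines kriging statistics such as slope of regression); it only shows how quickly the picture changes spatially.

```python
# Sketch: run a simple neighbourhood check at every block centroid instead of a
# couple of "representative" blocks, so the spatial trends are visible.
# block_xyz and comp_xyz are hypothetical (n, 3) arrays of block centroids and
# composite locations; the search here is isotropic for simplicity.
import numpy as np
from scipy.spatial import cKDTree

def neighbourhood_stats(block_xyz, comp_xyz, search_range, max_samples=24):
    """Per-block sample count and mean sample distance within the search range."""
    tree = cKDTree(comp_xyz)
    counts, mean_dist = [], []
    for centre in block_xyz:
        dists, _ = tree.query(centre, k=max_samples,
                              distance_upper_bound=search_range)
        dists = dists[np.isfinite(dists)]  # drop the inf padding for missing samples
        counts.append(len(dists))
        mean_dist.append(dists.mean() if len(dists) else np.nan)
    return np.array(counts), np.array(mean_dist)

# counts_60, _  = neighbourhood_stats(block_xyz, comp_xyz, search_range=60.0)
# counts_140, _ = neighbourhood_stats(block_xyz, comp_xyz, search_range=140.0)
# Blocks where counts_60 is low but counts_140 is healthy are typically the
# edge-of-domain blocks where the two estimates end up diverging.
```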

For this discussion, we will look at two search ranges – one determined by QKNA from a couple of representative blocks (maximum range of 60m), and the other based on the variogram ranges (maximum range of 140m). Comparing the two estimates, we can see that globally there is essentially no change – the global mean only changes by the detection limit of 0.01ppm.

However, this doesn’t mean that there are no material differences. If we look at the individual block scale, we can see that some significant differences occur (up to 4.7ppm). This is because what was optimised for the centre of the deposit (using QKNA) turns out to be a poorer choice around the edges of the domain. These areas show where the estimate is sensitive to the selected search parameters, and we cannot have the same confidence in the estimate there.

  

Figure 4: Difference in the estimate between the two search ranges looking west. (Top): the absolute difference in grade; (Bottom): the percentage difference. The models are filtered to differences of at least 0.25ppm and 10% respectively.
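For readers who want to reproduce this style of check outside of the software, a minimal sketch of the block-by-block comparison behind Figure 4 might look like the following, assuming a hypothetical block model table that carries both estimates (the column names are placeholders).

```python
# Sketch of the block-by-block comparison behind Figure 4 (hypothetical column
# names; the real comparison is done on the block model inside Leapfrog).
import numpy as np
import pandas as pd

def compare_estimates(blocks, a="au_qkna_ppm", b="au_vario_ppm"):
    """Report global means and flag blocks with material local differences."""
    diff = blocks[b] - blocks[a]
    pct = 100.0 * diff / blocks[a].replace(0.0, np.nan)
    print(f"Global mean {a}: {blocks[a].mean():.2f} ppm")
    print(f"Global mean {b}: {blocks[b].mean():.2f} ppm")
    print(f"Largest absolute block difference: {diff.abs().max():.2f} ppm")
    # Flag blocks exceeding either threshold used in Figure 4 for spatial review.
    return blocks[(diff.abs() >= 0.25) | (pct.abs() >= 10.0)]
```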

We can take this even further and test these differences against the anticipated mining cut-off grade. Here we have filtered the results to show where the QKNA estimate calls the blocks below 1ppm Au but the variogram estimate predicts them to be above it. Simply put, if we had a cut-off grade of 1ppm, ore would be misclassified and lost. Reporting these figures, we see that there is potential to misclassify over 5,000 ounces of gold within this domain.

 

Figure 5: Difference in the estimate between the two search ranges looking west. (Top): misclassified mineralisation at a 1ppm cut-off. (Bottom): significant variations in the slope of regression between the two estimates.
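The cut-off check can be sketched in the same way. Assuming hypothetical grade and tonnage columns on the block model, and using the standard conversion of 31.1035 g per troy ounce, the misclassified ounces at a 1ppm cut-off could be tallied roughly as below; the 5,000-ounce figure quoted above comes from the real model, not this sketch.

```python
# Sketch of the misclassification check behind Figure 5 (hypothetical column
# names; for Au, ppm is equivalent to g/t, so tonnes * grade gives grams of metal).
GRAMS_PER_TROY_OZ = 31.1035

def misclassified_ounces(blocks, cutoff=1.0, a="au_qkna_ppm", b="au_vario_ppm",
                         tonnes="tonnes"):
    """Ounces in blocks the QKNA estimate sends to waste but the variogram-range
    estimate calls ore at the given cut-off."""
    lost = blocks[(blocks[a] < cutoff) & (blocks[b] >= cutoff)]
    grams = (lost[tonnes] * lost[b]).sum()  # contained Au using the higher estimate
    return grams / GRAMS_PER_TROY_OZ
```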

This comparison of estimation parameters can be applied to every decision made, identifying where there are major differences, not only in grade but also in the kriging statistics. We can also quickly copy the estimator, retaining the parameters used, so that there is always a record of our decisions. From this process we develop a strong understanding not only of which parameters the estimate is most sensitive to, but also of where, spatially, we carry more risk in the estimate. Once we are happy with our decisions, we can apply the final version of our parameters to our master block model, knowing that the domain has already been validated and is ready for reporting. Figure 1 displays what this domained estimation might look like when complete.

Next domain

Once we have completed the first domain, we have a couple of options for assessing the next one. If it is significantly different, we can start the process again. Alternatively, we can copy the whole estimation object, changing the element or domain as required. All the parameters will flow through and be processed against the new data. From here we can quickly validate that the parameters still hold true and tweak them as needed. This kind of review and tweaking is not easy to assess and implement if you are just copying a line of code in a macro.

Final thoughts

For most of this discussion we have been looking at grades and the differences between them. We are not doing this to pick the option that gives us the highest grades, but to understand how sensitive the grades are, spatially, to the parameter changes. You could also supplement this by comparing any of the kriging statistics to help determine the quality of the estimate. Which grade is right? At the end of the day you must select a set of parameters based on your experience. The value of the immersive approach is that you gain a strong spatial understanding of the sensitivity and relative uncertainty for each parameter used, and you can carry out spatial validation at each step.

Traditionally, uncertainty analysis of grade is handled by simulation. While this gives a good indication of the uncertainty in the estimate for a single set of parameters, based on grade variability, it does not give you the sensitivity to the parameters selected. To do that you would have to run multiple sets of realisations, which is quite time-consuming.