The following notebook demonstrates some of the challenges related to Mineral Resource classification approaches. A variety of approaches, considerations, and calibrations related to classification flagging are shown. The final section demonstrates a method for calibrating gridded spacings from a drill hole spacing study to correspond with the commonly used 3-hole spacing available for classification flagging.
The presented approaches may be extended to other deposits, although no warranty is made regarding error-free implementation. Further, although many techniques are considered and compared, their application here should not be interpreted as a recommendation for use.
Import required modules:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from copy import deepcopy
import rmsp
Set a couple parameters of interest:
# Set to false for interactive viewer
rmsp.GlobalParams["plotting.viewer3d.show_static"] = True
# Set to true for progressbars on intensive tasks
rmsp.GlobalParams["core.log_progress"] = False
Set additional plotting parameters:
rmsp.GlobalParams["core.enable_beta"] = True
rmsp.GlobalParams["plotting.viewer3d.background"] = "white"
rmsp.GlobalParams['plotting.viewer3d.dpi'] = 500
rmsp.GlobalParams["plotting.viewer3d.render_points_as_spheres"] = True
rmsp.GlobalParams["plotting.cmap"] = "Spectral_r"
camera = [[-522.65, -883.05, 79.82, 200.06, 100.13, 97.29, -0.01, -0.01, 1.00], 113.50]
Colormaps that we'll use for classification, spacing and kriging efficiency:
cmap_class = {"Measured": "C3", "Indicated": "C2",
              "Inferred": "C0", "Unclassified": ".7"}
cmap_space = {"bounds": [5., 10., 15., 20., 25., 30., 40.],
              "colors": ["#b2182b", "#ef8a62", "#fddbc7",
                         "#ffffff", "#e0e0e0", "#999999"]}
cmap_ke = {"bounds": [0.1, 0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6],
           "colors": ["#878787", "#bababa", "#e0e0e0", "#ffffff",
                      "#fddbc7", "#f4a582", "#d6604d", "#b2182b"]}
Load the example data:
dh, solid, topo = rmsp.load_example_data('single_vein')
Reduce to data inside the vein solid:
dh = dh[solid.contains(dh)].reset_index(drop=True)
dh.head(2)
DHID | From | To | Vein | Easting | Northing | Elevation | startEasting | startNorthing | startElevation | endEasting | endNorthing | endElevation | Au | u | v | w | flat_w | NS:Au | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | DDH001 | 57.6 | 58.6 | 1 | 228.656235 | 40.628077 | 119.328119 | 228.698987 | 40.472719 | 119.801444 | 228.613537 | 40.783449 | 118.854794 | 3.869551 | 8.094362 | 70.502389 | 20.378544 | 3.348134 | 0.269589 |
1 | DDH001 | 58.6 | 59.6 | 1 | 228.570894 | 40.938837 | 118.381469 | 228.613537 | 40.783449 | 118.854794 | 228.528305 | 41.094239 | 117.908145 | 3.237578 | 8.729573 | 70.302561 | 19.632504 | 2.602094 | -0.167628 |
View:
viewer = dh.view3d(var='Au', representation='Surface')
solid.view3d(viewer=viewer, alpha=0.1, color='.5')
topo.view3d(viewer=viewer)
viewer.set_camera(*camera)
<rmsp.plotting.viewer3d.visualization.Viewer at 0x25b1b1243d0>
Create a grid inside the solid:
grid = rmsp.GridData.from_solid(solid, usize=1, vsize=1, zsize=1)
grid.griddef.to_table()
Easting | Northing | Elevation | |
---|---|---|---|
minimum (m) | 35.61 | 4.9 | 32.69 |
maximum (m) | 332.61 | 159.9 | 192.69 |
size (m) | 1.0 | 1.0 | 1.0 |
number | 300 | 157 | 162 |
For this example the 3-hole rule is used: the spacing at each block is the mean distance to the nearest composite in each of the three closest drill holes, scaled by $\sqrt{2}$. The workflow is as follows:
search = rmsp.Search(min_comps=3, max_comps=3, max_comps_per_dhid=1)
grid['spacing'] = rmsp.IDWEstimator(search).estimate(
grid, dh, 'Au', output=('mean_dist'), copy_meta=False) * 2**0.5
grid.head()
In Grid | spacing | |
---|---|---|
345056 | 1 | 22.465691 |
345057 | 1 | 22.761676 |
345058 | 1 | 23.146055 |
345059 | 1 | 23.613864 |
345060 | 1 | 24.159378 |
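The calculation can be sketched in plain NumPy. The snippet below uses hypothetical composite coordinates and assumes one composite per hole for simplicity (the real search additionally restricts to one composite per hole via `max_comps_per_dhid=1`):

```python
import numpy as np

# Hypothetical composite locations, one per drill hole (x, y, z)
comps = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                  [10., 10., 0.], [30., 30., 0.]])

def three_hole_spacing(block_xyz, comps):
    """Mean distance to the 3 nearest composites (one per hole), scaled by sqrt(2)."""
    d = np.linalg.norm(comps - block_xyz, axis=1)
    return np.sort(d)[:3].mean() * np.sqrt(2)

# A block centred between four holes on a 10 m grid recovers the 10 m spacing
sp = three_hole_spacing(np.array([5., 5., 0.]), comps)  # ~10.0
```

The $\sqrt{2}$ factor rescales the mean nearest-hole distance so that, for a regular grid, the result approximates the nominal grid spacing.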
Plot distribution and oblique view of spacing:
viewer=grid.view3d('spacing', cmap=cmap_space)
viewer.set_camera(*camera)
fig, axes = plt.subplots(1, 2, figsize=(10, 3.4))
grid.histplot('spacing', ax=axes[0])
viewer.show_static(ax=axes[1], cbar=True)
fig.tight_layout()
Apply the classification flags directly to blocks based on DH spacing alone. In this example the drill hole spacing criteria are set as follows:
spac_crit = {"Measured": 15.0,
"Indicated": 30.0,
"Inferred": 60.0}
grid["class_space"] = 'Unclassified'
for rescat, space in reversed(spac_crit.items()):
    grid.loc[grid['spacing'] <= space, 'class_space'] = rescat
grid[['spacing', 'class_space']].tail()
spacing | class_space | |
---|---|---|
7234420 | 5.517965 | Measured |
7234421 | 6.447362 | Measured |
7234719 | 5.090871 | Measured |
7234720 | 5.765975 | Measured |
7235020 | 6.310856 | Measured |
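The flagging loop above is equivalent to binning the spacing values into right-closed intervals. A sketch of the same logic with `pandas.cut`, using a few hypothetical spacing values:

```python
import numpy as np
import pandas as pd

spac_crit = {"Measured": 15.0, "Indicated": 30.0, "Inferred": 60.0}
spacing = pd.Series([5.5, 22.0, 45.0, 80.0])  # hypothetical block spacings

# Right-closed bins matching the <= comparisons: (-inf, 15], (15, 30], (30, 60], (60, inf)
bins = [-np.inf] + list(spac_crit.values()) + [np.inf]
labels = list(spac_crit) + ["Unclassified"]
flags = pd.cut(spacing, bins=bins, labels=labels).astype(str)
# -> ['Measured', 'Indicated', 'Inferred', 'Unclassified']
```

This form makes the interval boundaries explicit and avoids the order-dependence of the reversed loop.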
Overall the result looks reasonable with some areas that may require post processing (spheres highlight examples where the practitioner may opt to downgrade the measured to indicated):
viewer = grid.view3d("class_space", cmap_cat=cmap_class)
viewer.set_camera(*camera)
# Highlight three problem areas
problem_areas = [[82, 132.93, 169.73], [74.92, 107.85, 129.55], [49.24, 129.23, 157.55]]
rmsp.Solid.from_merged_meshes(
[rmsp.Solid.icosphere(10, p) for p in problem_areas]
).view3d(viewer=viewer, label="To Downgrade", alpha=0.5, color="yellow")
viewer
<rmsp.plotting.viewer3d.visualization.Viewer at 0x25b1df037f0>
Next, consider classification by search pass: one estimator is defined per spacing criterion, with search ranges tied to the nominal spacings, and each block is flagged with the number of the first pass that succeeds:
estimators = []
spacings = list(spac_crit.values())
for i, r in enumerate(spacings):
    search = rmsp.Search(
        ranges=[r] * 3, min_comps=3, max_comps=12, max_comps_per_dhid=2
    )
    estimators.append(rmsp.IDWEstimator(search))
multipass = rmsp.MultipassEstimator(estimators)
grid["class_pass"] = multipass.estimate(grid, dh, "Au")["multipass_flag"]
print("unique pass numbers", grid["class_pass"].unique())
unique pass numbers [1 0 2]
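The multipass flag itself follows a simple rule: a block is assigned the index of the first pass whose search succeeds. A simplified NumPy sketch with hypothetical 2D data and a distance-only isotropic search (the real `MultipassEstimator` also enforces `max_comps` and `max_comps_per_dhid`):

```python
import numpy as np

# Hypothetical composite locations (2D for simplicity)
data_xy = np.array([[0., 0.], [4., 0.], [0., 4.], [30., 30.]])

def multipass_flag(block_xy, data_xy, radii, min_comps=3):
    """Index of the first pass whose isotropic search radius captures at
    least min_comps samples; -1 if every pass fails (unestimated)."""
    d = np.linalg.norm(data_xy - block_xy, axis=1)
    for i, r in enumerate(radii):
        if np.sum(d <= r) >= min_comps:
            return i
    return -1

radii = [15.0, 30.0, 60.0]                                       # nominal spacing criteria
flag_near = multipass_flag(np.array([2., 2.]), data_xy, radii)   # pass 0 succeeds
flag_far = multipass_flag(np.array([50., 50.]), data_xy, radii)  # no pass succeeds
```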
Without much consideration, assign classification based on pass number. Note that this mapping assumes every block is estimated in one of the passes.
pass_to_class = {0: "Measured", 1: "Indicated", 2: "Inferred"}
grid["class_pass"] = grid["class_pass"].map(pass_to_class)
print("unique class codes", grid["class_pass"].unique())
unique class codes ['Indicated' 'Measured' 'Inferred']
Compare counts:
def add_counts_to_dict(var, title, cellcount_dict):
    """Function for counting cells by category, returning a table for visualization"""
    cellcount_dict[title] = grid.groupby([var])[var].count()
    return pd.DataFrame(cellcount_dict)
cellcount_dict = {}
_ = add_counts_to_dict("class_space", "Spacing Method", cellcount_dict)
add_counts_to_dict("class_pass", "Search Pass Method", cellcount_dict)
Spacing Method | Search Pass Method | |
---|---|---|
Indicated | 95221 | 46816 |
Inferred | 9342 | 1505 |
Measured | 67771 | 124013 |
Compare the search pass classification to the spacing classification:
def comparison_plots(
    variables=["class_pass", "class_space"],
    titles=["Search Pass Classification", "DH Spacing Classification"],
    cmaps=[cmap_class, cmap_class],
):
    """Function for repeatedly comparing many classifications"""
    fig, axes = plt.subplots(1, 2, figsize=(10, 3.4))
    for i, (var, title, ax, cmap) in enumerate(zip(variables, titles, axes, cmaps)):
        vw = grid.view3d(var=var, cmap_cat=cmap)
        vw.set_camera(*camera)
        vw.show_static(ax=ax, cbar=(i == 1), title=title)
comparison_plots()
It's clear from the images and results above that taking the classification directly from the search ranges yields vastly different classification volumes. The code block below searches for a factor `fact` on the search ranges that approximates the spacing classification (matching on the measured volume):
fact = 0.4
grid['class_pass'] = 'Unclassified'
while np.sum(grid['class_pass'] == 'Measured') < np.sum(grid['class_space'] == 'Measured') and fact < 0.9:
    fact += 0.01
    estimators = []
    for i, r in enumerate(spacings):
        search = rmsp.Search(ranges=[r * fact] * 3, min_comps=3,
                             max_comps=12, max_comps_per_dhid=2)
        estimators.append(rmsp.IDWEstimator(search))
    multipass = rmsp.MultipassEstimator(estimators)
    grid['class_pass'] = multipass.estimate(grid, dh, 'Au', progressbar=False)['multipass_flag']
    grid['class_pass'] = grid['class_pass'].map(pass_to_class)
print(f"Factor on ranges applied = {fact}")
Factor on ranges applied = 0.7100000000000003
Cell count comparison:
_ = add_counts_to_dict("class_space", "Spacing Method", cellcount_dict)
add_counts_to_dict("class_pass", "Search Pass Method", cellcount_dict)
Spacing Method | Search Pass Method | |
---|---|---|
Indicated | 95221 | 92656 |
Inferred | 9342 | 11234 |
Measured | 67771 | 68444 |
Radii required to match DH spacing:
print("Measured:", np.round(fact*np.array(spacings)[0]))
print("Indicated:", np.round(fact*np.array(spacings)[1]))
print("Inferred:", np.round(fact*np.array(spacings)[2]))
Measured: 11.0
Indicated: 21.0
Inferred: 43.0
Visually compare:
comparison_plots(
["class_pass", "class_space"],
["Search Pass Classification", "DH Spacing Classification"],
)
By optimising the search ellipse radii, we can approximate the DH spacing classification. While the results become quite similar, there are some notable artifacts in the search pass approach which would cause issues during reserve estimation. It is interesting to note that the radii required to match the DH spacing classification are approximately 70% of the nominal spacing criteria.
It is common practice to use estimation measures as a proxy for determining classification categories. While this can provide results similar to drill hole spacing categorization, it suffers from some weaknesses.
Calculate kriging efficiency:
search = rmsp.Search(
ranges=[100, 100, 100], min_comps=3, max_comps=3, max_comps_per_dhid=1
)
vario = rmsp.VarioModel.from_params(
1, 0.3, "spherical", [0.7], angles=[0, 0, 0], ranges=[80] * 3
)
kriger = rmsp.KrigeEstimator(search, vario)
grid['KE'] = kriger.estimate(grid, dh, "Au", output=["efficiency"])["efficiency"]
View results:
fig, axes = plt.subplots(1, 2, figsize=(10, 3.4))
viewer = grid.view3d(var='KE', cmap=cmap_ke)
viewer.set_camera(*camera)
viewer.show_static(ax=axes[1], title="Kriging Efficiency")
grid.scatplot(
ax=axes[0], var0='spacing',
var1='KE',
c='class_space',
cbar=True,
title="Spacing vs KE coloured by Spacing Criteria",
grid=True, s=5,
cmap=cmap_class
)
plt.tight_layout()
In the next step, the kriging efficiency thresholds are selected by approximating the spacing criteria in the measured and indicated categories:
ke = 1.0
grid["class_ke"] = "Inferred"
ke_crit = {}
while np.sum(grid['class_ke'] == "Measured") < np.sum(grid['class_space'] == "Measured"):
    ke -= 0.001
    grid.loc[grid['KE'] >= ke, "class_ke"] = "Measured"
ke_crit["Measured"] = [np.round(ke, 3)]
while np.sum(grid['class_ke'] == "Indicated") < np.sum(grid['class_space'] == "Indicated"):
    ke -= 0.001
    grid.loc[(grid['KE'] >= ke) & (grid['class_ke'] != "Measured"), "class_ke"] = "Indicated"
ke_crit["Indicated"] = [np.round(ke, 3)]
ke_crit["Inferred"] = [0]
print("KE Required to Match DH Spacing:")
print("")
pd.DataFrame(ke_crit)
KE Required to Match DH Spacing:
Measured | Indicated | Inferred | |
---|---|---|---|
0 | 0.46 | 0.212 | 0 |
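The iterative threshold search above can also be solved directly: to flag a target number of blocks as measured, the KE threshold is the n-th largest KE value (an order statistic). A sketch with hypothetical KE values:

```python
import numpy as np

rng = np.random.default_rng(0)
ke = rng.uniform(0.0, 1.0, size=1000)  # hypothetical block KE values
n_target = 250                          # measured block count from the spacing method

# The threshold is the n_target-th largest KE value, so exactly
# n_target blocks satisfy KE >= threshold (assuming no ties)
ke_thresh = np.sort(ke)[::-1][n_target - 1]
n_measured = int(np.sum(ke >= ke_thresh))  # equals n_target
```

This avoids the fixed 0.001 step size, which otherwise limits the precision of the matched volume.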
Compare classification counts to prior methods:
add_counts_to_dict("class_ke", "Kriging Efficiency Method", cellcount_dict)
Spacing Method | Search Pass Method | Kriging Efficiency Method | |
---|---|---|---|
Indicated | 95221 | 92656 | 95257 |
Inferred | 9342 | 11234 | 8709 |
Measured | 67771 | 68444 | 68368 |
Visually compare against spacing based classification:
comparison_plots(
["class_ke", "class_space"],
["Kriging Efficiency Classification", "DH Spacing Classification"],
)
Similar to the search pass approach, kriging efficiency can reasonably approximate the volumes and patterns, with the downside that some artifacts are present. A more reasonable approach might be to use kriging efficiency as a guide or validation tool.
For the last classification approach example, the pre-calculated DH spacing value is assigned to the composites themselves. Then, using the classification criteria, we can define an indicator for each class. The indicators can then be interpolated, and the resulting indicator probabilities inspected relative to the original data spacing to determine reasonable thresholds for defining each category. Once the base thresholds are determined, further post-processing can be done to clean up the categories.
Begin by back-flagging the gridded spacing onto the composite locations:
dh["spacing"] = grid.get_values_at(dh, columns_to_flag=["spacing"], copy_meta=False)
for rescat, space in reversed(spac_crit.items()):
    dh[rescat] = np.where(dh['spacing'] <= space, 1.0, 0.0)
dh[["DHID", "From", "To", "spacing"] + list(spac_crit)].head()
DHID | From | To | spacing | Measured | Indicated | Inferred | |
---|---|---|---|---|---|---|---|
0 | DDH001 | 57.6 | 58.6 | 16.435634 | 0.0 | 1.0 | 1.0 |
1 | DDH001 | 58.6 | 59.6 | 16.436508 | 0.0 | 1.0 | 1.0 |
2 | DDH001 | 59.6 | 60.6 | 16.480352 | 0.0 | 1.0 | 1.0 |
3 | DDH001 | 60.6 | 61.6 | 16.619989 | 0.0 | 1.0 | 1.0 |
4 | DDH001 | 61.6 | 62.6 | 16.720959 | 0.0 | 1.0 | 1.0 |
Calculate the proportion of each DHID that meets each spacing threshold, before assigning a classification to each drill hole based on majority rules. Note that this is only applicable for narrow, tabular-style mineralization.
dhid_grouped = dh.groupby("DHID", as_index=False)[list(spac_crit)].mean()
dhid_grouped["class_space"] = "Unclassified"
for rescat in reversed(spac_crit):
    dhid_grouped.loc[dhid_grouped[rescat] >= 0.5, "class_space"] = rescat
dhid_grouped[["DHID", "class_space"] + list(spac_crit)].head()
DHID | class_space | Measured | Indicated | Inferred | |
---|---|---|---|---|---|
0 | DDH001 | Indicated | 0.00 | 1.00 | 1.00 |
1 | DDH002 | Measured | 1.00 | 1.00 | 1.00 |
2 | DDH003 | Measured | 1.00 | 1.00 | 1.00 |
3 | DDH004 | Measured | 0.80 | 0.80 | 0.80 |
4 | DDH006 | Measured | 0.75 | 0.75 | 0.75 |
Return the calculated classification to the composites by DHID, before resetting the indicators accordingly:
dh = dh.merge(dhid_grouped[["DHID", "class_space"]], on="DHID")
# resetting indicators
for rescat in spac_crit:
    dh[rescat] = 0.0
dh.loc[dh['class_space'].isin(["Measured"]), "Measured"] = 1.0
dh.loc[dh['class_space'].isin(["Measured", "Indicated"]), "Indicated"] = 1.0
dh.loc[dh['class_space'].isin(["Measured", "Indicated", "Inferred"]), "Inferred"] = 1.0
dh[["DHID", "From", "To", "spacing", "class_space"] + list(spac_crit)].head()
DHID | From | To | spacing | class_space | Measured | Indicated | Inferred | |
---|---|---|---|---|---|---|---|---|
0 | DDH001 | 57.6 | 58.6 | 16.435634 | Indicated | 0.0 | 1.0 | 1.0 |
1 | DDH001 | 58.6 | 59.6 | 16.436508 | Indicated | 0.0 | 1.0 | 1.0 |
2 | DDH001 | 59.6 | 60.6 | 16.480352 | Indicated | 0.0 | 1.0 | 1.0 |
3 | DDH001 | 60.6 | 61.6 | 16.619989 | Indicated | 0.0 | 1.0 | 1.0 |
4 | DDH001 | 61.6 | 62.6 | 16.720959 | Indicated | 0.0 | 1.0 | 1.0 |
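The cumulative reset logic above (measured composites also carry the indicated and inferred indicators) can be expressed compactly by ranking the ordered categories. A small pandas sketch with hypothetical classes:

```python
import pandas as pd

order = ["Measured", "Indicated", "Inferred"]
cls = pd.Series(["Indicated", "Measured", "Inferred", "Measured"])  # hypothetical

# Rank classes by confidence (0 = Measured); indicator c is 1 wherever the
# class rank is at least as confident as c, reproducing the cumulative flags
rank = cls.map({c: i for i, c in enumerate(order)})
ind = pd.DataFrame({c: (rank <= i).astype(float) for i, c in enumerate(order)})
```

An "Unclassified" composite would map to NaN rank and receive 0.0 for all three indicators, consistent with the explicit `.loc` assignments.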
Check to see that the flagging matches the blocks in terms of spacing and classification:
viewer1 = grid.view3d(var="spacing", cmap=cmap_space, alpha=0.1)
dh.view3d(viewer=viewer1, var="spacing", cmap=cmap_space, point_size=30)
viewer1.set_camera(*camera)
viewer2 = grid.view3d(var="class_space", cmap_cat=cmap_class, alpha=0.1)
dh.view3d(viewer=viewer2, var="class_space", cmap_cat=cmap_class, point_size=30)
viewer2.set_camera(*camera)
fig, axes = plt.subplots(1, 2, figsize=(10, 3.4))
viewer1.show_static(ax=axes[0], title='Spacing Assigned to DH')
viewer2.show_static(ax=axes[1], title='DH Spacing Classification and Classified DH')
Simple kriging is used in the next step to interpolate the indicator of each classification category, with the variogram range set to the predetermined spacing criterion and the simple kriging mean set to zero. With a zero mean, the indicator decays towards zero away from data, avoiding over-extrapolation along the edges of the domain or in data gaps.
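The decay-to-zero behaviour can be demonstrated in 1D with a hand-rolled simple kriging using a spherical covariance (nugget 0.01, sill 0.99, range 15, mirroring the variogram parameters used in this section): near the data the indicator estimate stays close to 1, and beyond the variogram range it collapses exactly to the simple kriging mean of zero.

```python
import numpy as np

def spherical_cov(h, nugget=0.01, sill=0.99, a=15.0):
    """Covariance = (nugget + sill) - variogram, spherical structure."""
    c = np.where(h < a, sill * (1 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)
    return np.where(h == 0, nugget + sill, c)

def simple_krige(x_data, z_data, x0, sk_mean=0.0):
    """1D simple kriging estimate at location x0."""
    C = spherical_cov(np.abs(x_data[:, None] - x_data[None, :]))
    c0 = spherical_cov(np.abs(x_data - x0))
    lam = np.linalg.solve(C, c0)
    return sk_mean + lam @ (z_data - sk_mean)

x = np.array([0.0, 5.0, 10.0])    # data locations (m)
z = np.ones(3)                    # e.g. Measured indicator = 1 at all data

near = simple_krige(x, z, 2.5)    # between data: close to 1
far = simple_krige(x, z, 100.0)   # beyond the range: all weights 0, returns the SK mean
```

Ordinary kriging would instead re-scale the weights to sum to one, extrapolating the indicator value of 1 far beyond the data; the zero SK mean is what prevents that.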
def simple_krige_each_indicator(data):
    """Defining this in a function as we'll iterate with/without control points"""
    for rescat, space in reversed(spac_crit.items()):
        print(f'Interpolating {rescat} with a variogram/search tied to {space} ranges')
        search = rmsp.Search(ranges=space * 2.0, min_comps=1, max_comps=12, max_comps_per_dhid=5)
        vario = rmsp.VarioModel.from_params(1, 0.01, 'spherical', [0.99], [0, 0, 0], [space] * 3)
        grid[rescat] = rmsp.KrigeEstimator(search, vario, ktype='sk', sk_mean=0.0).estimate(
            grid, data, rescat, output=['estimate'], copy_meta=False)
simple_krige_each_indicator(dh)
Interpolating Inferred with a variogram/search tied to 60.0 ranges
Interpolating Indicated with a variogram/search tied to 30.0 ranges
Interpolating Measured with a variogram/search tied to 15.0 ranges
We can now inspect spacing versus the estimated indicators to select a probability threshold that matches the DH spacing criteria:
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for i, (rescat, space) in enumerate(spac_crit.items()):
    grid.scatplot(ax=axes[i], var0='spacing', var1=rescat,
                  ylabel=rescat + " Probability")
    axes[i].axvline(space, c=cmap_class[rescat], ls='--')
fig.tight_layout()
The following probability thresholds are estimated from the scatter plots above (a fitted regression may be more rigorous):
spac_dh_crit = {"Measured": 0.4, "Indicated": 0.3, "Inferred": 0}
grid['class_space_dh'] = "Unclassified"
for rescat, prob in reversed(spac_dh_crit.items()):
    grid.loc[grid[rescat] >= prob, "class_space_dh"] = rescat
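Rather than eyeballing the scatter plots, a threshold can be read off as the typical indicator probability at the spacing criterion, e.g. via binned medians and interpolation. A sketch on synthetic data (the exponentially decaying probability is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
spacing = rng.uniform(5.0, 60.0, 2000)                       # synthetic block spacings
prob = np.exp(-spacing / 20.0) + rng.normal(0, 0.02, 2000)   # synthetic indicator probability

# Median probability per spacing bin, interpolated at the 15 m Measured criterion
edges = np.arange(5.0, 65.0, 5.0)
mids = 0.5 * (edges[:-1] + edges[1:])
med = np.array([np.median(prob[(spacing >= lo) & (spacing < hi)])
                for lo, hi in zip(edges[:-1], edges[1:])])
thresh = np.interp(15.0, mids, med)
```

Binned medians are robust to the scatter around the decay, and the same procedure applies per category.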
Compare the resulting classification counts against prior methods:
add_counts_to_dict("class_space_dh", "Spacing to DH Method", cellcount_dict)
Spacing Method | Search Pass Method | Kriging Efficiency Method | Spacing to DH Method | |
---|---|---|---|---|
Indicated | 95221 | 92656 | 95257 | 92560 |
Inferred | 9342 | 11234 | 8709 | 12679 |
Measured | 67771 | 68444 | 68368 | 67095 |
Compare the SK and Spacing based classifications visually:
comparison_plots(
["class_space_dh", "class_space"],
["Spacing Assigned to DH Classification (using SK)", "DH Spacing Classification"])
Compare the SK classification against the flagged DH classification:
viewer = grid.view3d(var="class_space_dh", cmap_cat=cmap_class, alpha=0.1)
viewer = dh.view3d(viewer=viewer, var=["class_space", "DHID"], cmap_cat=cmap_class, point_size=30)
viewer.set_camera(*camera)
viewer
<rmsp.plotting.viewer3d.visualization.Viewer at 0x25b1e23b2e0>
Some manual adjustments can be made by querying `dh` and picking locations in the 3D scene above.
Add a control point to upgrade some indicated material that is surrounded by measured:
control_point_data = {'Easting': 205.43, 'Northing': 78.82, 'Elevation': 126.18, 'DHID': 'dh_added1'}
# Build the control point from an existing composite row, overwriting its
# location attributes and setting the measured indicator
control_point = dh.iloc[[0]].assign(**control_point_data, Measured=1.0, Status='Adjusted')
# Append to data (DataFrame.append was removed in pandas 2.0; use concat)
dh['Status'] = 'As-is'
dh_adj = pd.concat([dh, control_point], ignore_index=True)
dh_adj[['DHID', 'Easting', 'Northing', 'Elevation', 'Measured', 'Status']].tail(3)
DHID | Easting | Northing | Elevation | Measured | Status | |
---|---|---|---|---|---|---|
866 | DDH363 | 294.771315 | 41.870028 | 125.555188 | 1.0 | As-is |
867 | DDH363 | 295.041596 | 41.978402 | 124.546375 | 1.0 | As-is |
868 | dh_added1 | 205.430000 | 78.820000 | 126.180000 | 1.0 | Adjusted |
The following DHIDs will be adjusted from "measured" to "indicated":
measured_to_indicated = ["DDH315", "DDH349", "DDH201", "DDH030"]
dh_adj.loc[dh_adj['DHID'].isin(measured_to_indicated), "Measured"] = 0.0
dh_adj.loc[dh_adj['DHID'].isin(measured_to_indicated), "Status"] = "Adjusted"
Re-interpolate the indicators before applying the same probability thresholds for classification:
simple_krige_each_indicator(dh_adj)
grid['class_space_dh_adj'] = "Unclassified"
for rescat, prob in reversed(spac_dh_crit.items()):
    grid.loc[grid[rescat] >= prob, "class_space_dh_adj"] = rescat
Interpolating Inferred with a variogram/search tied to 60.0 ranges
Interpolating Indicated with a variogram/search tied to 30.0 ranges
Interpolating Measured with a variogram/search tied to 15.0 ranges
Compare classification counts:
add_counts_to_dict("class_space_dh_adj", "Spacing to DH Method Adjusted", cellcount_dict)
Spacing Method | Search Pass Method | Kriging Efficiency Method | Spacing to DH Method | Spacing to DH Method Adjusted | |
---|---|---|---|---|---|
Indicated | 95221 | 92656 | 95257 | 92560 | 93271 |
Inferred | 9342 | 11234 | 8709 | 12679 | 12679 |
Measured | 67771 | 68444 | 68368 | 67095 | 66384 |
Visually compare before and after adjustment:
cmap_adj = {"As-is": "white", "Adjusted": "black"}
fig, axes = plt.subplots(1, 2, figsize=(10, 3.4))
for i, (var, title, cbarshow) in enumerate(
    zip(
        ["class_space_dh_adj", "class_space_dh"],
        [
            "Spacing Assigned to DH Classification (adjusted)",
            "Spacing Assigned to DH Classification (original)",
        ],
        [False, True],
    )
):
    viewer = grid.view3d(var=var, cmap_cat=cmap_class)
    dh_adj.view3d(viewer=viewer, var="Status", cmap_cat=cmap_adj, point_size=50)
    viewer.set_camera(*camera)
    viewer.show_static(ax=axes[i], cbar=cbarshow, title=title)
A common method for determining DH spacing criteria is resampling of simulated realizations and resimulation to observe the uncertainty. The resampling requires the user to specify a grid configuration consisting of attributes such as the grid spacings, plane orientation, composite length, azimuth, and inclination.
The limitation of this resampling approach is that, in reality, a single drill hole orientation drilled on a regular grid is rarely achievable.
For this reason, it is important to calculate the actual drill hole spacing using a defined technique (e.g. the 3-hole rule) and then apply the same calculation to the idealised sampled grid configurations. This may lead to the use of factors other than the commonly used $ \sqrt{2} $.
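The factor can be sanity-checked by Monte Carlo for an idealised geometry. The sketch below assumes vertical holes on a regular square grid, so distances reduce to 2D collar distances; in that simple case the $\sqrt{2}$-scaled 3-hole spacing lands close to (slightly below) the nominal grid spacing, and the inclined holes intersecting a dipping vein used in this notebook should be expected to give a different ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 20.0  # nominal square grid spacing (m)

# Vertical-hole collars on a regular s x s grid
gx, gy = np.meshgrid(np.arange(0.0, 200.0, s), np.arange(0.0, 200.0, s))
collars = np.column_stack([gx.ravel(), gy.ravel()])

# Random evaluation points well inside the grid (avoids edge effects)
pts = rng.uniform(40.0, 160.0, size=(4000, 2))

# 3-hole spacing: mean distance to the 3 nearest collars, scaled by sqrt(2)
d = np.linalg.norm(pts[:, None, :] - collars[None, :, :], axis=2)
d3 = np.sort(d, axis=1)[:, :3].mean(axis=1) * np.sqrt(2)

ratio = d3.mean() / s  # roughly 0.94 for this idealised geometry
```

The point of the exercise is that the ratio is geometry-dependent, which is exactly why the calibration against a drill hole spacing study is needed.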
To begin, let's create some synthetic samples based on the following grid configurations:
spacings = np.arange(5, 65, 5)
sampler = rmsp.ModelSampler(
spacings=spacings,
plane_orient="xy",
comp_len=1.0,
locations=grid,
azm=22.3,
incl=-35.6
)
sampler.sample(grid, ["spacing"])
Visualize the synthetically sampled composite locations:
fig, axes = sampler.sectionplots(s=3)
for ax in axes:
    solid.sectionplot_draw(ax, alpha=0.2, face_c='.5')
Calculate the 3-hole spacing for each gridded dataset, using the commonly applied $\sqrt{2}$ factor for scaling (consistent with the 3-hole calculation on the real data above that yielded the `spacing` column).
def calculate_3hole_spacing(factor=2**0.5):
    """Defining in a function as will recalculate with a calibrated factor"""
    search = rmsp.Search(min_comps=3, max_comps=3, max_comps_per_dhid=1)
    cols = []
    for space in sampler.spacings:
        cols.append(f"spacing_{int(space[0])}")
        grid[cols[-1]] = rmsp.IDWEstimator(search).estimate(
            grid, sampler.get_data(space), "spacing", output=("mean_dist")
        )["mean_dist"] * factor
    return cols
cols = calculate_3hole_spacing(factor=2**0.5)
grid[['spacing'] + cols].head()
spacing | spacing_5 | spacing_10 | spacing_15 | spacing_20 | spacing_25 | spacing_30 | spacing_35 | spacing_40 | spacing_45 | spacing_50 | spacing_55 | spacing_60 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
345056 | 22.465691 | 4.300981 | 6.960380 | 12.942950 | 17.041549 | 21.269393 | 30.022455 | 36.948573 | 36.912011 | 49.064129 | 40.569195 | 50.741372 | 67.524001 |
345057 | 22.761676 | 3.594522 | 7.539700 | 12.533397 | 17.067684 | 21.074528 | 29.893119 | 36.922473 | 37.098658 | 49.164724 | 40.787201 | 50.488985 | 66.898126 |
345058 | 23.146055 | 3.269229 | 7.760116 | 11.991925 | 16.680415 | 20.907572 | 29.791937 | 36.938262 | 37.306729 | 49.288858 | 41.061248 | 50.245621 | 66.285926 |
345059 | 23.613864 | 3.609867 | 7.826677 | 12.157462 | 16.326463 | 20.769456 | 29.580877 | 36.991879 | 37.533606 | 49.435051 | 41.363501 | 50.011469 | 65.687056 |
345060 | 24.159378 | 3.683531 | 7.969875 | 12.380864 | 16.005932 | 20.661020 | 28.734061 | 37.076527 | 37.777262 | 49.574746 | 41.683374 | 49.786746 | 65.101190 |
Compare the 3-hole spacing with the gridded spacing:
def compare_3hole_with_grid_spacing():
    """Defining in a function as we will re-call with altered globals"""
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.plot(spacings, grid[cols].mean(), "--ok", label="grid vs spacing")
    for rescat, space in spac_crit.items():
        ax.axhline(space, c=cmap_class[rescat], label=rescat)
    rmsp.format_plot(ax, "Study Grid", "3-Hole Spacing", grid=True,
                     xlim=(0, 70), ylim=(0, 70))
    ax.legend(loc=2)
compare_3hole_with_grid_spacing()
Based on the results above, under the grid configurations provided, the 3-hole rule (using a factor of $ \sqrt{2} $) yields spacings that are approximately 78% of the grid dimension (e.g. a 3-hole spacing of 15 m is equivalent to a grid configuration of 20 m x 20 m).
To match the grid configurations, we would need a factor closer to $ \sqrt{2}/0.78 = 1.81 $:
_ = calculate_3hole_spacing(factor=1.81)
Re-compare:
compare_3hole_with_grid_spacing()
Now that the relationship between the sample grid and the drill hole spacing calculation is established, there are two choices: recalculate the drill hole spacing with the calibrated factor (as above), or adjust the nominal spacing criteria to account for the discrepancy.
The same principle extends to drill hole planning: a planned grid configuration should be tested with the drill hole spacing calculation, with adjustments made in accordance with how the drill hole spacing is calculated in the context of the classification criteria.