Surfer offers 12 different gridding methods to choose from.

In addition to the information below, we also have some general gridding method recommendations in the Help. Go to the General Gridding Recommendations page and read through the descriptions of the gridding methods to see whether one sounds more appropriate for your data than the others, or to build a short list of methods that might work for your data for you to try.

**Inverse Distance to a Power** is fast but tends to generate "bull's-eye" patterns of concentric contours around the data points. Inverse Distance to a Power does not extrapolate Z values beyond the range of data.

**Kriging** is one of the more flexible methods and is useful for gridding almost any type of data set. With most data sets, Kriging with the default linear variogram is quite effective, and in general it is the method we most often recommend. Kriging is the default gridding method because it generates a good map for most data sets. For larger data sets, Kriging can be rather slow. Kriging can extrapolate grid values beyond your data's Z range.

**Minimum Curvature** generates smooth surfaces and is fast for most data sets, but it can create high-magnitude artifacts in areas of no data. The internal tension and boundary tension give you control over the amount of smoothing. Minimum Curvature can extrapolate values beyond your data's Z range.

**Natural Neighbor** generates good contours from data sets containing dense data in some areas and sparse data in others. It does not generate values in areas without data, and it does not extrapolate Z grid values beyond the range of data.

**Nearest Neighbor** is useful for converting regularly spaced (or almost regularly spaced) XYZ data files to grid files. When your observations lie on a nearly complete grid with a few missing holes, this method is useful for filling in the holes, or for creating a grid file that assigns the blanking value to locations where no data are present. Nearest Neighbor does not extrapolate Z grid values beyond the range of data.

**Polynomial Regression** processes the data so that underlying large-scale trends and patterns are shown; it is used for trend surface analysis. Polynomial Regression is very fast for any amount of data, but local details in the data are lost in the generated grid. This method can extrapolate grid values beyond your data's Z range.

*EXAMPLE: Many users use this gridding method in conjunction with Kriging, either to produce 1st, 2nd, or 3rd order residual maps or to remove a trend from their data. They grid their data with Kriging, and then again with Polynomial Regression to get the trend. They then subtract the Polynomial Regression grid from the Kriging grid and create a map of the resulting grid.*
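The residual-map workflow above is simple grid arithmetic. As a rough illustration (in Python with NumPy, not Surfer itself), the example below stands in synthetic arrays for the Kriging and Polynomial Regression grids and shows that subtracting the trend grid leaves only the local detail; all names and values here are hypothetical.

```python
import numpy as np

# Illustrative stand-ins for two grids of the same data set:
# `trend` plays the role of a 1st-order Polynomial Regression grid,
# `kriged` the role of the Kriging grid (trend plus local variation).
x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))

trend = 2.0 + 0.5 * x - 0.3 * y    # large-scale trend surface
detail = np.sin(x) * np.cos(y)     # local variation
kriged = trend + detail            # full surface

# Subtracting the trend grid from the kriged grid leaves the residuals,
# i.e. the local detail with the regional trend removed.
residual = kriged - trend

print(np.allclose(residual, detail))  # → True
```

In Surfer this subtraction is done on the gridded files rather than in code, but the arithmetic is the same: residual = Kriging grid minus Polynomial Regression grid, node by node.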

**Radial Basis Function** is quite flexible. It is comparable to Kriging in that it generates one of the best overall interpretations of most data sets, and it produces results quite similar to Kriging.

**Modified Shepard's Method** is similar to Inverse Distance to a Power but does not tend to generate "bull's-eye" patterns, especially when a smoothing factor is used. Modified Shepard's Method can extrapolate values beyond your data's Z range.

**Triangulation with Linear Interpolation** is fast. With small data sets, Triangulation with Linear Interpolation generates distinct triangular faces between data points. Triangulation with Linear Interpolation does not extrapolate Z values beyond the range of data.

**Moving Average** is most applicable to large and very large data sets (e.g., more than 1,000 observations). Moving Average extracts intermediate-scale trends and variations from large, noisy data sets, and it is fast even for very large data sets. This gridding method is a reasonable alternative to Nearest Neighbor for generating grids from large, regularly spaced data sets.

**Data Metrics** is used to create grids of information about the data. For example, you can create a grid where the Z value at each grid node is the average Z value of all points in the search area, the number of points in the search area, or the slope of the samples within the search area.

*EXAMPLE: Suppose some archaeologists want to create a contour map of the number of artifacts in their dig site. They have point locations for all of their artifacts and grid the data using Data Metrics, choosing to calculate either the density or the number of points in the search area. The resulting contour map shows where the number of samples is high or low.*
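To make the Data Metrics idea concrete, here is a small sketch (in Python with NumPy, not Surfer's implementation) of the "number of points in search" metric: at each grid node, count the data points falling within a circular search radius. The point locations, radius, and grid spacing are all made-up illustration values.

```python
import numpy as np

# Hypothetical sample locations (e.g. artifact positions at a dig site)
rng = np.random.default_rng(42)
px = rng.uniform(0, 10, 200)   # data point X coordinates
py = rng.uniform(0, 10, 200)   # data point Y coordinates

radius = 1.5                   # illustrative search radius
gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))

# Distance from every grid node to every data point, then count the
# points inside the search radius at each node.
d = np.hypot(gx[..., None] - px, gy[..., None] - py)
counts = (d <= radius).sum(axis=-1)

print(counts.shape)  # one count per grid node
```

Contouring the resulting `counts` grid would show where samples are dense or sparse, which is exactly what the archaeology example above describes.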

**Local Polynomial**is most applicable to data sets that are locally smooth (i.e. relatively smooth surfaces within the search neighborhoods). The computational speed of the method is not significantly affected by the size of the data set.
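The Local Polynomial idea can be sketched as follows (a minimal Python/NumPy illustration of the general technique, not Surfer's implementation): at each grid node, fit a low-order polynomial by least squares to the data points inside a search neighborhood, then evaluate it at the node. The helper name, radius, and data below are all assumptions for illustration.

```python
import numpy as np

# Synthetic, locally smooth data: a plane plus a little noise
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = rng.uniform(0, 10, 300)
z = 1.0 + 0.4 * x - 0.2 * y + rng.normal(0, 0.05, 300)

def local_poly_node(nx, ny, radius=2.0):
    """Estimate Z at node (nx, ny) from a 1st-order (plane) fit
    to the data points within the search radius."""
    mask = np.hypot(x - nx, y - ny) <= radius
    if mask.sum() < 3:                    # too few points to fit a plane
        return np.nan
    A = np.column_stack([np.ones(mask.sum()), x[mask], y[mask]])
    coeffs, *_ = np.linalg.lstsq(A, z[mask], rcond=None)
    return coeffs[0] + coeffs[1] * nx + coeffs[2] * ny

print(local_poly_node(5.0, 5.0))  # close to the true surface value of 2.0
```

Because each node only ever sees the points in its own neighborhood, the per-node cost stays roughly constant, which is why the method's speed is not strongly affected by the overall size of the data set.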

*Updated January 31, 2017*
