Bin Grid Definitions – Loading to Workstations

Precise location data is crucial for accurate seismic interpretation. While the “4 corners” method can introduce risks, extracting coordinates directly from trace headers improves spatial accuracy, minimizing misalignment from cumulative azimuth and spacing errors.

Loading corner coordinates from load sheets and EBCDIC headers is efficient, but manual data entry raises error risks. Studies indicate that 20-30% of these errors are transpositions (e.g., “43” entered as “34”), with the rest being random digit additions or omissions.

Analyzing XY values from hundreds of thousands of 3D poststack volumes confirms that trace headers—populated directly by processing software—provide more reliable spacing and azimuth data than load sheets and EBCDIC headers, with far fewer manual-entry errors. However, trace headers can still have issues, which can often be identified and corrected automatically.
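To illustrate why trace-header XYs are attractive, the coordinates can be read programmatically rather than retyped. The sketch below assumes the SEG-Y Rev 1 standard byte positions (coordinate scalar at bytes 71-72, CDP X/Y at bytes 181-188, big-endian), which real files do not always follow, so the byte locations should be verified before trusting the result:

```python
import struct

def trace_xy(header: bytes) -> tuple[float, float]:
    """Decode CDP X/Y from a 240-byte SEG-Y trace header.

    Assumes SEG-Y Rev 1 standard byte positions: the coordinate
    scalar at bytes 71-72 and CDP X/Y at bytes 181-188, stored as
    big-endian integers. Files frequently deviate from this, so
    confirm the byte locations for each dataset.
    """
    (scalar,) = struct.unpack(">h", header[70:72])
    x, y = struct.unpack(">ii", header[180:188])
    if scalar > 0:        # positive scalar multiplies the coordinates
        x, y = x * scalar, y * scalar
    elif scalar < 0:      # negative scalar divides them
        x, y = x / -scalar, y / -scalar
    return float(x), float(y)
```

Because every trace carries its own XY, spacing and azimuth can then be computed from hundreds of points instead of four hand-typed corners.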

The images from the Waihapa 3D dataset included here are ©Crown Copyright, reproduced with permission from New Zealand Petroleum and Minerals (www.nzpam.govt.nz), and are used to showcase the Bin Grid Calculator.

We’re launching a new Grid Definition Calculator, available soon for Beta testing. This tool allows users to enter or paste line, trace, and XY corner values to calculate spacings, azimuths, area, and create a grid polygon using either three corners or a Point + Spacing method. Currently, Projection (CRS) is for display only, but an upcoming feature will check corner orthogonality (90-degree angles).
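For readers curious about the underlying arithmetic, a three-corner grid definition reduces to a few vector calculations. The sketch below uses hypothetical names and assumes p1, p2, and p3 are the first-line/first-trace, first-line/last-trace, and last-line/first-trace corners; it illustrates the method, not the calculator's actual code:

```python
import math

def grid_stats(p1, p2, p3, n_lines, n_traces):
    """Compute bin spacings, azimuths, area, and corner angle
    from three corner XYs (illustrative sketch).

    p1: XY of (first line, first trace)
    p2: XY of (first line, last trace)   -> trace direction
    p3: XY of (last line, first trace)   -> line direction
    """
    dxt, dyt = p2[0] - p1[0], p2[1] - p1[1]
    dxl, dyl = p3[0] - p1[0], p3[1] - p1[1]
    trace_spacing = math.hypot(dxt, dyt) / (n_traces - 1)
    line_spacing = math.hypot(dxl, dyl) / (n_lines - 1)
    # Azimuths in degrees clockwise from north
    trace_azimuth = math.degrees(math.atan2(dxt, dyt)) % 360
    line_azimuth = math.degrees(math.atan2(dxl, dyl)) % 360
    # Area of the parallelogram spanned by the two grid edges
    area = abs(dxt * dyl - dyt * dxl)
    # Angle between the edges; ~90 degrees means orthogonal corners
    corner_angle = abs(trace_azimuth - line_azimuth) % 180
    return dict(trace_spacing=trace_spacing, line_spacing=line_spacing,
                trace_azimuth=trace_azimuth, line_azimuth=line_azimuth,
                area=area, corner_angle=corner_angle)
```

The corner-angle value is the quantity an orthogonality check would compare against 90 degrees.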

Interested in Beta testing? Reach out! We welcome feedback on the interface and are especially keen on your input for handling 4-corner data from load sheets/EBCDIC headers versus XYs from trace headers.


About Don Robinson
Don Robinson has dedicated over 50 years to software development and seismic analysis. He founded Oklahoma Seismic Corporation in 1980 and co-developed the MIRA interpretation system, later acquired by Landmark Graphics in 1993. He then started Resolve GeoSciences in 1997, where he now leads the development of SeisShow and AnalyzeSE, software for analyzing and correcting SEG-Y seismic data.
Connect on LinkedIn

Exploring the Parihaka 3D Dataset

The Parihaka 3D dataset in New Zealand’s Taranaki Basin is publicly available through New Zealand Petroleum and Minerals and worth exploring. We reviewed the Near, Mid, Far, and Full Angle Stack volumes, noting the Mid Angle Stack volume had issues with a few traces.

The images included in this post are ©Crown Copyright and have been reproduced with permission from the New Zealand Petroleum and Minerals website at www.nzpam.govt.nz.

Initial display of the Parihaka 3D dataset highlights its impressive quality, though there are some loading and interpretation challenges. Logarithmic histograms are used to capture the full amplitude range, skipping bins with low counts until sufficient data appears. Absolute and alternate min/max amplitudes are stored to flag outliers. Notably, only two of the dataset's 1,038,172 traces had extreme values at the 32-bit float limit. For display, standard deviations of the amplitude values were used to ensure a representative view despite these outliers.
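The idea of choosing a display range from standard deviations rather than the raw min/max can be sketched as follows. This is a simplified illustration of the concept, not SeisShow's exact method:

```python
import numpy as np

def display_range(amplitudes: np.ndarray, k: float = 3.0):
    """Pick a display amplitude range of mean +/- k standard
    deviations, so a handful of extreme samples (e.g. values near
    the 32-bit float limit) cannot wash out the colour scale.
    Illustrative sketch only.
    """
    a = amplitudes[np.isfinite(amplitudes)]  # drop NaN/Inf samples
    mean, std = a.mean(), a.std()
    return mean - k * std, mean + k * std
```

Amplitudes beyond the returned range are simply clipped for display; the underlying samples are untouched.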

The indexing process scans each trace and sample, logging findings in reports and a JSON file. It flagged 15 traces with missing values in their trace headers, with file positions highlighted in red. These traces were found at the ends of a few lines, and SeisShow excluded them from the index file since they couldn't be linked to any line or trace.
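The flagging step can be illustrated with a minimal sketch: traces whose inline/crossline numbers are missing (represented as zero here, which is an assumption) cannot be placed in the 3D grid, so they are reported and excluded:

```python
def flag_unindexable(headers):
    """Return the positions of traces whose inline/crossline
    numbers are missing (zero in this sketch), since such traces
    cannot be linked to any line or trace in the grid.
    Illustrative only, not SeisShow's indexing code.
    """
    return [i for i, (inline, crossline) in enumerate(headers)
            if inline == 0 or crossline == 0]
```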

The number of samples per trace stored in the Binary and Trace Headers presents another issue. Here, the Trace Headers showed 2049 samples, while the correct value in the Binary Header was 1168. If both headers are wrong, checking which sample count is consistent across all traces can identify the correct value, a method used in SeisShow and AnalyzeSE to maintain accuracy. This discrepancy is highlighted in yellow in the SeisShow Index, Trace Header, and Report.
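One way to arbitrate between conflicting headers is to test which candidate sample count divides the file's data portion into a whole number of fixed-length traces. The sketch below assumes a 3600-byte textual-plus-binary file header, 240-byte trace headers, 4-byte samples, and no extended textual headers; it illustrates the consistency idea rather than SeisShow's implementation:

```python
def consistent_sample_count(file_size: int, candidates: list[int],
                            bytes_per_sample: int = 4) -> list[int]:
    """Return the candidate samples-per-trace values that divide
    the SEG-Y data portion into a whole number of traces.

    Assumes a 3600-byte file header, 240-byte trace headers,
    fixed-length traces, and no extended textual headers.
    """
    data_bytes = file_size - 3600
    ok = []
    for ns in candidates:
        trace_bytes = 240 + ns * bytes_per_sample
        if data_bytes % trace_bytes == 0:
            ok.append(ns)
    return ok
```

For a 1,000-trace file with 1168 samples per trace, the 2049 value from the trace headers fails this test while 1168 passes.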

Spikes in datasets can disrupt analysis, interpretation, and proper loading into workstations. The following images show three methods for handling outliers: setting them to zero, clipping, or interpolating across affected samples. SeisShow identifies extreme amplitudes, providing details such as line, crossline, X, Y, amplitude, time, and trace location. Red arrows highlight spikes, and users can click on high-amplitude lines to jump to their location for review and correction. Interpolation generally yields the best results, while clipping can leave residual spikes in quieter intervals. There is also an option to write out the edited file for further adjustments.
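The three outlier treatments can be sketched for a single trace as follows. This is a generic illustration; the function name and threshold convention are assumptions, not SeisShow's API:

```python
import numpy as np

def fix_spikes(trace: np.ndarray, threshold: float,
               mode: str = "interpolate") -> np.ndarray:
    """Apply one of three common spike treatments to a trace:
    zero the bad samples, clip them to the threshold, or
    interpolate across them from neighbouring good samples.
    """
    out = trace.astype(float)
    bad = np.abs(out) > threshold
    if mode == "zero":
        out[bad] = 0.0
    elif mode == "clip":
        out[bad] = np.sign(out[bad]) * threshold
    elif mode == "interpolate":
        idx = np.arange(out.size)
        out[bad] = np.interp(idx[bad], idx[~bad], out[~bad])
    return out
```

Interpolation replaces each spike with a value blended from its good neighbours, which is why it usually leaves the least visible scar in the data.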

Included are two more displays: the SeisShow Report and a well-documented EBCDIC header.

Have you encountered problems with bad trace header values or amplitude spikes? Please share your experiences in the comments on LinkedIn.



Trust but Verify: Overcoming Common Challenges in Seismic Data Management

Seismic data plays a key role for many professionals, whether it’s loading to workstations, managing repositories, interpreting datasets or preparing data for licensing. However, a common misconception is that the data we receive is clean and ready to use. After 50+ years of experience, we’ve learned to avoid “blind faith” and adopt a “trust but verify” approach instead.

Just because seismic data loads into a workstation doesn't mean it's accurate. Even new data can have issues like duplicate traces or spikes. Workstations build grids with one trace per line and crossline, so when duplicates exist they may load only the first or last one. Spikes are often clipped, but quieter intervals can still be affected. Likewise, when data is loaded into NumPy arrays or cloud formats for analysis, those formats expect a clean 3D grid with one trace per cell, so any such errors can disrupt the process.
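Duplicate traces are easy to detect before loading by counting how many traces fall into each (inline, crossline) cell; a minimal sketch:

```python
from collections import Counter

def duplicate_cells(keys):
    """Return the (inline, crossline) cells that hold more than
    one trace, with their counts. Workstations typically keep only
    the first or last duplicate, so these cells deserve a look
    before loading. Illustrative sketch.
    """
    counts = Counter(keys)
    return {k: n for k, n in counts.items() if n > 1}
```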

EBCDIC headers and load sheets, often created manually, are prone to errors in projection systems, byte locations for lines/traces, SP/CDP, XYs, and other metadata. Verification is key.

If your wells tie reliably in the southwest but not in the northeast, there could be a simple reason. We've seen transposition errors in XY values, revealed by fractional differences in spacing, cause offsets of up to 1,220 meters (4,000 feet). This explains why well control might not match the seismic data, and the issue is easy to resolve once spotted.
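Here is how such a transposition reveals itself: dividing the distance between two corner XYs by the number of bin intervals should give a round nominal spacing, and a swapped digit in a corner shows up as a fractional drift. The coordinates and bin count below are made up for illustration:

```python
import math

def corner_spacing(p1, p2, n_intervals):
    """Bin spacing implied by two corner XYs and the number of
    bin intervals between them. A transposed digit in either
    corner shifts the result away from the nominal round value.
    """
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / n_intervals

# Nominal 25 m bins over 400 intervals (hypothetical coordinates)
good = corner_spacing((651430.0, 0.0), (661430.0, 0.0), 400)  # 25.0
# Same far corner with "43" typed as "34" in the easting
bad = corner_spacing((651430.0, 0.0), (661340.0, 0.0), 400)   # 24.775
```

The fractional spacing (24.775 instead of 25.0) is the tell-tale sign, and here the 90-meter coordinate error would shift every trace computed from that corner.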

These are just a few issues we’ll cover in future posts, with help from SeisShow for troubleshooting and AnalyzeSE for scanning thousands of SEG-Y files, with results in JSON metafiles for easy data management.

What challenges have you faced with seismic data (whether resolved or not)?

Share your experiences here to help guide the order of our future posts. You can also contact us here: resolvegeo.com/contact and share the post with others. Your insights are valuable, and we’re always surprised by new challenges.



Join Our Mailing List

Subscribe to keep up with the latest developments at Resolve GeoSciences.
