Bin Grid Definitions – Loading to Workstations

Precise location data is crucial for accurate seismic interpretation. While the “4 corners” method can introduce positioning errors, extracting coordinates directly from trace headers improves spatial accuracy, minimizing misalignment from cumulative azimuth and spacing errors.

Loading corner coordinates from load sheets and EBCDIC headers is efficient, but manual data entry raises error risks. Studies indicate that 20-30% of these errors are transpositions (e.g., “43” entered as “34”), with the rest being random digit additions or omissions.

Analyzing XY values from hundreds of thousands of 3D poststack volumes confirms that trace headers—populated directly by processing software—provide more reliable spacing and azimuth data than load sheets and EBCDIC headers, with far fewer manual-entry errors. However, trace headers can still have issues, which can often be identified and corrected automatically.
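To illustrate why trace-header XYs are attractive, here is a minimal sketch of pulling scaled CDP coordinates out of a SEG-Y trace header. It assumes the standard SEG-Y rev 1 byte positions (coordinate scalar at bytes 71-72, CDP X/Y at bytes 181-188, big-endian) and builds a synthetic header in memory rather than reading a real file; it is not the code used by our software.

```python
import struct

def parse_cdp_xy(header: bytes):
    """Extract scaled CDP X/Y from a 240-byte SEG-Y trace header.

    Assumes SEG-Y rev 1 byte positions (big-endian):
      bytes 71-72   coordinate scalar (int16; negative means divide)
      bytes 181-184 CDP X (int32)
      bytes 185-188 CDP Y (int32)
    """
    scalar = struct.unpack_from(">h", header, 70)[0]   # 0-based offset 70
    x, y = struct.unpack_from(">ii", header, 180)      # 0-based offset 180
    if scalar < 0:
        return x / -scalar, y / -scalar
    if scalar > 0:
        return float(x * scalar), float(y * scalar)
    return float(x), float(y)

# Synthetic header: scalar -100 means the stored ints are hundredths of a unit
header = bytearray(240)
struct.pack_into(">h", header, 70, -100)
struct.pack_into(">ii", header, 180, 61234567, 543210987)
print(parse_cdp_xy(bytes(header)))  # -> (612345.67, 5432109.87)
```

Because the processing software writes these values, every trace carries its own coordinates and no hand-keyed corner entry is involved.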

The images from the Waihapa 3D dataset included here are ©Crown Copyright, reproduced with permission from New Zealand Petroleum and Minerals (www.nzp&m.govt.nz), and are used to showcase the Bin Grid Calculator.

We’re launching a new Grid Definition Calculator, available soon for Beta testing. This tool allows users to enter or paste line, trace, and XY corner values to calculate spacings, azimuths, area, and create a grid polygon using either three corners or a Point + Spacing method. Currently, Projection (CRS) is for display only, but an upcoming feature will check corner orthogonality (90-degree angles).
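The arithmetic behind such a calculator is straightforward to sketch. The example below is a simplified, hypothetical version (not the calculator's actual code): from three corner XYs and the line/trace counts it derives the spacings, azimuths, corner angle, and area.

```python
import math

def grid_from_corners(c1, c2, c3, n_traces, n_lines):
    """Derive bin spacings, azimuths, corner angle, and area from three corners.

    c1: (x, y) at first line, first trace
    c2: (x, y) at first line, last trace   (along the trace/crossline axis)
    c3: (x, y) at last line, first trace   (along the line/inline axis)
    """
    dxt, dyt = c2[0] - c1[0], c2[1] - c1[1]            # trace-axis edge vector
    dxl, dyl = c3[0] - c1[0], c3[1] - c1[1]            # line-axis edge vector
    len_t, len_l = math.hypot(dxt, dyt), math.hypot(dxl, dyl)
    corner = math.degrees(math.acos((dxt * dxl + dyt * dyl) / (len_t * len_l)))
    return {
        "trace_spacing": len_t / (n_traces - 1),
        "line_spacing": len_l / (n_lines - 1),
        "trace_azimuth": math.degrees(math.atan2(dxt, dyt)) % 360,  # clockwise from north
        "line_azimuth": math.degrees(math.atan2(dxl, dyl)) % 360,
        "corner_angle": corner,      # 90 degrees means the grid is orthogonal
        "area": len_t * len_l * math.sin(math.radians(corner)),
    }

# 25 m bins: 101 traces east-west, 81 lines north-south
g = grid_from_corners((500000.0, 4000000.0), (502500.0, 4000000.0),
                      (500000.0, 4002000.0), 101, 81)
print(g)
```

The corner-angle term is also the natural place for the planned orthogonality check: any value meaningfully different from 90 degrees signals a bad corner.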

Interested in Beta testing? Reach out! We welcome feedback on the interface and are especially keen on your input for handling 4-corner data from load sheets/EBCDIC headers versus XYs from trace headers.


About Don Robinson
Don Robinson has dedicated over 50 years to software development and seismic analysis. He founded Oklahoma Seismic Corporation in 1980 and co-developed the MIRA interpretation system, later acquired by Landmark Graphics in 1993. He then started Resolve GeoSciences in 1997, where he now leads the development of SeisShow and AnalyzeSE, software for analyzing and correcting SEG-Y seismic data.
Connect on LinkedIn

Trust but Verify: Overcoming Common Challenges in Seismic Data Management

Seismic data plays a key role for many professionals, whether it’s loading to workstations, managing repositories, interpreting datasets or preparing data for licensing. However, a common misconception is that the data we receive is clean and ready to use. After 50+ years of experience, we’ve learned to avoid “blind faith” and adopt a “trust but verify” approach instead.

Just because seismic data loads into a workstation doesn’t mean it’s accurate. Even new data can have issues like duplicate traces or spikes. Workstations create grids with one trace per line and crossline, so they may load only the first or last duplicate trace. Spikes are often clipped, but quieter intervals can still be affected. When data is loaded into NumPy arrays or cloud formats for analysis, those formats expect a clean 3D grid with one trace per cell, so any errors can disrupt the process.
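A pre-load sanity check along these lines is easy to sketch. The snippet below is illustrative only (the function names are our own, not from any particular workstation): it flags duplicate (inline, crossline) pairs and suspiciously large samples before the data is pushed into a NumPy grid.

```python
import numpy as np
from collections import Counter

def duplicate_keys(keys):
    """Return sorted (inline, crossline) pairs that occur more than once."""
    return sorted(k for k, n in Counter(keys).items() if n > 1)

def spike_samples(trace, k=20.0):
    """Indices of samples whose magnitude exceeds k times the median |amplitude|."""
    mad = np.median(np.abs(trace))
    if mad == 0:
        return np.array([], dtype=int)
    return np.flatnonzero(np.abs(trace) > k * mad)

# One duplicated grid cell and one spiky sample
keys = [(100, 1), (100, 2), (100, 2), (101, 1), (101, 2)]
print(duplicate_keys(keys))                    # [(100, 2)]
trace = np.array([1.0, -1.0, 2.0, 50.0, 1.0])
print(spike_samples(trace))                    # [3]
```

Running checks like these up front means the choice of which duplicate survives is yours, not an accident of load order.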

EBCDIC headers and load sheets, often created manually, are prone to errors in projection systems, byte locations for lines/traces, SP/CDP, XYs, and other metadata. Verification is key.

If your wells tie reliably in the southwest but not in the northeast, there could be a simple reason. We’ve seen transposition errors in XY values, revealed by fractional differences in spacing, cause offsets of up to 1,220 meters (4,000 feet). This explains why well control might not match the seismic data, and the issue is easy to resolve once spotted.
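One way to spot such errors is to compare each trace’s header XY against the position predicted from its grid indices. The sketch below is a simplified, hypothetical check, assuming a regular grid defined by an origin and two step vectors; real surveys need the grid definition fitted first.

```python
import math

def flag_misfits(records, origin, il_vec, xl_vec, tol=0.5):
    """Flag traces whose header XY disagrees with the position predicted
    from their (inline, crossline) indices on a regular grid.

    records: iterable of (inline, crossline, x, y)
    origin:  (x, y) at inline 0, crossline 0
    il_vec / xl_vec: (dx, dy) per inline / crossline step
    """
    bad = []
    for il, xl, x, y in records:
        px = origin[0] + il * il_vec[0] + xl * xl_vec[0]
        py = origin[1] + il * il_vec[1] + xl * xl_vec[1]
        misfit = math.hypot(x - px, y - py)
        if misfit > tol:
            bad.append((il, xl, round(misfit, 1)))
    return bad

# 25 m bins; crossline 1 has "25" keyed in as "52" in the easting
records = [(0, 0, 500000.0, 4000000.0),
           (0, 1, 500052.0, 4000000.0),   # should be 500025.0
           (1, 0, 500000.0, 4000025.0)]
print(flag_misfits(records, (500000.0, 4000000.0), (0.0, 25.0), (25.0, 0.0)))
# -> [(0, 1, 27.0)]
```

A transposed pair of digits produces exactly this signature: one trace (or one row of traces) sitting a fixed, non-integer number of bins away from where the grid says it should be.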

These are just a few issues we’ll cover in future posts, with help from SeisShow for troubleshooting and AnalyzeSE for scanning thousands of SEG-Y files, with results in JSON metafiles for easy data management.

What challenges have you faced with seismic data (whether resolved or not)?

Share your experiences here to help guide the order of our future posts. You can also contact us here: resolvegeo.com/contact and share the post with others. Your insights are valuable, and we’re always surprised by new challenges.



The Seismic Family Tree

When you consider a family tree, you likely envision a primary Root Ancestor accompanied by all their descendants: their children, grandchildren, and even great-grandchildren. We can comfortably visualize how this tree is structured, tracking a family’s lineage from its earliest roots to its newest branches.

This same concept applies to seismic data, but, as we’ll soon discover, it can become very complex quite rapidly. The acquired data acts as the Root Ancestor, and each instance of data processing or reprocessing gives birth to a new offspring. Some versions yield many children, grandchildren, and great-grandchildren, while others may not bear any children at all. Every processing step brings forth more offspring until the final interpretation volume is achieved. Nevertheless, even then, interpreters may decide to employ different versions for their interpretation. Despite these complexities, the subsequent versions never stray far from their Root Ancestor, and the tree can still be navigated.

However, real chaos ensues when interpreters utilize both pre-stack and post-stack volumes to construct additional branches. Acquisition footprints are suppressed. Filters and AGC are applied. Attributes of all sorts are calculated. EBCDIC headers are altered (or not written) and might not cite the correct, clean volume used as input. Datasets loaded into workstations are disassociated from their EBCDIC, binary, and trace headers. They’re clipped or transformed into other formats. After these alterations, tracing a volume back to the original processed version, let alone the Root Ancestor, might be nearly impossible!

And it doesn’t stop there. Even the parent can be compromised when numerous individual surveys merge into one. A combined survey now signifies a new Root Ancestor, complete with an entirely new family tree. Moreover, not all original traces from each combined survey will be used, due to permit issues and areas of interest.

Are you preserving an accurate seismic family tree? Are you documenting the offspring as they’re born and pinpointing the most productive branches contributing to the final interpretation volumes? This might seem daunting, but armed with the correct technology and processes, you can start to assert control over your seismic ancestry, ensuring it remains accurate, easy to navigate, and well-populated.

