Precise location data is crucial for accurate seismic interpretation. While the “4 corners” method can introduce risks, extracting coordinates directly from trace headers improves spatial accuracy, minimizing misalignment from cumulative azimuth and spacing errors.
Loading corner coordinates from load sheets and EBCDIC headers is efficient, but manual data entry raises error risks. Studies indicate that 20-30% of these errors are transpositions (e.g., “43” entered as “34”), with the rest being random digit additions or omissions.
Analyzing XY values from hundreds of thousands of 3D poststack volumes confirms that trace headers—populated directly by processing software—provide more reliable spacing and azimuth data than load sheets and EBCDIC headers, with far fewer manual-entry errors. However, trace headers can still have issues, which can often be identified and corrected automatically.
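To make the trace-header approach concrete, here is a minimal sketch of pulling XY and line/trace values from a single 240-byte SEG-Y trace header and applying the coordinate scalar. It assumes the standard SEG-Y revision 1 byte locations (scalar at bytes 71-72, CDP X/Y at 181-188, inline/crossline at 189-196) and big-endian encoding; real files often deviate from these, which is exactly why verification tools matter.

```python
import struct

def read_trace_xy(header: bytes):
    """Extract CDP X/Y and inline/crossline from a 240-byte SEG-Y
    trace header, applying the coordinate scalar. Assumes standard
    rev 1 byte locations; real files frequently deviate."""
    scalar = struct.unpack_from(">h", header, 70)[0]   # bytes 71-72
    x      = struct.unpack_from(">i", header, 180)[0]  # bytes 181-184 (CDP X)
    y      = struct.unpack_from(">i", header, 184)[0]  # bytes 185-188 (CDP Y)
    inline = struct.unpack_from(">i", header, 188)[0]  # bytes 189-192
    xline  = struct.unpack_from(">i", header, 192)[0]  # bytes 193-196
    # Per the SEG-Y standard: negative scalar divides, positive multiplies.
    if scalar < 0:
        x, y = x / -scalar, y / -scalar
    elif scalar > 0:
        x, y = x * scalar, y * scalar
    return inline, xline, x, y

# Build a synthetic header to demonstrate: scalar -100, coords stored x100.
hdr = bytearray(240)
struct.pack_into(">h", hdr, 70, -100)
struct.pack_into(">i", hdr, 180, 65432100)   # 654321.00
struct.pack_into(">i", hdr, 184, 432109900)  # 4321099.00
struct.pack_into(">i", hdr, 188, 1001)
struct.pack_into(">i", hdr, 192, 2002)
print(read_trace_xy(bytes(hdr)))  # (1001, 2002, 654321.0, 4321099.0)
```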
We’re launching a new Grid Definition Calculator, available soon for Beta testing. This tool allows users to enter or paste line, trace, and XY corner values to calculate spacings, azimuths, area, and create a grid polygon using either three corners or a Point + Spacing method. Currently, Projection (CRS) is for display only, but an upcoming feature will check corner orthogonality (90-degree angles).
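The three-corner geometry the calculator uses can be sketched in a few lines. This is not the calculator's actual code, just an illustration under a rectangular-grid assumption: given an origin corner, the far corner of the first inline, and the far corner of the first crossline, derive spacings, azimuths, and area. The function name and corner convention are hypothetical.

```python
import math

def grid_from_corners(p_origin, p_inline_end, p_xline_end,
                      n_inlines, n_xlines):
    """Derive trace spacing, line spacing, azimuths, and survey area
    from three grid corners (a sketch of the 3-corner approach;
    assumes the corners share the origin and a rectangular grid)."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])
    def length(v):
        return math.hypot(v[0], v[1])
    def azimuth(v):  # degrees clockwise from grid north
        return math.degrees(math.atan2(v[0], v[1])) % 360.0

    v_il = vec(p_origin, p_inline_end)   # along an inline
    v_xl = vec(p_origin, p_xline_end)    # along a crossline
    return {
        "trace_spacing": length(v_il) / (n_xlines - 1),
        "line_spacing": length(v_xl) / (n_inlines - 1),
        "inline_azimuth": azimuth(v_il),
        "crossline_azimuth": azimuth(v_xl),
        "area": length(v_il) * length(v_xl),  # rectangle assumption
    }

# Hypothetical 50-line x 100-trace survey with 110-ft spacing both ways.
g = grid_from_corners((0.0, 0.0), (10890.0, 0.0), (0.0, 5390.0),
                      n_inlines=50, n_xlines=100)
print(g["trace_spacing"], g["line_spacing"], g["inline_azimuth"])
```

Checking orthogonality, the upcoming feature mentioned above, would amount to verifying that the angle between the two corner vectors is 90 degrees within tolerance.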
Interested in Beta testing? Reach out! We welcome feedback on the interface and are especially keen on your input for handling 4-corner data from load sheets/EBCDIC headers versus XYs from trace headers.

About Don Robinson
Don Robinson has dedicated over 50 years to software development and seismic analysis. He founded Oklahoma Seismic Corporation in 1980 and co-developed the MIRA interpretation system, later acquired by Landmark Graphics in 1993. He then started Resolve GeoSciences in 1997, where he now leads the development of SeisShow and AnalyzeSE, software for analyzing and correcting SEG-Y seismic data. Connect on LinkedIn
The Parihaka 3D dataset in New Zealand’s Taranaki Basin is publicly available through New Zealand Petroleum and Minerals and worth exploring. We reviewed the Near, Mid, Far, and Full Angle Stack volumes, noting the Mid Angle Stack volume had issues with a few traces.
Initial display of the Parihaka 3D dataset highlights its impressive quality, though it presents some loading and interpretation challenges. Logarithmic histograms are used to capture the full amplitude range, skipping bins with low counts until sufficient data appears. Absolute and alternate min/max amplitudes are stored to flag outliers. Notably, only two of the volume’s more than 1,038,172 traces had extreme values at the 32-bit float limit. For display, standard deviations of the amplitude values were used to ensure a representative view despite these outliers.
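The amplitude QC described above can be sketched in standard-library Python: flag values at the 32-bit float limit, derive a display range from the standard deviation of the remaining samples, and bucket absolute amplitudes into logarithmic bins. This is a simplified illustration, not SeisShow’s implementation; the function names and thresholds are assumptions.

```python
import math
import statistics

FLOAT32_MAX = 3.4028235e38

def qc_amplitudes(amplitudes, n_std=3.0):
    """Flag spikes at (or near) the 32-bit float limit, then derive a
    display clip range from the standard deviation of the remaining
    values (a sketch of the approach described above)."""
    spikes = [a for a in amplitudes if abs(a) >= 0.99 * FLOAT32_MAX]
    clean = [a for a in amplitudes if abs(a) < 0.99 * FLOAT32_MAX]
    mean = statistics.fmean(clean)
    std = statistics.pstdev(clean)
    return spikes, (mean - n_std * std, mean + n_std * std)

def log_histogram(amplitudes, bins_per_decade=4):
    """Bucket absolute amplitudes into logarithmic bins so the full
    dynamic range stays visible even across many decades."""
    counts = {}
    for a in amplitudes:
        m = abs(a)
        if m > 0.0:
            b = math.floor(math.log10(m) * bins_per_decade)
            counts[b] = counts.get(b, 0) + 1
    return counts

amps = [0.5, -1.2, 0.8, 2.0, -0.7, 3.4028235e38]  # one spike at float32 max
spikes, (lo, hi) = qc_amplitudes(amps)
print(len(spikes), lo, hi)
```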
The indexing process scans each trace and sample, logging findings in reports and a JSON file. It flagged 15 traces with missing values in the trace headers, with their file positions highlighted in red. These issues were found at the end of a few lines, and SeisShow excluded them from the index file since they couldn’t be linked to any line or trace.
The number of samples per trace, stored in both the Binary and Trace Headers, presents another issue. Here, the Trace Headers showed 2049 samples, while the correct value in the Binary Header was 1168. If both headers are wrong, comparing the sample counts implied by the actual trace lengths can identify the correct value, a method used in SeisShow and AnalyzeSE to maintain accuracy. This discrepancy is highlighted in yellow in the SeisShow Index, Trace Header, and Report.
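A cross-check along these lines is easy to sketch: take the binary-header value, the most common trace-header value, and the count implied by the trace lengths on disk, and see which agree. The function below is an illustration of the idea (names and the 4-bytes-per-sample assumption are ours, not SeisShow’s code).

```python
from collections import Counter

def resolve_sample_count(binary_header_ns, trace_header_ns_list,
                         trace_byte_lengths, bytes_per_sample=4):
    """Cross-check samples-per-trace: binary header vs. the mode of
    the trace headers vs. the count implied by actual trace lengths
    (assumes a fixed 4-byte sample format)."""
    implied = Counter(n // bytes_per_sample for n in trace_byte_lengths)
    consensus = implied.most_common(1)[0][0]
    header_mode = Counter(trace_header_ns_list).most_common(1)[0][0]
    return {
        "binary_header": binary_header_ns,
        "trace_header_mode": header_mode,
        "implied_by_length": consensus,
        "agree": binary_header_ns == header_mode == consensus,
    }

# Mirroring the case above: trace headers say 2049, the binary header
# says 1168, and the trace lengths settle the question.
r = resolve_sample_count(1168, [2049] * 10, [1168 * 4] * 10)
print(r["implied_by_length"], r["agree"])  # 1168 False
```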
Spikes in datasets can disrupt analysis, interpretation, and proper loading into workstations, as discussed above. The following images show three methods for handling outliers: setting them to zero, clipping, or interpolating across the affected samples. SeisShow identifies extreme amplitudes, providing details like line, crossline, x, y, amplitude, time, and trace location. Red arrows highlight spikes, and users can click on high-amplitude lines to jump to their location for review and correction. Interpolation generally yields the best results, while clipping can leave residual spikes in quieter intervals. There’s also an option to write out the edited file for further adjustments.
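The three despiking options can be sketched on a single trace as follows. This is a simplified illustration, not SeisShow’s algorithm; the threshold and function name are assumptions.

```python
def despike(trace, threshold, mode="interpolate"):
    """Handle samples whose absolute amplitude exceeds `threshold`
    by zeroing, clipping, or linearly interpolating across them
    (a simplified sketch of the three options described above)."""
    out = list(trace)
    n = len(out)
    for i, v in enumerate(out):
        if abs(v) <= threshold:
            continue
        if mode == "zero":
            out[i] = 0.0
        elif mode == "clip":
            out[i] = threshold if v > 0 else -threshold
        elif mode == "interpolate":
            # Walk outward to the nearest in-range neighbours
            # (already-repaired samples on the left, originals on the right).
            lo = i - 1
            while lo >= 0 and abs(out[lo]) > threshold:
                lo -= 1
            hi = i + 1
            while hi < n and abs(trace[hi]) > threshold:
                hi += 1
            left = out[lo] if lo >= 0 else 0.0
            right = trace[hi] if hi < n else 0.0
            frac = (i - lo) / (hi - lo)
            out[i] = left + frac * (right - left)
    return out

t = [0.1, 0.2, 9.9, 0.4, 0.5]
print(despike(t, 1.0))  # spike replaced by the midpoint of its neighbours
```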
Included are two more displays: the SeisShow Report and a well-documented EBCDIC header.
Have you encountered problems with bad trace header values or amplitude spikes? Please share your experiences in the comments on LinkedIn.
Before loading SEG-Y files into an interpretation system, analysis tool, or data repository, several critical questions must be addressed. The image effectively highlights key considerations for preparing seismic files properly.
SeisShow and AnalyzeSE handle these checks automatically, eliminating the need to manually locate bytes for key fields and addressing each item in the list.
After careful review, it’s clear we should cover these topics across multiple posts.
As the saying goes, “Be sincere, be brief, be seated”, often attributed to Franklin D. Roosevelt. So today we’ll focus on key steps for loading data for immediate use.
Our primary concern is: Location! Location! Location!
With seismic data now integral to GeoSteering, accuracy is crucial. Modern drilling pads support 32+ wells, so even small spacing errors can impact well positioning and fail to warn of faults and hazards for all wells on a pad.
A key concern is accurately identifying XY values from the Trace Headers or Load Sheet and carefully checking spacings. As shown in the second image, if the expected spacing is 110 feet but the data reads 110.5 feet, the grid could be off by up to 1,000 feet in X and Y by the end of the survey, and by even more if the survey had additional lines and traces.
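The arithmetic behind that 1,000-foot figure is simple: a small per-interval error multiplied by the number of intervals. A one-line helper makes it explicit (the 2,001-trace line length is a hypothetical chosen to match the example).

```python
def cumulative_offset(expected_spacing, actual_spacing, n_traces):
    """Total positional drift at the far edge of a survey when every
    interval is off by the same small amount (illustrating the
    110 vs. 110.5 ft example above)."""
    return (actual_spacing - expected_spacing) * (n_traces - 1)

# 0.5 ft per 110-ft interval over a 2,001-trace line drifts 1,000 ft.
print(cumulative_offset(110.0, 110.5, 2001))  # 1000.0
```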
Ensure your Projection System is correct. The image below uses Texas Central, NAD 27, but results would vary significantly with Texas Central, NAD 83 or Texas North Central, NAD 27. Incorrect datums have led to many dry holes.
Share your experiences here to help guide the order of our future posts. You can also contact us here: resolvegeo.com/contact and share the post with others. Your insights are valuable, and we’re always surprised by new challenges.
Seismic data plays a key role for many professionals, whether it’s loading to workstations, managing repositories, interpreting datasets or preparing data for licensing. However, a common misconception is that the data we receive is clean and ready to use. After 50+ years of experience, we’ve learned to avoid “blind faith” and adopt a “trust but verify” approach instead.
Just because seismic data loads into a workstation doesn’t mean it’s accurate. Even new data can have issues like duplicate traces or spikes. Workstations build grids with one trace per line and crossline, so they may silently keep only the first or last of any duplicate traces. Spikes are often clipped, but quieter intervals can still be affected. And when data is loaded into NumPy arrays or cloud formats for analysis, those formats expect a clean 3D grid with one trace per cell, so any errors can disrupt the process.
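Detecting the duplicates before the workstation silently collapses them is straightforward once you have each trace’s (inline, crossline) key. A minimal sketch, not SeisShow’s actual code:

```python
from collections import defaultdict

def find_duplicates(trace_keys):
    """Group trace indices by (inline, crossline); any grid cell
    holding more than one trace is a duplicate that a workstation
    would silently collapse to a single trace."""
    cells = defaultdict(list)
    for idx, key in enumerate(trace_keys):
        cells[key].append(idx)
    return {key: idxs for key, idxs in cells.items() if len(idxs) > 1}

keys = [(100, 1), (100, 2), (100, 2), (100, 3)]  # crossline 2 appears twice
print(find_duplicates(keys))  # {(100, 2): [1, 2]}
```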
EBCDIC headers and load sheets, often created manually, are prone to errors in projection systems, byte locations for lines/traces, SP/CDP, XYs, and other metadata. Verification is key.
If your wells tie reliably in the southwest but not in the northeast, there could be a simple reason. We’ve seen transposition errors in XY values, revealed by fractional differences in spacing, cause offsets of up to 1,220 meters (4,000 feet). This explains why well control might not match seismic data, but the issue is easy to resolve once spotted.
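One way to confirm a suspected transposition is to test whether swapping one adjacent digit pair in the entered coordinate reproduces the expected value. A sketch of that kind of check (the coordinate values below are hypothetical):

```python
def transposition_explains(entered, expected):
    """Return True when swapping one adjacent digit pair in `entered`
    yields `expected`, the classic manual-entry transposition."""
    a, b = str(entered), str(expected)
    if len(a) != len(b) or a == b:
        return False
    for i in range(len(a) - 1):
        swapped = a[:i] + a[i + 1] + a[i] + a[i + 2:]
        if swapped == b:
            return True
    return False

# "14" keyed instead of "41" in a northing shifts the point 2,700 ft.
print(transposition_explains(731400, 734100))  # True
print(abs(734100 - 731400))                    # 2700
```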
These are just a few issues we’ll cover in future posts, with help from SeisShow for troubleshooting and AnalyzeSE for scanning thousands of SEG-Y files, with results in JSON metafiles for easy data management.
What challenges have you faced with seismic data (whether resolved or not)?
When you consider a family tree, you likely envision a primary Root Ancestor accompanied by all their descendants: their children, grandchildren, and even great-grandchildren. We can comfortably visualize how this tree is structured, tracking a family’s lineage from its earliest roots to its newest branches.
This same concept applies to seismic data, but, as we’ll soon discover, it can become very complex quite rapidly. The acquired data acts as the Root Ancestor, and each instance of data processing or reprocessing gives birth to a new offspring. Some versions yield many children, grandchildren, and great-grandchildren, while others may not bear any children at all. Every processing step brings forth more offspring until the final interpretation volume is achieved. Nevertheless, even then, interpreters may decide to employ different versions for their interpretation. Despite these complexities, the subsequent versions never stray far from their Root Ancestor, and the tree can still be navigated.
However, real chaos ensues when interpreters utilize both pre-stack and post-stack volumes to construct additional branches. Acquisition footprints are suppressed. Filters and AGCs are implemented. Attributes of all sorts are calculated. EBCDIC headers are altered (or not written) and might not cite the correct, clean volume utilized as an input. Datasets loaded into workstations are disassociated from EBCDIC, Binary, and Trace headers. They’re clipped or transformed into other formats. Following these alterations, tracing a volume back to the original processed version, let alone the Root Ancestor, might be nearly impossible!
And it doesn’t stop there. Even the parent can be compromised when numerous individual surveys merge into one. A combined survey now signifies a new Root Ancestor, complete with an entirely new family tree. Moreover, not all original traces from each combined survey will be used, due to permit issues and areas of interest.
Are you preserving an accurate seismic family tree? Are you documenting the offspring as they’re born and pinpointing the most productive branches contributing to the final interpretation volumes? This might seem daunting, but armed with the correct technology and processes, you can start to assert control over your seismic ancestry, ensuring it remains accurate, easy to navigate, and well-populated.