Welcome 👋

We would like to welcome you to the GitHub site for E. Craig Phillips. Craig has over 40 years of experience as a Petrophysicist in the Oil and Gas industry. Many of those years were spent in applied research, culminating in the experience gained working in the Reserves Assessment and Geologic Modeling groups at Saudi Aramco. Craig had the good fortune to characterize and 3D model most of the giant oil and gas reservoirs in Saudi Arabia, working on a team with Ed Clerke, Jan Buiting, Ramsin Eyvazzadeh, and Stephen Cheshire.

Craig is now developing python software that lets all of us exploit some of the best open-source python libraries released in recent years. Craig specializes in NMR log interpretation, core-log integration, and full-field Petrophysical Characterization studies, and these GitHub repositories reflect that experience. Craig is getting older and recognizes the need for, and value of, delivering open-source software to industry as a way of giving back for all the great years he has had in this business.

Most of the repositories are petrophysically oriented, and many are designed to be used with Emerson's Geolog software, using python loglans with Jupyter Notebooks serving as help files. Complete Geolog projects with python loglans are also available here. One Geolog project demonstrates a proven Carbonate Petrophysical Characterization Workflow used to characterize the Arab D carbonate reservoirs of Saudi Arabia, with Ed Clerke's Rosetta Stone core database as calibration. Another Geolog project models High Pressure Mercury Injection core data for the Thomeer parameters, which are then used to model reservoir saturations from Capillary Pressure.

However, you will also find some off-the-wall repositories using Deep Learning for ⚡Marine Object Detection to identify ships, boats, buoys, and land masses from a live video feed while sailing. In the image below we are not performing object detection on the land; we are performing object detection on 'boats' and 'buoys', and there are no 'buoys' in this scene:


Figure 1) Boat Object Detection using PyTorch.

This project was implemented on an Island Packet 420 Sailing Vessel using a Jetson NX minicomputer for the video Object Detection. One of the benefits of being able to write your own software is that if you need something, you can simply build it.

The following is a partial list of available repositories that will give you a flavor of some of the projects available at this GitHub site. Please see all of the GitHub repositories under the Repositories tab of this profile.

  • We have a new NMR repository that demonstrates how we can use python SciPy curve_fit for NMR Echo Train T2 inversion to create an NMR log:
    import numpy as np
    from scipy.optimize import curve_fit

    # multi-exponential decay model with 8 fixed T2 bins (4, 8, 16, ... 512 ms)
    def func(x, p1, p2, p3, p4, p5, p6, p7, p8):
        return (p1*np.exp(-x/4) + p2*np.exp(-x/8) + p3*np.exp(-x/16) + p4*np.exp(-x/32) +
                p5*np.exp(-x/64) + p6*np.exp(-x/128) + p7*np.exp(-x/256) + p8*np.exp(-x/512))

    # fit the bin amplitudes to the stacked echo train (xdata = echo times, ystack = amplitudes)
    popt, pcov = curve_fit(func, xdata, ystack, method='trf', bounds=(0.05, [20, 20, 20, 20, 20, 20, 20, 20]))

This is a rather unorthodox approach, but it does demonstrate how an NMR log is created from the NMR Time-Domain Echo Train data. With the code in this repository, you can control the Echo Train noise by adding additional random noise to the Time-Domain data, and you can also apply Echo Train stacking to improve the S/N at the expense of lower NMR log resolution; a minimal sketch of those two ideas follows the figure below. The following animated gif illustrates how the program processes each Echo Train to create the T2 distribution and then the NMR log. This type of visual NMR Time-Domain processing was inspired by the software designed by Dan Georgi, formerly with Baker Atlas. Dan wanted his Echo Train inversion software to be visual and intuitive to use, and it was. An added benefit was that it was quite obvious if there was a problem with the number of echoes being acquired or if not enough bins were employed in the inversion process. We have tried to capture some of Dan's objectives in this python example.

Figure 2) Construct an NMR log using Time-Domain Echo Train inversion.
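
To make the noise and stacking controls concrete, here is a minimal sketch (not code from the repository) of how random noise can be added to a synthetic echo train and how stacking several noisy trains improves the S/N; the decay components, array sizes and noise level are all assumed values:

    import numpy as np

    # synthetic echo times (ms) and a two-component "true" decay (illustrative amplitudes and T2s)
    t = np.arange(0.9, 500.0, 0.9)
    clean = 6.0 * np.exp(-t / 32.0) + 4.0 * np.exp(-t / 256.0)

    rng = np.random.default_rng(0)
    noise_level = 0.5      # assumed noise amplitude; the repository exposes a similar control

    # a single noisy echo train vs. a stack of 8 noisy trains averaged together
    noisy_single = clean + rng.normal(0.0, noise_level, t.size)
    noisy_stack = (clean + rng.normal(0.0, noise_level, (8, t.size))).mean(axis=0)

    # stacking N trains reduces the random noise by roughly sqrt(N), at the cost of log resolution
    print(np.std(noisy_single - clean), np.std(noisy_stack - clean))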

  • We have a new repository that uses Altair and Panel to allow us to interrogate Routine Core Analysis (RCA) and SCAL data, using Thin Section images in the process to better understand the texture of the rock. We have also added a Geolog project with python loglans to provide this same application in Geolog. This coding was not at all straightforward, since only Jupyter Notebooks will display (render) the Thin Section images directly; instead, we use an alternative solution to render the Thin Section images in Geolog, JupyterLab Notebooks and even .py files. One workable pattern for this is sketched after the figure below.


Figure 3) Use of Thin Sections to help us better understand the texture of our reservoir.
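
For readers wondering how the images can be tied to a crossplot outside of a notebook, one workable pattern, sketched below, is to base64-encode each thin section and pass the resulting data URL to Altair's image mark. This is a simplified illustration with hypothetical file and column names, not the repository's exact solution:

    import base64
    import pandas as pd
    import altair as alt

    def to_data_url(path):
        # embed the thin-section image directly in the chart spec as a base64 data URL
        with open(path, 'rb') as f:
            return 'data:image/png;base64,' + base64.b64encode(f.read()).decode()

    # hypothetical core-analysis table with one thin-section image per plug
    df = pd.DataFrame({
        'porosity': [0.12, 0.21, 0.28],
        'permeability_mD': [0.5, 35.0, 420.0],
        'image_url': [to_data_url(p) for p in ['ts_001.png', 'ts_002.png', 'ts_003.png']],
    })

    # porosity-permeability crossplot that draws each sample as its thin-section image
    chart = alt.Chart(df).mark_image(width=40, height=40).encode(
        x='porosity:Q',
        y=alt.Y('permeability_mD:Q', scale=alt.Scale(type='log')),
        url='image_url:N',
    )
    chart.save('poro_perm_thin_sections.html')   # the saved HTML renders outside of Jupyter as well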

  • In our GitHub repository we have used a new, comprehensive reservoir characterization database from Costa, Geiger and Arnold (1); our Jupyter Notebooks for this dataset are found in the linked repository. We have employed all 17 available well logs along with the Routine Core Analysis (RCA) and Special Core Analysis (SCAL) data, and implemented our typical carbonate reservoir characterization workflow as discussed by Phillips (2).


Figure 4) Complete Carbonate Reservoir Characterization process.

For this repository we did not use the 3D static or dynamic models, but we did employ the time-series dynamic production and formation pressure data by well, in both Spotfire and python Altair, to better understand the dynamic aspects of this reservoir; a minimal sketch of this kind of Altair display follows below. This is a rich dataset that deserves to be explored further than is possible within the scope of this project. We also have a complete Geolog project with the same data and our python loglans, which use the same code as our Jupyter Notebooks. The Geolog project also includes the SCAL dataset, with python code to create Capillary Pressure curves from the Thomeer parameters and to assess the Thomeer parameter correlations.
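
As a flavor of that kind of dynamic-data display, the hypothetical Altair sketch below plots oil rate and formation pressure by well with a legend-driven highlight; the file name and column names are assumptions, not the dataset's actual field names:

    import pandas as pd
    import altair as alt

    # hypothetical monthly production/pressure table, one row per well per date
    prod = pd.read_csv('production_by_well.csv', parse_dates=['date'])

    # click a well in the legend to highlight its history in both panels
    pick_well = alt.selection_point(fields=['well'], bind='legend')

    rate = alt.Chart(prod).mark_line().encode(
        x='date:T',
        y='oil_rate:Q',
        color='well:N',
        opacity=alt.condition(pick_well, alt.value(1.0), alt.value(0.15)),
    ).add_params(pick_well)

    pressure = alt.Chart(prod).mark_line().encode(
        x='date:T',
        y='formation_pressure:Q',
        color='well:N',
        opacity=alt.condition(pick_well, alt.value(1.0), alt.value(0.15)),
    ).add_params(pick_well)

    (rate & pressure).save('dynamic_data_by_well.html')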

  • We have a complete carbonate (Arab D) Reservoir Characterization Workflow available in a Geolog project. We are using python loglans with Ed Clerke's Arab D Rosetta Stone carbonate core database as the calibration data. We also have a corresponding repository with 2 Jupyter Notebooks that can be used as help files, fully documenting each step of this characterization process. There is also a third Notebook that estimates Petrophysical Rock Types (PRT) using scikit-learn, but we prefer the kNN method proposed in the second Notebook; a minimal kNN sketch follows the figure below.


Figure 5) Arab D Carbonate Reservoir Characterization Workflow using Clerke's Rosetta Stone Arab D carbonate dataset for calibration.
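
The kNN rock-typing step is conceptually simple; a minimal scikit-learn sketch is shown below with hypothetical file names and feature columns (the Notebooks' own implementation may differ in detail):

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsClassifier

    # hypothetical calibration table: core-defined Petrophysical Rock Types with log-derived features
    core = pd.read_csv('rosetta_stone_calibration.csv')
    features = ['PHIT', 'logK', 'RHOB', 'NPHI']          # assumed feature columns
    X, y = core[features], core['PRT']

    # distance-weighted kNN; scaling keeps any single log from dominating the distance metric
    prt_model = make_pipeline(StandardScaler(),
                              KNeighborsClassifier(n_neighbors=5, weights='distance'))
    prt_model.fit(X, y)

    # predict a rock type for every depth in a well-log table with the same feature columns
    logs = pd.read_csv('well_logs.csv')
    logs['PRT_pred'] = prt_model.predict(logs[features])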

  • We have a new Geolog project used for Thomeer Parameter Analysis of High Pressure Mercury Injection (HPMI) core data using SciPy curve_fit, written as a Geolog python loglan. The loglan estimates the Thomeer Capillary Pressure parameters of the Thomeer hyperbola that models the HPMI Capillary Pressure data; those parameters are then used to estimate Capillary Pressure saturations in the reservoir (a simplified fitting sketch follows the figure below). We also have the corresponding python code in a Jupyter Notebook, with complete documentation, to be used as the help file for this Geolog Thomeer Analysis project.


Figure 6) High Pressure Mercury Injection data fit to the Thomeer hyperbola to determine the Thomeer Capillary Pressure parameters for each sample.
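
For reference, the Thomeer hyperbola itself is compact enough to show here. The sketch below fits a single pore system to HPMI data with SciPy curve_fit; it is a simplified, hypothetical illustration (synthetic data, assumed bounds), not the loglan code itself:

    import numpy as np
    from scipy.optimize import curve_fit

    def thomeer(pc, bv_inf, pd_entry, g):
        # Thomeer hyperbola: bulk volume occupied vs. capillary pressure for a single pore system
        pc = np.asarray(pc, dtype=float)
        bv = np.zeros_like(pc)
        entered = pc > pd_entry
        bv[entered] = bv_inf * np.exp(-g / np.log10(pc[entered] / pd_entry))
        return bv

    # synthetic HPMI points for illustration; real Pc (psia) / Bv (%BV) pairs come from the HPMI file
    pc_data = np.logspace(0.5, 4.0, 40)
    bv_data = thomeer(pc_data, 18.0, 40.0, 0.4) + np.random.default_rng(1).normal(0.0, 0.2, 40)

    # fit Bv-infinity, the displacement pressure Pd, and the pore geometrical factor G
    popt, pcov = curve_fit(thomeer, pc_data, bv_data, p0=[20.0, 10.0, 0.3],
                           bounds=([0.0, 0.1, 0.01], [60.0, 10000.0, 10.0]))
    bv_inf_fit, pd_fit, g_fit = popt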

  • A new Shaley-Sand Log Analysis Tutorial, written as a Jupyter Notebook in python, contains the code along with documentation for a typical shaley-sand analysis. It could be used as stand-alone software or implemented in other Petrophysical software packages. The program follows George Coates' MRIAN-type analysis and computes both Dual Water and Waxman-Smits saturations; a generic Waxman-Smits sketch follows the figure below.


Figure 7) Combined Conventional and NMR Logs used in a Shaley-Sand log analysis example similar to Coates' MRIAN analysis.
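
The Waxman-Smits part of that analysis reduces to solving one nonlinear equation per depth. The sketch below is a generic, hypothetical implementation using SciPy's brentq; the input values are illustrative, and the notebook's own equations, constants and parameter handling may differ:

    from scipy.optimize import brentq

    def sw_waxman_smits(rt, rw, phit, qv, b, a=1.0, m_star=2.0, n_star=2.0):
        # Waxman-Smits: 1/Rt = Swt^n*/(F*·Rw) + B·Qv·Swt^(n*-1)/F*, solved numerically for Swt
        f_star = a / phit**m_star                     # formation factor from total porosity
        def residual(sw):
            return sw**n_star / (f_star * rw) + b * qv * sw**(n_star - 1.0) / f_star - 1.0 / rt
        return brentq(residual, 1e-4, 1.0)            # bracket assumes 0 < Swt <= 1

    # illustrative single-depth inputs; in the notebook Rt, Rw, PHIT, Qv and B come from the logs
    swt = sw_waxman_smits(rt=20.0, rw=0.05, phit=0.22, qv=0.15, b=3.5)
    print(round(swt, 3))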

  • Altair is used extensively in many of our repositories to interrogate our petrophysical data, both Core Calibration and Log data. Altair is a python library that allows for great interactivity with your dynamically linked data.
  • We also have a few repositories that again use Altair, this time to better understand the reservoir's dynamic data (production and pressure) from the Volve Field.
  • You will also find other notable repositories from other authors that have been forked into this site through GitHub.
  • There is much more to come …



Science-and-Technology-Society-Use-of-NASA-STELLA-Q2-Spectrometer

The Science and Technology Society (STS) of Sarasota-Manatee Counties, Florida is working with the NASA STELLA (Science and Technology Education for Land/Life Assessment) outreach program as a part of our STEM initiative. According to their site,

  • "NASA STELLA instruments are portable low-cost do-it-yourself (DIY) instruments that support science education, and outreach through scientific engagement, inquiry, and discovery while helping you understand Landsat better".

STELLA instruments are developed under the influence and inspiration of Landsat. This alignment not only fulfills our project needs but also serves as a compelling addition to our STEAM initiatives:

  1. To train the minds of young Floridians to be more aware of our wetlands, and to care for and about them. Our program will also bring more community publicity to the issue of wetlands change.
  2. To expose our middle- and high-school-aged students to real science using real data. That means learning how to use instrumentation, understanding how the data are collected, and seeing how the data can be used in the real world. It means not only creating the beautiful charts and images that present the good results, but also understanding that data must be collected in a proper and reproducible way, and that there are physical reasons for lack of accuracy and lack of precision that one must understand and minimize in order to achieve meaningful results.

The NASA STELLA-Q2 is capable of making 18 different spectral measurements, from the violet/blue portions of the electromagnetic spectrum out to the near-infrared region (beyond our range of vision). The following figure shows the visible spectrum by wavelength, and the yellow box indicates the STELLA-Q2 wavelength range.


More can be found on the STELLA DIY instruments at the following link.

https://landsat.gsfc.nasa.gov/stella/

The Science and Technology Society (STS) of Sarasota-Manatee Counties, FL has created a Jupyter Notebook to load raw NASA STELLA-Q2 spectrometer data, white-card correct the wavelength data, and then use Decision Tree and kNN methods to differentiate plant species based on the mean End-Member reference data, where the Normalized Difference Vegetative Index (NDVI) is key to this analysis. NDVI is calculated as:

NDVI = (Near IR irradiance − Red irradiance) / (Near IR irradiance + Red irradiance)
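
In python this reduces to a one-line column calculation; the sketch below assumes a white-card-corrected export with hypothetical column names for the near-infrared and red bands:

    import pandas as pd

    # hypothetical white-card-corrected STELLA-Q2 export, one row per reading
    readings = pd.read_csv('stella_q2_corrected.csv')

    nir = readings['irradiance_860nm']     # assumed near-infrared band column
    red = readings['irradiance_645nm']     # assumed red band column
    readings['NDVI'] = (nir - red) / (nir + red)   # near +1 for dense vegetation, near 0 for bare soil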

The STELLA-Q2 is a NASA-configured, hand-held spectrometer designed by Paul Mirel at NASA. The unit is relatively inexpensive and is used to collect End-Member data that can then be used to calibrate Landsat interpretations. Mike Taylor of NASA heads up the STELLA team, and he and his entire team have been immensely helpful as we delve into calibrated Landsat interpretations.

In our notebooks we employ a few novel python methods using Altair and Panel to display the actual plant-species images alongside the time-series NDVI data for each of the spectrometer readings. This helps us better understand the subtle differences in the STELLA data and the calculated values.


Pinned

  1. View-Thin-Section-Images-from-a-Porosity-Permeability-Cross-Plot-using-Python-Altair

    This is some very simple python code to view thin sections from a porosity vs. permeability cross plot using python's Altair and Panel.

    Jupyter Notebook

  2. NMR-Echo-Train-Inversion-to-created-a-typical-NMR-log

    This repository contains the python code for NMR Echo Train Inversion to create typical NMR log T2 distributions and NMR log outputs.

    Python

  3. Marine-ObjectDetection-and-InstanceSegmentation-using-Detectron2-developing-our-own-Training-Weights

    Nautical Object Detection using Detectron2 for Instance Segmentation Trained on Nautical Objects (buoys, ships and land).

    Jupyter Notebook

  4. Methods-to-to-solve-for-Klinkenberg-Permeability-from-Permeability-to-air

    Methods to solve for Klinkenberg Permeability from Permeability to air using Newton-Raphson and SciPy Optimization.

    Jupyter Notebook

  5. STS-Software-to-Download-and-Process-NASA-PACE-Ocean-Ecosystem-hyperspectral-data

    We have created a Jupyter Notebook to use with NASA PACE data employing HyperCoast to download the data and then view and process these hyperspectral data using traditional python code. We have als…

    Jupyter Notebook

  6. STS-STELLA-Spectrometer-Readings-on-Various-Plant-Species-with-NDVI

    This is a Science and Technology Society (STS) of Sarasota-Manatee Counties repository that shows the data from a series of NASA STELLA-Q2 Spectrometer readings on a number of vegetative species with calculat…

    Jupyter Notebook