Commit b688251: Updating pyCSEP docs for commit b0a84c0 made on from refs/heads/master by fabiolsilva

fabiolsilva committed Jul 27, 2023
Showing 427 changed files with 68,831 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: ff14d62ff672b14af17a7028a5f3d210
tags: 645f666f9bcd5a90fca523b33c5a78b7
Empty file added .nojekyll
Empty file.
1 change: 1 addition & 0 deletions CNAME
docs.cseptesting.org
1 change: 1 addition & 0 deletions README.md
Empty README.md for documentation cache.
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Grid-based Forecast Evaluation\n\nThis example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based\nforecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations\nshould be used to evaluate grid-based forecasts.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc).\n 2. Obtain evaluation catalog\n 3. Apply Poissonian evaluations for grid-based forecasts\n 4. Store evaluation results using JSON format\n 5. Visualize evaluation results\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import csep\nfrom csep.core import poisson_evaluations as poisson\nfrom csep.utils import datasets, time_utils, plots\n\n# Needed to show plots from the terminal\nimport matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define forecast properties\n\nWe choose a `time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that\nthe start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts\nbecause they can be rescaled to any arbitrary time period.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from csep.utils.stats import get_Kagan_I1_score\n\nstart_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')\nend_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load forecast\n\nFor this example, we provide the example forecast data set along with the main repository. The filepath is relative\nto the root directory of the package. You can specify any file location for your forecasts.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,\n start_date=start_date,\n end_date=end_date,\n name='helmstetter_aftershock')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load evaluation catalog\n\nWe will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API\nto filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to\nfilter the catalog in space and time manually.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Querying comcat catalog\")\ncatalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)\nprint(catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filter evaluation catalog in space\n\nWe need to remove events in the evaluation catalog outside the valid region specified by the forecast.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"catalog = catalog.filter_spatial(forecast.region)\nprint(catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute Poisson spatial test\n\nSimply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified\nevaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose\noption prints the status of the simulations to the standard output.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"spatial_test_result = poisson.spatial_test(forecast, catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Store evaluation results\n\nPyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read\nback into the program for plotting using :func:`csep.load_evaluation_result`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"csep.write_json(spatial_test_result, 'example_spatial_test.json')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot spatial test results\n\nWe provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from\nconsistency tests.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ax = plots.plot_poisson_consistency_test(spatial_test_result,\n plot_args={'xlabel': 'Spatial likelihood'})\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot ROC Curves\n\nWe can also plot the Receiver Operating Characteristic (ROC) curve based on the forecast and the testing catalog.\nIn the figure below, the \"False Positive Rate\" is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.\nThe \"True Positive Rate\" is the normalized cumulative area. The dashed line is the ROC curve for a uniform forecast,\nmeaning the likelihood for an earthquake to occur is the same at any position. The further the ROC curve of a\nforecast lies from the uniform-forecast curve, the more specific the forecast is. When comparing the\nforecast ROC curve against a catalog, one can evaluate whether the forecast is more or less specific\n(or smooth) at different levels of seismic rate.\n\nNote: This figure just shows an example of plotting an ROC curve for a gridded forecast.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Plotting ROC curve\")\n_ = plots.plot_ROC(forecast, catalog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calculate Kagan's I_1 score\n\nWe can also compute Kagan's I_1 score for a gridded forecast\n(see Kagan, Y. Y. [2009]. Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., 177, 532-542).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"I_1 = get_Kagan_I1_score(forecast, catalog)\nprint(\"I_1 score is: \", I_1)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
"""
.. _grid-forecast-evaluation:
Grid-based Forecast Evaluation
==============================
This example demonstrates how to evaluate a grid-based and time-independent forecast. Grid-based
forecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations
should be used to evaluate grid-based forecasts.
Overview:
1. Define forecast properties (time horizon, spatial region, etc).
2. Obtain evaluation catalog
3. Apply Poissonian evaluations for grid-based forecasts
4. Store evaluation results using JSON format
5. Visualize evaluation results
"""

####################################################################################################################################
# Load required libraries
# -----------------------
#
# Most of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the
# :mod:`csep.utils` subpackage.

import csep
from csep.core import poisson_evaluations as poisson
from csep.utils import datasets, time_utils, plots

# Needed to show plots from the terminal
import matplotlib.pyplot as plt

####################################################################################################################################
# Define forecast properties
# --------------------------
#
# We choose a :ref:`time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that
# the start and end date should be chosen based on the creation of the forecast. This is important for time-independent forecasts
# because they can be rescaled to any arbitrary time period.
from csep.utils.stats import get_Kagan_I1_score

start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')
end_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')
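Because the forecast is time-independent, its rates scale linearly with the length of the evaluation window defined by these two dates. A minimal stdlib sketch of that rescaling (the annual rate below is a hypothetical number, not taken from the Helmstetter forecast):

```python
from datetime import datetime

# Time-independent rates scale linearly with the evaluation window.
start = datetime(2006, 11, 12)
end = datetime(2011, 11, 12)

duration_days = (end - start).days        # 1826 days (includes the 2008 leap day)
forecast_years = duration_days / 365.25   # ~5 forecast-years

annual_rate = 12.4                        # hypothetical expected events per year
print(f"{forecast_years:.3f} years -> {annual_rate * forecast_years:.1f} expected events")
```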

####################################################################################################################################
# Load forecast
# -------------
#
# For this example, we provide the example forecast data set along with the main repository. The filepath is relative
# to the root directory of the package. You can specify any file location for your forecasts.

forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,
start_date=start_date,
end_date=end_date,
name='helmstetter_aftershock')

####################################################################################################################################
# Load evaluation catalog
# -----------------------
#
# We will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API
# to filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to
# filter the catalog in space and time manually.

print("Querying comcat catalog")
catalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)
print(catalog)

####################################################################################################################################
# Filter evaluation catalog in space
# ----------------------------------
#
# We need to remove events in the evaluation catalog outside the valid region specified by the forecast.

catalog = catalog.filter_spatial(forecast.region)
print(catalog)

####################################################################################################################################
# Compute Poisson spatial test
# ----------------------------
#
# Simply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified
# evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose
# option prints the status of the simulations to the standard output.

spatial_test_result = poisson.spatial_test(forecast, catalog)
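The idea behind the test can be sketched with plain numpy: condition on the observed event count, simulate synthetic catalogs from the normalized rates, and locate the observed log-likelihood within the simulated distribution. This is a toy illustration with made-up rates and counts, not pyCSEP's actual implementation:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forecast rates over six spatial cells and observed counts.
rates = np.array([0.5, 2.0, 1.0, 0.2, 0.8, 1.5])
observed = np.array([1, 3, 0, 0, 1, 2])
n_obs = observed.sum()

# Condition on the observed number of events: normalize the rates so they
# sum to n_obs before computing the joint Poisson log-likelihood.
norm_rates = rates * n_obs / rates.sum()

def poisson_ll(counts, lam):
    # Joint Poisson log-likelihood over all cells.
    return float(np.sum(counts * np.log(lam) - lam)
                 - sum(math.lgamma(k + 1) for k in counts))

obs_ll = poisson_ll(observed, norm_rates)

# Build the test distribution by simulating catalogs of n_obs events
# placed in cells with probability proportional to the forecast rates.
sim_lls = [poisson_ll(rng.multinomial(n_obs, norm_rates / n_obs), norm_rates)
           for _ in range(1000)]

# Quantile of the observed likelihood within the simulated distribution.
quantile = float(np.mean(np.array(sim_lls) <= obs_ll))
print(f"observed LL = {obs_ll:.3f}, quantile = {quantile:.3f}")
```

A very small quantile would indicate that the observed spatial distribution is unlikely under the forecast.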

####################################################################################################################################
# Store evaluation results
# ------------------------
#
# PyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read
# back into the program for plotting using :func:`csep.load_evaluation_result`.

csep.write_json(spatial_test_result, 'example_spatial_test.json')
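The store-and-reload round trip follows the usual JSON pattern; the dictionary below is a hypothetical stand-in for the serialized evaluation result, sketched with the standard library only:

```python
import json
import tempfile

# Hypothetical stand-in for a serialized evaluation result.
result = {"name": "Poisson S-Test", "observed_statistic": -485.2, "quantile": 0.42}

# Write to disk ...
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(result, f)
    path = f.name

# ... and read it back for later plotting.
with open(path) as f:
    loaded = json.load(f)

print(loaded == result)
```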

####################################################################################################################################
# Plot spatial test results
# -------------------------
#
# We provide the function :func:`csep.utils.plots.plot_poisson_consistency_test` to visualize the evaluation results from
# consistency tests.

ax = plots.plot_poisson_consistency_test(spatial_test_result,
plot_args={'xlabel': 'Spatial likelihood'})
plt.show()

####################################################################################################################################
# Plot ROC Curves
# -----------------------
#
# We can also plot the Receiver Operating Characteristic (ROC) curve based on the forecast and the testing catalog.
# In the figure below, the "False Positive Rate" is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.
# The "True Positive Rate" is the normalized cumulative area. The dashed line is the ROC curve for a uniform forecast,
# meaning the likelihood for an earthquake to occur is the same at any position. The further the ROC curve of a
# forecast lies from the uniform-forecast curve, the more specific the forecast is. When comparing the
# forecast ROC curve against a catalog, one can evaluate whether the forecast is more or less specific
# (or smooth) at different levels of seismic rate.
#
# Note: This figure just shows an example of plotting an ROC curve for a gridded forecast.

print("Plotting ROC curve")
_ = plots.plot_ROC(forecast, catalog)
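The construction described above can be illustrated with toy numbers. The per-cell rates are hypothetical, equal-area cells are assumed, and this mirrors only the text's description, not the internals of `plot_ROC`:

```python
import numpy as np

# Hypothetical forecast rates in six equal-area cells.
rates = np.array([5.0, 1.0, 0.5, 3.0, 0.1, 2.0])

# Sort cells in decreasing order of forecast rate.
sorted_rates = np.sort(rates)[::-1]

# "False Positive Rate": normalized cumulative forecast rate.
fpr = np.cumsum(sorted_rates) / sorted_rates.sum()
# "True Positive Rate": normalized cumulative area (one unit per cell).
tpr = np.arange(1, rates.size + 1) / rates.size

# A uniform forecast would give fpr == tpr (the dashed diagonal); a more
# concentrated forecast bows away from it.
for f, t in zip(fpr, tpr):
    print(f"rate-fraction {f:.2f}  area-fraction {t:.2f}")
```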

####################################################################################################################################
# Calculate Kagan's I_1 score
# ---------------------------
#
# We can also compute Kagan's I_1 score for a gridded forecast
# (see Kagan, Y. Y. [2009]. Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., 177, 532-542).

I_1 = get_Kagan_I1_score(forecast, catalog)
print("I_1 score is: ", I_1)
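The score has a compact form: the average, over observed events, of the base-2 log ratio between the forecast rate density at each event and a uniform reference density. A toy numpy sketch under that reading of Kagan (2009); the data are hypothetical and this is not pyCSEP's `get_Kagan_I1_score` implementation:

```python
import numpy as np

# Hypothetical rate densities in four equal-area cells, and the cell index
# of each observed event.
cell_rates = np.array([4.0, 2.0, 1.0, 1.0])
event_cells = np.array([0, 0, 1, 3])

uniform_rate = cell_rates.mean()   # uniform reference over equal-area cells
I_1 = float(np.mean(np.log2(cell_rates[event_cells] / uniform_rate)))
print(f"I_1 = {I_1:.2f} bits per event")
```

A positive score means the forecast concentrates probability where events actually occurred; zero corresponds to no information gain over the uniform reference.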