Feature/issue 236: Update track ingest to allow UAT query of CMR (#240)
* /version 1.3.0a0

* Update build.yml

* /version 1.3.0a1

* /version 1.3.0a2

* Feature/issue 175 - Update docs to point to OPS (#176)

* changelog

* update examples, remove load_data readme, info moved to wiki

* Dependency update to fix snyk scan

* issues/101: Support for HTTP Accept header (#172)

* Reorganize timeseries code to prep for Accept header

* Enable Accept header to return response of specific content-type

* Fix whitespace and string continuation

* Make error handling consistent and add an additional test where a reach can't be found

* Update changelog with issue for unreleased version

* Add 415 status code to API definition

* Few minor cleanup items

* Few minor cleanup items

* Update to aiohttp@3.9.4

* Fix dependencies

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.3.0a3

* issues/102: Support compression of API response (#173)

* Enable payload compression

* Update changelog with issue

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.3.0a4

* Feature/issue 100 Add option to 'compact' GeoJSON result into single feature (#177)

* Reorganize timeseries code to prep for Accept header

* Enable Accept header to return response of specific content-type

* Fix whitespace and string continuation

* Make error handling consistent and add an additional test where a reach can't be found

* Update changelog with issue for unreleased version

* Add 415 status code to API definition

* Few minor cleanup items

* Few minor cleanup items

* Update to aiohttp@3.9.4

* Fix dependencies

* Update required query parameters based on current API functionality

* Enable return of 'compact' GeoJSON response

* Fix linting and add test data

* Update documentation for API accept headers and compact GeoJSON response

* Fix references to incorrect Accept header examples

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.3.0a5

* Feature/issue 183 (#185)

* Provide introduction to timeseries endpoint

* Remove _units in fields list

* Fix typo

* Update examples with Accept headers and compact query parameter

* Add issue to changelog

* Fix typo in timeseries documentation

* Update pymysql

* Update pymysql

* Provide clarity on accept headers and request parameter fields

* /version 1.3.0a6

* Feature/issue 186 Implement API keys (#188)

* API Gateway Lambda authorizer to facilitate API keys and usage plans

* Unit tests to test Lambda authorizer

* Fix terraform file formatting

* API Gateway Lambda Authorizer

- Lambda function
- API Keys and Authorizer definition in OpenAPI spec
- API gateway API keys
- API gateway usage plans
- SSM parameters for API keys

* Fix trailing whitespace

* Set default region environment variable

* Fix SNYK vulnerabilities

* Add issue to changelog

* Implement custom trusted partner header x-hydrocron-key

* Update cryptography for SNYK vulnerability

* Update documentation to include API key usage

* Update quota and throttle settings for API Gateway

* Update API keys documentation to indicate to be implemented

* Move API key lookup to Lambda INIT

* Remove API key authentication and update API key to x-hydrocron-key

* /version 1.3.0a7

* Update changelog for 1.3.0 release

* /version 1.4.0a0

* Feature/issue 198 (#207)

* Update pylint to deal with errors and fix collection reference

* Initial CMR and Hydrocron queries

- Includes placeholders for other operations needed to track granule ingest.
- GranuleUR query for Hydrocron tables.

* Add and set up vcrpy for testing CMR API query

* Test track ingest operations

- Test CMR and hydrocron queries
- Test granuleUR query
- Update database to include granuleUR GSI

* Update to use track_ingest naming consistently

* Initial Lambda function and IAM role definition

* Replace deprecated path function with as_file

* Add SSM read IAM permissions

* Add DynamoDB read permissions

* Update track ingest lambda memory

* Remove duplicate IAM permissions

* Add in permissions to query index

* Update changelog

* Update changelog description

* Use python_cmr for CMR API queries

* /version 1.4.0a1

* Add doi to documentation pages (#216)

* Update intro.md with DOI

* Update overview.md with DOI

* /version 1.4.0a2

* issue-193: Add Dynamo DB Table for SWOT Prior Lakes (#209)

* add code to handle prior lakes shapefiles, add test prior lake data

* update terraform to add prior lake table

* fix tests, change to smaller test data file, changelog

* linting

* reconfigure main load_data method to make more readable and pass linting

* lint

* lint

* fix string casting to lower storage req & update test responses to handle different rounding pattern in coords

* update load benchmarking function for linting and add unit test

* try parent collection for lakes

* update version parsing for parent collection

* fix case error

* fix lake id reference

* add logging to troubleshoot too large features

* add item size logging and remove error raise for batch write

* clean up logging statements & move numeric_columns assignment

* update batch logging statement

* Rename constant

* Fix temp dir security risk https://rules.sonarsource.com/python/RSPEC-5443/

* Fix temp dir security risk https://rules.sonarsource.com/python/RSPEC-5443/

* fix code coverage calculation

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.4.0a3

* Feature/issue 201 Create a table for tracking granule ingest status (#214)

* Define track ingest database and IAM permissions

* Update changelog with issue

* Modify table structure to support sparse status index

* Updated to only apply PITR in ops

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.4.0a4

* Feature/issue 210 - Load large geometry polygons (#219)

* add functions to handle null geometries and convert polygons to points

* update doi in docs

* fix fill null geometries

* fix tests and update changelog

* /version 1.4.0a5

* Feature/issue 222 - Add granule info to track ingest table on load (#223)

* adjust lambdas to populate track ingest table on granule load

* changelog

* remove test cnm

* lint

* change error caught when handling checksum

* update lambda role permissions to write to track ingest table

* fix typo on lake table terraform

* set default fill values for checksum and rev date in track status

* fix checksum handling in bulk load data

* lint

* add logging to debug

* /version 1.4.0a6

* Add SSM parameter read for last run time

* Feature/issue-225: Create one track ingest table per feature type (#226)

* add track ingest tables for each feature type and adjust load data to populate

* changelog

* /version 1.4.0a7

* Feature/issue 196 Add new feature type to query the API for lake data (#224)

* Initial API queries for lake data

* Unit tests for lake data

* Updates after center point calculations

- Removed temp code to calculate a point in API
- Implemented unit test to test lake data retrieval
- Updated fixtures to load in lake data for testing

* Add read lake table permissions to lambda timeseries and track ingest roles

* Update documentation to include lake data

* Updated documentation to include info on lake centerpoints

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.4.0a8

* Feature/issue 205 - Add Confluence API key (#221)

* Fix possible variable references before value is assigned

* Define Confluence API key and trusted partner plan limits

* Define a list of trusted partner keys and store under single parameter

* Define API keys as encrypted environment variables for Lambda authorizer

* Update authorizer and connection class to use KMS to retrieve API keys

* Hack to force lambda deployment when ssm value changes (#218)

* Add replace_triggered_by to hydrocron_lambda_authorizer

* Introduce an environment variable containing a random id that changes whenever an API key value changes; this forces Lambda to publish a new version of the function.

* Remove unnecessary hash function

* Update to SSM parameter API key storage and null_resource environment variable

* Update Terraform and AWS provider

* Update API key documentation

* Set source_code_hash to force deployment of new image

* Downgrade AWS provider to 4.0 to remove inline policy errors

* Update docs/timeseries.md

---------

Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>

* /version 1.4.0a9

* /version 1.4.0a10

* changelog for 1.4.0 release

* update dependencies for 1.4.0 release

* /version 1.5.0a0

* fix CMR query in UAT

* /version 1.4.0rc1

* fix typo in load_data lambda

* /version 1.4.0rc2

* Initial track ingest table query

* Fix linting and code style

* Implement feature count operations

* Enable S3 permissions and set environment variable for track lambda

* Fix trailing white spaces and code format

* Update docstrings for class methods

* Implement run time storage in SSM

* Query track table unit tests

* Update CHANGELOG with issue

* Update SSM run time parameter

* Fix trailing whitespace

* Fix reference to IAM policy

* Enable specification of temporal range to search revision date by

* Fix SSM put parameter policy

* Update IAM permissions for reading track ingest

* Enable full temporal search on CMR granules

* Add capability to download shapefile granule to count features

* Update granule UR to include .zip

* Count features via Hydrocron table query

* Remove unnecessary s3 permissions

* Remove whitespace from blank line

* Update cryptography to 43.0.1

* Update track ingest table operations

* Update changelog with issue

* update dependencies

* upgrade geopandas

* update dependencies

* fix index on rev date in load data lambda

* update dependencies

* lint readme

* /version 1.4.0rc3

* /version 1.4.0rc4

* Implement operations to publish CNM messages for granules requiring ingest

* Implement unit test of publication operations

* Fix linting

* Add issue to changelog and fix linting

* Add EventBridge schedules with appropriate Lambda permissions

* Set initial schedule expressions and fix assume policy

* fix cmr env search by venue

* /version 1.4.0rc5

* Disable eventbridge schedules by default

* Update schedule to run weekly

* Define 1 hour latency to search by revision_date in CMR

* Allow CMR UAT query based on HYDROCRON_ENV environment variable

* Update unit tests to accommodate UAT CMR query

* Add earthdata login credentials to Lambda

* Add issue to changelog

* Fix linting white space
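The one-hour latency bullet above can be sketched as a revision_date search window. This is a simplified illustration under assumptions, not the actual Hydrocron implementation; the helper name and dates are made up:

```python
from datetime import datetime, timedelta, timezone

# One-hour latency buffer: CMR metadata may lag behind ingest, so stop
# the revision_date window an hour before "now".
LATENCY = timedelta(hours=1)

def revision_window(last_run, now):
    """Return a (start, end) pair for a CMR revision_date search."""
    return last_run, now - LATENCY

# Illustrative timestamps only.
start, end = revision_window(
    datetime(2024, 9, 24, 0, 0, tzinfo=timezone.utc),
    datetime(2024, 9, 24, 13, 0, tzinfo=timezone.utc),
)
print(start.isoformat(), end.isoformat())
```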

---------

Co-authored-by: nikki-t <nikki-t@users.noreply.github.com>
Co-authored-by: Frank Greguska <89428916+frankinspace@users.noreply.github.com>
Co-authored-by: frankinspace <frankinspace@users.noreply.github.com>
Co-authored-by: Victoria McDonald <49625194+torimcd@users.noreply.github.com>
Co-authored-by: Cassie Nickles <44206543+cassienickles@users.noreply.github.com>
Co-authored-by: cassienickles <cassienickles@users.noreply.github.com>
Co-authored-by: podaac-cicd[bot] <podaac-cicd[bot]@users.noreply.github.com>
Co-authored-by: Victoria McDonald <victoria.mcdonald@jpl.nasa.gov>
Co-authored-by: torimcd <torimcd@users.noreply.github.com>
10 people authored Oct 4, 2024
1 parent 83240f1 commit 7a43c90
Showing 10 changed files with 108 additions and 35 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Issue 211 - Query track ingest table for granules with "to_ingest" status
- Issue 212 - Update track ingest table with granule status
- Issue 203 - Construct CNM to trigger load data operations and ingest granule
- Issue 236 - Allow UAT query of CMR to support querying in different venues
### Changed
### Deprecated
### Removed
2 changes: 1 addition & 1 deletion docs/timeseries.md
@@ -467,7 +467,7 @@ Users may request a special API key for cases where their intended usage of the

**Note: Users do *not* have to send an API key in their request to use the Hydrocron API. The API key is optional.**

### How to use an API key in requests [DRAFT]
### How to use an API key in requests

Hydrocron API key header: `x-hydrocron-key`
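A minimal sketch of attaching the optional key to a request; the endpoint URL and key value below are placeholders for illustration, not real Hydrocron values:

```python
import urllib.request

# Placeholder endpoint and key value -- substitute your own.
url = "https://example.com/timeseries?feature=Reach&feature_id=123"
api_key = "my-hydrocron-key"

# Attach the optional x-hydrocron-key header; requests without it
# are still accepted by the API.
request = urllib.request.Request(url, headers={"x-hydrocron-key": api_key})
print(request.get_header("X-hydrocron-key"))
```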

4 changes: 2 additions & 2 deletions hydrocron/db/load_data.py
@@ -54,7 +54,7 @@ def lambda_handler(event, _): # noqa: E501 # pylint: disable=W0613
case constants.SWOT_PRIOR_LAKE_TABLE_NAME:
collection_shortname = constants.SWOT_PRIOR_LAKE_COLLECTION_NAME
track_table = constants.SWOT_PRIOR_LAKE_TRACK_INGEST_TABLE_NAME
feature_type = 'LakeSP_prior'
feature_type = 'LakeSP_Prior'
case constants.DB_TEST_TABLE_NAME:
collection_shortname = constants.SWOT_REACH_COLLECTION_NAME
track_table = constants.SWOT_REACH_TRACK_INGEST_TABLE_NAME
@@ -81,7 +81,7 @@ def lambda_handler(event, _): # noqa: E501 # pylint: disable=W0613
logging.info('No UMM checksum')

try:
revision_date = [date["Date"] for date in granule["umm"]["ProviderDates"] if "Update" in date["Type"]]
revision_date = [date["Date"] for date in granule["umm"]["ProviderDates"] if "Update" in date["Type"]][0]
except KeyError:
revision_date = "Not Found"
logging.info('No UMM revision date')
76 changes: 72 additions & 4 deletions hydrocron/db/track_ingest.py
@@ -8,9 +8,11 @@
import json
import logging
import os
import requests

# Third-party Imports
from cmr import GranuleQuery
from cmr import CMR_UAT

# Application Imports
from hydrocron.api.data_access.db import DynamoDataRepository
@@ -45,6 +47,7 @@ class Track:
}
CNM_VERSION = "1.6.0"
PROVIDER = "JPL-SWOT"
URS_UAT_TOKEN = "https://uat.urs.earthdata.nasa.gov/api/users/tokens"

def __init__(self, collection_shortname, collection_start_date=None, query_start=None, query_end=None):
"""
@@ -99,19 +102,80 @@ def query_cmr(self, temporal):
"""

query = GranuleQuery()
query = query.format("umm_json")
if temporal:
logging.info("Querying CMR temporal range: %s to %s.", self.query_start, self.query_end)
granules = query.short_name(self.collection_shortname).temporal(self.query_start, self.query_end).format("umm_json").get(query.hits())
if self.ENV in ("sit", "uat"):
bearer_token = self._get_bearer_token()
granules = query.short_name(self.SHORTNAME[self.collection_shortname]) \
.temporal(self.query_start, self.query_end) \
.format("umm_json") \
.mode(CMR_UAT) \
.bearer_token(bearer_token) \
.get(query.hits())
granules = self._filter_granules(granules)
else:
granules = query.short_name(self.collection_shortname) \
.temporal(self.query_start, self.query_end) \
.format("umm_json") \
.get(query.hits())
else:
logging.info("Querying CMR revision_date range: %s to %s.", self.query_start, self.query_end)
granules = query.short_name(self.collection_shortname).revision_date(self.query_start, self.query_end).format("umm_json").get(query.hits())
if self.ENV in ("sit", "uat"):
bearer_token = self._get_bearer_token()
granules = query.short_name(self.SHORTNAME[self.collection_shortname]) \
.revision_date(self.query_start, self.query_end) \
.format("umm_json") \
.mode(CMR_UAT) \
.bearer_token(bearer_token) \
.get(query.hits())
granules = self._filter_granules(granules)
else:
granules = query.short_name(self.collection_shortname) \
.revision_date(self.query_start, self.query_end) \
.format("umm_json") \
.get(query.hits())

cmr_granules = {}
for granule in granules:
granule_json = json.loads(granule)
cmr_granules.update(self._get_granule_ur_list(granule_json))
logging.info("Located %s granules in CMR.", len(cmr_granules.keys()))
return cmr_granules

def _get_bearer_token(self):
"""Get bearer authorizatino token."""

username = os.getenv("EARTHDATA_USERNAME")
password = os.getenv("EARTHDATA_PASSWORD")
get_response = requests.get(self.URS_UAT_TOKEN,
headers={"Accept": "application/json"},
auth=requests.auth.HTTPBasicAuth(username, password),
timeout=30)
token_data = get_response.json()
return token_data[0]["access_token"]

def _filter_granules(self, granules):
"""Filter granules for collection.
Parameters:
:param granules: List of granules to filter
:type granules: list
"""

data_type = self.collection_shortname.split("_")[4].capitalize()
granule_list = []
for granule in granules:
filter_list = []
granule_json = json.loads(granule)
for item in granule_json["items"]:
if data_type in item["meta"]["native-id"]:
filter_list.append(item)
granule_json["hits"] = len(filter_list)
granule_json["items"] = filter_list
granule_list.append(json.dumps(granule_json))
return granule_list
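As a sketch of what the `_filter_granules` logic above keys on: the feature type is the fifth underscore-separated token of the collection short name, and only CMR items whose `native-id` contains it survive. The short name matches the one queried in the test cassette; the native-ids are illustrative:

```python
# Derive the feature type exactly as _filter_granules does.
shortname = "SWOT_L2_HR_RiverSP_reach_2.0"
data_type = shortname.split("_")[4].capitalize()  # "reach" -> "Reach"

# Keep only CMR items whose native-id mentions that feature type.
items = [
    {"meta": {"native-id": "SWOT_L2_HR_RiverSP_Reach_020_457_NA_20240905_PIC0_01"}},
    {"meta": {"native-id": "SWOT_L2_HR_RiverSP_Node_020_457_NA_20240905_PIC0_01"}},
]
filtered = [item for item in items if data_type in item["meta"]["native-id"]]
print(data_type, len(filtered))
```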

@staticmethod
def _get_granule_ur_list(granules):
"""Return dict of Granule URs and revision dates from CMR response JSON.
@@ -124,7 +188,7 @@ def _get_granule_ur_list(granules):

granule_dict = {}
for item in granules["items"]:
granule_ur = item["umm"]["GranuleUR"].replace("_swot", ".zip")
granule_ur = f'{item["umm"]["GranuleUR"].replace("_swot", "")}.zip'
checksum = 0
for file in item["umm"]["DataGranule"]["ArchiveAndDistributionInformation"]:
if granule_ur == file["Name"]:
@@ -248,7 +312,10 @@ def _query_granule_files(self, granule_ur):
}
for granule_url in granule_item["umm"]["RelatedUrls"]:
if granule_url["Type"] == "GET DATA VIA DIRECT ACCESS":
cnm_file["uri"] = granule_url["URL"].replace("ops", self.ENV)
if self.ENV in ("sit", "uat"):
cnm_file["uri"] = granule_url["URL"].replace("ops", "uat")
else:
cnm_file["uri"] = granule_url["URL"]
cnm_files.append(cnm_file)

return cnm_files
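The venue-aware URI rewrite above can be sketched in isolation (simplified from `_query_granule_files`; the bucket names are illustrative):

```python
# SIT and UAT venues rewrite the ops bucket name to the UAT bucket;
# any other venue passes the URL through unchanged. Note str.replace
# swaps every "ops" substring, matching the original code's behavior.
def venue_uri(url, env):
    return url.replace("ops", "uat") if env in ("sit", "uat") else url

url = "s3://podaac-swot-ops-cumulus-protected/SWOT_L2_HR_RiverSP_2.0/granule.zip"
print(venue_uri(url, "sit"))
print(venue_uri(url, "ops"))
```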
@@ -309,6 +376,7 @@ def track_ingest_handler(event, context):
logging.info("Temporal end date: %s", query_end)
else:
logging.info("Collection start date: %s", collection_start_date)
logging.info("Environment: %s", track.ENV.upper())

cmr_granules = track.query_cmr(temporal)
track.query_hydrocron(hydrocron_table, cmr_granules)
4 changes: 2 additions & 2 deletions poetry.lock


4 changes: 3 additions & 1 deletion terraform/hydrocron-lambda.tf
@@ -141,7 +141,7 @@ resource "aws_lambda_function" "hydrocron_lambda_load_data" {
EARTHDATA_USERNAME = data.aws_ssm_parameter.edl_username.value
EARTHDATA_PASSWORD = data.aws_ssm_parameter.edl_password.value
GRANULE_LAMBDA_FUNCTION_NAME = aws_lambda_function.hydrocron_lambda_load_granule.function_name
CMR_ENV = "${coalesce(local.sit_env, local.uat_env, local.prod_env)}"
CMR_ENV = "${coalesce(local.sit_env, local.uat_env, local.prod_env)}"
}
}
}
@@ -221,6 +221,8 @@ resource "aws_lambda_function" "hydrocron_lambda_track_ingest" {
variables = {
GRANULE_LAMBDA_FUNCTION_NAME = aws_lambda_function.hydrocron_lambda_load_granule.function_name
HYDROCRON_ENV = local.environment
EARTHDATA_USERNAME = data.aws_ssm_parameter.edl_username.value
EARTHDATA_PASSWORD = data.aws_ssm_parameter.edl_password.value
}
}
}
2 changes: 1 addition & 1 deletion tests/conftest.py
@@ -228,7 +228,7 @@ def mock_sns():
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

sns = boto3.client("sns")
sns.create_topic(Name="svc-hydrocron-test-cnm-response")
sns.create_topic(Name="svc-hydrocron-sit-cnm-response")

yield sns

2 changes: 1 addition & 1 deletion tests/test_data/track_ingest_cnm_message.json
@@ -15,7 +15,7 @@
"checksumType": "md5",
"checksum": "6ce27e868bd90055252de186f554759f",
"size": 745878,
"uri": "s3://podaac-swot-test-cumulus-protected/SWOT_L2_HR_RiverSP_2.0/SWOT_L2_HR_RiverSP_Reach_020_457_NA_20240905T233134_20240905T233135_PIC0_01.zip"
"uri": "s3://podaac-swot-uat-cumulus-protected/SWOT_L2_HR_RiverSP_2.0/SWOT_L2_HR_RiverSP_Reach_020_457_NA_20240905T233134_20240905T233135_PIC0_01.zip"
}
]
}
6 changes: 4 additions & 2 deletions tests/test_track_ingest.py
@@ -26,6 +26,7 @@ def test_query_cmr(mock_ssm):
track = Track(collection_shortname, collection_start_date)
track.query_start = datetime.datetime(2024, 6, 30, 0, 0, 0, tzinfo=datetime.timezone.utc)
track.query_end = datetime.datetime(2024, 6, 30, 12, 0, 0, tzinfo=datetime.timezone.utc)
track.ENV = "OPS"

vcr_cassette = pathlib.Path(os.path.dirname(os.path.realpath(__file__))) \
.joinpath('vcr_cassettes').joinpath('cmr_query.yaml')
@@ -299,18 +300,19 @@ def test_track_ingest_publish_cnm(track_ingest_cnm_fixture):
"actual_feature_count": 0,
"status": "to_ingest"
}]
track.ENV = "sit"

vcr_cassette = pathlib.Path(os.path.dirname(os.path.realpath(__file__))) \
.joinpath('vcr_cassettes').joinpath('publish_cnm.yaml')
with vcr.use_cassette(vcr_cassette):
track.publish_cnm_ingest(DEFAULT_ACCOUNT_ID)

sns_backend = sns_backends[DEFAULT_ACCOUNT_ID]["us-west-2"]
actual = json.loads(sns_backend.topics[f"arn:aws:sns:us-west-2:{DEFAULT_ACCOUNT_ID}:svc-hydrocron-test-cnm-response"].sent_notifications[0][1])
actual = json.loads(sns_backend.topics[f"arn:aws:sns:us-west-2:{DEFAULT_ACCOUNT_ID}:svc-hydrocron-sit-cnm-response"].sent_notifications[0][1])

expected_file = (pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
.joinpath('test_data').joinpath('track_ingest_cnm_message.json'))
with open(expected_file) as jf:
expected = json.load(jf)

assert actual == expected
assert actual == expected
42 changes: 21 additions & 21 deletions tests/vcr_cassettes/cmr_query.yaml
@@ -9,13 +9,13 @@ interactions:
Connection:
- keep-alive
User-Agent:
- python-requests/2.31.0
- python-requests/2.32.3
method: GET
uri: https://cmr.earthdata.nasa.gov/search/granules.umm_json?short_name=SWOT_L2_HR_RiverSP_reach_2.0&revision_date%5B%5D=2024-06-30T00:00:00Z,2024-06-30T12:00:00Z&page_size=0
response:
body:
string: !!binary |
H4sIAAAAAAAAAKtWysgsKVayMjQ00FEqyc/PVrIyBrIyS1JzgaLRsbUA4Lq7hiEAAAA=
H4sIAAAAAAAAAKtWysgsKVayMjQ00FEqyc/PVrIyNTbVUcosSc0FCkfH1gIAS8RVJSIAAAA=
headers:
Access-Control-Allow-Origin:
- '*'
@@ -25,21 +25,21 @@ interactions:
CMR-Hits:
- '110'
CMR-Request-Id:
- b9f5a8d4-33fd-4ded-b0e5-d8e964505bd3
- b7d0b2db-0cb5-4d78-bb3e-c42a826c40e7
CMR-Took:
- '30'
- '537'
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-MD5:
- 3a550bd79ef29c083d6c8cea258001cf
- edc7617a457e6daba68727b96962201d
Content-SHA1:
- b7f02340a705429d1e4ca89188603db7ecd914de
- 532abf1f386d7779c34ef160e8f3c1dbe3cc7d90
Content-Type:
- application/vnd.nasa.cmr.umm_results+json;version=1.6.6; charset=utf-8
Date:
- Tue, 23 Jul 2024 21:52:54 GMT
- Tue, 24 Sep 2024 13:39:14 GMT
Server:
- ServerTokens ProductOnly
Strict-Transport-Security:
@@ -49,9 +49,9 @@ interactions:
Vary:
- Accept-Encoding, User-Agent
Via:
- 1.1 cb7d4a3c5329f4f381e8cdfcd4a3e1e4.cloudfront.net (CloudFront)
- 1.1 1845d835b50e25e6e32b19402cc11164.cloudfront.net (CloudFront)
X-Amz-Cf-Id:
- cHL2EMsmlLp8V4zk4wFUJoYAAqNv_lP1DmmaGUropWUy8B3bdgovZA==
- jEN3SkDw-suJhFnvm0jRxN7wWfzsxIVkYacBd08iNUU50W2KZHJw0A==
X-Amz-Cf-Pop:
- LAX50-C1
X-Cache:
Expand All @@ -61,7 +61,7 @@ interactions:
X-Frame-Options:
- SAMEORIGIN
X-Request-Id:
- cHL2EMsmlLp8V4zk4wFUJoYAAqNv_lP1DmmaGUropWUy8B3bdgovZA==
- jEN3SkDw-suJhFnvm0jRxN7wWfzsxIVkYacBd08iNUU50W2KZHJw0A==
X-XSS-Protection:
- 1; mode=block
status:
@@ -77,14 +77,14 @@ interactions:
Connection:
- keep-alive
User-Agent:
- python-requests/2.31.0
- python-requests/2.32.3
method: GET
uri: https://cmr.earthdata.nasa.gov/search/granules.umm_json?short_name=SWOT_L2_HR_RiverSP_reach_2.0&revision_date%5B%5D=2024-06-30T00:00:00Z,2024-06-30T12:00:00Z&page_size=110
response:
body:
string: !!binary |
H4sIAAAAAAAAAOyda3Mbt5a1/4pKXydmGndA3xRZyajGt2nLOafO1ClXk2zafCORKoryHJ9U/vsL
YNMXLkgKBe127LGqErUpEU+ju7F792Vjrd/3387Xl/sHQjQ/7K+Xy9/2D5S2P+zP1/15/PX//L5/
YNMXLkgKBe127LGqErUpEU+ju7F792Vjrd/3387Xl/sHQjQ/7K+Xy9/2D5T0P+zP1/15/PX//L5/
3q+7/YPf9yfLxaS/WD9av7/o9w/236y6xdVZv//Dxz/Mp/HXvyihfCONbsyjF8+Pnjx/9Th+ZdW/
m1/Ol4v8HfHD/qJbz9/11OLl356fvn4iX/9n+7qNv1y9fPG67bvJ29eNcK9F/P/l4WvZSN1YqU4b
I6QJW5+VaF6/ODlq4vdfX/7vcp17dHbWT9ZphVudO5IuBK28atRnnbtYLd/Np/2KvvPp97Pl6rxb
@@ -3226,7 +3226,7 @@ interactions:
otOPtmK0B3QCbUdut4U8CrW03ikDHjbOFYmLVUJNwxKXVlxj2llw2i84CWVHnMe6K6iEFl1Gelut
pC7KeOW3FvZbLfQmJvrywiKI02D9YqyOr94u1g8zDn6RRqwTLjVvUhlnjKi9yuf42dfta6ei06AC
55yS1qK6di/TgouRzmG+bmqUbhTWeJzjyCdvau5lijMpbJdB/lZk9RS2NJw4voUtryhsgejPH2gu
Uuq/QNjyz3//A/f1/gBy9SEA
Uuq/QNjyz3//A7ZKP6Vy9SEA
headers:
Access-Control-Allow-Origin:
- '*'
@@ -3236,23 +3236,23 @@ interactions:
CMR-Hits:
- '110'
CMR-Request-Id:
- 82ac2813-879e-4d22-8021-64b41cff65da
- e8ff386e-0678-4918-ae65-054c29857cf0
CMR-Search-After:
- '["pocloud",1719443366372,3138034484]'
CMR-Took:
- '393'
- '395'
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-MD5:
- 2d902a382970e2f83487aa192e645c59
- 0e64379b560424aac7c2441e20205e1a
Content-SHA1:
- 64fb509739aacd6d4bc5662df62254c7ddde86c9
- feba02a7fa54f919e3c182a6321022893cf83a69
Content-Type:
- application/vnd.nasa.cmr.umm_results+json;version=1.6.6; charset=utf-8
Date:
- Tue, 23 Jul 2024 21:52:55 GMT
- Tue, 24 Sep 2024 13:39:15 GMT
Server:
- ServerTokens ProductOnly
Strict-Transport-Security:
@@ -3262,9 +3262,9 @@ interactions:
Vary:
- Accept-Encoding, User-Agent
Via:
- 1.1 60977766708edc6ce8219f52eb7a75ca.cloudfront.net (CloudFront)
- 1.1 6ab2ed44e2146acf69ff031d14af25c0.cloudfront.net (CloudFront)
X-Amz-Cf-Id:
- uKbpYzW8sCh6edfEbTT9wncGswevzjXmRyC-nsGp7IbiBzMgW9ZdYQ==
- 2shaT1TLRTs0MLI90rJ4-h70whNh_lC5HJHtv9NvXBSTYcRwhKRRgw==
X-Amz-Cf-Pop:
- LAX50-C1
X-Cache:
@@ -3274,7 +3274,7 @@ interactions:
X-Frame-Options:
- SAMEORIGIN
X-Request-Id:
- uKbpYzW8sCh6edfEbTT9wncGswevzjXmRyC-nsGp7IbiBzMgW9ZdYQ==
- 2shaT1TLRTs0MLI90rJ4-h70whNh_lC5HJHtv9NvXBSTYcRwhKRRgw==
X-XSS-Protection:
- 1; mode=block
status:
