There is a use case for importing bulk data from an external data source. There are around 8k Patient resources, each with 5 or more related resources. The data is provided as a single file containing a JSON array of FHIR resources.
The current publishing tool has a few limitations:
- The tool ingests data as files containing a single resource each. To handle this use case, a pre-processing stage is required to split the data into separate files.
- The tool consolidates all the FHIR resources into a single transaction Bundle. This is a bottleneck when dealing with large numbers of resources.
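A pre-processing stage addressing both limitations could look roughly like the sketch below: load the JSON array and group the resources into fixed-size transaction Bundles rather than one huge Bundle. This is a minimal sketch, not the tool's actual implementation; the chunk size and the `PUT`-by-id request strategy are assumptions.

```python
import json

CHUNK_SIZE = 100  # resources per transaction Bundle; an assumed tuning knob

def load_resources(path):
    # The source file is assumed to contain a single JSON array of resources.
    with open(path) as f:
        return json.load(f)

def to_bundles(resources, chunk_size=CHUNK_SIZE):
    # Yield transaction Bundles of at most chunk_size entries each,
    # instead of consolidating everything into one Bundle.
    for i in range(0, len(resources), chunk_size):
        chunk = resources[i:i + chunk_size]
        yield {
            "resourceType": "Bundle",
            "type": "transaction",
            "entry": [
                {
                    "resource": r,
                    # PUT to ResourceType/id keeps the upload idempotent
                    "request": {
                        "method": "PUT",
                        "url": f"{r['resourceType']}/{r['id']}",
                    },
                }
                for r in chunk
            ],
        }

# Example: 5 resources with chunk_size=2 produce 3 Bundles.
demo = [{"resourceType": "Patient", "id": str(i)} for i in range(5)]
bundles = list(to_bundles(demo, chunk_size=2))
```

Each Bundle can then be written out (or POSTed) independently, which avoids building one 40k+ entry Bundle in memory.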
The current implementation can be used to test sorted datasets or sets with no references. Alternatively, we could experiment with the chunk sizes to ensure it works while we figure out how to sort the whole file.
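Until full reference-aware sorting is in place, a simple heuristic could order resources so that commonly referenced types (e.g. Patient) are uploaded before the resources that point at them, then chunk as usual. This is a hypothetical sketch, not the tool's actual logic; the type precedence table is an assumption, and a complete solution would topologically sort on the actual reference fields.

```python
# Assumed precedence: lower numbers upload first; unknown types sort last.
TYPE_PRECEDENCE = {"Patient": 0, "Practitioner": 0, "Encounter": 1}

def sort_for_upload(resources):
    # Stable sort: resources of the same type keep their original order.
    return sorted(
        resources,
        key=lambda r: TYPE_PRECEDENCE.get(r["resourceType"], 9),
    )
```

With a sorted input, chunked Bundles are far more likely to contain only backward references, so each chunk can be committed independently.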