All notable changes to this project will be documented in this file.
- Added `get_batches` method, a feature to retrieve a group of batches based on `BatchFilters`
- Passing the wrong type when filtering jobs or batches now raises a `TypeError` instead of a `ValueError`
released 2024-10-09
- Deprecation warnings on SDK init: each version of pasqal-cloud is now supported for one year after release. If your version is no longer supported, you will see a deprecation warning each time you use the SDK.
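The one-year support window described above can be sketched as a simple date check. This is an illustration only: the function name, the exact warning text, and the date-based mechanism are assumptions, not the SDK's actual implementation.

```python
# Hypothetical sketch of the one-year support policy; not the SDK's actual code.
import warnings
from datetime import date, timedelta

SUPPORT_PERIOD = timedelta(days=365)  # assumed support window of one year

def check_version_support(release_date: date, today: date) -> bool:
    """Return True if a version released on `release_date` is still supported."""
    supported = today - release_date <= SUPPORT_PERIOD
    if not supported:
        warnings.warn(
            "This version of pasqal-cloud is no longer supported; "
            "please upgrade to a newer release.",
            DeprecationWarning,
        )
    return supported
```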
released 2024-10-02
- Allow unauthenticated users to access public device specifications
- The following methods now use V2 endpoints:
- Cancel a batch
- Cancel a job
- Cancel a group of jobs
- Add jobs to a batch
- Close a batch
- Introduce EMU_FRESNEL device type
- The `from pasqal_cloud` import has completely replaced the deprecated `from sdk` import
- The `group_id` field has been removed; use the `project_id` field instead
A batch that does not accept new jobs is now called "closed" instead of "complete". As a result:
- You should create an "open" batch using the `open` argument of the `create_batch` method instead of the `complete` argument.
- Close an open batch using the `close_batch` method of the SDK or the `close` method of the `Batch` class. They are functionally equivalent to the now deprecated `complete_batch` and `declare_complete` functions.
- The `Batch` dataclass parameter `complete` has been replaced by `open`.
- Using the deprecated methods and arguments will now raise a warning.
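The renamed lifecycle can be sketched with a minimal in-memory stand-in. The `Batch` class below is a toy illustration of the open/closed semantics, not the SDK's `Batch`:

```python
# Illustrative stand-in for the renamed batch lifecycle; not the SDK's Batch class.
class Batch:
    def __init__(self, open: bool = False):
        self.open = open      # replaces the former `complete` parameter
        self.jobs: list = []

    def add_jobs(self, jobs: list) -> None:
        # An open batch accepts new jobs; a closed one does not.
        if not self.open:
            raise RuntimeError("Cannot add jobs to a closed batch")
        self.jobs.extend(jobs)

    def close(self) -> None:
        # Replaces the deprecated declare_complete / complete_batch calls.
        self.open = False
```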
- Introduced the `User-Agent` header.
- Introduced the `cancel_jobs` method, a feature to cancel a group of jobs from a specific batch, based on `CancelJobFilters`
- Introduced a new class-based filtering system for more flexible and robust filtering options
- Added the `get_jobs` method, a feature to retrieve a group of jobs based on `JobFilters`
- Updated the `rebatch` method to use `RebatchFilters` for selecting jobs to retry, replacing the previous multiple filter parameters
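The class-based filter pattern can be sketched with a dataclass. The field names below (`project_id`, `status`) and the `to_params` helper are assumptions for illustration, not the actual signature of the SDK's filter classes:

```python
# Sketch of the class-based filter pattern; field names are illustrative
# assumptions, not the SDK's actual JobFilters signature.
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class JobFilters:
    project_id: Optional[str] = None
    status: Optional[List[str]] = None

    def to_params(self) -> dict:
        """Drop unset fields so only active filters are sent as query params."""
        return {k: v for k, v in asdict(self).items() if v is not None}
```

Grouping filters into a class makes it easy to validate types in one place (raising `TypeError` for wrong types) and to add new filter fields without changing method signatures.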
- Use v2 endpoints for batches and jobs
- Jobs are now downloaded from S3 client-side
- Expose `add_jobs` and `close_batch` functions in the SDK interface
- Refactor non-private methods that were prefixed with `_`
- Add test coverage for new functions
- Drop priority from Batch attributes
- Use the new authentication url: authenticate.pasqal.cloud
- Add a retry mechanism to the HTTP requests on certain status codes
- Add some typehints and basic test descriptions
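The retry mechanism mentioned above can be sketched as a polling loop over retryable status codes. The set of codes, the attempt count, and the `send` callable are illustrative assumptions, not the SDK's actual configuration:

```python
# Sketch of a retry-on-status-code mechanism; the retried codes and delay
# are illustrative assumptions, not the SDK's actual configuration.
import time

RETRY_STATUS_CODES = {408, 425, 429, 500, 502, 503, 504}

def request_with_retry(send, max_attempts: int = 5, delay: float = 0.0):
    """Call `send()` until it returns a non-retryable status or attempts run out."""
    response = send()
    for _ in range(max_attempts - 1):
        if response["status"] not in RETRY_STATUS_CODES:
            break
        time.sleep(delay)  # a real client would typically back off exponentially
        response = send()
    return response
```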
- Added `strict_validation` option to the emulators configuration.
- Upgraded the Pydantic requirement from v1.10.x to >= v2.6.0.
- Updated documentation.
- Added feature to retry a job for an open batch.
- Added unit tests.
- Updated documentation.
- Added a feature to "rebatch" a closed batch.
- Added `parent_id` param to batch and job.
- Updated documentation.
- Added feature to create an "open" batch.
  - To create an open batch, set the `complete` argument to `True` in the `create_batch` method of the SDK.
  - To add jobs to an open batch, use the `add_jobs` method.
- Updated documentation to add examples to create open batches.
- The `wait` argument now waits for all the jobs to be terminated instead of waiting for the batch to be terminated.
- Crucial bugfix: download results properly from the result link.
- Added exception case for results download.
- Added new base error class to inject response context data in exception message.
- Added tests for new base error class.
- Updated existing error to inherit from new response context exception class.
- Added `result_link` field to the `Workload` object.
- `get_workload` now targets v2 of the workloads endpoints; `result` is built from `result_link`, where results are downloaded from a temporary S3 link.
- Added exception classes for all possible failures (mostly related to client errors).
- Added try-catch to the corresponding methods to raise the proper error.
- Use `raise_for_status` on the response from the client before returning `data` to get an accurate exception.
- Bumped the minor version as new exceptions are raised.
- Removed obsolete `HTTPError` and `LoginError` classes.
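The "base error class injecting response context" mentioned above can be sketched like this. The dict-shaped `response` and the class names are illustrative assumptions, not the SDK's actual exception hierarchy:

```python
# Sketch of a base error that enriches its message with response context;
# the response shape and class names are assumptions for illustration.
class ResponseContextError(Exception):
    """Base error whose message includes details from the failed response."""

    def __init__(self, response: dict):
        self.status_code = response["status_code"]
        detail = response.get("detail", "")
        super().__init__(f"HTTP {self.status_code}: {detail}")

class BatchFetchingError(ResponseContextError):
    """Example subclass for one specific failure mode."""
```

Subclassing one context-aware base class keeps every SDK exception message consistent and debuggable without repeating formatting logic.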
- Added an `ordered_jobs` list as an attribute to the `Batch` object, in which jobs are ordered by creation date.
- Batch attribute `jobs` is now deprecated; use `ordered_jobs` instead.
- Workloads are now supported by the SDK with the `create_workload`, `get_workload` and `cancel_workload` methods.
- Reinstated the full functionality of the `fetch_results` argument, as well as the corresponding test.
- Pre-commit hooks were added for contributors of pasqal-cloud to enforce code linting.
- Pinned the dependency on `pydantic` to versions before v2.0 due to conflicts with the current code.
- Added `ResultType` enum and `result_types` config option.
- Added validation for result types.

Currently none of the devices can choose a result type other than `counter`, hence the feature was not documented.
- Added `full_result` attribute to the `Job` schema for unformatted results.
- Updated documentation for `full_result`.
- The `fetch_result` kwarg was removed from all internal functions, as results are included with the batch by default. It was marked as deprecated in public functions.
- Refactored to no longer return `batch_rsp` and `jobs_rsp`, as the latter is systematically included in the former.
- Changed test payloads to reflect data returned by the API.
- Fixed an incorrect type hint of user IDs from `int` to `str`, which was raising a validation exception when loading data returned by the API.
- You can now get the environment configuration for `endpoints` and `auth0` of the SDK class with `PASQAL_ENDPOINTS['env']` and `AUTH0_CONFIG['env']`, with `env` being `prod`, `preprod` or `dev`.
- Batch and Job dataclasses have been replaced by Pydantic models, which gives more control when deserializing API response data. The SDK is now more resilient to API changes: if a new field is added to an API response, instantiating the job and batch objects will not raise an exception, so this SDK version will not become obsolete as soon as the API spec is updated.
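The resilience described above can be sketched without Pydantic: unknown fields in an API payload are simply dropped before instantiation. The SDK achieves this with Pydantic models; the stdlib version below is only an illustration of the idea, with assumed field names:

```python
# Stdlib sketch of tolerating unknown API fields; the SDK uses Pydantic
# models for this, and the Job fields here are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class Job:
    id: str
    status: str

def job_from_payload(payload: dict) -> Job:
    """Instantiate Job, silently ignoring fields the model does not know."""
    known = {f.name for f in fields(Job)}
    return Job(**{k: v for k, v in payload.items() if k in known})
```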
"Groups" have been renamed as "Projects", hence URL endpoints and attribute names have been changed accordingly.
For example the group_id
of a batch has been renamed to project_id
.
Note that for backwards compatibility, the group_id
is still exposed by the APIs as a duplicate of the project_id
.
- Cancel methods added to batches and jobs, both on the object itself and on the SDK
- Added a method to get a job from the SDK
- Relax python dependencies version to prevent conflicts.
- Fixed bug when using a custom TokenProvider to authenticate to Pasqal Cloud Services
- Reorder documentation for clarity
- Clarify instructions to use a custom token provider for authentication
- Package renamed from pasqal-sdk to pasqal-cloud
- Import name renamed from sdk to pasqal_cloud (import sdk is now deprecated but still usable)
- `device_type` argument replaced by `emulator` in the SDK's `create_batch`; `DeviceType` replaced with `EmulatorType`
- Removed the QPU device type and related logic
- Added tests to check the login behavior.
- Added tests to check the override Endpoints behavior.
- The authentication now directly connects to the Auth0 platform instead of connecting through PasqalCloud.
- Small refactor of files, with the authentication modules moved to the `authentication.py` file instead of `client.py`.
- Removed the account endpoint; we now use Auth0.
- Added a get_device_specs_dict function to the sdk
- Updated Readme for the device specs
- The default values for the tensor network emulator were updated to better ones.
- The `client_id` and `client_secret` were left over in the Client object even though they are no longer used.
- Updated the README to also supply the `group_id`, which is mandatory.
- Updated the default endpoint to `/core-fast` in accordance with infra changes. All users should use `/core-fast` in all environments.
- The PCS APIs Client was refactored to accept any custom token provider for authentication. This can be used as an alternative to the username/password-based token provider.
- The `group_id` field has been added to the Job schema; it is now present in some services returning Job data.
- Pytest fixtures updated to accommodate this.
- The authentication system has been reworked and is now connected to Auth0. API keys have been removed, so you should now use your email and password to initialize the SDK (see the example in the README).
- A new device type, "EMU-TN", corresponding to a tensor-network-based emulator, was added. The "EMU_SV" type was removed as it is not available right now.
- A new version of the "core" microservice, using FastAPI instead of Flask as the web framework, was released in the "dev" environment. If using the "dev" environment, you should upgrade the core endpoint to "https://apis.dev.pasqal.cloud/core-fast".
- Changed type hints for id fields to be `str` rather than `int`, reflecting the switch to UUIDs in our services.
- Updated tests to use UUID strings in the fixtures and tests.
- Moved the `device_types` into the device module
- Refactored configuration to be split into `BaseConfig`, `EmuSVConfig` and `EmuFreeConfig`; more device-specific configs can be added.
- Refactored unit tests to use the proper Config model.
- Updated README with the new configuration classes:
  - `BaseConfig`: the base configuration class. A dataclass with the same methods as the former `Configuration` model and the `extra_config` param.
  - `EmuSVConfig`: the configuration class for `DeviceType.EMU_SV`, inheriting from `BaseConfig` with the parameters formerly found on `Configuration`.
  - `EmuFreeConfig`: the configuration class for `DeviceType.EMU_FREE`, inheriting from `BaseConfig` with the `with_noise` boolean parameter.
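The hierarchy described above can be sketched with dataclasses. This is a minimal illustration of the inheritance structure using the names from the changelog; any parameters beyond `extra_config` and `with_noise` are omitted:

```python
# Minimal sketch of the configuration hierarchy described above; only the
# parameters named in the changelog (`extra_config`, `with_noise`) are shown.
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class BaseConfig:
    extra_config: Optional[Dict[str, Any]] = None

@dataclass
class EmuFreeConfig(BaseConfig):
    with_noise: bool = False
```

Splitting configuration per device type lets each emulator declare only the knobs it supports while sharing the common `extra_config` escape hatch.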
Reworked the `wait` logic when creating a batch or declaring it complete. The old `wait` has been split into two separate boolean kwargs, `wait` and `fetch_results`.

- `wait`, when set to `True`, still makes the Python statement block until the batch gets assigned a termination status (e.g. `DONE`, `ERROR`, `TIMED_OUT`), but doesn't trigger fetching of results.
- `fetch_results` is a boolean which, when set to `True`, makes the Python statement block until the batch has a termination status and then fetches the results for all the jobs of the batch.

This enables the user to wait for the results and then implement their own custom logic to retrieve them (e.g. only fetch the results for the last job of the batch). This also fixes a bug where the user needed an extra call to the `get_batch` function after batch creation to retrieve results. Now results are properly populated after batch creation when setting `fetch_results=True`.
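The split between the two flags can be sketched with a toy polling loop. The `get_status` and `download_results` callables below stand in for real SDK internals and are assumptions for illustration:

```python
# Toy sketch of the wait / fetch_results split; `get_status` and
# `download_results` are stand-ins for real SDK internals.
TERMINAL_STATUSES = {"DONE", "ERROR", "TIMED_OUT"}

def create_batch(get_status, download_results,
                 wait: bool = False, fetch_results: bool = False) -> dict:
    batch = {"status": get_status()}
    if wait or fetch_results:
        # Both flags block until the batch reaches a termination status.
        while batch["status"] not in TERMINAL_STATUSES:
            batch["status"] = get_status()
    if fetch_results:
        # Only fetch_results additionally downloads the job results.
        batch["results"] = download_results()
    return batch
```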
This is the last released version before the implementation of the changelog.
See commit history before this commit.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.