From 39748e1b0bf1f6e9a82b490ccd1ec37d2ecef235 Mon Sep 17 00:00:00 2001
From: Edgar Ruiz
Date: Mon, 24 Jun 2024 13:31:08 -0500
Subject: [PATCH] Release prep

---
 DESCRIPTION      |  2 +-
 NEWS.md          | 10 +++++++---
 cran-comments.md | 39 +++++++++++++++++----------------------
 3 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
index fcfd51f..aab6d92 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: pysparklyr
 Title: Provides a 'PySpark' Back-End for the 'sparklyr' Package
-Version: 0.1.4.9004
+Version: 0.1.5
 Authors@R: c(
     person("Edgar", "Ruiz", , "edgar@posit.co", role = c("aut", "cre")),
     person(given = "Posit Software, PBC", role = c("cph", "fnd"))
diff --git a/NEWS.md b/NEWS.md
index 27dbab4..b719649 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,14 +1,18 @@
-# pysparklyr (dev)
+# pysparklyr 0.1.5
 
-* Adds support for `I()` in `tbl()`
+### Improvements
 
-* Fixes issues with having multiple line functions in `spark_apply()`
+* Adds support for `I()` in `tbl()`
 
 * Ensures `arrow` is installed by adding it to Imports (#116)
 
 * If the cluster version is higher than the available Python library, it will
 either use, or offer to install the available Python library
 
+### Fixes
+
+* Fixes issues with multi-line functions in `spark_apply()`
+
 # pysparklyr 0.1.4
 
 ### New
diff --git a/cran-comments.md b/cran-comments.md
index c85c668..0e4d08b 100644
--- a/cran-comments.md
+++ b/cran-comments.md
@@ -2,36 +2,31 @@
 
 In this version:
 
-* Adds support for `spark_apply()` via the `rpy2` Python library
-    * It will not automatically distribute packages, it will assume that the
-    necessary packages are already installed in each node. This also means that
-    the `packages` argument is not supported
-    * As in its original implementation, schema inferring works, and as with the
-    original implementation, it has a performance cost.
-    Unlike the original, the
-    Databricks and Spark Connect version will return a 'columns' specification
-    that you can use for the next time you run the call.
-
-* At connection time, it enables Arrow by default. It does this by setting
-these two configuration settings to true:
-    * `spark.sql.execution.arrow.pyspark.enabled`
-    * `spark.sql.execution.arrow.pyspark.fallback.enabled`
+* Adds support for `I()` in `tbl()`
+
+* Ensures `arrow` is installed by adding it to Imports (#116)
+
+* If the cluster version is higher than the available Python library, it will
+either use, or offer to install, the available Python library
+
+* Fixes issues with multi-line functions in `spark_apply()`
 
 ## Test environments
 
-- Ubuntu 22.04, R 4.3.3, Spark 3.5 (GH Actions)
-- Ubuntu 22.04, R 4.3.3, Spark 3.4 (GH Actions)
+- Ubuntu 22.04, R 4.4.1, Spark 3.5 (GH Actions)
+- Ubuntu 22.04, R 4.4.1, Spark 3.4 (GH Actions)
 
-- Local Mac OS M3 (aarch64-apple-darwin23), R 4.3.3, Spark 3.5 (Local)
+- Local Mac OS M3 (aarch64-apple-darwin23), R 4.4.0, Spark 3.5 (Local)
 
 ## R CMD check environments
 
-- Mac OS M3 (aarch64-apple-darwin23), R 4.3.3 (Local)
+- Mac OS M3 (aarch64-apple-darwin23), R 4.4.0 (Local)
 
-- Mac OS x86_64-apple-darwin20.0 (64-bit), R 4.3.3 (GH Actions)
-- Windows x86_64-w64-mingw32 (64-bit), R 4.3.3 (GH Actions)
-- Linux x86_64-pc-linux-gnu (64-bit), R 4.3.3 (GH Actions)
-- Linux x86_64-pc-linux-gnu (64-bit), R 4.5.0 (dev) (GH Actions)
-- Linux x86_64-pc-linux-gnu (64-bit), R 4.2.3 (old release) (GH Actions)
+- Mac OS x86_64-apple-darwin20.0 (64-bit), R 4.4.1 (GH Actions)
+- Windows x86_64-w64-mingw32 (64-bit), R 4.4.1 (GH Actions)
+- Linux x86_64-pc-linux-gnu (64-bit), R dev (GH Actions)
+- Linux x86_64-pc-linux-gnu (64-bit), R 4.4.1 (GH Actions)
+- Linux x86_64-pc-linux-gnu (64-bit), R 4.3.3 (old release) (GH Actions)
 
 ## R CMD check results
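
For reviewers, the two user-facing items in the NEWS hunk above can be exercised roughly as sketched below. This is illustrative only, not part of the patch; the connection method and the `samples.tpch.orders` table name are placeholders for whatever Spark Connect / Databricks Connect target you have configured.

```r
library(sparklyr)
library(dplyr)

# Placeholder: any connection method supported by pysparklyr would do here
sc <- spark_connect(method = "databricks_connect")

# NEWS: `I()` in `tbl()` -- pass a fully qualified table name through as-is
orders <- tbl(sc, I("samples.tpch.orders"))

# NEWS: multi-line functions in `spark_apply()` -- previously only
# single-line anonymous functions worked reliably
res <- spark_apply(orders, function(df) {
  df$big_order <- df$o_totalprice > 100000
  df
})
```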