From 755c88b7a59960864e70449124a6e92224a3aa6b Mon Sep 17 00:00:00 2001
From: Sumedh Wale
Date: Sun, 17 Oct 2021 00:40:07 +0530
Subject: [PATCH] Documentation updates for 1.3.0
- scanned through all changes and added to release_notes.md
- updated descriptions of newly added features, performance enhancements, etc.
- updated all references to Enterprise Edition
- moved EXEC SCALA out of the experimental category since it has seen quite a bit of testing
and is a standard part of the demo Zeppelin notebooks
- went through many docs and fixed/updated them (too many changes to mention here)
---
.../main/scala/io/snappydata/Literals.scala | 2 +-
docs/additional_files/license_model.md | 8 +-
.../open_source_components.md | 80 +++--
docs/best_practices/odbc_jdbc_clients.md | 6 +-
.../transactions_best_practices.md | 2 +-
docs/connectors/tdv.md | 8 +-
docs/experimental.md | 8 -
docs/howto/connect_using_odbc_driver.md | 6 +-
docs/install.md | 18 +-
docs/install/building_from_source.md | 62 +++-
docs/install/system_requirements.md | 36 ++-
docs/new_features.md | 28 +-
docs/prev_doc_ver.md | 1 +
docs/programming_guide/scala_interpreter.md | 3 -
docs/reference.md | 2 +
.../API_Reference/odbc_supported_apis.md | 2 +-
.../command_line_utilities/scala-cli.md | 6 +-
...{known_issues.md => known_issues_1.1.0.md} | 3 +-
docs/release_notes/release_notes.md | 302 +++++++++++++++++-
docs/unsupported.md | 12 +-
20 files changed, 454 insertions(+), 141 deletions(-)
rename docs/release_notes/{known_issues.md => known_issues_1.1.0.md} (99%)
diff --git a/core/src/main/scala/io/snappydata/Literals.scala b/core/src/main/scala/io/snappydata/Literals.scala
index fbfd225cd2..90623dcaec 100644
--- a/core/src/main/scala/io/snappydata/Literals.scala
+++ b/core/src/main/scala/io/snappydata/Literals.scala
@@ -163,7 +163,7 @@ object Property extends Enumeration {
val MaxMemoryResultSize: SparkValue[String] = Val[String](
s"${Constant.SPARK_PREFIX}sql.maxMemoryResultSize",
- "Maximum size of results from a JDBC/ODBC query in a partition that will be held " +
+ "Maximum size of results from a JDBC/ODBC/SQL query in a partition that will be held " +
"in memory beyond which the results will be written to disk " +
"(value in bytes or k/m/g suffixes for unit, min 1k). Default is 4MB.", Some("4m"))
diff --git a/docs/additional_files/license_model.md b/docs/additional_files/license_model.md
index 074174af6d..3943de1076 100644
--- a/docs/additional_files/license_model.md
+++ b/docs/additional_files/license_model.md
@@ -1,8 +1,4 @@
# Licensing Model
-The source code is distributed with Apache License 2.0
-
\ No newline at end of file
+The source code is distributed under the Apache License 2.0. Users can download the fully functional OSS version
+and deploy it in production.
diff --git a/docs/additional_files/open_source_components.md b/docs/additional_files/open_source_components.md
index 4044e8526d..99d7510a08 100644
--- a/docs/additional_files/open_source_components.md
+++ b/docs/additional_files/open_source_components.md
@@ -1,36 +1,52 @@
-# SnappyData Community Edition (Open Source) and TIBCO ComputeDB Enterprise Edition
+# SnappyData Community Edition (Open Source)
-With the 1.3.0 release, SnappyData Community Edition gets even closer to the TIBCO ComputeDB Enterprise Edition, in terms of the features.
+With the 1.3.0 release, the SnappyData Community Edition is nearly equivalent to the erstwhile TIBCO ComputeDB Enterprise Edition in terms of features.
+Apart from the GemFire connector (which depends on non-OSS Pivotal GemFire jars), the 1.3.0 Community Edition
+exceeds the previous TIBCO ComputeDB Enterprise Edition 1.2.0 in both features and performance. You can find a list
+of new features and performance improvements in the [release notes](../release_notes/release_notes.md).
-The features hitherto available only in Enterprise edition - Off-heap storage for column tables, Approximate Query Processing, LDAP-based Authentication and Authorization, to name a few -
-are now availble in SnappyData (community edition) as well.
+The features hitherto available only in the Enterprise Edition, such as Off-heap storage for column tables, Approximate Query Processing, LDAP-based Authentication and Authorization, and the ODBC driver,
+are now available in the SnappyData Community Edition as well.
-
-The capabilities of the **Community Edition** and the additional capabilities of the **Enterprise Edition** are listed in the following table:
+The high-level capabilities of the **Community Edition** are listed in the following table:
-| Feature | Community | Enterprise |
-| ------------------- |:------------------------:| :-----------------------:|
-| Mutable Row & Column Store | X | X |
-| Compatibility with Spark | X | X |
-| Shared Nothing Persistence and HA | X | X |
-| REST API for Spark Job Submission | X | X |
-| Fault Tolerance for Driver | X | X |
-| Access to the system using JDBC Driver | X | X |
-| CLI for backup, restore, and export data | X | X |
-| Spark console extensions | X | X |
-| System Perf/Behavior statistics | X | X |
-| Support for transactions in Row tables | X | X |
-| Support for indexing in Row Tables | X | X |
-| SQL extensions for stream processing | X | X |
-| Runtime deployment of packages and jars | X | X |
-| Synopsis Data Engine for Approximate Querying | X | X |
-| ODBC Driver with High Concurrency | X | X |
-| Off-heap data storage for column tables | X | X |
-| CDC Stream receiver for SQL Server into SnappyData | X | X |
-| GemFire/Apache Geode connector | | X |
-| Row Level Security | X | X |
-| Use encrypted password instead of clear text password | X | X |
-| Restrict Table, View, Function creation even in user’s own schema | X | X |
-| LDAP security interface | X | X |
+| Feature | Available |
+| ------- | --------- |
+|Mutable Row and Column Store | X |
+|Compatibility with Spark | X |
+|Shared Nothing Persistence and HA | X |
+|REST API for Spark Job Submission | X |
+|Fault Tolerance for Driver | X |
+|Access to the system using JDBC Driver | X |
+|CLI for backup, restore, and export data | X |
+|Spark console extensions | X |
+|System Performance/behavior statistics | X |
+|Support for transactions in Row tables | X |
+|Support for indexing in Row tables | X |
+|Support for snapshot transactions in Column tables | X |
+|Online compaction of column block data | X |
+|Transparent disk overflow of large query results | X |
+|Support for external Hive meta store | X |
+|SQL extensions for stream processing | X |
+|SnappyData sink for structured stream processing | X |
+|Structured Streaming user interface | X |
+|Runtime deployment of packages and jars | X |
+|Scala code execution from SQL (EXEC SCALA) | X |
+|Out of the box support for cloud storage | X |
+|Support for Hadoop 3.2 | X |
+|SnappyData Interpreter for Apache Zeppelin | X |
+|Synopsis Data Engine for Approximate Querying | X |
+|Support for Synopsis Data Engine from TIBCO Spotfire® | X |
+|ODBC Driver with High Concurrency | X |
+|Off-heap data storage for column tables | X |
+|CDC Stream receiver for SQL Server into SnappyData | X |
+|Row Level Security | X |
+|Use encrypted password instead of clear text password | X |
+|Restrict Table, View, Function creation even in user’s own schema | X |
+|LDAP security interface | X |
+|Visual Statistics Display (VSD) tool for system statistics (gfs) files (*) | |
+|GemFire connector | |
+
+(*) NOTE: The graphical Visual Statistics Display (VSD) tool used to view the system statistics (gfs) files is not OSS
+and was never shipped with SnappyData. It is available from [GemTalk Systems](https://gemtalksystems.com/products/vsd/)
+or [Pivotal GemFire](https://network.pivotal.io/products/pivotal-gemfire) under their own respective licenses.
diff --git a/docs/best_practices/odbc_jdbc_clients.md b/docs/best_practices/odbc_jdbc_clients.md
index bc28aa4c65..ef8fb89234 100644
--- a/docs/best_practices/odbc_jdbc_clients.md
+++ b/docs/best_practices/odbc_jdbc_clients.md
@@ -1,5 +1,7 @@
# ODBC and JDBC Clients
-When using JDBC or ODBC clients, applications must close the ResultSet that is consumed or consume a FORWARD_ONLY ResultSet completely. These ResultSets can keep tables open and thereby block any DDL executions. If the cursor used by ResultSet remains open, then the DDL executions gets timeout.
+When using JDBC or ODBC clients, applications must close the ResultSet that is consumed or consume a FORWARD_ONLY ResultSet completely. These ResultSets can keep tables open and thereby block any DDL executions. If the cursor used by a ResultSet remains open, DDL executions may time out.
-Such intermittent ResultSets are eventually cleaned up by the product, but that happens only in a garbage collection (GC) cycle where JVM cleans the weak references corresponding to those ResultSets. Although, this process can take an indeterminate amount of time.
+Such intermittent ResultSets are eventually cleaned up by the product, but that happens only in a garbage collection (GC) cycle where the JVM clears the weak references corresponding to those ResultSets.
+However, this process can take an indeterminate amount of time, so it is recommended that users close the ResultSets,
+Statements, and other JDBC/ODBC constructs immediately after use (using try-with-resources in Java and its equivalent in other languages).
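+
+A minimal sketch of this pattern over JDBC in Scala is shown below (the connection URL, table name, and
+query are illustrative placeholders; Scala's try/finally stands in for Java's try-with-resources):
+
+```pre
+import java.sql.DriverManager
+
+// Assumed SnappyData host and default client port; adjust for your cluster.
+val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
+try {
+  val stmt = conn.createStatement()
+  try {
+    val rs = stmt.executeQuery("SELECT * FROM myTable")
+    try {
+      // Consume the FORWARD_ONLY ResultSet completely, or close it as soon as it is no longer needed.
+      while (rs.next()) {
+        // process the row
+      }
+    } finally rs.close()
+  } finally stmt.close()
+} finally conn.close()
+```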
diff --git a/docs/best_practices/transactions_best_practices.md b/docs/best_practices/transactions_best_practices.md
index 3066d28a39..58052f273f 100755
--- a/docs/best_practices/transactions_best_practices.md
+++ b/docs/best_practices/transactions_best_practices.md
@@ -3,7 +3,7 @@
## Using Transactions
-- For high performance, mimimize the duration of transactions to avoid conflicts with other concurrent transactions. If atomicity for only single row updates is required, then completely avoid using transactions because SnappyData provides atomicity and isolation for single rows without transactions.
+- For high performance, minimize the duration of transactions to avoid conflicts with other concurrent transactions. If atomicity for only single row updates is required, then completely avoid using transactions because SnappyData provides atomicity and isolation for single rows without transactions.
- When using transactions, keep the number of rows involved in the transaction as low as possible. SnappyData acquires locks eagerly, and long-lasting transactions increase the probability of conflicts and transaction failures. Avoid transactions for large batch update statements or statements that effect a lot of rows.
diff --git a/docs/connectors/tdv.md b/docs/connectors/tdv.md
index 7557b76546..2bc92f1c00 100644
--- a/docs/connectors/tdv.md
+++ b/docs/connectors/tdv.md
@@ -33,9 +33,13 @@ SnappyData edition 1.3.0 is tested and works with TIBCO Data Virtualization 8.2.
For example:
snappy>deploy jar dv-jar '/snappydata/snappy-connectors/tdv-connector/lib/csjdbc8.jar';
-
+
+ !!!Note
+ The above jar may not be available in the SnappyData Community edition.
+ In that case find and copy it from your TDV installation.
+
!!!Note
- You should execute this command only once when you connect to the TDV cluster for the first time.
+ You should execute this command only once when you connect to the TDV cluster for the first time.
8. Create an external table with JDBC options:
diff --git a/docs/experimental.md b/docs/experimental.md
index 349f2d5655..cae73c9fb8 100644
--- a/docs/experimental.md
+++ b/docs/experimental.md
@@ -6,11 +6,3 @@ SnappyData 1.3.0 provides the following features on an experimental basis. These
You can enable authorization of external tables by setting the system property **CHECK_EXTERNAL_TABLE_AUTHZ** to true when the cluster's security is enabled.
System admin or the schema owner can grant or revoke the permissions on external tables to other users.
For example: `GRANT ALL ON to ;`
-
-
-## Support ad-hoc, Interactive Execution of Scala code
-You can execute Scala code using a new CLI script **snappy-scala** that is built with IJ APIs. You can also run it as an SQL command using prefix **exec scala**.
-The Scala code can use any valid/supported Spark API for example, to carry out custom data loading/transformations or to launch a structured streaming job. Since the code is submitted as an SQL command, you can now also use any SQL tool (based on JDBC/ODBC), including Notebook environments, to execute ad-hoc code blocks directly. Prior to this feature, apps were required to use the smart connector or use the SnappyData specific native Zeppelin interpreter.
-**exec scala** command can be secured using the SQL GRANT/REVOKE permissions. System admin (DB owner) can grant or revoke permissions for Scala interpreter privilege.
-
-For more information refer to [Executing Spark Scala Code using SQL](/programming_guide/scala_interpreter.md)
diff --git a/docs/howto/connect_using_odbc_driver.md b/docs/howto/connect_using_odbc_driver.md
index 07373c4ed6..203221b15c 100644
--- a/docs/howto/connect_using_odbc_driver.md
+++ b/docs/howto/connect_using_odbc_driver.md
@@ -22,9 +22,9 @@ To download and install the ODBC driver:
2. Extract **snappydata-odbc_1.3.0_win.zip**. Depending on your Windows installation, extract the contents of the 32-bit or 64-bit version of the SnappyData ODBC Driver.
| Version | ODBC Driver |
- |--------|--------|
- |32-bit for 32-bit platform|TIB_compute-odbc_1.2.0_win_x86.zip|
- |64-bit for 64-bit platform|TIB_compute-odbc_1.2.0_win_x64.zip|
+ |---------|-------------|
+ |32-bit for 32-bit platform|snappydata-odbc_1.3.0_win_x86.msi|
+ |64-bit for 64-bit platform|snappydata-odbc_1.3.0_win_x64.msi|
4. Double-click on the corresponding **msi** file, and follow the steps to complete the installation.
diff --git a/docs/install.md b/docs/install.md
index 526080a928..f925fa2116 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,14 +1,12 @@
# Provisioning SnappyData
-SnappyData offers two editions of the product:
-
-* **Community Edition**
-* **Enterprise Edition**
-
The SnappyData **Community Edition** is Apache 2.0 licensed. It is a free, open-source version of the product that can be downloaded by anyone.
-The **Enterprise Edition** of the product, which is sold by TIBCO Software under the name **TIBCO ComputeDB™**, includes everything that is offered in the Community Edition along with additional capabilities that are closed source and only available as part of a licensed subscription.
+The erstwhile **Enterprise Edition** of the product, which was sold by TIBCO Software under the name **TIBCO ComputeDB™**, included everything that is offered in the Community Edition along with additional capabilities that were closed source and only available as part of a licensed subscription.
+
+As of the 1.3.0 release, all components that were previously closed source are now OSS (except for the
+GemFire connector), and only the **Community Edition** is released.
-For more information on the capabilities of the Community Edition and Enterprise Edition, see [Community Edition (Open Source)/Enterprise Edition components](additional_files/open_source_components.md).
+For more information on the capabilities of the Community Edition and differences from the previous Enterprise Edition, see [Community Edition (Open Source)](additional_files/open_source_components.md).
Download SnappyData Community Edition
@@ -19,12 +17,6 @@ Download the [SnappyData 1.3.0 Community Edition (Open Source)](https://github.c
* [**SnappyData 1.3.0 Release download link**](https://github.com/TIBCOSoftware/snappydata/releases/download/v1.3.0/snappydata-1.3.0-bin.tar.gz)
-
-
SnappyData Provisioning Options
diff --git a/docs/install/building_from_source.md b/docs/install/building_from_source.md
index 79bd97bc58..d799747b0e 100644
--- a/docs/install/building_from_source.md
+++ b/docs/install/building_from_source.md
@@ -5,19 +5,47 @@
Building SnappyData requires JDK 8 installation ([Oracle Java SE](http://www.oracle.com/technetwork/java/javase/downloads/index.html)).
## Build all Components of SnappyData
-
+
**Master**
```pre
> git clone https://github.com/TIBCOSoftware/snappydata.git --recursive
> cd snappydata
-> git clone https://github.com/TIBCOSoftware/snappy-aqp.git aqp
-> git clone https://github.com/TIBCOSoftware/snappy-connectors.git
> ./gradlew product
```
The product is in **build-artifacts/scala-2.11/snappy**
+To build product artifacts in all supported formats (tarball, zip, rpm, deb):
+
+```pre
+> git clone https://github.com/TIBCOSoftware/snappydata.git --recursive
+> cd snappydata
+> ./gradlew cleanAll
+> ./gradlew distProduct
+```
+
+The artifacts are in **build-artifacts/scala-2.11/distributions**
+
+You can also add the flags `-PenablePublish -PR.enable` to get them in the same form as the official
+SnappyData distributions, but that also requires zeppelin-interpreter and R as noted below.
+
+To build all product artifacts that are in the official SnappyData distributions:
+
+```pre
+> git clone https://github.com/TIBCOSoftware/snappydata.git --recursive
+> git clone https://github.com/TIBCOSoftware/snappy-zeppelin-interpreter.git
+> cd snappydata
+> ./gradlew cleanAll
+> ./gradlew product copyShadowJars distTar -PenablePublish -PR.enable
+```
+
+The artifacts are in **build-artifacts/scala-2.11/distributions**
+
+Building SparkR (with the `R.enable` flag) requires R to be installed locally along with at least the following
+R packages and their dependencies: knitr, markdown, rmarkdown, testthat.
+
+
## Repository Layout
- **core** - Extensions to Apache Spark that should not be dependent on SnappyData Spark additions, job server etc. It is also the bridge between _spark_ and _store_ (GemFireXD). For example, SnappyContext, row and column store, streaming additions etc.
@@ -36,7 +64,7 @@ This component depends on _core_ and _store_. The code in the _cluster_ depends
- **snappy-connectors** - Connector for Apache Geode and a Change-Data-Capture (CDC) connector.
- The _spark_, _store_, and _spark-jobserver_ directories are required to be clones of the respective SnappyData repositories and are integrated into the top-level SnappyData project as git submodules. When working with submodules, updating the repositories follows the normal [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules). One can add some aliases in gitconfig to aid pull/push as follows:
+ The _spark_, _store_, _spark-jobserver_, _aqp_, and _snappy-connectors_ directories are required to be clones of the respective SnappyData repositories and are integrated into the top-level SnappyData project as git submodules. When working with submodules, updating the repositories follows the normal [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) workflow. One can add some aliases in gitconfig to aid pull/push as follows:
```pre
[alias]
@@ -108,8 +136,8 @@ To import into IntelliJ IDEA:
* Increase the available JVM heap size for IDEA. Open **bin/idea64.vmoptions** (assuming 64-bit JVM) and increase `-Xmx` option to be something like **-Xmx2g** for comfortable use.
* Select **Import Project**, and then select the SnappyData directory. Use external Gradle import. Click **Next** in the following screen. Clear the **Create separate module per source set** option, while other options can continue with the default. Click **Next** in the following screens.
-
- !!! Note
+
+ !!! Note
* Ignore the **"Gradle location is unknown warning"**.
@@ -117,22 +145,22 @@ To import into IntelliJ IDEA:
* Ignore and dismiss the **"Unindexed remote Maven repositories found"** warning message if seen.
-* When import is completed,
- 1. Go to **File> Settings> Editor> Code Style> Scala**. Set the scheme as **Project**.
+* When import is completed,
+ 1. Go to **File> Settings> Editor> Code Style> Scala**. Set the scheme as **Project**.
- 2. In the same window, select **Java** code style and set the scheme as **Project**.
+ 2. In the same window, select **Java** code style and set the scheme as **Project**.
- 3. Click **OK** to apply and close the window.
+ 3. Click **OK** to apply and close the window.
- 4. Copy **codeStyleSettings.xml** located in the SnappyData top-level directory, to the **.idea** directory created by IDEA.
+ 4. Copy **codeStyleSettings.xml** located in the SnappyData top-level directory, to the **.idea** directory created by IDEA.
- 5. Verify that the settings are now applied in **File> Settings> Editor> Code Style> Java** which should display indent as 2 and continuation indent as 4 (same as Scala).
+ 5. Verify that the settings are now applied in **File> Settings> Editor> Code Style> Java** which should display indent as 2 and continuation indent as 4 (same as Scala).
* If the Gradle tab is not visible immediately, then select it from option available at the bottom-left of IDE. Click on that window list icon for the tabs to be displayed permanently.
* Generate Apache Avro and SnappyData required sources by expanding: **snappydata_2.11> Tasks> other**. Right-click on **generateSources** and run it. The **Run** option may not be available if indexing is still in progress, wait for indexing to complete, and then try again.
The first run may take some time to complete, as it downloads the jar files and other required files. This step has to be done the first time, or if **./gradlew clean** has been run, or if you have made changes to **javacc/avro/messages.xml** source files.
-* Go to **File> Settings> Build, Execution, Deployment> Build tools> Gradle**. Enter **-DideaBuild** in the **Gradle VM Options** textbox.
+* Go to **File> Settings> Build, Execution, Deployment> Build tools> Gradle**. Enter **-DideaBuild** in the **Gradle VM Options** textbox.
* If you get unexpected **Database not found** or **NullPointerException** errors in SnappyData-store/GemFireXD layer, run the **generateSources** target (Gradle tab) again.
@@ -155,14 +183,20 @@ Error:(236, 18) value getByte is not a member of org.apache.spark.unsafe.types.U
```
-Even with the above, running unit tests in IDEA may result in more runtime errors due to unexpected **slf4j** versions. A more comprehensive way to correct, both the compilation and unit test problems in IDEA, is to update the snappy-cluster or for whichever module unit tests are to be run and have the **TEST** imports at the end.
+Even with the above, running unit tests in IDEA may result in more runtime errors due to unexpected **slf4j** versions. A more comprehensive way to correct both the compilation and unit test problems in IDEA is to update the module file of snappy-cluster, or of whichever module's unit tests are to be run, so that the **TEST** imports are at the end.
The easiest way to do that is to close IDEA, open the module IML file (**.idea/modules/cluster/snappy-cluster_2.11.iml** in this case) in an editor. Search for **scope="TEST"** and move all those lines to the bottom just before `` close tag.
+The ordering of imports is no longer a problem in IDEA 2020.x and newer releases.
## Running a ScalaTest/JUnit
Running Scala/JUnit tests from IntelliJ IDEA is straightforward.
+* In newer IDEA releases, ensure that IntelliJ IDEA is used to run the tests while gradle is used for the build.
+ To do this, go to File->Settings->Build, Execution, Deployment->Build Tools->Gradle for the "snappydata" project,
+ and set "Run tests using" to "IntelliJ IDEA" rather than gradle.
+ Also ensure that "Build and run using" is set to gradle rather than "IntelliJ IDEA".
+
* When selecting a run configuration for JUnit/ScalaTest, avoid selecting the Gradle one (green round icon) otherwise, an external Gradle process is launched that can start building the project again is not cleanly integrated with IDEA. Use the normal JUnit (red+green arrows icon) or ScalaTest (JUnit like with red overlay).
* For JUnit tests, ensure that the working directory is the top-level **\$MODULE_DIR\$/build-artifacts** as mentioned earlier. Otherwise, many SnappyData-store tests fail to find the resource files required in tests. They also pollute the files, so when launched, this allows those to go into **build-artifacts** that are easier to clean. For that reason, it is preferable to do the same for ScalaTests.
diff --git a/docs/install/system_requirements.md b/docs/install/system_requirements.md
index 2c48d23fd7..dd9e006105 100644
--- a/docs/install/system_requirements.md
+++ b/docs/install/system_requirements.md
@@ -21,16 +21,25 @@ SnappyData turns Apache Spark into a mission-critical, elastic scalable in-memor
## Operating Systems Supported
| Operating System| Version |
-|--------|--------|
-|Red Hat Enterprise Linux|- RHEL 6.0
- RHEL 7.0 (Mininum recommended kernel version: 3.10.0-693.2.2.el7.x86\_64)|
-|Ubuntu|Ubuntu Server 14.04 and later||
-|CentOS|CentOS 6, 7 (Minimum recommended kernel version: 3.10.0-693.2.2.el7.x86\_64)|
+|-----------------|---------|
+|Red Hat Enterprise Linux|RHEL 6, 7 and later (Minimum recommended kernel version: 3.10.0-693.2.2.el7.x86\_64)|
+|Ubuntu|Ubuntu Server 14.04 and later|
+|CentOS|CentOS 6, 7 and later (Minimum recommended kernel version: 3.10.0-693.2.2.el7.x86\_64)|
## Host Machine Requirements
+
Requirements for each host:
-* A supported [Oracle Java SE 8](http://www.oracle.com/technetwork/java/javase/downloads) installation. We recommend minimum version: 1.8.0\_144 (see [SNAP-2017](https://jirasnappydataio.atlassian.net/browse/SNAP-2017), [SNAP-1999](https://jirasnappydataio.atlassian.net/browse/SNAP-1999), [SNAP-1911](https://jirasnappydataio.atlassian.net/browse/SNAP-1911), [SNAP-1375](https://jirasnappydataio.atlassian.net/browse/SNAP-1375) for crashes reported with earlier versions).
+* A supported [Oracle Java SE 8](http://www.oracle.com/technetwork/java/javase/downloads) JDK installation.
+ Required minimum version: 1.8.0\_144 (see [SNAP-2017](https://jirasnappydataio.atlassian.net/browse/SNAP-2017),
+ [SNAP-1999](https://jirasnappydataio.atlassian.net/browse/SNAP-1999),
+ [SNAP-1911](https://jirasnappydataio.atlassian.net/browse/SNAP-1911),
+ [SNAP-1375](https://jirasnappydataio.atlassian.net/browse/SNAP-1375) for crashes reported with earlier versions).
+ The latest stable release version is recommended.
+
+* Alternatively, an equivalent OpenJDK distribution (version 1.8.0\_144 or later) can be used, preferably the
+ latest stable release version. A full JDK installation is required.
* The latest version of Bash shell.
@@ -53,24 +62,19 @@ Requirements for each host:
* If you deploy SnappyData on a virtualized host, consult the documentation provided with the platform, for system requirements and recommended best practices, for running Java and latency-sensitive workloads.
-
+- Some of the Python APIs can use SciPy to optimize some algorithms (in the linalg package), and some others need Pandas. On recent Red Hat based systems, SciPy can be installed using the `sudo yum install scipy` command, whereas on Debian/Ubuntu based systems you can install it using the `sudo apt-get install python-scipy` command. Likewise, on recent Red Hat based systems, Pandas can be installed using the `sudo yum install python-pandas` command, while on Debian/Ubuntu based systems it can be installed using the `sudo apt-get install python-pandas` command.
-## Python Integration using pyspark
-- The Python pyspark module has the same requirements as in Apache Spark. The numpy package is required by many modules of pyspark including the examples shipped with SnappyData. On recent Red Hat based systems, it can be installed using `sudo yum install numpy` or `sudo yum install python2-numpy` commands. Whereas, on Debian/Ubuntu based systems, you can install using the `sudo apt-get install python-numpy` command.
+- On Red Hat based systems, some of the above Python packages may be available only after enabling the **EPEL** repository. If these are not available in the repositories for your OS version or if using **EPEL** is not an option, then you can use **pip**. Refer to the respective project documentation for details and alternative options such as Anaconda.
-- Some of the python APIs can use SciPy to optimize some algorithms (in linalg package), and some others need Pandas. On recent Red Hat based systems SciPy can be installed using `sudo yum install scipy` command. Whereas, on Debian/Ubuntu based systems you can install using the `sudo apt-get install python-scipy` command. Likewise, Pandas on recent Red Hat based systems can be installed using `sudo yum installed python-pandas` command, while on Debian/Ubuntu based systems it can be installed using the `sudo apt-get install python-pandas` command.
+- Alternatively, Python 3 (up to version 3.7) can also be used. Consult your distribution's documentation for the
+ equivalent Python 3 packages for `numpy`, `scipy` and `pandas`, or use conda/mamba to set up the required Python environment.
-- On Red Hat based systems, some of the above Python packages may be available only after enabling the **EPEL** repository. If these are not available in the repositories for your OS version or if using **EPEL** is not an option, then you can use **pip**. Refer to the respective project documentation for details and alternative options such as Anaconda.
## Filesystem Type for Linux Platforms
For optimum disk-store performance, we recommend the use of local filesystem for disk data storage and not over NFS.
-
-
diff --git a/docs/new_features.md b/docs/new_features.md
index c813710550..ede62d986d 100644
--- a/docs/new_features.md
+++ b/docs/new_features.md
@@ -1,29 +1,3 @@
# New Features
-## SnappyData 1.2.0
-
-SnappyData 1.2.0 includes the following new features:
-
-### Support for Cloud Storage and New Data Formats
-Added support for external data sources such as HDFS, S3, Azure Blob Storage (WASB, not ADLS), and GCS. Also, tested and certified support for file formats like CSV, Parquet, XML, JSON, Avro, ORC, text.
-Apache Zeppelin that is embedded with the product, is the easiest way to explore external data sources. Refer to the notebooks under the section **External Data Sources** and **Demos with Big Datasets** for configuring and demonstrations.
-
-### Structured Streaming User Interface
-Introducing a new UI tab to monitor Structured Streaming applications statistics and progress.
-
-### SnappyData Metrics now Fully Compatible with Apache Spark
-Apache Spark provides a flexible way to capture metrics and route these to a multitude of Sinks (JMX, HTTP, etc). SnappyData, besides capturing metrics in its native Statistics DB, also routes all system-wide metrics to any configured Spark Sink, enabling monitoring metrics through a wide selection of external tools.
-
-### Data Extractor Utility
-Introducing the Data Extractor utility as a recovery service in case the cluster fails to come up in some extreme circumstances. For example, when the disk files are corrupted. The utility also permits users to extract their datasets from in-memory tables to cloud storage serving as a backup mechanism.
-
-### Multiline JSON Parsing Support
-SnappyData now supports multiline JSON parsing. An existing Quickstart example has been extended to illustrate multi-line JSON file support.
-
-### Accessing Hive tables through External Hive Metastore
-The facility is provided to access data stored in Hive tables by connecting to the existing Hive metastore in local and remote modes.
-
-### ODBC Driver Support
-
-* Added SSL support on the ODBC driver-side.
-* Added support for various provider type IDs in the .NET framework.
+See the **New Features** section in the [release notes](release_notes/release_notes.md).
diff --git a/docs/prev_doc_ver.md b/docs/prev_doc_ver.md
index 054a10865b..72e78e96ec 100644
--- a/docs/prev_doc_ver.md
+++ b/docs/prev_doc_ver.md
@@ -2,6 +2,7 @@
Click a release to check the corresponding archived product documentation of SnappyData:
+* [SnappyData 1.2.0](https://snappydata-docs.readthedocs.io/en/community_docv1.2.0/)
* [SnappyData 1.1.1](https://snappydata-docs.readthedocs.io/en/community_docv1.1.1/)
diff --git a/docs/programming_guide/scala_interpreter.md b/docs/programming_guide/scala_interpreter.md
index 260c8f20b1..662dcb63d6 100644
--- a/docs/programming_guide/scala_interpreter.md
+++ b/docs/programming_guide/scala_interpreter.md
@@ -1,8 +1,5 @@
# Executing Spark Scala Code using SQL
-!!!Note
- This is an experimental feature in the SnappyData 1.3.0 release.
-
Prior to the 1.2.0 release, any execution of a Spark scala program required the user to compile his Spark program, comply to specific callback API required by SnappyData, package the classes into a JAR, and then submit the application using **snappy-job** tool.
While, this procedure may still be the right option for a production application, it is quite cumbersome for the developer or data scientist wanting to quickly run some Spark code within the SnappyData store cluster and iterate.
diff --git a/docs/reference.md b/docs/reference.md
index 2e87821a58..813cef38df 100644
--- a/docs/reference.md
+++ b/docs/reference.md
@@ -6,6 +6,8 @@ The following topics are covered in this section:
* [SQL Reference Guide](sql_reference.md)
+* [SQL Functions](reference/sql_functions/sql_functions.md)
+
* [Built-in System Procedures](reference/inbuilt_system_procedures/system-procedures.md)
* [System Tables](reference/system_tables/system_tables.md)
diff --git a/docs/reference/API_Reference/odbc_supported_apis.md b/docs/reference/API_Reference/odbc_supported_apis.md
index 30a3382948..8f9db73cd8 100644
--- a/docs/reference/API_Reference/odbc_supported_apis.md
+++ b/docs/reference/API_Reference/odbc_supported_apis.md
@@ -22,7 +22,7 @@ The following APIs are supported for ODBC in Snappy Driver:
| SQLColumns | Core | Yes ||
| SQLConnect | Core | Yes ||
| SQLCopyDesc | Core | Not ||
-| SQLDataSources | Core | Not |As per MSDN document it should implement by Driver Manager.|
+| SQLDataSources | Core | Not |As per the MSDN documentation, it should be implemented by the Driver Manager.|
| SQLDescribeCol | Core[1] | Yes ||
| SQLDescribeParam | Level 2 | Yes ||
| SQLDisconnect | Core | Yes ||
diff --git a/docs/reference/command_line_utilities/scala-cli.md b/docs/reference/command_line_utilities/scala-cli.md
index 5767307cf6..49e67800b2 100644
--- a/docs/reference/command_line_utilities/scala-cli.md
+++ b/docs/reference/command_line_utilities/scala-cli.md
@@ -1,12 +1,12 @@
# snappy-scala CLI
-The snappy-scala CLI is introduced as an experimental feature in the SnappyData 1.2.0 release. This is similar to the Spark shell in its capabilities. The [Spark documentation](https://spark.apache.org/docs/2.1.1/quick-start.html) defines the Spark shell as follows:
+The snappy-scala CLI was introduced as an experimental feature in the SnappyData 1.2.0 release and is considered a stable feature in the 1.3.0 release. It is similar to the Spark shell in its capabilities. The [Spark documentation](https://spark.apache.org/docs/2.1.1/quick-start.html) defines the Spark shell as follows:
***Spark’s shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries) or Python.***
A developer who is learning the Spark APIs of SparkContext, SparkSession, RDD, DataSet, DataSources, ML, etc. , can use the following utilities:
-* **Spark shell** to quickly bring up an interactive shell and start learning and experimenting with the APIs.
+* **Spark shell** to quickly bring up an interactive shell and start learning and experimenting with the APIs.
* **PySpark**, provided by spark, for interactive Python where you can interactively learn the Python APIs that are provided by Spark.
Spark shell is a spark application that is built on Scala’s REPL (Read-Evaluate-Print loop). It accepts Scala code as input, executes the instructions as per the code, and returns the output of those instructions. After this utility is invoked, the spark driver comes into life, in which a SparkContext, a SparkSession, and a REPL object are initialized, and an interactive shell is provided to the users on the driver VM itself for interactive learning.
@@ -18,7 +18,7 @@ snappy-scala CLI is built on top of the [exec scala](/reference/sql_reference/ex
-Although the experience of the snappy-scala CLI is similar to that of a Scala or a Spark shell, yet a couple of important features are either missing or are thinly supported. This is because it is currently an experimental feature. The following are a couple of notable differences between the Spark shell and snappy-scala CLI:
+Although the experience of the snappy-scala CLI is similar to that of a Scala or a Spark shell, a couple of important features are either missing or are thinly supported. The following are a couple of notable differences between the Spark shell and snappy-scala CLI:
-* The auto completion feature, which is rich in a true Scala or Scala based interpreter. It is almost as rich as an IDE, where it can prompt possible completions, method signature, word completion, syntaxes, etc.
+* The auto completion feature, which is rich in a true Scala or Scala based interpreter. It is almost as rich as an IDE, where it can prompt possible completions, method signature, word completion, syntaxes, etc.
* Support for the list of commands which can be executed on the shell.
The following image shows a simple SnappyData cluster, which is started, and then the snappy-scala is launched to connect.
diff --git a/docs/release_notes/known_issues.md b/docs/release_notes/known_issues_1.1.0.md
similarity index 99%
rename from docs/release_notes/known_issues.md
rename to docs/release_notes/known_issues_1.1.0.md
index 3eb9d243eb..6abb3c681e 100644
--- a/docs/release_notes/known_issues.md
+++ b/docs/release_notes/known_issues_1.1.0.md
@@ -1,4 +1,5 @@
-# Known Issues
+# Known Issues for SnappyData 1.1.0 release
+
The following key issues have been registered as bugs in the SnappyData bug tracking system:
diff --git a/docs/release_notes/release_notes.md b/docs/release_notes/release_notes.md
index 412f119c62..f498688ccb 100644
--- a/docs/release_notes/release_notes.md
+++ b/docs/release_notes/release_notes.md
@@ -1,3 +1,301 @@
-# Release Notes
+# Overview
-[SnappyData 1.2.0 Release Notes](TIB_compute-ce_1.2.0_relnotes.pdf)
+SnappyData™ is a memory-optimized database based on Apache Spark. It delivers very high
+throughput, low latency, and high concurrency for unified analytic workloads that may combine
+streaming, interactive analytics, and artificial intelligence in a single, easy-to-manage distributed cluster.
+
+In previous releases, there were two editions: the Community Edition, which was a fully functional
+core OSS distribution under the Apache License 2.0, and the Enterprise Edition,
+which was sold by TIBCO Software under the name TIBCO ComputeDB™ and included everything offered
+in the OSS version along with additional capabilities that were closed source and only available
+as part of a licensed subscription.
+
+The SnappyData team is pleased to announce the availability of version 1.3.0 of the platform,
+in which all of the platform's private modules have been made open source apart from the streaming
+GemFire connector (which includes non-OSS Pivotal GemFire product jars and hence cannot be open-sourced).
+These include the Approximate Query Processing (AQP) and JDBC connector repositories,
+which also contain the off-heap storage support for column tables and the security modules.
+In addition, the ODBC driver has also been made open source. With this, the entire
+code base of the platform (apart from the GemFire connector) is now
+open source, and there are no longer separate Community and Enterprise editions.
+
+You can find details of the release artifacts towards the end of this page.
+
+The full set of documentation for SnappyData, including the installation, user, and reference guides,
+can be found [here](https://tibcosoftware.github.io/snappydata).
+
+The following table summarizes the high-level features available in the SnappyData platform:
+
+| Feature | Available |
+| ------- | --------- |
+|Mutable Row and Column Store | X |
+|Compatibility with Spark | X |
+|Shared Nothing Persistence and HA | X |
+|REST API for Spark Job Submission | X |
+|Fault Tolerance for Driver | X |
+|Access to the system using JDBC Driver | X |
+|CLI for backup, restore, and export data | X |
+|Spark console extensions | X |
+|System Performance/behavior statistics | X |
+|Support for transactions in Row tables | X |
+|Support for indexing in Row tables | X |
+|Support for snapshot transactions in Column tables | X |
+|Online compaction of column block data | X |
+|Transparent disk overflow of large query results | X |
+|Support for external Hive meta store | X |
+|SQL extensions for stream processing | X |
+|SnappyData sink for structured stream processing | X |
+|Structured Streaming user interface | X |
+|Runtime deployment of packages and jars | X |
+|Scala code execution from SQL (EXEC SCALA) | X |
+|Out of the box support for cloud storage | X |
+|Support for Hadoop 3.2 | X |
+|SnappyData Interpreter for Apache Zeppelin | X |
+|Synopsis Data Engine for Approximate Querying | X |
+|Support for Synopsis Data Engine from TIBCO Spotfire® | X |
+|ODBC Driver with High Concurrency | X |
+|Off-heap data storage for column tables | X |
+|CDC Stream receiver for SQL Server into SnappyData | X |
+|Row Level Security | X |
+|Use encrypted password instead of clear text password | X |
+|Restrict Table, View, Function creation even in user’s own schema | X |
+|LDAP security interface | X |
+|Visual Statistics Display (VSD) tool for system statistics (gfs) files (*) | |
+|GemFire connector | |
+
+(*) NOTE: The graphical Visual Statistics Display (VSD) tool used to view the system statistics (gfs) files is not OSS
+and was never shipped with SnappyData. It is available from [GemTalk Systems](https://gemtalksystems.com/products/vsd/)
+or [Pivotal GemFire](https://network.pivotal.io/products/pivotal-gemfire) under their own respective licenses.
+
+
+## New Features
+
+The SnappyData 1.3.0 release includes the following new features over the previous 1.2.0 release:
+
+* **Open sourcing of non-OSS components**
+
+ All components except for the streaming GemFire connector are now OSS! This includes the Approximate Querying
+ Engine, off-heap storage for column tables, the streaming JDBC connector for CDC, the security modules, and the ODBC driver.
+ All new OSS modules are available under the Apache License 2.0 like the rest of the product.
+ Overall, the new 1.3.0 OSS release is both more feature-rich and more efficient than the erstwhile
+ 1.2.0 Enterprise product.
+
+* **Online compaction of column block data**
+
+ Automatic online compaction of column blocks that have seen a significant percentage of deletes or updates.
+ The compaction is triggered in one of the foreground threads performing delete or update operations
+ to avoid "hidden" background operational costs in the platform. The ratio of data at which compaction
+ is triggered can be configured using two cluster-level (or system) properties:
+ * _snappydata.column.compactionRatio_: for the ratio of deletes to trigger compaction (default 0.1)
+ * _snappydata.column.updateCompactionRatio_: for the ratio of updates to trigger compaction (default 0.2)
+
+* **Transparent disk overflow of large query results**
+
+ Queries that return large results have been a problem with Spark and SnappyData alike due to the lack
+ of streaming of the final results to the application layer, resulting in large heap consumption on the driver.
+ There are properties like _spark.driver.maxResultSize_ in Spark to altogether disallow large query results.
+ SnappyData has had driver-side persistence of large results for JDBC/ODBC to reduce the memory pressure,
+ but even so, fetching multi-GB results was not possible in most cases and would lead to an OOME on the driver
+ or the server that is servicing the JDBC/ODBC client.
+
+ This new feature adds disk overflow for large query results on the executors, completely eliminating
+ memory problems for any size of results. It works when using the JDBC/ODBC driver and for the
+ SnappySession.sql().toLocalIterator() API. Note that usage of any other Dataset API will result in
+ the base Spark Dataset implementation that does not use disk overflow. A couple of
+ cluster-level (or system) properties can be used to fine-tune the behaviour:
+ * _spark.sql.maxMemoryResultSize_: maximum size of results from a JDBC/ODBC/SQL query in a partition
+   that will be held in memory, beyond which the results are written to disk, while the maximum
+   size of a single disk block is fixed at 8 times this value (default 4mb)
+ * _spark.sql.resultPersistenceTimeout_: maximum duration in seconds for which results overflowed to disk
+   are held on disk, after which they are cleaned up (default 21600, i.e. 6 hours)
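+
+ A minimal sketch of the iterator-based path is shown below (the table name is illustrative, and an
+ existing SparkContext `sc` inside a SnappyData cluster application is assumed):
+
+ ```pre
+ import org.apache.spark.sql.SnappySession
+
+ val snappy = new SnappySession(sc)
+
+ // Per-partition results larger than spark.sql.maxMemoryResultSize overflow to disk instead of
+ // accumulating in memory, so even multi-GB results can be iterated over safely.
+ val rows = snappy.sql("SELECT * FROM bigTable").toLocalIterator()
+ while (rows.hasNext) {
+   val row = rows.next()
+   // process one row at a time
+ }
+ ```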
+
+* **Eager cleanup of broadcast join data blocks**
+
+ The Dataset broadcast join operator uses Spark broadcast variables to collect the required data
+ from the executors (when it is within the _spark.sql.autoBroadcastJoinThreshold_ limit), which is cached
+ on the driver and executors using the BlockManager. The cleanup of this data uses weak references on the
+ driver, which are collected in some future GC cycle. When there are frequent queries that perform
+ broadcast joins, this causes large GC pressure on the nodes even though the BlockManager
+ overflows data to disk after a point. This is because broadcast joins are for relatively small
+ data, so when those blocks accumulate over a long period of time, they get promoted to the old generation,
+ which leads to more frequent full GC cycles that need to do quite a bit of work. This is
+ especially the case for executors, since the cleanup is driven by GC on the driver, which may happen
+ far less frequently than on the executors, so the GC cycles may fail to clean up old data that
+ has not exceeded the execution cache limit.
+
+ This new feature eagerly removes broadcast join data blocks at the end of a query from the executors
+ (which can still fetch them from the driver on demand) and also from the driver for DML executions.
+
+* **Hadoop upgraded to version 3.2.0 and added ADLS gen 1/2 connectivity**
+
+ Hadoop has been upgraded from 2.7.x to 3.2.0 to allow support for newer components like ADLS Gen 1/2.
+ The Azure jars are now added to the product by default to enable support for ADLS.
+
+* **Enable LRUCOUNT eviction for column tables to enable creation of disk-only column tables**
+
+ LRUCOUNT based eviction was disabled for column tables since the count would represent column blocks
+ per data node while the number of rows being cached in memory would be indeterminate. This is now
+ enabled to allow for cases like disk-only tables with minimal memory caching. So now one can create
+ a column table like `CREATE TABLE diskTable (...) using column options (eviction_by 'lrucount 1')`
+ that will keep only one column block in memory, which will be a few MB at maximum (in practice only a
+ few KB, since statistics blocks are preferred for caching). The documentation covers the fact that
+ LRUCOUNT based eviction for column tables leads to indeterminate memory usage.
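+
+ For example, a minimal sketch with hypothetical columns filled in (`snappy` is assumed to be an
+ existing SnappySession):
+
+ ```pre
+ // Keeps only one column block in memory per data node; the rest stays on disk.
+ snappy.sql(
+   """CREATE TABLE diskTable (id INT, payload STRING)
+     |USING column OPTIONS (eviction_by 'LRUCOUNT 1')""".stripMargin)
+ ```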
+
+* **Spark layer updated to v2.1.3**
+
+ SnappyData Smart Connector will continue to support Apache Spark 2.1.1 as well as later 2.1.x releases.
+
+* **JDBC driver now supports Java 11**
+
+ The JDBC driver is now compatible with Java 11 and higher, up to Java 16. It continues to be
+ compatible with Java 8 like the rest of the product.
+
+Apart from the above new features, the interactive execution of Scala code using **EXEC SCALA** SQL,
+which was marked experimental in the previous release, is now considered production ready.
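+
+A minimal sketch of using it from a plain JDBC client in Scala is shown below (the connection URL is a
+placeholder, and the interpreter output is assumed to come back as a string column of the result set):
+
+```pre
+import java.sql.DriverManager
+
+val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
+val stmt = conn.createStatement()
+try {
+  // Any valid Scala/Spark code can be submitted; it runs in the interpreter on the lead node.
+  val rs = stmt.executeQuery("exec scala val squares = (1 to 5).map(i => i * i)")
+  while (rs.next()) {
+    println(rs.getString(1))
+  }
+} finally {
+  stmt.close()
+  conn.close()
+}
+```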
+
+
+## Stability and Performance Improvements
+
+SnappyData 1.3.0 includes the following stability and performance improvements:
+
+* Bulk send for multiple statement close messages in hive-metastore getAllTables to substantially
+ improve metastore performance.
+
+* Optimize putInto for small puts using a local cache. (PR#1563)
+
+* Increase default putInto join cache size to infinite and allow it to overflow to disk if large.
+
+* Switch to safe ThreadUtils await methods throughout the code and merge fixes for [SPARK-13747].
+ This fixes potential ThreadLocal leaks in RPC when using ForkJoinPool.
+
+* Cleanup to use a common SnapshotConnectionListener to handle snapshot transactions fixing
+ cases of snapshots ending prematurely especially with the new column compaction feature.
+
+* Add caching of resolved relations in SnappySessionCatalog. This gives a large boost to Spark external
+ table queries when the table metadata is large (for example, file based tables having millions of files).
+
+* Added the boolean _snappydata.sql.useDriverCollectForGroupBy_ property to let the driver do a direct collect
+ of partial results for a top-level GROUP BY. This avoids the last EXCHANGE for partial results of a GROUP BY
+ query, improving performance substantially for sub-second queries. It should be enabled only when the
+ final result size of the query is known to be small, else it can cause heap memory issues on the driver.
+
+* Dashboard statistics collection now cleanly handles cases where the collection takes more time than
+ its refresh interval (default 5 seconds). Previously it could cause multiple threads to pile up and
+ miss updating results from a delayed collection thread.
+
+* Added a pool for UnsafeProjections created in ColumnDelta.mergeStats to reduce the overhead of generated
+ code creation and compilation in every update.
+
+* For column table update/delete, make the output partitioning of ColumnTableScan to be ordered on bucketId.
+ This avoids unnecessary local sort operators being introduced in the update/delete/put plans.
+
+* Continue for all SparkListener failures except OutOfMemoryException. This allows the product to continue
+ working for cases where a custom SparkListener supplied by the user throws an exception.
+
+* Reduce overheads of EventTracker and TombstoneService to substantially decrease heap usage
+ in applications doing continuous update/delete/put operations.
+
+* Merged pull request #2 from crabo/spark-multiversion-support to improve OldEtriesCleanThread and hot sync locks.
+
+* Removed fsync for the DataDictionary, which caused major performance issues for large hive metastore
+ updates since the metastore also uses the DataDictionary. Replication of the DataDictionary across all
+ data nodes and locators ensures its resilience.
+
+* Merged patches for the following Spark issues for increased stability of the product:
+ * [SPARK-27065](https://issues.apache.org/jira/browse/SPARK-27065): Avoid more than one active task set managers
+ for a stage (causes termination of DAGScheduler and thus the whole JVM).
+ * [SPARK-13747](https://issues.apache.org/jira/browse/SPARK-13747): Fix potential ThreadLocal leaks in RPC when
+ using ForkJoinPool (can cause applications using ForkJoinPool to fail e.g. for scala's Await.ready/result).
+ * [SPARK-26352](https://issues.apache.org/jira/browse/SPARK-26352): Join reorder should not change the order of
+ output attributes (can cause unexpected query failures or even JVM SEGV failures).
+ * [SPARK-25081](https://issues.apache.org/jira/browse/SPARK-25081): Nested spill in ShuffleExternalSorter
+ should not access released memory page (can cause JVM SEGV and related failures).
+ * [SPARK-21907](https://issues.apache.org/jira/browse/SPARK-21907): NullPointerException in
+ UnsafeExternalSorter.spill() due to OutOfMemory during spill.
+
+
+## Resolved Issues
+
+SnappyData 1.3.0 resolves the following major issues:
+
+* [SPARK-31918] SparkR support for R 4.0.0+
+
+* [SNAP-3306] Row tables with an altered schema having added columns cause failure in recovery mode:
+ allow -1 as the value for SchemaColumnId for the case when the column is missing in the schema-version
+ under consideration. This happens when it tries to read older rows, which do not have the new column. (PR#1529)
+
+* [SNAP-3326] Allow the option of replacing StringType with the desired SQL type (VARCHAR, CHAR, CLOB, etc.)
+ when creating a table by importing an external table. The option `string_type_as = [VARCHAR|CHAR|CLOB]`
+ now allows switching to the required SQL type.
+
+* [SDENT-175] Support for cancelling query via JDBC (PR#1539)
+
+* [SDENT-171] rand() alias in sub-select is not working. Fixed collectProjectsAndFilters for nondeterministic
+ functions in PR#1541.
+
+* [SDENT-187] Wrong results returned by queries against partitioned ROW table.
+ Fix by handling EqualTo(Attribute, Attribute) case (PR#1544).
+ Second set of cases fixed by handling comparisons other than EqualTo. (PR#1546)
+
+* [SDENT-199] Fix for select query with where clause showing the plan without metrics in SQL UI tab. (PR#1552)
+
+* [SDENT-170] Return JDBC metadata in lower case. (Store PR#550)
+
+* [SDENT-185] If connections to ComputeDB are opened using both the hostname and the IP address,
+ the ComputeDB ODBC driver throws a System.Data.Odbc.OdbcException.
+
+* [SDENT-139] Exception in extracting SQLState (when it's not set or improper) masks original problem.
+
+* [SDENT-195] Fix NPE by adding a null check for the Statement instance before getting its maxRows value. (Store PR#559)
+
+* [SDENT-194] Return the table type as just a `table` instead of `ROW TABLE` or `COLUMN TABLE`. (Store PR#560)
+
+* [GITHUB-1559] JDBC ResultSet metadata search is not case-insensitive.
+ Add upper-case names to search index of ClientResultSet. (PR#565)
+
+* [SNAP-3332] PreparedStatement: Fix for precision/scale of DECIMAL parameter types not sent in
+ execute causing them to be overwritten by another PreparedStatement. (PR#1562, Store PR#567)
+
+* Fix for occasional putInto/update test failures. (Store PR#568)
+
+* Native C++ driver (used by the ODBC driver) updated to the latest versions of library dependencies
+ (thrift, boost, openssl) with many fixes.
+
+* Fixes to the scripts for bind-address and hostname-for-clients auto setup, for `hostname -I` not being
+ available on some systems, and other issues.
+
+* Switched from eclipse-collections to fastutil, which is more comprehensive and has better performance.
+
+* [SNAP-3145] Test coverage for external hive metastore.
+
+* [SNAP-3321] Example for EXEC SCALA via JDBC.
+
+* Expanded CDC and Data Extractor test coverage.
+
+
+## Known Issues
+
+The following known issues have been **fixed** since the previous 1.2.0 release:
+
+| Key | Item | Description |
+| --- | ---- | ----------- |
+|[SNAP-3306](https://jirasnappydataio.atlassian.net/browse/SNAP-3306) | Row tables with altered schema having added columns causes failure in recovery mode. | A new column that is added in an existing table in normal mode fails to get restored in the recovery mode. |
+
+For the remaining known issues, see the **Known Issues** section of [1.2.0 release notes](https://raw.githubusercontent.com/TIBCOSoftware/snappydata/community_docv1.2.0/docs/release_notes/TIB_compute-ce_1.2.0_relnotes.pdf#%5B%7B%22num%22%3A63%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C69%2C720%2C0%5D).
+Note that the issue links in that document that use https://jira.snappydata.io are no longer valid and should be
+changed to https://jirasnappydataio.atlassian.net. For example, the broken link https://jira.snappydata.io/browse/SNAP-3298
+becomes https://jirasnappydataio.atlassian.net/browse/SNAP-3298.
+
+## Description of Download Artifacts
+
+The following table describes the download artifacts included in the SnappyData 1.3.0 release:
+
+| Artifact Name | Description |
+| ------------- | ----------- |
+|snappydata-1.3.0-bin.tar.gz | Full product binary (includes Hadoop 3.2.0) |
+|snappydata-jdbc\_2.11-1.3.0.jar | JDBC client driver and push down JDBC data source for Spark |
+|snappydata-core\_2.11-1.3.0.jar | The single jar needed in Smart Connector mode; an alternative to the --packages option |
+|snappydata-odbc\_1.3.0_win.zip | 32-bit and 64-bit ODBC client drivers for Windows |
+|snappydata-1.3.0.sha256 | The SHA256 checksums of the product artifacts. On Linux verify using `sha256sum --check snappydata-1.3.0.sha256`. |
+|snappydata-1.3.0.sha256.gpg | GnuPG signature for snappydata-1.3.0.sha256. Get the public key using `gpg --keyserver hkps://keys.gnupg.net --recv-keys 573D42FDD455480DC33B7105F76D50B69DB1586C`. Then verify using `gpg --verify snappydata-1.3.0.sha256.gpg`. |
+|[snappydata-zeppelin\_2.11-0.8.2.1.jar](https://github.com/TIBCOSoftware/snappy-zeppelin-interpreter/releases/download/v0.8.2.1/snappydata-zeppelin_2.11-0.8.2.1.jar) | The Zeppelin interpreter jar for SnappyData, compatible with Apache Zeppelin 0.8.2. This is already present in the `jars` directory of product installation so does not need to be downloaded separately. |
diff --git a/docs/unsupported.md b/docs/unsupported.md
index b09b6c230b..589fa3e179 100644
--- a/docs/unsupported.md
+++ b/docs/unsupported.md
@@ -1,10 +1,10 @@
# Unsupported Third-Party Modules
-The following third-party modules are not supported by SnappyData 1.3.0 although it is shipped with the product:
-* Spark RDD-based APIs:
- * **org.apache.spark.mllib**
- * **GraphX (Graph Processing)**
- * **Spark Streaming (DStreams)**
+The following third-party modules are not supported by SnappyData 1.3.0, although they are shipped with the product:
-• SnappyData does not support **SparkR (R on Spark)** or **Apache Zeppelin**.
+* Spark RDD-based APIs:
+ * **org.apache.spark.mllib**
+ * **GraphX (Graph Processing)**
+ * **Spark Streaming (DStreams)**
+* SnappyData does not support **SparkR (R on Spark)**.