Documentation updates for 1.3.0
- scanned through all changes and added them to release_notes.md
- updated descriptions of newly added features, performance enhancements, etc.
- updated all references to Enterprise Edition
- moved EXEC SCALA out of the experimental category since it has seen quite a bit of testing
  and is a standard part of the demo Zeppelin notebooks
- went through many docs and fixed/updated them (too many changes to mention here)
sumwale committed Oct 16, 2021
1 parent f6f55af commit 755c88b
Showing 20 changed files with 454 additions and 141 deletions.
2 changes: 1 addition & 1 deletion core/src/main/scala/io/snappydata/Literals.scala
@@ -163,7 +163,7 @@ object Property extends Enumeration {

val MaxMemoryResultSize: SparkValue[String] = Val[String](
s"${Constant.SPARK_PREFIX}sql.maxMemoryResultSize",
"Maximum size of results from a JDBC/ODBC query in a partition that will be held " +
"Maximum size of results from a JDBC/ODBC/SQL query in a partition that will be held " +
"in memory beyond which the results will be written to disk " +
"(value in bytes or k/m/g suffixes for unit, min 1k). Default is 4MB.", Some("4m"))

8 changes: 2 additions & 6 deletions docs/additional_files/license_model.md
@@ -1,8 +1,4 @@
# Licensing Model

The source code is distributed with Apache License 2.0
<!---
Users can download the fully functional OSS version. Users can deploy the OSS version into production and choose to purchase support subscriptions for the same. This guarantees access to product support teams and any new releases that are delivered including patches, and hotfixes for critical issues with time bound SLAs.</br> The alternative is to deploy the OSS version into production and use the various community channels for support.
The Enterprise edition of the product, TIBCO ComputeDB Enterprise Edition, can be obtained from [edelivery.tibco.com](https://edelivery.tibco.com/storefront/index.ep) </br>You can reach out to [[email protected]](mailto:[email protected]) for more information on purchasing license subscriptions for the product. Subscriptions are priced per core per year with the option to upgrade to premium support if the user desires to do so. Both the open source and enterprise versions can be deployed on-premise or in the cloud. Web based deployment of clusters on AWS and Azure (future support) is available for the product.
--->
The source code is distributed with Apache License 2.0. Users can download the fully functional OSS version
and deploy it in production.
80 changes: 48 additions & 32 deletions docs/additional_files/open_source_components.md
@@ -1,36 +1,52 @@
# SnappyData Community Edition (Open Source) and TIBCO ComputeDB Enterprise Edition
# SnappyData Community Edition (Open Source)

With the 1.3.0 release, SnappyData Community Edition gets even closer to the TIBCO ComputeDB Enterprise Edition, in terms of the features.
With the 1.3.0 release, SnappyData Community Edition comes close to the erstwhile TIBCO ComputeDB Enterprise Edition in terms of features.
Apart from the GemFire connector (that depends on non-OSS Pivotal GemFire jars), the 1.3.0 Community Edition
exceeds the previous TIBCO ComputeDB Enterprise Edition 1.2.0 in both features and performance. You can find a list
of new features and performance improvements in the [release notes](../release_notes/release_notes.md).

The features hitherto available only in Enterprise edition - Off-heap storage for column tables, Approximate Query Processing, LDAP-based Authentication and Authorization, to name a few -
are now availble in SnappyData (community edition) as well.
The features hitherto available only in Enterprise edition - Off-heap storage for column tables, Approximate Query Processing, LDAP-based Authentication and Authorization, ODBC driver, to name a few -
are now available in SnappyData (community edition) as well.

<!---
SnappyData offers a fully functional core OSS distribution, which is the **Community Edition**, that is Apache 2.0 licensed. The **Enterprise Edition** of the product, which is sold by TIBCO Software under the name **TIBCO ComputeDB™**, includes everything that is offered in the OSS version along with additional capabilities that are closed source and only available as part of a licensed subscription. You can download the Enterprise Edition from [TIBCO eDelivery website](https://edelivery.tibco.com).
--->
The capabilities of the **Community Edition** and the additional capabilities of the **Enterprise Edition** are listed in the following table:
The high level capabilities of the **Community Edition** are listed in the following table:

| Feature | Community | Enterprise |
| ------------------- |:------------------------:| :-----------------------:|
| Mutable Row & Column Store | X | X |
| Compatibility with Spark | X | X |
| Shared Nothing Persistence and HA | X | X |
| REST API for Spark Job Submission | X | X |
| Fault Tolerance for Driver | X | X |
| Access to the system using JDBC Driver | X | X |
| CLI for backup, restore, and export data | X | X |
| Spark console extensions | X | X |
| System Perf/Behavior statistics | X | X |
| Support for transactions in Row tables | X | X |
| Support for indexing in Row Tables | X | X |
| SQL extensions for stream processing | X | X |
| Runtime deployment of packages and jars | X | X |
| Synopsis Data Engine for Approximate Querying | X | X |
| ODBC Driver with High Concurrency | X | X |
| Off-heap data storage for column tables | X | X |
| CDC Stream receiver for SQL Server into SnappyData | X | X |
| GemFire/Apache Geode connector | | X |
| Row Level Security | X | X |
| Use encrypted password instead of clear text password | X | X |
| Restrict Table, View, Function creation even in user’s own schema | X | X |
| LDAP security interface | X | X |
| Feature | Available |
| ------- | --------- |
|Mutable Row and Column Store | X |
|Compatibility with Spark | X |
|Shared Nothing Persistence and HA | X |
|REST API for Spark Job Submission | X |
|Fault Tolerance for Driver | X |
|Access to the system using JDBC Driver | X |
|CLI for backup, restore, and export data | X |
|Spark console extensions | X |
|System Performance/behavior statistics | X |
|Support for transactions in Row tables | X |
|Support for indexing in Row tables | X |
|Support for snapshot transactions in Column tables | X |
|Online compaction of column block data | X |
|Transparent disk overflow of large query results | X |
|Support for external Hive meta store | X |
|SQL extensions for stream processing | X |
|SnappyData sink for structured stream processing | X |
|Structured Streaming user interface | X |
|Runtime deployment of packages and jars | X |
|Scala code execution from SQL (EXEC SCALA) | X |
|Out of the box support for cloud storage | X |
|Support for Hadoop 3.2 | X |
|SnappyData Interpreter for Apache Zeppelin | X |
|Synopsis Data Engine for Approximate Querying | X |
|Support for Synopsis Data Engine from TIBCO Spotfire® | X |
|ODBC Driver with High Concurrency | X |
|Off-heap data storage for column tables | X |
|CDC Stream receiver for SQL Server into SnappyData | X |
|Row Level Security | X |
|Use encrypted password instead of clear text password | X |
|Restrict Table, View, Function creation even in user’s own schema | X |
|LDAP security interface | X |
|Visual Statistics Display (VSD) tool for system statistics (gfs) files(*) | |
|GemFire connector | |

(*) NOTE: The graphical Visual Statistics Display (VSD) tool to see the system statistics (gfs) files is not OSS
and was never shipped with SnappyData. It is available from [GemTalk Systems](https://gemtalksystems.com/products/vsd/)
or [Pivotal GemFire](https://network.pivotal.io/products/pivotal-gemfire) under their own respective licenses.
6 changes: 4 additions & 2 deletions docs/best_practices/odbc_jdbc_clients.md
@@ -1,5 +1,7 @@
# ODBC and JDBC Clients

When using JDBC or ODBC clients, applications must close the ResultSet that is consumed or consume a FORWARD_ONLY ResultSet completely. These ResultSets can keep tables open and thereby block any DDL executions. If the cursor used by ResultSet remains open, then the DDL executions gets timeout.
When using JDBC or ODBC clients, applications must close the ResultSet once it is consumed or consume a FORWARD_ONLY ResultSet completely. These ResultSets can keep tables open and thereby block any DDL executions. If the cursor used by a ResultSet remains open, then DDL executions may time out.

Such intermittent ResultSets are eventually cleaned up by the product, but that happens only in a garbage collection (GC) cycle where JVM cleans the weak references corresponding to those ResultSets. Although, this process can take an indeterminate amount of time.
Such intermittent ResultSets are eventually cleaned up by the product, but that happens only in a garbage collection (GC) cycle where JVM cleans the weak references corresponding to those ResultSets.
However, this process can take an indeterminate amount of time, so it is recommended that users clean up ResultSets,
Statements, and other JDBC/ODBC constructs immediately after use (using try-with-resources in Java or the equivalent in other languages).
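To make the recommendation concrete, here is a small sketch of prompt cleanup over the SnappyData JDBC driver, written in Scala with try/finally standing in for Java's try-with-resources; the locator address, table, and columns are assumptions:

```scala
import java.sql.DriverManager

// assumes a SnappyData locator on localhost:1527 and a table app.customers(id, name)
val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
try {
  val stmt = conn.createStatement()
  try {
    val rs = stmt.executeQuery("SELECT id, name FROM app.customers")
    try {
      while (rs.next()) {
        println(s"${rs.getLong(1)} -> ${rs.getString(2)}")
      }
    } finally {
      rs.close() // close the ResultSet immediately so its open cursor cannot block DDLs
    }
  } finally {
    stmt.close()
  }
} finally {
  conn.close()
}
```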
2 changes: 1 addition & 1 deletion docs/best_practices/transactions_best_practices.md
@@ -3,7 +3,7 @@
<a id="snapshot-bestpractise"></a>
## Using Transactions

- For high performance, mimimize the duration of transactions to avoid conflicts with other concurrent transactions. If atomicity for only single row updates is required, then completely avoid using transactions because SnappyData provides atomicity and isolation for single rows without transactions.
- For high performance, minimize the duration of transactions to avoid conflicts with other concurrent transactions. If atomicity for only single row updates is required, then completely avoid using transactions because SnappyData provides atomicity and isolation for single rows without transactions.

- When using transactions, keep the number of rows involved in the transaction as low as possible. SnappyData acquires locks eagerly, and long-lasting transactions increase the probability of conflicts and transaction failures. Avoid transactions for large batch update statements or statements that affect a lot of rows. A brief JDBC sketch of both points follows below.

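A short sketch over JDBC illustrating both points above; the table, columns, and key values are hypothetical:

```scala
import java.sql.{DriverManager, SQLException}

val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
val stmt = conn.createStatement()

// a single-row update needs no explicit transaction: autocommit already gives
// atomicity and isolation for the one row
stmt.executeUpdate("UPDATE app.accounts SET balance = balance - 100 WHERE id = 1")

// a multi-row change should use the shortest, smallest transaction possible
conn.setAutoCommit(false)
try {
  stmt.executeUpdate("UPDATE app.accounts SET balance = balance - 100 WHERE id = 1")
  stmt.executeUpdate("UPDATE app.accounts SET balance = balance + 100 WHERE id = 2")
  conn.commit()
} catch {
  case e: SQLException =>
    conn.rollback()
    throw e
} finally {
  stmt.close()
  conn.close()
}
```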
8 changes: 6 additions & 2 deletions docs/connectors/tdv.md
@@ -33,9 +33,13 @@ SnappyData edition 1.3.0 is tested and works with TIBCO Data Virtualization 8.2.
For example:
snappy>deploy jar dv-jar '/snappydata/snappy-connectors/tdv-connector/lib/csjdbc8.jar';


!!!Note
The above jar may not be available in the SnappyData Community edition.
In that case find and copy it from your TDV installation.

!!!Note
You should execute this command only once when you connect to the TDV cluster for the first time.
You should execute this command only once when you connect to the TDV cluster for the first time.

8. Create an external table with JDBC options:

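As a rough sketch of step 8, an external table over the TDV JDBC driver might be created as follows; the endpoint, published data source, driver class string, and credentials are placeholders, so take the exact URL format and driver class from your TDV installation's JDBC documentation:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

val spark = SparkSession.builder().appName("tdv-external-table").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// hypothetical TDV endpoint, published data source, and credentials
snappy.sql(
  """CREATE EXTERNAL TABLE tdv_orders USING jdbc OPTIONS (
    |  url 'jdbc:compositesw:dbapi@tdv-host:9401?domain=composite&dataSource=orders_ds',
    |  driver 'cs.jdbc.driver.CompositeDriver',
    |  dbtable 'ORDERS',
    |  user 'tdv_user',
    |  password 'tdv_password')""".stripMargin)

snappy.sql("SELECT count(*) FROM tdv_orders").show()
```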
8 changes: 0 additions & 8 deletions docs/experimental.md
@@ -6,11 +6,3 @@ SnappyData 1.3.0 provides the following features on an experimental basis. These
You can enable authorization of external tables by setting the system property **CHECK_EXTERNAL_TABLE_AUTHZ** to true when the cluster's security is enabled.
System admin or the schema owner can grant or revoke the permissions on external tables to other users.
For example: `GRANT ALL ON <external-table> to <user>;`


## Support ad-hoc, Interactive Execution of Scala code
You can execute Scala code using a new CLI script **snappy-scala** that is built with IJ APIs. You can also run it as an SQL command using prefix **exec scala**.
The Scala code can use any valid/supported Spark API for example, to carry out custom data loading/transformations or to launch a structured streaming job. Since the code is submitted as an SQL command, you can now also use any SQL tool (based on JDBC/ODBC), including Notebook environments, to execute ad-hoc code blocks directly. Prior to this feature, apps were required to use the smart connector or use the SnappyData specific native Zeppelin interpreter.
**exec scala** command can be secured using the SQL GRANT/REVOKE permissions. System admin (DB owner) can grant or revoke permissions for Scala interpreter privilege.

For more information refer to [Executing Spark Scala Code using SQL](/programming_guide/scala_interpreter.md)
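Since EXEC SCALA is no longer experimental, here is a rough sketch of invoking it from a plain JDBC client; the table name is hypothetical, and the exact command options are described in the scala_interpreter guide linked above:

```scala
import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
val stmt = conn.createStatement()
try {
  // the code after "exec scala" runs on the lead node inside the SnappyData
  // interpreter, which exposes a predefined SnappySession (per the guide)
  val hasResults = stmt.execute(
    "exec scala println(snappy.table(\"app.customers\").count)")
  if (hasResults) {
    val rs = stmt.getResultSet
    while (rs.next()) println(rs.getString(1)) // interpreter console output, if returned
  }
} finally {
  stmt.close()
  conn.close()
}
```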
6 changes: 3 additions & 3 deletions docs/howto/connect_using_odbc_driver.md
@@ -22,9 +22,9 @@ To download and install the ODBC driver:
2. Extract **snappydata-odbc_1.3.0_win.zip**. Depending on your Windows installation, extract the contents of the 32-bit or 64-bit version of the SnappyData ODBC Driver.

| Version | ODBC Driver |
|--------|--------|
|32-bit for 32-bit platform|TIB_compute-odbc_1.2.0_win_x86.zip|
|64-bit for 64-bit platform|TIB_compute-odbc_1.2.0_win_x64.zip|
|---------|-------------|
|32-bit for 32-bit platform|snappydata-odbc_1.3.0_win_x86.msi|
|64-bit for 64-bit platform|snappydata-odbc_1.3.0_win_x64.msi|

4. Double-click on the corresponding **msi** file, and follow the steps to complete the installation.

18 changes: 5 additions & 13 deletions docs/install.md
@@ -1,14 +1,12 @@
# Provisioning SnappyData

SnappyData offers two editions of the product:

* **Community Edition**
* **Enterprise Edition**

The SnappyData **Community Edition** is Apache 2.0 licensed. It is a free, open-source version of the product that can be downloaded by anyone.
The **Enterprise Edition** of the product, which is sold by TIBCO Software under the name **TIBCO ComputeDB™**, includes everything that is offered in the Community Edition along with additional capabilities that are closed source and only available as part of a licensed subscription.
The erstwhile **Enterprise Edition** of the product, which was sold by TIBCO Software under the name **TIBCO ComputeDB™**, included everything that is offered in the Community Edition along with additional capabilities that were closed source and only available as part of a licensed subscription.

As of the 1.3.0 release, all components that were previously closed source are now OSS (except for the
GemFire connector), and there is only the **Community Edition** that is released.

For more information on the capabilities of the Community Edition and Enterprise Edition, see [Community Edition (Open Source)/Enterprise Edition components](additional_files/open_source_components.md).
For more information on the capabilities of the Community Edition and differences from the previous Enterprise Edition, see [Community Edition (Open Source)](additional_files/open_source_components.md).

<a id= download> </a>
<heading2>Download SnappyData Community Edition</heading2>
@@ -19,12 +17,6 @@ Download the [SnappyData 1.3.0 Community Edition (Open Source)](https://github.c
* [**SnappyData 1.3.0 Release download link**](https://github.com/TIBCOSoftware/snappydata/releases/download/v1.3.0/snappydata-1.3.0-bin.tar.gz)


<!---
<heading2>Download SnappyData Enterprise Edition</heading2>
You can download the Enterprise Edition from [TIBCO eDelivery website](https://edelivery.tibco.com).
--->

<a id= provisioningsnappy> </a>
<heading2>SnappyData Provisioning Options</heading2>
