Commit
Merge branch 'develop' of github.com:Avaiga/taipy-doc into develop
Fabien Lelaquais committed Nov 6, 2023
2 parents b3c8c5e + 9ae1e18 commit ee599f9
Showing 9 changed files with 249 additions and 33 deletions.
17 changes: 17 additions & 0 deletions docs/manuals/core/basic_examples/index.md
@@ -66,6 +66,23 @@ strongly recommend using it.

In line 4, we simply instantiate and run a Core service.

!!! warning "Core service should be run only once"

Only one Core service instance should be running at any given time. If a Core service
is already running, attempting to run another one raises an exception.

To stop a Core service instance, you can use the `stop()` method.

```python linenums="1"
from taipy import Core

if __name__ == "__main__":
    core = Core()
    core.run()
    ...
    core.stop()
```

## Creating Scenarios and accessing data

Now you can create and manage *Scenarios*, submit the graph of *Tasks* for execution, and access
Expand Down
5 changes: 2 additions & 3 deletions docs/manuals/core/config/advanced-config.md
@@ -120,11 +120,10 @@ the default configuration applies if some values are not provided.
import pandas as pd

def write_orders_plan(data: pd.DataFrame):
-    insert_data = list(
-        data[["date", "product_id", "number_of_products"]].itertuples(index=False, name=None))
    insert_data = data[["date", "product_id", "number_of_products"]].to_dict("records")
    return [
        "DELETE FROM orders",
-        ("INSERT INTO orders VALUES (?, ?, ?)", insert_data)
        ("INSERT INTO orders VALUES (:date, :product_id, :number_of_products)", insert_data)
    ]

def train(sales_history: pd.DataFrame):
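For context on the change above: `to_dict("records")` converts the selected columns into a list
with one dictionary per row, whose keys match the named placeholders (`:date`, `:product_id`,
`:number_of_products`) in the rewritten INSERT statement. A minimal standalone sketch, not part
of the commit (the sample rows are illustrative):

```python
import pandas as pd

data = pd.DataFrame(
    [
        {"date": "01/08/2019", "product_id": 1, "number_of_products": 450},
        {"date": "01/08/2019", "product_id": 3, "number_of_products": 320},
    ]
)

# One dict per row, keyed by column name: the shape that named SQL
# placeholders such as :date and :product_id consume.
insert_data = data[["date", "product_id", "number_of_products"]].to_dict("records")
print(insert_data)
# [{'date': '01/08/2019', 'product_id': 1, 'number_of_products': 450},
#  {'date': '01/08/2019', 'product_id': 3, 'number_of_products': 320}]
```

Unlike positional `?` placeholders, the named style keeps working even if the column order of
the DataFrame changes.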
@@ -2,11 +2,10 @@
import pandas as pd

def write_query_builder(data: pd.DataFrame):
-    insert_data = list(
-        data[["date", "nb_sales"]].itertuples(index=False, name=None))
    insert_data = data[["date", "nb_sales"]].to_dict("records")
    return [
        "DELETE FROM sales",
-        ("INSERT INTO sales VALUES (?, ?)", insert_data)
        ("INSERT INTO sales VALUES (:date, :nb_sales)", insert_data)
    ]

sales_history_cfg = Config.configure_sql_data_node(
@@ -2,14 +2,13 @@
import pandas as pd

def write_query_builder(data: pd.DataFrame):
-    insert_data = list(
-        data[["date", "nb_sales"]].itertuples(index=False, name=None))
    insert_data = data[["date", "nb_sales"]].to_dict("records")
    return [
        "DELETE FROM sales",
-        ("INSERT INTO sales VALUES (?, ?)", insert_data)
        ("INSERT INTO sales VALUES (:date, :nb_sales)", insert_data)
    ]

-sales_history_cfg = Config.configure_sql_table_data_node(
sales_history_cfg = Config.configure_sql_data_node(
    id="sales_history",
    db_name="taipy",
    db_engine="sqlite",
4 changes: 2 additions & 2 deletions docs/manuals/core/entities/code_example/my_config.py
@@ -6,8 +6,8 @@


def write_orders_plan(data: pd.DataFrame):
-    insert_data = list(data[["date", "product_id", "number_of_products"]].itertuples(index=False, name=None))
-    return ["DELETE FROM orders", ("INSERT INTO orders VALUES (?, ?, ?)", insert_data)]
    insert_data = data[["date", "product_id", "number_of_products"]].to_dict("records")
    return ["DELETE FROM orders", ("INSERT INTO orders VALUES (:date, :product_id, :number_of_products)", insert_data)]


def train(sales_history: pd.DataFrame):
206 changes: 189 additions & 17 deletions docs/manuals/core/entities/data-node-mgt.md
@@ -763,9 +763,9 @@ execute a list of queries returned by the query builder:
```python
data = pandas.DataFrame(
    [
-        {"date": "01/08/2019", "product_id": 1 "number_of_products": 450},
-        {"date": "01/08/2019", "product_id": 3 "number_of_products": 320},
-        {"date": "01/08/2019", "product_id": 4 "number_of_products": 350},
        {"date": "01/08/2019", "product_id": 1, "number_of_products": 450},
        {"date": "01/08/2019", "product_id": 3, "number_of_products": 320},
        {"date": "01/08/2019", "product_id": 4, "number_of_products": 350},
    ]
)
@@ -1163,29 +1163,201 @@ Correspondingly, In memory data node can write any data object that is valid dat
It is also possible to partially read the contents of data nodes, which comes in handy when dealing
with large amounts of data.
This can be achieved by providing an operator, a Tuple of (*field_name*, *value*, *comparison_operator*),
-or a list of operators to the `DataNode.filter()^` method:
or a list of operators to the `DataNode.filter()^` method.

-```python linenums="1"
-data_node.filter(
-    [("field_name", 14, Operator.EQUAL), ("field_name", 10, Operator.EQUAL)],
-    JoinOperator.OR
-)
-```

Assume that the content of the data node can be represented by the following table.

!!! example "Data sample"

| date | nb_sales |
|------------|----------|
| 12/24/2018 | 1550 |
| 12/25/2018 | 2315 |
| 12/26/2018 | 1832 |

In the following example, the `DataNode.filter()^` method will return all the records from the data node
where the value of the "nb_sales" field is equal to 1550.
The following examples show the results when the data node is read with different _exposed_type_ values:

```python
filtered_data = data_node.filter(("nb_sales", 1550, Operator.EQUAL))
```

!!! example "The value of `filtered_data` where 'nb_sales' is equal to 1550"

=== "exposed_type = "pandas""

```python
pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
)
```

=== "exposed_type = "modin""

```python
modin.pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
)
```

=== "exposed_type = "numpy""

```python
numpy.array([
["12/24/2018", "1550"]
])
```

=== "exposed_type = SaleRow"
```python
[SaleRow("12/24/2018", 1550)]
```

If a list of operators is provided, it is necessary to provide a join operator that will be
-used to combine the filtered results from the operators.
used to combine the filtered results from the operators. The default join operator is `JoinOperator.AND`.

-It is also possible to use pandas style filtering:
In the following example, the `DataNode.filter()^` method will return all the records from the data node
where the value of the "nb_sales" field is greater than or equal to 1000 and less than 2000.
The following examples show the results when the data node is read with different _exposed_type_ values:

-```python linenums="1"
-temp_data = data_node["field_name"]
-temp_data[(temp_data == 14) | (temp_data == 10)]
-```

```python
filtered_data = data_node.filter(
    [("nb_sales", 1000, Operator.GREATER_OR_EQUAL), ("nb_sales", 2000, Operator.LESS_THAN)]
)
```

-!!! warning
!!! example "The value of `filtered_data` where 'nb_sales' is greater than or equal to 1000 and less than 2000"

=== "exposed_type = "pandas""

```python
pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
1 12/26/2018 1832
)
```

=== "exposed_type = "modin""

```python
modin.pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
1 12/26/2018 1832
)
```

=== "exposed_type = "numpy""

```python
numpy.array(
[
["12/24/2018", "1550"],
["12/26/2018", "1832"]
]
)
```

=== "exposed_type = SaleRow"
```python
[
SaleRow("12/24/2018", 1550),
SaleRow("12/26/2018", 1832),
]
```

In another example, the `DataNode.filter()^` method will return all the records from the data node
where the value of the "nb_sales" field is equal to 1550 or greater than 2000.
The following examples show the results when the data node is read with different _exposed_type_ values:

```python
filtered_data = data_node.filter(
    [("nb_sales", 1550, Operator.EQUAL), ("nb_sales", 2000, Operator.GREATER_THAN)],
    JoinOperator.OR,
)
```

!!! example "The value of `filtered_data` where 'nb_sales' is equal to 1550 or greater than 2000"

=== "exposed_type = "pandas""

```python
pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
1 12/25/2018 2315
)
```

=== "exposed_type = "modin""

```python
modin.pandas.DataFrame
(
date nb_sales
0 12/24/2018 1550
1 12/25/2018 2315
)
```

=== "exposed_type = "numpy""

```python
numpy.array(
[
["12/24/2018", "1550"],
["12/25/2018", "2315"],
]
)
```

=== "exposed_type = SaleRow"
```python
[
SaleRow("12/24/2018", 1550),
SaleRow("12/25/2018", 2315),
]
```

With a Pandas or Modin data frame as the exposed type, it is also possible to use pandas-style
indexing and filtering:

```python
sale_data = data_node["nb_sales"]
filtered_data = data_node[(data_node["nb_sales"] == 1550) | (data_node["nb_sales"] > 2000)]
```

Similarly, with a NumPy array as the exposed type, it is possible to use NumPy-style indexing
and filtering:

```python
sale_data = data_node[:, 1]
filtered_data = data_node[(data_node[:, 1] == 1550) | (data_node[:, 1] > 2000)]
```

!!! warning "Supported data types"

For now, the `DataNode.filter()^` method and the indexing/filtering style are only implemented
for data represented as:

- a Pandas or Modin data frame,
- a Numpy array,
- a list of objects,
- a list of dictionaries.

Other data types are not supported.

-For now, the `DataNode.filter()^` method is only implemented for `CSVDataNode^`, `ExcelDataNode^`,
-`SQLTableDataNode^`, `SQLDataNode` with `"pandas"` as the _**exposed_type**_ value.
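Tying the filtering examples above together, here is a short end-to-end sketch. It assumes that
the `Operator` and `JoinOperator` enums can be imported from `taipy.core.data.operator` (an
assumption about taipy-core's layout, not something this commit states) and reuses the
`data_node` from the examples above:

```python
from taipy.core.data.operator import JoinOperator, Operator

# Single condition: one (field_name, value, comparison_operator) tuple.
exact_sales = data_node.filter(("nb_sales", 1550, Operator.EQUAL))

# Several conditions: a list of tuples plus a join operator.
# JoinOperator.AND is the default when no join operator is given.
exact_or_peak = data_node.filter(
    [("nb_sales", 1550, Operator.EQUAL), ("nb_sales", 2000, Operator.GREATER_THAN)],
    JoinOperator.OR,
)
```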

# Get parent scenarios, sequences and tasks

4 changes: 2 additions & 2 deletions docs/manuals/core/scheduling/code_example/my_config.py
@@ -6,8 +6,8 @@


def write_orders_plan(data: pd.DataFrame):
-    insert_data = list(data[["date", "product_id", "number_of_products"]].itertuples(index=False, name=None))
-    return ["DELETE FROM orders", ("INSERT INTO orders VALUES (?, ?, ?)", insert_data)]
    insert_data = data[["date", "product_id", "number_of_products"]].to_dict("records")
    return ["DELETE FROM orders", ("INSERT INTO orders VALUES (:date, :product_id, :number_of_products)", insert_data)]


def train(sales_history: pd.DataFrame):
15 changes: 13 additions & 2 deletions docs/manuals/core/versioning/development_mode.md
@@ -41,5 +41,16 @@ The output on the console indicates that all entities of the development version
In a Notebook environment, development mode is applied by default when the run method of
the Core service is called.

-This means all entities of the development version are cleaned every time `Core().run()` is invoked
-in a code cell.
This means all entities of the development version are cleaned every time a Core service is
run in a code cell.

To run and stop a Core service instance, you can use the `run()` and `stop()` methods.

```python linenums="1"
from taipy import Core

core = Core()
core.run()
...
core.stop()
```
19 changes: 19 additions & 0 deletions docs/relnotes.md
@@ -17,10 +17,29 @@ This is the list of changes to Taipy releases as they were published.
If you are using a legacy version, please refer to the
[Legacy Release Notes](relnotes-legacy.md) page.


# Community edition: 3.1

(Work in progress)

## New Features

## Improvements and changes

<h4><strong><code>taipy-core</code></strong> 3.1.0 </h4>

- Running the Core service more than once will raise an exception, to prevent multiple
  instances of the Core service from running at the same time.
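
  In practice, code that may start the service twice should guard the second call. A hedged
  sketch of the new behavior (the exact exception type is not named in this commit, so a
  generic `Exception` is caught):

  ```python
  from taipy import Core

  core = Core()
  core.run()
  try:
      Core().run()  # a second Core service now raises instead of silently starting
  except Exception as exc:
      print(f"Refused to start a second Core service: {exc}")
  finally:
      core.stop()
  ```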

## Significant bug fixes

<h4><strong><code>taipy-core</code></strong> 3.1.0 </h4>

- Cannot write to a SQLDataNode or a SQLTableDataNode using the examples provided by the
  documentation.<br/>
  See [issue #816](https://github.com/Avaiga/taipy-core/issues/816).


# Community edition: 3.0

Published on 2023-10.
