Introducing Data Wrangling with Polars #26

Open · wants to merge 31 commits into main

Commits (31)
bca2d8f
polars dir created
koushik-ta Feb 8, 2025
cd95559
Merge branch 'marimo-team:main' into feat/issue#18/polars-data-wrangling
koushikkhan Feb 8, 2025
24c07d4
updated why_polars
koushikkhan Feb 8, 2025
1f12cee
deleted layouts dir
koushikkhan Feb 8, 2025
173b025
added readme for polars
koushikkhan Feb 8, 2025
3f17228
notebook indexing updated
koushikkhan Feb 8, 2025
3bdaf96
updated visual aspects
koushikkhan Feb 9, 2025
fb175fb
Update polars/01_why_polars.py
koushikkhan Feb 10, 2025
7be0656
Updated section header - Intuitive syntax
koushikkhan Feb 10, 2025
912d45e
updated text under intuitive syntax
koushikkhan Feb 10, 2025
41a51b5
keeping only code cell for examples
koushikkhan Feb 10, 2025
e7ecc90
updated text under introduction
koushikkhan Feb 10, 2025
1c1351f
updated text under why polars
koushikkhan Feb 10, 2025
2b50c4f
updated text before showing examples
koushikkhan Feb 10, 2025
070c0c7
Updated section header - Choosing Polars over Pandas
koushikkhan Feb 10, 2025
9a474f5
updated text for intro
koushikkhan Feb 10, 2025
4c8f59f
keeping only code cell for examples - polars
koushikkhan Feb 10, 2025
67c85ae
simplifying textual description
koushikkhan Feb 10, 2025
40bbf43
keeping only code cell for examples
koushikkhan Feb 10, 2025
1a5601d
updating section header - A large collection of built-in APIs
koushikkhan Feb 10, 2025
f2c1d1b
updated text under - A large collection of build-in APIs
koushikkhan Feb 10, 2025
15a2dfa
updated section header - Query optimization
koushikkhan Feb 10, 2025
a7e90d2
updated section header - Scalability — handling large datasets in memory
koushikkhan Feb 10, 2025
9ae3f7e
updated textual description
koushikkhan Feb 10, 2025
794f008
updated section header - Compatibility with other machine learning li…
koushikkhan Feb 10, 2025
8bb057d
updated section header - Easy to use, with room for power users
koushikkhan Feb 10, 2025
00e8b42
updated section header - Why not PySpark?
koushikkhan Feb 10, 2025
a664014
updated textual description under - Why not PySpark?
koushikkhan Feb 10, 2025
64ad03e
updated reference description
koushikkhan Feb 10, 2025
e9c1403
updated textual description under introduction
koushikkhan Feb 10, 2025
91124fd
updated reference header
koushikkhan Feb 10, 2025
299 changes: 299 additions & 0 deletions polars/01_why_polars.py
@@ -0,0 +1,299 @@
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "marimo",
# "pandas==2.2.3",
# "polars==1.22.0",
# ]
# ///

import marimo

__generated_with = "0.11.0"
app = marimo.App(width="medium")


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    mo.md(
        """
# An introduction to Polars

This notebook provides a bird's-eye overview of [Polars](https://pola.rs/), a fast and user-friendly data manipulation library for Python, and compares it to alternatives like Pandas and PySpark.

Like Pandas and PySpark, the central data structure in Polars is **the DataFrame**, a tabular data structure consisting of named columns. For example, the next cell constructs a DataFrame that records the gender, age, and height in centimeters for a number of individuals.

<INSERT CODE CELL>
Contributor
The suggestion was to split this markdown into two cells, and insert a block of Python in between that creates a DataFrame (for example, the gender, age, and height dataframe) which is reused in subsequent cells. Perhaps

    import polars as pl

    df_pl = pl.DataFrame(
        { 
            "gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                       "Male", "Female", "Male", "Female"],
            "age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
            "height_cm": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
        }
    )
    df_pl


Unlike Pandas, Python's earliest DataFrame library, Polars was designed with performance and usability in mind: it scales to large datasets with ease while maintaining a simple and intuitive API.

Polars' performance is due to a number of factors, including its implementation in Rust and its ability to perform operations in a parallelized and vectorized manner. It supports a wide range of data types, advanced query optimizations, and seamless integration with other Python libraries, making it a versatile tool for data scientists, engineers, and analysts. Additionally, Polars provides a lazy API for deferred execution, allowing users to optimize their workflows by chaining operations and executing them in a single pass.

With its focus on speed, scalability, and ease of use, Polars is quickly becoming a go-to choice for data professionals looking to streamline their data processing pipelines and tackle large-scale data challenges.
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
## Choosing Polars over Pandas

In this section we'll give a few reasons why Polars is a better choice than Pandas, along with examples.
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
### Intuitive syntax

Polars' syntax is similar to PySpark's and as intuitive as SQL's, making heavy use of **method chaining**. This makes it easy for data professionals to transition to Polars, and it leads to code that is more concise and readable than the Pandas equivalent.

**Example.** In the next few cells, we contrast the Pandas code for a basic filter and aggregation with the Polars code required to accomplish the same task.
        """
    )
    return


@app.cell
def _():
    import pandas as pd

    df_pd = pd.DataFrame(
        {
            "Gender": ["Male", "Female", "Male", "Female", "Male", "Female",
Contributor
nit: for consistency throughout future notebooks, let's use lowercase keys: "gender", "height_cm".

Author
Alright, will continue with lowercase letters when defining column names.

"Male", "Female", "Male", "Female"],
"Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
"Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
}
)

# query: average height of male and female after the age of 15 years

# step-1: filter
filtered_df_pd = df_pd[df_pd["Age"] > 15]

# step-2: groupby and aggregation
result_pd = filtered_df_pd.groupby("Gender")["Height_CM"].mean()
result_pd
return df_pd, filtered_df_pd, pd, result_pd


@app.cell
def _(mo):
    mo.md(
        r"""
The same example can be worked out in Polars more concisely, using method chaining. Notice how the Polars code is essentially as readable as English.
        """
    )
    return


@app.cell
def _():
    import polars as pl

    df_pl = pl.DataFrame(
        {
            "Gender": ["Male", "Female", "Male", "Female", "Male", "Female",
                       "Male", "Female", "Male", "Female"],
            "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
            "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
        }
    )

    # query: average height of male and female after the age of 15 years

    # filter, groupby and aggregation using method chaining
    result_pl = df_pl.filter(pl.col("Age") > 15).group_by("Gender").agg(pl.mean("Height_CM"))
    result_pl
    return df_pl, pl, result_pl


@app.cell
def _(mo):
    mo.md(
        """
Notice how Polars uses a *method-chaining* approach, similar to PySpark, which makes the code more readable and expressive while expressing the whole query in a *single line*.

Additionally, Polars *natively* supports SQL-like operations, which lets you write SQL queries directly against a Polars DataFrame:
        """
    )
    return


@app.cell
def _(df_pl):
    result = df_pl.sql("SELECT Gender, AVG(Height_CM) FROM self WHERE Age > 15 GROUP BY Gender")
    result
    return (result,)


@app.cell
def _(mo):
    mo.md(
        """
### A large collection of built-in APIs

Polars has a comprehensive API that enables you to perform virtually any operation using built-in methods. In contrast, Pandas often requires more complex operations to be handled with the `apply` method and a lambda function. The issue with `apply` is that it processes rows sequentially, looping through the DataFrame one row at a time, which can be inefficient. By leveraging Polars' built-in methods, you can operate on entire columns at once, unlocking the power of **SIMD (Single Instruction, Multiple Data)** parallelism. This approach not only simplifies your code but also significantly improves performance, as the sketch below illustrates.
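
Below is a minimal sketch of the difference, reusing the gender/age/height data from the earlier cells; the `age_group` column and the cutoff at age 18 are made up for illustration:

```python
# Pandas: a row-wise apply runs Python code once per row
df_pd["age_group"] = df_pd.apply(
    lambda row: "adult" if row["Age"] >= 18 else "minor", axis=1
)

# Polars: a built-in, vectorized expression operates on the whole column at once
df_pl = df_pl.with_columns(
    pl.when(pl.col("Age") >= 18)
    .then(pl.lit("adult"))
    .otherwise(pl.lit("minor"))
    .alias("age_group")
)
```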
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
### Query optimization 📈

A key factor behind Polars' performance lies in its **evaluation strategy**. While Pandas supports only **eager execution**, executing operations in the exact order they are written, Polars offers both **eager and lazy execution**. With lazy execution, Polars employs a **query optimizer** that analyzes all required operations and determines the most efficient way to execute them. This optimization can involve reordering operations, eliminating redundant calculations, and more.

For example, consider the following expression to calculate the mean of the `Number1` column for categories "A" and "B" in the `Category` column:

```python
(
    df
    .group_by("Category").agg(pl.col("Number1").mean())
    .filter(pl.col("Category").is_in(["A", "B"]))
)
```

If executed eagerly, the `group_by` operation would first be applied to the entire DataFrame, followed by filtering the results by `Category`. However, with **lazy execution**, Polars can optimize this process by first filtering the DataFrame to include only the relevant categories ("A" and "B") and then performing the `group_by` operation on the reduced dataset. This approach minimizes unnecessary computations and significantly improves efficiency.
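
To see this in action, you can build the same query lazily and inspect the plan the optimizer produces before anything is executed. This is a sketch against the same hypothetical `df` with `Category` and `Number1` columns:

```python
lazy_query = (
    df.lazy()
    .group_by("Category").agg(pl.col("Number1").mean())
    .filter(pl.col("Category").is_in(["A", "B"]))
)

print(lazy_query.explain())    # optimized plan; the filter is typically pushed down before the aggregation
result = lazy_query.collect()  # nothing is computed until collect() is called
```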
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
### Scalability — handling large datasets in memory ⬆️

Pandas is limited by its single-threaded design and reliance on Python, which makes it inefficient for processing large datasets. Polars, on the other hand, is built in Rust and optimized for parallel processing, enabling it to handle datasets that are orders of magnitude larger.

**Example: Processing a Large Dataset**
In Pandas, eagerly loading a large dataset (e.g., 10 GB) can exhaust the available memory:

```python
# This may fail with large datasets
df = pd.read_csv("large_dataset.csv")
```

In Polars, the same eager read is typically faster and more memory-efficient, thanks to its multithreaded, Rust-based CSV reader:

```python
df = pl.read_csv("large_dataset.csv")
```

Polars also supports lazy evaluation, which allows you to optimize your workflows by deferring computations until necessary. This is particularly useful for large datasets:

```python
df = pl.scan_csv("large_dataset.csv") # Lazy DataFrame
result = df.filter(pl.col("A") > 1).group_by("A").agg(pl.sum("B")).collect()  # executed only when collect() is called
```
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
### Compatibility with other machine learning libraries 🤝

Polars integrates well with popular machine learning libraries such as scikit-learn, PyTorch, and TensorFlow. Its ability to handle large datasets efficiently makes it an excellent choice for preprocessing data before feeding it into ML models.

**Example: Preprocessing Data for Scikit-learn**

```python
import polars as pl
from sklearn.linear_model import LinearRegression

# Load and preprocess data
df = pl.read_csv("data.csv")
X = df.select(["feature1", "feature2"]).to_numpy()
y = df.select("target").to_numpy()

# Train a model
model = LinearRegression()
model.fit(X, y)
```

Polars also supports conversion to other formats, such as NumPy arrays and Pandas DataFrames, which makes it compatible with most ML libraries:

```python
# Convert to Pandas DataFrame
pandas_df = df.to_pandas()

# Convert to NumPy array
numpy_array = df.to_numpy()
```
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
### Easy to use, with room for power users

Polars supports advanced operations like

- **date handling**
- **window functions**
- **joins**
- **nested data types**

making it a versatile tool for data manipulation. The sketch below combines a couple of these features.
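
Here is a minimal, self-contained sketch (with made-up sales data) that combines date handling with a window function:

```python
import polars as pl
from datetime import date

sales = pl.DataFrame(
    {
        "day": [date(2025, 1, 1), date(2025, 1, 2), date(2025, 1, 3), date(2025, 1, 4)],
        "store": ["A", "A", "B", "B"],
        "revenue": [100.0, 120.0, 90.0, 110.0],
    }
)

sales.with_columns(
    pl.col("day").dt.weekday().alias("weekday"),                          # date handling
    pl.col("revenue").cum_sum().over("store").alias("running_revenue"),   # window function
)
```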
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
## Why not PySpark?

While **PySpark** is a versatile tool that has transformed the way big data is handled and processed in Python, its **complex setup process** can be intimidating, especially for beginners. In contrast, **Polars** requires minimal setup and is ready to use right out of the box, making it more accessible to users of all skill levels.

When deciding between the two, **PySpark** is the preferred choice for processing large datasets distributed across a **multi-node cluster**. However, for computations on a **single-node machine**, **Polars** is an excellent alternative. Remarkably, Polars is capable of handling datasets that exceed the size of the available RAM, as sketched below, making it a powerful tool for efficient data processing even on limited hardware.
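
For instance, Polars' lazy engine can stream over a file that does not fit in memory. The snippet below is a sketch: the file and column names are made up, and on newer Polars releases the exact flag for streaming collection may differ:

```python
result = (
    pl.scan_csv("larger_than_ram.csv")   # lazily scan the file; nothing is loaded yet
    .group_by("category")
    .agg(pl.col("amount").sum())
    .collect(streaming=True)             # process the file in batches instead of all at once
)
```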
"""
)
return


@app.cell
def _(mo):
    mo.md(
        """
## 🔖 References

- [Polars official website](https://pola.rs/)
- [Polars vs. Pandas](https://blog.jetbrains.com/pycharm/2024/07/polars-vs-pandas/)
        """
    )
    return


if __name__ == "__main__":
    app.run()
9 changes: 9 additions & 0 deletions polars/README.md
@@ -0,0 +1,9 @@
# Learn Polars

This collection of marimo notebooks is designed to teach you the basics of data wrangling using a Python library called Polars.

**Running notebooks.** To run a notebook locally, use

```bash
uvx marimo edit <file_url>
```
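
For example, to open the first notebook in this collection from a local clone of the repository:

```bash
uvx marimo edit polars/01_why_polars.py
```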