Monday, December 15, 2025

5 Lightweight Alternatives to Pandas You Should Try


Image by Author

 

Introduction

 
Developers use pandas for data manipulation, but it can be slow, especially with large datasets. Because of this, many are looking for faster and lighter alternatives. These options keep the core features needed for analysis while focusing on speed, lower memory use, and simplicity. In this article, we look at five lightweight alternatives to pandas you can try.

 

1. DuckDB

 
DuckDB is like SQLite for analytics. You can run SQL queries directly on comma-separated values (CSV) files. It's useful if you know SQL or work with machine learning pipelines. Install it with:

pip install duckdb

We will use the Titanic dataset and run a simple SQL query on it like this:

import duckdb

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"

# Run a SQL query directly on the CSV
result = duckdb.query(f"""
    SELECT sex, age, survived
    FROM read_csv_auto('{url}')
    WHERE age > 18
""").to_df()

print(result.head())

 

Output:


      sex   age  survived
0    male  22.0         0
1  female  38.0         1
2  female  26.0         1
3  female  35.0         1
4    male  35.0         0

 

DuckDB runs the SQL query directly on the CSV file and then converts the output into a DataFrame. You get SQL speed with Python flexibility.
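Since DuckDB speaks full SQL, aggregations work the same way. As a minimal sketch (assuming the same Titanic URL as above), you could compute survival rates by sex entirely inside the query:

import duckdb

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"

# Group and aggregate inside DuckDB, then pull the small result into pandas
rates = duckdb.query(f"""
    SELECT sex, AVG(survived) AS survival_rate, COUNT(*) AS n
    FROM read_csv_auto('{url}')
    GROUP BY sex
""").to_df()

print(rates)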

 

2. Polars

 
Polars is one of the most popular data libraries available today. It is implemented in Rust and is exceptionally fast with minimal memory requirements. The syntax is also very clean. Let's install it using pip:

pip install polars

Now, let's use the Titanic dataset to cover a simple example:

import polars as pl

# Load the dataset
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"
df = pl.read_csv(url)

result = df.filter(pl.col("age") > 40).select(["sex", "age", "survived"])
print(result)

 

Output:


shape: (150, 3)
┌────────┬──────┬──────────┐
│ sex    ┆ age  ┆ survived │
│ ---    ┆ ---  ┆ ---      │
│ str    ┆ f64  ┆ i64      │
╞════════╪══════╪══════════╡
│ male   ┆ 54.0 ┆ 0        │
│ female ┆ 58.0 ┆ 1        │
│ female ┆ 55.0 ┆ 1        │
│ male   ┆ 66.0 ┆ 0        │
│ male   ┆ 42.0 ┆ 0        │
│ …      ┆ …    ┆ …        │
│ female ┆ 48.0 ┆ 1        │
│ female ┆ 42.0 ┆ 1        │
│ female ┆ 47.0 ┆ 1        │
│ male   ┆ 47.0 ┆ 0        │
│ female ┆ 56.0 ┆ 1        │
└────────┴──────┴──────────┘

 

Polars reads the CSV, filters rows based on an age condition, and selects a subset of the columns.
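Much of Polars' speed comes from its lazy API, which builds a query plan and only executes it when you call .collect(). A minimal sketch of the same query in lazy form, assuming the Titanic CSV has been saved locally as titanic.csv (scan_csv works on file paths):

import polars as pl

# scan_csv builds a lazy query plan instead of reading the file eagerly
result = (
    pl.scan_csv("titanic.csv")
    .filter(pl.col("age") > 40)
    .select(["sex", "age", "survived"])
    .collect()  # nothing is read or computed until collect() is called
)
print(result)

Because the whole query is known up front, Polars can push the filter down and skip columns it never needs, which matters on large files.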

 

3. PyArrow

 
PyArrow is a lightweight library for columnar data. Tools like Polars use Apache Arrow for speed and memory efficiency. It's not a full replacement for pandas but is excellent for reading files and preprocessing. Install it with:

pip install pyarrow

For our example, let's use the Iris dataset in CSV form as follows:

import pyarrow.csv as csv
import pyarrow.compute as pc
import urllib.request

# Download the Iris CSV
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv"
local_file = "iris.csv"
urllib.request.urlretrieve(url, local_file)

# Read with PyArrow
table = csv.read_csv(local_file)

# Filter rows
filtered = table.filter(pc.greater(table['sepal_length'], 5.0))

print(filtered.slice(0, 5))

 

Output:


pyarrow.Table
sepal_length: double
sepal_width: double
petal_length: double
petal_width: double
species: string
----
sepal_length: [[5.1,5.4,5.4,5.8,5.7]]
sepal_width: [[3.5,3.9,3.7,4,4.4]]
petal_length: [[1.4,1.7,1.5,1.2,1.5]]
petal_width: [[0.2,0.4,0.2,0.2,0.4]]
species: [["setosa","setosa","setosa","setosa","setosa"]]

 

PyArrow reads the CSV and converts it into a columnar format. Each column's name and type are listed in a clear schema. This setup makes it fast to inspect and filter large datasets.
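Arrow tables also interoperate cleanly with the rest of the ecosystem. As a small sketch (re-reading the local file from the example above), you can aggregate directly on the Arrow table and convert to pandas only at the end; Table.group_by is available in pyarrow 7.0 and later:

import pyarrow.csv as csv

# Re-read the local file downloaded in the previous example
table = csv.read_csv("iris.csv")

# Mean sepal length per species, computed on the Arrow table itself
summary = table.group_by("species").aggregate([("sepal_length", "mean")])

# Convert to pandas only once the result is small
print(summary.to_pandas())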

 

4. Modin

 
Modin is for anyone who wants faster performance without learning a new library. It uses the same pandas API but runs operations in parallel. You don't need to change your existing code; just update the import. Everything else works like regular pandas. Install it with pip:

pip install "modin[all]"

For a better understanding, let's try a small example using the same Titanic dataset as follows:

import modin.pandas as pd

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"

# Load the dataset
df = pd.read_csv(url)

# Filter the dataset
adults = df[df["age"] > 18]

# Select a few columns to display
adults_small = adults[["survived", "sex", "age", "class"]]

# Display the result
adults_small.head()

 

Output:


   survived     sex   age  class
0         0    male  22.0  Third
1         1  female  38.0  First
2         1  female  26.0  Third
3         1  female  35.0  First
4         0    male  35.0  Third

 

Modin spreads work across CPU cores, which means you get better performance without having to do anything extra.
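Because the API mirrors pandas, everyday operations such as groupby need no changes at all. A quick sketch on the same dataset:

import modin.pandas as pd

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv"
df = pd.read_csv(url)

# Standard pandas syntax; Modin parallelizes the groupby across cores
print(df.groupby("class")["survived"].mean())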

 

5. Dask

 
How do you handle big data without adding RAM? Dask is a great choice when you have files that are bigger than your computer's random access memory (RAM). It uses lazy evaluation, so it doesn't load the entire dataset into memory. This helps you process millions of rows smoothly. Install it with:

pip install "dask[complete]"

 

To try it out, we can use the Chicago Crime dataset, as follows:

import dask.dataframe as dd
import urllib.request

url = "https://data.cityofchicago.org/api/views/ijzp-q8t2/rows.csv?accessType=DOWNLOAD"
local_file = "chicago_crime.csv"
urllib.request.urlretrieve(url, local_file)

# Read the CSV with Dask (lazy evaluation)
df = dd.read_csv(local_file, dtype=str)  # read all columns as strings

# Filter crimes classified as 'THEFT'
thefts = df[df['Primary Type'] == 'THEFT']

# Select a few relevant columns
thefts_small = thefts[["ID", "Date", "Primary Type", "Description", "District"]]

print(thefts_small.head())

 

Output:


          ID                   Date Primary Type       Description District
5   13204489 09/06/2023 11:00:00 AM        THEFT         OVER $500      001
50  13179181 08/17/2023 03:15:00 PM        THEFT      RETAIL THEFT      014
51  13179344 08/17/2023 07:25:00 PM        THEFT      RETAIL THEFT      014
53  13181885 08/20/2023 06:00:00 AM        THEFT    $500 AND UNDER      025
56  13184491 08/22/2023 11:44:00 AM        THEFT      RETAIL THEFT      014

 

Filtering (Primary Type == 'THEFT') and selecting columns are lazy operations; Dask only builds a task graph. The actual work happens when you call .head(), and because Dask processes data in partitions rather than loading everything at once, it only needs to compute the first chunk.
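To run a full computation rather than a preview, you call .compute() explicitly. A small sketch that re-reads the locally downloaded file from the example above:

import dask.dataframe as dd

# Reuse the file downloaded in the previous example
df = dd.read_csv("chicago_crime.csv", dtype=str)
thefts = df[df["Primary Type"] == "THEFT"]

# Count thefts per district; nothing is computed until .compute() is called
theft_counts = thefts.groupby("District").size().compute()
print(theft_counts.sort_values(ascending=False).head())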

 

Conclusion

 
We covered five alternatives to pandas and how you can use them. The article keeps things simple and focused. Check the official documentation for each library for full details.

If you run into any issues, leave a comment and I'll help.
 
 

Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
