How to Load and Manipulate Datasets in Python Using Pandas

In this blog, you will discover a range of data manipulation methods using the Pandas library. By learning these techniques, you can carry out exploratory data analysis (EDA) efficiently.
Dec 20, 2024
12 min read

Python is widely known for its flexibility and simplicity in data analysis, and one of the most powerful libraries that make this possible is Pandas. Whether you're dealing with small datasets or massive ones, Pandas offers intuitive tools for data manipulation. In this article, we'll walk through how to load and manipulate data in Python using Pandas, covering common operations that are essential for any data analysis project. If you're new to this, you're in the right place!

What is Pandas?

Pandas is an open-source library built on top of NumPy, providing fast, flexible, and expressive data structures designed to make working with relational or labeled data easy. With Pandas, you can perform operations like filtering, sorting, aggregating, and transforming data in just a few lines of code. Created primarily for tabular data (like spreadsheets or SQL tables), it is widely used in data science, machine learning, and data analysis projects thanks to its intuitive API and powerful features. Pandas introduces two primary data structures: DataFrame (for 2D data) and Series (for 1D data).

These structures allow you to store and manipulate data in a way that resembles tables (rows and columns) for DataFrames or single columns for Series. Pandas provides robust capabilities for data transformation. Whether it’s reshaping data with pivot tables, merging or joining DataFrames, or transposing, Pandas makes these operations efficient and intuitive. Pandas is a must-know library for anyone working with data in Python. It simplifies everything from loading datasets to performing complex analyses, making it an indispensable tool for data manipulation.
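To make these two structures concrete, here is a minimal sketch of creating a Series and a DataFrame (the column names and values are made up for illustration):

import pandas as pd

# A Series is a single labeled column of values
ages = pd.Series([25, 32, 47], name='age')

# A DataFrame is a 2D table with rows and labeled columns
people = pd.DataFrame({
    'name': ['Ana', 'Ben', 'Cara'],
    'age': [25, 32, 47]
})

print(ages)
print(people)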

Setting Up: Installing Pandas and the datasets Library

Before working with datasets, you need to have Pandas installed in your Python environment. You can install Pandas using the following command:

pip install pandas

You can also load sample datasets directly from libraries such as Hugging Face's datasets, which you may find helpful for practice or research. To install datasets, run:

pip install datasets

This allows you to access various built-in datasets commonly used for machine learning and analysis.
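For example, here is a minimal sketch of pulling a dataset from the Hugging Face Hub into a Pandas DataFrame (this assumes the datasets library is installed and uses the public 'imdb' dataset purely as an illustration):

from datasets import load_dataset

# Download the training split of a public dataset from the Hugging Face Hub
dataset = load_dataset('imdb', split='train')

# Convert it to a Pandas DataFrame for analysis
df = dataset.to_pandas()
print(df.head())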

Loading Datasets in Python Using Pandas

Once you have installed Pandas, you're ready to load your dataset. Pandas supports multiple file formats, including CSV, Excel, JSON, SQL, and more.

Loading CSV Files

CSV (Comma Separated Values) files are widely used for storing datasets. To load a CSV file using Pandas, you can use the read_csv() function:

import pandas as pd

# Load a dataset
data = pd.read_csv('your_dataset.csv')

# Display the first 5 rows
print(data.head())

Loading Excel Files

If you're working with Excel files, you can load them using read_excel(). Note that reading .xlsx files requires an engine such as openpyxl (pip install openpyxl):

data = pd.read_excel('your_dataset.xlsx')

# Display the dataset
print(data.head())

Pandas also allows you to specify sheet names when working with multi-sheet Excel files:

data = pd.read_excel('your_dataset.xlsx', sheet_name='Sheet1')

Loading JSON Files

JSON is another popular format, especially when dealing with APIs. You can load JSON files into a Pandas DataFrame using read_json():

data = pd.read_json('your_dataset.json')

# Display the dataset
print(data.head())

Loading Datasets from Databases

If your dataset is stored in a SQL database, you can use Pandas to query and load the data. For SQLite, Python's built-in sqlite3 module is enough; for other databases, you'll typically install a connector such as SQLAlchemy. Then use read_sql() to load the data:

import sqlite3

# Establish a connection
conn = sqlite3.connect('database.db')

# Load data from a SQL query
data = pd.read_sql('SELECT * FROM table_name', conn)
print(data.head())

Loading Datasets from Scikit-learn into Pandas

First, ensure that Scikit-learn is installed in your environment:

pip install scikit-learn

Scikit-learn bundles several toy datasets as Bunch objects (dictionary-like containers that hold the data, the target values, and the feature names). Common examples include the Iris, Wine, and Diabetes datasets (the Boston Housing dataset, often mentioned in older tutorials, was removed in scikit-learn 1.2). Using scikit-learn, we can load these datasets directly into Pandas DataFrames.

Here’s how to load a few of these datasets into Pandas:

a) Loading the Iris Dataset

The Iris dataset is commonly used for classification tasks.

from sklearn.datasets import load_iris
import pandas as pd

# Load the dataset
iris = load_iris()

# Convert to Pandas DataFrame
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)

# Add the target column
iris_df['target'] = iris.target

# Display the first 5 rows
print(iris_df.head())

b) Loading the Wine Dataset

The Wine dataset is used for classification tasks, similar to the Iris dataset.

from sklearn.datasets import load_wine
import pandas as pd

# Load the dataset
wine = load_wine()

# Convert to Pandas DataFrame
wine_df = pd.DataFrame(wine.data, columns=wine.feature_names)

# Add the target column
wine_df['target'] = wine.target

# Display the first 5 rows
print(wine_df.head())

c) Loading the Diabetes Dataset

The Diabetes dataset is often used for regression tasks.

from sklearn.datasets import load_diabetes
import pandas as pd

# Load the dataset
diabetes = load_diabetes()

# Convert to Pandas DataFrame
diabetes_df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)

# Add the target column
diabetes_df['target'] = diabetes.target

# Display the first 5 rows
print(diabetes_df.head())

Manipulating Datasets in Python

Once your dataset is loaded, the next step is to manipulate and clean the data for analysis. Pandas provides numerous tools for this.

Viewing Data

To get a quick look at your dataset, use the following functions:

  • head(): View the first few rows
  • info(): Display information about the DataFrame (columns, data types, etc.)
  • describe(): Get summary statistics for numerical columns

print(data.head())     # First 5 rows
data.info()            # DataFrame info (info() prints its output directly)
print(data.describe()) # Summary statistics

Filtering Data

To filter rows based on conditions, you can use the following syntax:

# Filter rows where 'column_name' is greater than a value
filtered_data = data[data['column_name'] > value]
print(filtered_data)

You can also filter by multiple conditions:

# Filter where 'column_1' is greater than value1 and 'column_2' is less than value2
filtered_data = data[(data['column_1'] > value1) & (data['column_2'] < value2)]

Sorting Data

Sorting a dataset based on a specific column can be achieved using sort_values():

# Sort data by 'column_name'
sorted_data = data.sort_values(by='column_name')
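By default, sort_values() sorts in ascending order. Here is a quick sketch of descending and multi-column sorting (the column names are placeholders):

# Sort in descending order
sorted_desc = data.sort_values(by='column_name', ascending=False)

# Sort by multiple columns, each with its own direction
sorted_multi = data.sort_values(by=['column_1', 'column_2'], ascending=[True, False])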

Grouping Data

If you want to perform operations like sum, mean, or count on groups of data, you can use the groupby() function:

# Group data by 'column_name' and calculate the mean of the numeric columns
grouped_data = data.groupby('column_name').mean(numeric_only=True)
print(grouped_data)
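If you need several statistics at once, agg() computes them in a single pass; here is a minimal sketch (the column names are placeholders):

# Compute several aggregations per group in one call
summary = data.groupby('column_name').agg(
    total=('numeric_column', 'sum'),
    average=('numeric_column', 'mean'),
    count=('numeric_column', 'count')
)
print(summary)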

Handling Missing Data

It's common to encounter missing data in datasets. You can use Pandas to handle these missing values efficiently.

  • Drop missing values: Removes rows with missing values.

data = data.dropna()

  • Fill missing values: Replaces missing values with a specific value (e.g., the column mean).

# Fill numeric columns with their column mean
data = data.fillna(data.mean(numeric_only=True))

Saving Datasets

After manipulating your dataset, you might want to save it for further use. Pandas allows you to export DataFrames to various formats like CSV, Excel, and JSON.

Save as CSV

data.to_csv('cleaned_dataset.csv', index=False)

Save as Excel

data.to_excel('cleaned_dataset.xlsx', index=False)
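Save as JSON

JSON export works the same way; a minimal sketch (orient='records' is optional and writes one JSON object per row):

data.to_json('cleaned_dataset.json', orient='records')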

Advanced Topics to Explore

Here are a few more advanced topics to explore once you're comfortable with the basics of loading and manipulating datasets with Pandas; each is followed by a short illustrative sketch:

1. Merging and Joining DataFrames

  • Learn to merge multiple DataFrames using merge(), which is similar to SQL joins (inner, outer, left, right).
  • Understand how to use concat() to concatenate DataFrames along rows or columns.
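A minimal sketch of both operations, using two small made-up DataFrames:

import pandas as pd

left = pd.DataFrame({'key': [1, 2, 3], 'name': ['Ana', 'Ben', 'Cara']})
right = pd.DataFrame({'key': [2, 3, 4], 'score': [85, 90, 75]})

# SQL-style inner join on the 'key' column
merged = pd.merge(left, right, on='key', how='inner')

# Stack the two DataFrames along rows (columns are unioned, gaps become NaN)
stacked = pd.concat([left, right], axis=0, ignore_index=True)

print(merged)
print(stacked)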

2. Pivot Tables and Crosstabulation

  • Use pivot_table() to summarize and reorganize data.
  • Perform cross-tabulations with pd.crosstab() for deeper insights.
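A minimal sketch of pivot_table() and pd.crosstab() on a small made-up sales table:

import pandas as pd

sales = pd.DataFrame({
    'region': ['North', 'North', 'South', 'South'],
    'product': ['A', 'B', 'A', 'B'],
    'revenue': [100, 150, 200, 120]
})

# Average revenue per region and product
pivot = sales.pivot_table(values='revenue', index='region', columns='product', aggfunc='mean')

# Count how often each region/product combination appears
counts = pd.crosstab(sales['region'], sales['product'])

print(pivot)
print(counts)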

3. Reshaping Data with Stack and Unstack

  • Use stack() and unstack() to convert columns to rows and vice versa, especially useful for MultiIndex DataFrames.
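A minimal sketch of stack() and unstack() on a small made-up table:

import pandas as pd

scores = pd.DataFrame(
    {'math': [90, 80], 'science': [85, 95]},
    index=['Ana', 'Ben']
)

# stack() moves the column labels into the row index (wide -> long)
long_form = scores.stack()

# unstack() moves them back into columns (long -> wide)
wide_again = long_form.unstack()

print(long_form)
print(wide_again)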

4. Time Series Data Manipulation

  • Handle and analyze time-series data by parsing dates, resampling, and performing rolling window calculations.
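A minimal sketch of resampling and a rolling mean on a synthetic daily series:

import pandas as pd
import numpy as np

# Synthetic daily time series indexed by date
dates = pd.date_range('2024-01-01', periods=90, freq='D')
ts = pd.DataFrame({'value': np.random.randn(90).cumsum()}, index=dates)

# Downsample daily data to monthly means ('M' = month-end; newer pandas also accepts 'ME')
monthly = ts.resample('M').mean()

# 7-day rolling average
rolling = ts['value'].rolling(window=7).mean()

print(monthly.head())
print(rolling.tail())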

5. Optimizing Performance with Pandas

  • Learn how to optimize memory usage by working with large datasets using chunks and efficient data types (categorical).
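A minimal sketch of both ideas: reading a large CSV in chunks and converting a repetitive text column to the categorical dtype (the file and column names are placeholders):

import pandas as pd

# Process a large CSV in fixed-size chunks instead of loading it all at once
chunk_totals = []
for chunk in pd.read_csv('large_dataset.csv', chunksize=100_000):
    chunk_totals.append(chunk['amount'].sum())
total = sum(chunk_totals)

# Convert a low-cardinality string column to 'category' to save memory
data = pd.read_csv('large_dataset.csv')
data['country'] = data['country'].astype('category')
print(data.memory_usage(deep=True))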

These topics broaden what you can do with Pandas, giving you more advanced data manipulation tools to explore.

Conclusion

Pandas makes working with datasets using Python extremely efficient. Whether you're loading data from a file, manipulating it to fit your needs, or saving the cleaned dataset, Pandas offers all the tools to streamline your workflow. By mastering these basic operations, you'll be well on your way to becoming proficient in Python data manipulation.

Start experimenting with your datasets and take advantage of the wide range of functionalities Pandas has to offer!

Ready to transform your AI career? Join our expert-led courses at SkillCamper today and start your journey to success. Sign up now to gain in-demand skills from industry professionals.

If you're a beginner, take the first step toward mastering Python! Check out this Fullstack Generative AI course to get started with the basics and advance to complex topics at your own pace.

To stay updated with the latest trends and technologies, and to prepare specifically for interviews, make sure to read our detailed blogs:

Top 10 NLP Techniques Every Data Scientist Should Know: Understand NLP techniques easily and make your foundation strong.
