
Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analyzing data much easier.

Pandas aims to integrate the functionality of NumPy and matplotlib to give you a convenient tool for data analytics and visualization. Beyond the integration, it also makes these tasks far easier to perform.

In this blog, I’ll give you a list of useful pandas snippets that can be reused over and over again. These will definitely save you some time that you might otherwise spend skimming through the comprehensive Pandas docs.

Pandas provides two data structures capable of holding elements of any type: Series and DataFrame.

Series

A one-dimensional object that can hold any data type, such as integers, floats, and strings.

A Series object can be created from different kinds of values, and can be thought of as similar to a Python list.

In the example below, NaN is NumPy’s not-a-number marker: a floating-point value that indicates a missing or undefined number. The dtype of the Series is object because the Series has mixed contents of strings and numbers.
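A minimal sketch of such a mixed-content Series (the values here are illustrative):

```python
import numpy as np
import pandas as pd

# A Series holding mixed contents: strings and numbers.
# np.nan marks a missing value; under the hood it is a float.
s = pd.Series(['velotio', 1, 3.5, np.nan])
print(s.dtype)  # object, because the contents are mixed
```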

DataFrame

A two-dimensional labeled data structure where columns can be of different types.

Each column in a Pandas DataFrame represents a Series object in memory.

Converting a Python object (a dictionary, a list of lists, etc.) to a DataFrame is extremely easy. With a Python dictionary, the keys map to column names while the values become the column data.
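For example, with a hypothetical dictionary of companies (the column names are illustrative):

```python
import pandas as pd

# Dictionary keys become column names; the lists become column values.
data = {
    'company': ['Alpha', 'Beta', 'Gamma'],
    'vertical': ['B2B', 'B2C', 'B2B'],
}
df = pd.DataFrame(data)
print(df.columns.tolist())  # ['company', 'vertical']
```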

To read a file into a DataFrame, replace “filetype” with the actual type of the file, such as csv or excel, and pass the path of the file as the first argument. You can also pass additional arguments that control how the file is read.
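A runnable sketch of reading a CSV; a `StringIO` buffer stands in for the file path here, but in practice you would write something like `pd.read_csv('data.csv')`:

```python
import io
import pandas as pd

# StringIO makes the sketch self-contained; normally you pass a path.
csv_data = io.StringIO("company,vertical\nAlpha,B2B\nBeta,B2C\n")
df = pd.read_csv(csv_data)
print(df.shape)  # (2, 2)
```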

A DataFrame comprises three components: the index, the columns, and the data (also known as the values).

The index represents a sequence of values. In a DataFrame, it always appears on the left side, with its values shown in bold. Each individual value of the index is called a label. The index gives positions, while the labels are the values at those positions; the index is sometimes also referred to as the row labels. In all the examples below, the labels and positions coincide: they are integers beginning from 0 up to n-1, where n is the number of rows in the table.

Selecting rows is done using loc and iloc:

loc gets rows (or columns) with particular labels from the index. Raises a KeyError when the items are not found.

iloc gets rows (or columns) at particular integer positions (so it only takes integers). Raises an IndexError if a requested indexer is out of bounds.
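A small sketch of the difference, using an illustrative labeled DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'value': [10, 20, 30]}, index=['a', 'b', 'c'])

# loc selects by label; a missing label raises KeyError.
print(df.loc['b', 'value'])   # 20

# iloc selects by integer position; out of bounds raises IndexError.
print(df.iloc[1]['value'])    # 20
```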

In Pandas, selecting data is very easy and similar to accessing an element from a dictionary or a list.

You can select a column with df[col_name], which returns the column with label col_name as a Series, since each column is stored as a Series inside a DataFrame. To access multiple columns, use df[[col_name_1, col_name_2]], which returns them as a new DataFrame.
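In code, with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({
    'company': ['Alpha', 'Beta'],
    'vertical': ['B2B', 'B2C'],
    'employees': [120, 45],
})

single = df['company']                 # one column -> a Series
several = df[['company', 'vertical']]  # several columns -> a new DataFrame
print(type(single).__name__, type(several).__name__)  # Series DataFrame
```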

Filtering DataFrames with Conditional Logic

Let’s say we want all the companies whose vertical is B2B; the logic would be:
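A sketch with a hypothetical companies table (the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'company': ['Alpha', 'Beta', 'Gamma'],
    'vertical': ['B2B', 'B2C', 'B2B'],
})

# A boolean condition on a column selects the matching rows.
b2b = df[df['vertical'] == 'B2B']
print(b2b['company'].tolist())  # ['Alpha', 'Gamma']
```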

Furthermore, it’s also possible to sort values by multiple columns with different orders; here colname_1 is sorted in ascending order and colname_2 in descending order:
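A sketch using the same placeholder column names:

```python
import pandas as pd

df = pd.DataFrame({
    'colname_1': [2, 1, 2],
    'colname_2': [5, 7, 9],
})

# One ascending flag per column: colname_1 ascending, colname_2 descending.
result = df.sort_values(by=['colname_1', 'colname_2'],
                        ascending=[True, False])
print(result['colname_2'].tolist())  # [7, 9, 5]
```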

Grouping Data with groupby

This operation involves three steps: splitting the data, applying a function to each group, and finally combining the results into a data structure. It can be used to group large amounts of data and compute operations on those groups.

df.groupby(colname) returns a groupby object for values from one column while df.groupby([col1,col2]) returns a groupby object for values from multiple columns.
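The split-apply-combine pattern in a small sketch (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'vertical': ['B2B', 'B2C', 'B2B', 'B2C'],
    'revenue': [100, 80, 60, 20],
})

# Split by vertical, apply sum to each group, combine into a Series.
totals = df.groupby('vertical')['revenue'].sum()
print(totals['B2B'], totals['B2C'])  # 160 100
```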

Data Cleansing

Data cleaning is a very important step in data analysis.

df.dropna() drops the rows that contain empty (NaN) values; use df.dropna(axis=1) to drop such columns instead.

Also, if you wish to fill the missing values with other values, use df.fillna(x), which fills all missing values with the value x (any value you want), or s.fillna(s.mean()), which replaces null values with the mean (the mean can be replaced with any function from the arithmetic section).
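The cleaning calls side by side, on an illustrative frame with missing values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [4.0, 5.0, np.nan]})

dropped = df.dropna()                 # keeps only the fully populated row
cols_dropped = df.dropna(axis=1)      # both columns have a NaN, so none survive
filled = df.fillna(0)                 # every missing value becomes 0
mean_filled = df['a'].fillna(df['a'].mean())  # NaN in 'a' becomes 2.0
print(len(dropped), mean_filled.tolist())  # 1 [1.0, 2.0, 3.0]
```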

Operations on Complete Rows, Columns, or Even All Data

df.iterrows() yields two variables together on each iteration: the index of the row and the row itself as a Series. In a loop such as for i, row in df.iterrows(), the variable i is the index and the variable row is the row.
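A minimal sketch of such a loop (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'company': ['Alpha', 'Beta'], 'employees': [120, 45]})

# i is the row's index label; row is the row itself as a Series.
names = []
for i, row in df.iterrows():
    names.append(f"{i}: {row['company']}")
print(names)  # ['0: Alpha', '1: Beta']
```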

Tips & Tricks

Using ufuncs (also known as universal functions). Pandas has .apply(), which applies a function along columns or rows, and ufuncs can be used in a similar way during preprocessing. So what is the difference between ufuncs and .apply()?

Ufuncs come from the NumPy library and are implemented in C, which makes them highly efficient (often around 10 times faster than .apply()).

isinf: Element-wise checks for positive or negative infinity.

isnan: Element-wise checks for NaN and returns result as a boolean array.

isnat: Element-wise checks for NaT (not a time) and returns the result as a boolean array.

trunc: Return the truncated value of the input, element-wise.

.dt accessor: Element-wise processing for datetime objects.
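A few of these in action on an illustrative Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.5, np.inf, np.nan, -2.7])

print(np.isinf(s).tolist())   # [False, True, False, False]
print(np.isnan(s).tolist())   # [False, False, True, False]
print(np.trunc(s).tolist())   # [1.0, inf, nan, -2.0]

# The .dt accessor works element-wise on datetime Series.
dates = pd.Series(pd.to_datetime(['2020-01-15', '2020-06-30']))
print(dates.dt.month.tolist())  # [1, 6]
```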

High-Performance Pandas

Pandas performs vectorized/broadcast operations and grouping-type operations efficiently.

As of version 0.13, Pandas includes tools that allow direct access to C-speed operations without the costly allocation of intermediate arrays: the eval() and query() functions.

Memory usage: Every compound operation on NumPy arrays or Pandas DataFrames results in the implicit creation of temporary arrays. When the memory usage of these temporaries is large, eval() and query() are an appropriate choice for reducing it.

Computation time: The traditional way of performing NumPy/Pandas operations is actually faster for smaller arrays! The real benefit of eval()/query() comes mainly from the saved memory, and also from the cleaner syntax they offer.
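A small sketch of both functions (the column names are illustrative; with the numexpr package installed, pandas evaluates these expressions without materialising intermediates):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})

# eval: computes a column expression from a string.
df['c'] = df.eval('a + b')

# query: filters rows using a string expression.
big = df.query('b > 15')
print(df['c'].tolist(), len(big))  # [11, 22, 33, 44] 3
```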

Conclusion

Pandas is a powerful and fun library for data manipulation and analysis. It comes with easy syntax and fast operations. This blog highlighted the most commonly used pandas techniques and optimizations. The best way to master pandas is to work with real datasets, beginning with Kaggle kernels and learning how to use pandas for data analysis. Check out more on real-time text classification using Kafka and Scikit-learn and explanatory vs. predictive models in machine learning.

Velotio Technologies is an outsourced software product development partner for top technology startups and enterprises. We partner with companies to design, develop, and scale their products. Our work has been featured on TechCrunch, Product Hunt and more.

We have partnered with our customers to build 90+ transformational products in the areas of edge computing, customer data platforms, exascale storage, cloud-native platforms, chatbots, clinical trials, healthcare, and investment banking.

Since our founding in 2016, our team has completed more than 90 projects with 220+ employees across the following areas:

Building web/mobile applications

Architecting Cloud infrastructure and Data analytics platforms