Dunder Data Challenge #5 — Keeping Values Within the Interquartile Range

dunder data challenges Nov 14, 2019

In this challenge, you are given a table of closing stock prices for 10 different stocks with data going back as far as 1999. For each stock, calculate the interquartile range (IQR). Return a DataFrame that satisfies the following conditions:

  • Keep values as they are if they are within the IQR
  • For values lower than the first quartile, make them equal to the exact value of the first quartile
  • For values higher than the third quartile, make them equal to the exact value of the third quartile

Start this challenge in a Jupyter Notebook right now thanks to Binder (mybinder.org).

import pandas as pd
stocks = pd.read_csv('../data/stocks10.csv', index_col='date', parse_dates=['date'])
stocks.head()

Challenge

There is a straightforward solution that completes this challenge in a single line of readable code. Can you find it?
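
If you want to check your work, one candidate for that one-liner (a sketch, not necessarily the intended solution) pairs quantile with clip:

# quantile(.25) and quantile(.75) return per-column quartiles as Series;
# clip with axis=1 aligns those Series against the columns.
stocks.clip(stocks.quantile(.25), stocks.quantile(.75), axis=1)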

Become a pandas expert

If you are looking to completely master the pandas library and become a trusted expert for doing data science work,...

Continue Reading...

Dunder Data Challenge #4 - Solution

dunder data challenges Nov 13, 2019

In this post, I detail the solution to Dunder Data Challenge #4 — Finding the Date of the Largest Percentage Stock Price Drop.

Solution

To begin, we need to find the percentage drop for each stock for each day. pandas has a built-in method for this called pct_change. By default, it finds the percentage change between the current value and the one immediately above it. Like most DataFrame methods, it treats each column independently from the others.

If we call it on our current DataFrame, we’ll get an error because it will not work on our date column. Let’s re-read the data, converting the date column to a datetime and placing it in the index.

stocks = pd.read_csv('../data/stocks10.csv', parse_dates=['date'],
                     index_col='date')
stocks.head()

Placing the date column in the index is a key part of this challenge that makes our solution quite a bit nicer. Let’s now call the pct_change method to get the percentage change for each trading day.
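
As a minimal sketch of that step, using a hypothetical variable name (the full walkthrough continues in the post):

# Percentage change between each row and the row above it, computed per column.
daily_pct = stocks.pct_change()
daily_pct.head()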

...
Continue Reading...

Dunder Data Challenge #4 - Finding the Date of the Largest Percentage Stock Price Drop

dunder data challenges Nov 12, 2019

In this challenge, you are given a table of closing stock prices for 10 different stocks with data going back as far as 1999. For each stock, find the date where it had its largest one-day percentage loss.

Begin working this challenge now in a Jupyter Notebook thanks to Binder (mybinder.org). The data is found in the stocks10.csv file with the ticker symbol as a column name.

The Dunder Data Challenges GitHub repository also contains all of the challenges.

Challenge

Can you return a Series that has the ticker symbols in the index and the date where the largest percentage price drop happened as the values? There is a nice, fast solution that uses just a minimal amount of code without any loops.
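
As a hint at the expected shape (a sketch, not necessarily the fast solution the challenge has in mind), idxmin applied to the daily percentage changes produces exactly such a Series:

# For each column, idxmin returns the index label (here, the date)
# of that column's minimum value.
stocks.pct_change().idxmin()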

Extra challenge

Can you return a DataFrame with the ticker symbol as the columns with a row for the date and another row for the percentage price drop?
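
One possible construction, again only as a sketch, pairs idxmin with min and transposes the result:

pc = stocks.pct_change()
# One row for the date of the largest drop and one row for the drop itself;
# transposing puts the ticker symbols in the columns.
pd.DataFrame({'date': pc.idxmin(), 'pct drop': pc.min()}).T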

Become an Expert

Continue Reading...

Dunder Data Challenge #3 - Optimal Solution

dunder data challenges Sep 17, 2019

In this article, I will present an ‘optimal’ solution to Dunder Data Challenge #3. Please refer to that article for the problem setup. Work on this challenge directly in a Jupyter Notebook right now courtesy of Binder (mybinder.org).

Naive Solution — Custom function with apply

The naive solution was presented in detail in the previous article. The end result was a massive custom function containing many boolean filters used to find specific subsets of data to aggregate. For each group, a Series of 11 values was returned, and each of these values became a new column in the resulting DataFrame.

This naive solution takes nearly 4 seconds to run.

Become an Expert

Continue Reading...

Use the brackets to select a single pandas DataFrame column and not dot notation

pandas Sep 13, 2019

pandas offers two ways to select a single column of data: the brackets or dot notation. In this article, I suggest using the brackets and not dot notation, for the following ten reasons:

  1. Select column names with spaces
  2. Select column names that have the same name as methods
  3. Select columns with variables
  4. Select non-string columns
  5. Set new columns
  6. Select multiple columns
  7. Dot notation is a strict subset of the brackets
  8. Use one way which works for all situations
  9. Auto-completion works inside the brackets and after them
  10. Brackets are the canonical way to select subsets for all objects

Selecting a single column

Let’s begin by creating a small DataFrame with a few columns:

import pandas as pd
df = pd.DataFrame({'name': ['Niko', 'Penelope', 'Aria'],
                   'average score': [10, 5, 3],
                   'max': [99, 100, 3]})
df

Let’s select the name column with dot notation. Many pandas users like dot notation.

>>> df.name
0        Niko
1    Penelope
2        Aria
Name: name, dtype: object
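
Dot notation breaks down quickly with this very DataFrame, previewing reasons 1 and 2 above (a sketch, with output abbreviated):

>>> df['average score']   # the space rules out dot notation entirely
0    10
1     5
2     3
Name: average score, dtype: int64

>>> df.max   # returns the DataFrame max method, not the 'max' column
<bound method DataFrame.max of ...>

>>> df['max']   # the brackets select the column as expected
0     99
1    100
2      3
Name: max, dtype: int64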

...

Continue Reading...

Dunder Data Challenge #3 - Naive Solution

dunder data challenges Sep 12, 2019

To view the problem setup, go to the Dunder Data Challenge #3 post. This post contains the solution.

Become an Expert

I will first present a naive solution that returns the correct results but is extremely slow. It uses a large custom function with the groupby apply method. The groupby apply method has the potential to capsize your program, as its performance can be awful.
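
The pattern in question looks roughly like the sketch below, with hypothetical data; the actual challenge uses a different dataset and a far larger function:

import pandas as pd

# Hypothetical stand-in data, just to show the shape of the pattern.
df = pd.DataFrame({'group': ['a', 'a', 'b', 'b'],
                   'value': [1, 2, 3, 4]})

def custom_agg(sub):
    # Each boolean filter rescans the sub-DataFrame in Python; with many
    # filters and many groups, this overhead adds up quickly.
    big = sub.loc[sub['value'] > 1, 'value'].sum()
    return pd.Series({'total': sub['value'].sum(), 'big_total': big})

df.groupby('group').apply(custom_agg)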

One of my first attempts at using a groupby apply to solve a complex grouping problem resulted in a computation that took about eight hours to finish. The dataset was fairly large, at around a million rows, but could still easily fit in memory. I eventually ended up solving the problem using SAS (and not pandas) and shrank the execution...

Continue Reading...

Dunder Data Challenge #3 - Multiple Custom Grouping Aggregations

dunder data challenges Sep 09, 2019

Welcome to the third edition of the Dunder Data Challenge series designed to help you learn python, data science, and machine learning. Begin working on any of the challenges directly in a Jupyter Notebook courtesy of Binder (mybinder.org).

This challenge is going to be fairly difficult, but it should answer a question that many pandas users face: what is the best way to perform a groupby that does many custom aggregations? In this context, a ‘custom aggregation’ is one that is not directly available from pandas and for which you must write a custom function.

In Dunder Data Challenge #1, the desired result was a single aggregation that required a custom grouping function. In this challenge, you’ll need to return several aggregations when grouping. There are a few different solutions to this problem, but depending on how you arrive at your solution, the performance differences can be enormous. I am...

Continue Reading...

Dunder Data Challenge #2 - Explain the 1,000x Speed Difference when taking the Mean

dunder data challenges Sep 08, 2019

Welcome to the second edition of the Dunder Data Challenge series designed to help you learn python, data science, and machine learning. Begin working on any of the challenges directly in a Jupyter Notebook courtesy of Binder (mybinder.org).

In this challenge, your goal is to explain why taking the mean of the following DataFrame is more than 1,000x faster when setting the parameter numeric_only to True.
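
The call in question, shown on a hypothetical stand-in DataFrame (the real one appears in the full post):

import numpy as np
import pandas as pd

# Hypothetical stand-in: one truly numeric column and one column of
# floats stored with the object dtype.
df = pd.DataFrame({'a': np.random.rand(100_000),
                   'b': np.random.rand(100_000).astype('object')})

df.mean(numeric_only=True)   # considers only the truly numeric columns
df.mean()                    # also processes the object column, which is
                             # far slower (newer pandas may raise instead)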

Learn Data Science with Python

I have several online and in-person courses available on dunderdata.com to teach you Python, data science, and machine learning.

Online Courses

  • Master Data Analysis with Python — a comprehensive course with access to over 500 pages of text, 300 exercises, 13 hours of video, multiple projects, and detailed solutions
  • Exercise Python — master the fundamentals of Python with access to over 300 pages of text, 150 exercises, multiple projects, and detailed solutions
  • Intro to...
Continue Reading...

Dunder Data Challenge #1 - Optimize Custom Grouping Function

dunder data challenges Sep 07, 2019

This is the first edition of the Dunder Data Challenge series designed to help you learn python, data science, and machine learning. Begin working on any of the challenges directly in a Jupyter Notebook thanks to Binder (mybinder.org).

In this challenge, your goal is to find the fastest solution while using only the pandas library.

Become an Expert

The Challenge

The college_pop dataset contains the name, state, and population of all higher-ed institutions in the US and its territories. For each state, find the percentage of the total state population made up by the 5 largest colleges of that state. Below, you can inspect the first few rows of the...
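
One possible approach, shown only as a sketch with hypothetical column names 'state' and 'population', and assuming the state total means the sum of its institutions' populations:

# Share of each state's summed college population held by its 5 largest schools.
def top5_share(pop):
    return pop.nlargest(5).sum() / pop.sum() * 100

college_pop.groupby('state')['population'].agg(top5_share)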

Continue Reading...

Pandas Cookbook — Develop Powerful Routines for Exploring Real-World Datasets

pandas Jul 18, 2019

In this article, I will discuss the overall approach I took to writing Pandas Cookbook along with highlights of each chapter.

New Book — Master Data Analysis with Python

I have a new book titled Master Data Analysis with Python that is far superior to Pandas Cookbook. It contains over 300 exercises and projects to reinforce all the material and will receive continuous updates through 2020. If you are interested in Pandas Cookbook, I would strongly suggest purchasing Master Data Analysis with Python instead.

All Access Pass!

If you want to learn python, data analysis, and machine learning, then the All Access Pass! will provide you access to all my current and future material for one low price.

Pandas Cookbook Guiding Principles

I had three main guiding principles when writing the book:

  • Use of real-world datasets
  • Focus on doing data analysis
  • Writing modern, idiomatic pandas

First, I wanted you, the reader, to explore real-world datasets and not randomly...

Continue Reading...