Converting a pandas DataFrame to a NumPy ndarray, using the MNIST Digit Recognizer as an example

Dataset Link 1

Dataset Link 2

The data files train.csv and test.csv contain gray-scale images of hand-drawn digits, from zero through nine.

Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive.

The training data set, (train.csv), has 785 columns. The first column, called "label", is the digit that was drawn by the user. The rest of the columns contain the pixel-values of the associated image.

Each pixel column in the training set has a name like pixelx, where x is an integer between 0 and 783, inclusive. To locate this pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27, inclusive. Then pixelx is located on row i and column j of a 28 x 28 matrix, (indexing by zero).

For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top, as in the ascii-diagram below.

Visually, if we omit the "pixel" prefix, the pixels make up the image like this:

    000 001 002 003 ... 026 027
    028 029 030 031 ... 054 055
    056 057 058 059 ... 082 083
     |   |   |   |  ...  |   |
    728 729 730 731 ... 754 755
    756 757 758 759 ... 782 783
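The index decomposition above can be sketched as a small helper function (not part of the original notebook, just an illustration):

```python
def pixel_position(x):
    """Map a flat pixel index x (0-783) to (row, col) in the 28x28 image."""
    i, j = divmod(x, 28)  # equivalent to x = i * 28 + j
    return i, j

# pixel31 sits on row 1, column 3 (zero-indexed):
# the second row from the top, fourth column from the left
print(pixel_position(31))  # -> (1, 3)
```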

The test data set, (test.csv), is the same as the training set, except that it does not contain the "label" column.

Your submission file should be in the following format: For each of the 28000 images in the test set, output a single line containing the ImageId and the digit you predict. For example, if you predict that the first image is of a 3, the second image is of a 7, and the third image is of an 8, then your submission file would look like:

    ImageId,Label
    1,3
    2,7
    3,8 
    (27997 more lines)
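A submission file in this format can be written with pandas; the predictions here are hypothetical placeholders for whatever the model outputs:

```python
import pandas as pd

# hypothetical predicted digits for the first three test images
predictions = [3, 7, 8]

# ImageId is 1-based, one row per test image
submission = pd.DataFrame({
    'ImageId': range(1, len(predictions) + 1),
    'Label': predictions,
})
submission.to_csv('submission.csv', index=False)
```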

The evaluation metric for this contest is the categorization accuracy, or the proportion of test images that are correctly classified. For example, a categorization accuracy of 0.97 indicates that you have correctly classified all but 3% of the images.
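The metric boils down to a single comparison over label arrays; a minimal sketch with made-up labels:

```python
import numpy as np

# hypothetical true labels and predictions for 10 images
y_true = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y_pred = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 0])  # last prediction is wrong

# proportion of images classified correctly
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # -> 0.9
```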

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
import csv
%matplotlib inline
import pandas as pd

from dnn_utils_v2 import sigmoid, relu, sigmoid_backward, relu_backward

np.random.seed(2)

Reading Data

In [2]:
#using pandas to read dataset
train_data = pd.read_csv('./data/train.csv')
In [3]:
train_data.head()
Out[3]:
label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 pixel779 pixel780 pixel781 pixel782 pixel783
0 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
3 4 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0

5 rows × 785 columns

In [4]:
train_data.shape
Out[4]:
(42000, 785)
In [5]:
train_labels = train_data['label']
train_labels.head()
Out[5]:
0    1
1    0
2    1
3    4
4    0
Name: label, dtype: int64
In [6]:
#extracting the label column from the dataset
train_labels = train_data['label'].values.reshape(42000, 1)
In [7]:
train_labels.shape
Out[7]:
(42000, 1)
In [8]:
train_labels = train_labels.T
In [9]:
type(train_labels)
Out[9]:
numpy.ndarray
In [10]:
#since we have extracted the label column, we remove it from the dataset
del train_data['label']
In [11]:
train_data.shape
Out[11]:
(42000, 784)

At this point I've

  • read the data from the dataset using pandas
  • removed the label column from the dataset

Now I need to figure out a way to convert the pandas.DataFrame to a numpy array (the label Series has already been converted via .values). But before that, I'll write a routine to display the images in the dataset.

In [12]:
image = train_data.iloc[[0]]
In [13]:
image.shape
Out[13]:
(1, 784)
In [14]:
type(image)
Out[14]:
pandas.core.frame.DataFrame
In [15]:
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html#pandas.DataFrame.values
image = image.values
In [16]:
plt.imshow(image.reshape(28, 28))
Out[16]:
<matplotlib.image.AxesImage at 0x1223da9b0>

Displaying the images in the dataset

In [17]:
def show_image(index):
    image = train_data.iloc[[index]]
    image = image.values # converting the pandas.DataFrame to a numpy ndarray
    plt.imshow(image.reshape(28, 28))
In [18]:
show_image(34)

Converting the pandas dataframe to numpy ndarray

In [19]:
print('For the Dataset: ')
print(train_data.shape)
print(type(train_data))

print('\nFor the labels: ')
print(train_labels.shape)
print(type(train_labels))
For the Dataset: 
(42000, 784)
<class 'pandas.core.frame.DataFrame'>

For the labels: 
(1, 42000)
<class 'numpy.ndarray'>

So now we need to convert the train_data to numpy ndarray

In [20]:
train_data = train_data.values
print(train_data.shape)
print(type(train_data))
(42000, 784)
<class 'numpy.ndarray'>
In [21]:
train_data = train_data.T
print(train_data.shape)
(784, 42000)
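After the transpose, each column of train_data is one example. A quick sanity check of that layout on synthetic stand-in data (a few fake examples rather than the real 42000, so it runs anywhere):

```python
import numpy as np

# stand-in for the real dataset: 5 fake examples of 784 pixels each
fake = np.random.randint(0, 256, size=(5, 784))

fake_t = fake.T                      # shape (784, 5): one column per example
img = fake_t[:, 0].reshape(28, 28)   # recover the first example as a 28x28 image

print(fake_t.shape, img.shape)
```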

I will load the test set later

References

In [ ]: