03wk-13: Titanic / Logistic Regression – Additional Commentary

Author

최규빈

Published

September 21, 2023

1. Lecture Video

2. Import

import numpy as np
import pandas as pd 
import sklearn.linear_model

3. Loading the Data

df_train = pd.read_csv('titanic/train.csv')
df_test = pd.read_csv('titanic/test.csv')
df_train
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry male 35.0 0 0 373450 8.0500 NaN S
... ... ... ... ... ... ... ... ... ... ... ... ...
886 887 0 2 Montvila, Rev. Juozas male 27.0 0 0 211536 13.0000 NaN S
887 888 1 1 Graham, Miss. Margaret Edith female 19.0 0 0 112053 30.0000 B42 S
888 889 0 3 Johnston, Miss. Catherine Helen "Carrie" female NaN 1 2 W./C. 6607 23.4500 NaN S
889 890 1 1 Behr, Mr. Karl Howell male 26.0 0 0 111369 30.0000 C148 C
890 891 0 3 Dooley, Mr. Patrick male 32.0 0 0 370376 7.7500 NaN Q

891 rows × 12 columns

set(df_train) - set(df_test)
{'Survived'}

4. Analysis – Failure

A. Data Preparation

X = pd.get_dummies(df_train.drop(['PassengerId','Survived'],axis=1))
y = df_train[['Survived']]

B. Creating the Predictor

predictr = sklearn.linear_model.LogisticRegression()
predictr 
LogisticRegression()

C. Training (fit, learn)

predictr.fit(X,y)
ValueError: Input X contains NaN.
LogisticRegression does not accept missing values encoded as NaN natively. For supervised learning, you might want to consider sklearn.ensemble.HistGradientBoostingClassifier and Regressor which accept missing values encoded as NaNs natively. Alternatively, it is possible to preprocess the data, for instance by using an imputer transformer in a pipeline or drop samples with missing values. See https://scikit-learn.org/stable/modules/impute.html You can find a list of all estimators that handle NaN values at the following page: https://scikit-learn.org/stable/modules/impute.html#estimators-that-handle-nan-values

5. Diagnosing the Cause

df_train.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  891 non-null    int64  
 1   Survived     891 non-null    int64  
 2   Pclass       891 non-null    int64  
 3   Name         891 non-null    object 
 4   Sex          891 non-null    object 
 5   Age          714 non-null    float64
 6   SibSp        891 non-null    int64  
 7   Parch        891 non-null    int64  
 8   Ticket       891 non-null    object 
 9   Fare         891 non-null    float64
 10  Cabin        204 non-null    object 
 11  Embarked     889 non-null    object 
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB

- Problem 1: The Cabin column has far too many missing values.

- Problem 2: Variables such as Name or Ticket are awkward to one-hot encode.

len(set(df_train['Name']))
891
len(set(df_train['Ticket']))
681
2023-10-24 Additional Commentary

The balance-game example: if these columns were one-hot encoded, the number of columns would become enormous. Such columns will naturally have no direct relationship with y, so we would end up with a large number of useless variables. This is much like the example from module 21 where the number of variables was inflated with a balance game. The quick check below shows just how many columns would be created.
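A minimal sketch, reusing df_train from above: get_dummies creates one dummy column per unique string value, so encoding just Name and Ticket already yields 891 + 681 columns.

# Name has 891 unique values and Ticket has 681, so the result has 1572 columns
pd.get_dummies(df_train[['Name', 'Ticket']]).shape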

- Problem 3: The small number of missing values in the Age and Embarked columns of df_train is also a concern. \(\to\) Drop them!

- Problem 4: The missing value in the Fare column of df_test is also a concern. \(\to\) Drop it!

df_test.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  418 non-null    int64  
 1   Pclass       418 non-null    int64  
 2   Name         418 non-null    object 
 3   Sex          418 non-null    object 
 4   Age          332 non-null    float64
 5   SibSp        418 non-null    int64  
 6   Parch        418 non-null    int64  
 7   Ticket       418 non-null    object 
 8   Fare         417 non-null    float64
 9   Cabin        91 non-null     object 
 10  Embarked     418 non-null    object 
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
2023-10-24 Additional Commentary

Handling missing values: dropping the columns that contain missing values is one option, but imputing appropriate values is also a good approach, as the sketch below illustrates.
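A minimal sketch of the imputation route (not the approach taken in the next section, which simply restricts itself to columns without missing values): place an imputer in front of the model in a pipeline. The column choice here is only an illustration.

from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

# illustrative alternative: keep Age and fill its missing values with the median
X_imp = df_train[['Pclass', 'Age', 'SibSp', 'Parch']]
pipe = make_pipeline(SimpleImputer(strategy='median'),
                     sklearn.linear_model.LogisticRegression())
pipe.fit(X_imp, df_train['Survived'])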

6. Analysis – Success

A. Data Preparation

X = pd.get_dummies(df_train[["Pclass", "Sex", "SibSp", "Parch"]])
y = df_train[["Survived"]]

B. Creating the Predictor

predictr = sklearn.linear_model.LogisticRegression()

C. Training

predictr.fit(X, y)
/home/cgb2/anaconda3/envs/ag/lib/python3.10/site-packages/sklearn/utils/validation.py:1143: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  y = column_or_1d(y, warn=True)
LogisticRegression()
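The DataConversionWarning above concerns only the shape of y: a one-column DataFrame was passed where a 1-d array is expected. Passing the target as a Series (or using y.values.ravel()) fits the same model without the warning, roughly:

# same fit, but with a 1-d target, so no DataConversionWarning
predictr.fit(X, df_train['Survived'])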

D. Prediction

#predictr.predict(X)
df_train.assign(Survived_hat=predictr.predict(X)).loc[:,['Survived','Survived_hat']]
Survived Survived_hat
0 0 0
1 1 1
2 1 1
3 1 1
4 0 0
... ... ...
886 0 0
887 1 1
888 0 1
889 1 0
890 0 0

891 rows × 2 columns
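Since the predictor is a logistic regression, it also provides estimated probabilities through predict_proba; the 0/1 labels above correspond to thresholding the survival probability at 0.5. For example:

# each row: [P(Survived=0), P(Survived=1)] for that passenger
predictr.predict_proba(X)[:5]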

E. Evaluation

predictr.score(X,y)
0.8002244668911336
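For a classifier, score returns the mean accuracy, i.e. the proportion of passengers whose predicted label matches the true Survived value, so the number above can be reproduced by hand:

# fraction of correct predictions; should equal predictr.score(X, y)
(predictr.predict(X) == df_train['Survived']).mean()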

7. Submission (HW)
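One possible sketch of the submission step (the details are left as homework; the file name submission.csv is only an example): apply the same preprocessing to df_test, predict, and write PassengerId together with the predictions.

XX = pd.get_dummies(df_test[["Pclass", "Sex", "SibSp", "Parch"]])   # same columns as X
df_test[['PassengerId']].assign(Survived=predictr.predict(XX)).to_csv('submission.csv', index=False)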