Experiment 2 FDL - Jupyter Notebook

Import Libraries

In [3]: import numpy as np
        import pandas as pd

In [4]: df = pd.read_csv("Churn_Modelling.csv")
        df.head()

Out[4]:    RowNumber  CustomerId   Surname  CreditScore Geography  Gender   Age  Tenure    Balance  Nu...
        0          1    15634602  Hargrave          619    France  Female  42.0       2       0.00
        1          2    15647311      Hill          608     Spain  Female  41.0       1   83807.86
        2          3    15619304      Onio          502    France  Female  42.0       8  159660.80
        3          4    15701354      Boni          699    France  Female  39.0       1       0.00
        4          5    15737888  Mitchell          850     Spain  Female  43.0       2  125510.82

In [5]: df.shape

Out[5]: (10002, 14)

In [6]: df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10002 entries, 0 to 10001
Data columns (total 14 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   RowNumber        10002 non-null  int64
 1   CustomerId       10002 non-null  int64
 2   Surname          10002 non-null  object
 3   CreditScore      10002 non-null  int64
 4   Geography        10001 non-null  object
 5   Gender           10002 non-null  object
 6   Age              10001 non-null  float64
 7   Tenure           10002 non-null  int64
 8   Balance          10002 non-null  float64
 9   NumOfProducts    10002 non-null  int64
 10  HasCrCard        10001 non-null  float64
 11  IsActiveMember   10001 non-null  float64
 12  EstimatedSalary  10002 non-null  float64
 13  Exited           10002 non-null  int64
dtypes: float64(5), int64(6), object(3)
memory usage: 1.1+ MB

In [7]: df.duplicated().sum()

Out[7]: 2
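Note that df.info() above reports one missing value each in Geography, Age, HasCrCard and IsActiveMember, and df.duplicated().sum() finds two duplicate rows, but the notebook never cleans either up. A possible cleanup cell is sketched below; it is not part of the original run, and the fill strategies are illustrative assumptions:

    df = df.drop_duplicates()                                             # remove the 2 duplicate rows
    df['Age'] = df['Age'].fillna(df['Age'].median())                      # numeric: fill with the median
    df['Geography'] = df['Geography'].fillna(df['Geography'].mode()[0])   # categorical: most frequent value
    df['HasCrCard'] = df['HasCrCard'].fillna(0)                           # binary flags: assume 0 when unknown
    df['IsActiveMember'] = df['IsActiveMember'].fillna(0)

Without a step like this, the missing numeric values (Age, HasCrCard, IsActiveMember) pass through scaling as NaN and reach the network during training.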
In [10]: df['Exited'].value_counts()

Out[10]: Exited
         0    7964
         1    2038
         Name: count, dtype: int64

In [11]: df['Geography'].value_counts()

Out[11]: Geography
         France     5014
         Germany    2510
         Spain      2477
         Name: count, dtype: int64

In [12]: df['Gender'].value_counts()

Out[12]: Gender
         Male      5458
         Female    4544
         Name: count, dtype: int64

In [13]: df.drop(columns=['RowNumber', 'CustomerId', 'Surname'], inplace=True)

In [15]: df = pd.get_dummies(df, columns=['Geography', 'Gender'],
                             prefix=['Geography', 'Gender'],
                             drop_first=False, dtype=int)
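Because get_dummies is called with drop_first=False, the dummy columns within each category are redundant (the three Geography dummies always sum to 1, as do the two Gender dummies). A common alternative, shown here only as a sketch and not what the notebook does, is to drop the first level of each category, which would also shrink the network's input_dim from 13 to 11:

    # Sketch: one-hot encode while dropping one redundant level per category.
    df = pd.get_dummies(df, columns=['Geography', 'Gender'],
                        prefix=['Geography', 'Gender'],
                        drop_first=True, dtype=int)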
In [16]: X = df.drop(columns=['Exited'], axis=1)
         y = df['Exited']

In [17]: from sklearn.model_selection import train_test_split
         X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4...)

In [30]: from sklearn.preprocessing import StandardScaler
         scaler = StandardScaler()
         X_train = scaler.fit_transform(X_train)
         X_test = scaler.transform(X_test)
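Since only about 20% of customers have Exited = 1 (2038 of 10002), the split above could optionally be stratified so that train and test sets keep the same class ratio; this is a general-practice suggestion, not something the notebook does:

    # Sketch: stratify=y preserves the ~80/20 Exited class balance in both splits.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)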
In [35]: import tensorflow
         from tensorflow import keras
         from tensorflow.keras import Sequential
         from tensorflow.keras.layers import Dense

         model = Sequential()
         model.add(Dense(units=3, activation='sigmoid', input_dim=13))
         model.add(Dense(units=1, activation='sigmoid'))
         model.summary()

Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense_6 (Dense)             (None, 3)                 42
 dense_7 (Dense)             (None, 1)                 4
=================================================================
Total params: 46 (184.00 Byte)
Trainable params: 46 (184.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
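The parameter counts in the summary follow directly from the layer sizes: dense_6 has 13 inputs × 3 units + 3 biases = 42 parameters, and dense_7 has 3 inputs × 1 unit + 1 bias = 4, giving the 46 total shown. The 13 inputs come from the 14 columns left after one-hot encoding, minus the Exited target.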

In [36]: from tensorflow.keras.optimizers import Adam

         model.compile(optimizer=Adam(learning_rate=0.001),
                       loss='binary_crossentropy',
                       metrics=['accuracy'])

In [38]: model.fit(X_train, y_train, epochs=10)

Epoch 1/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955
Epoch 2/10
251/251 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.7955
Epoch 3/10
251/251 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.7955
Epoch 4/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955
Epoch 5/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955
Epoch 6/10
251/251 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.7955
Epoch 7/10
251/251 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.7955
Epoch 8/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955
Epoch 9/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955
Epoch 10/10
251/251 [==============================] - 0s 2ms/step - loss: nan - accuracy: 0.7955

Out[38]: <keras.src.callbacks.History at 0x2cd0e2a99c0>

In [40]: model.layers[1].get_weights()

Out[40]: [array([[nan],
                 [nan],
                 [nan]], dtype=float32),
          array([nan], dtype=float32)]
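The nan training loss and the nan weights above are consistent with NaN values reaching the network: df.info() showed missing values in Age, HasCrCard and IsActiveMember, those rows were never imputed or dropped, and StandardScaler passes NaN through unchanged, so a single NaN in X_train is enough to turn every gradient update into nan. A quick diagnostic, sketched here rather than taken from the original notebook:

    # True here confirms that NaNs made it into the scaled feature matrices.
    print(np.isnan(X_train).any(), np.isnan(X_test).any())

Dropping or imputing the missing rows (see the cleanup sketch after df.info()) and re-running the pipeline should give a finite loss.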

In [41]: y_log = model.predict(X_test)

63/63 [==============================] - 0s 1ms/step

In [42]: y_pred = np.where(y_log > 0.5, 1, 0)

In [43]: from sklearn.metrics import accuracy_score, classification_report

In [44]: accuracy_score(y_test, y_pred)

Out[44]: 0.7991004497751124

In [45]: print("classification Report", classification_report(y_test, y_pred))

classification Report               precision    recall  f1-score   support

           0       0.80      1.00      0.89      1599
           1       0.00      0.00      0.00       402

    accuracy                           0.80      2001
   macro avg       0.40      0.50      0.44      2001
weighted avg       0.64      0.80      0.71      2001

C:\Users\Harleen Manmeet\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
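The report shows the model predicting only class 0: with all-nan weights, model.predict returns nan for every sample, and np.where(nan > 0.5, 1, 0) evaluates to 0, so the 0.80 accuracy simply mirrors the class balance (1599 of the 2001 test samples are class 0). The UndefinedMetricWarning flags that precision for class 1 is undefined when that class is never predicted; it can be made explicit with the zero_division parameter (a sketch):

    # zero_division=0 reports 0.0 for the undefined precision instead of warning.
    print(classification_report(y_test, y_pred, zero_division=0))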

In [ ]:
