In Think and Grow Rich, Napoleon Hill tells the story of Darby, a gold prospector who gave up and sold his equipment when, as it turned out, he had stopped digging only a few feet short of a rich vein of ore. The lesson: many people quit just before they would have succeeded.

Don't stop three feet from gold. For anyone tempted to give up on machine learning, here are 10 widely used algorithms, each with sample code in both Python and R.

(The original post also offered the code snippets as a downloadable PDF.)

Linear Regression (Python):

# Import library
from sklearn import linear_model
# Import other necessary libraries like pandas, numpy...

# Load training and test datasets
# Identify feature and response variable(s);
# values must be numeric and NumPy arrays
x_train = input_variables_values_training_datasets
y_train = target_variables_values_training_datasets
x_test = input_variables_values_test_datasets

# Create linear regression object
linear = linear_model.LinearRegression()

# Train the model using the training sets and check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)

# Equation coefficient and intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)

# Predict output
predicted = linear.predict(x_test)

Linear Regression (R):

# Load training and test datasets
# Identify feature and response variable(s);
# values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)

# Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)

# Predict output
predicted <- predict(linear, x_test)
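The recipe above uses placeholder variable names, so it cannot run as written. A minimal runnable sketch of the same steps, using a tiny synthetic dataset of my own choosing (y = 2x + 1):

```python
import numpy as np
from sklearn import linear_model

# Toy data: exactly y = 2x + 1 (illustrative, not from the original)
x_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([3.0, 5.0, 7.0, 9.0])
x_test = np.array([[5.0]])

linear = linear_model.LinearRegression()
linear.fit(x_train, y_train)
score = linear.score(x_train, y_train)  # R^2 on the training data

# On noiseless data the fit recovers the slope and intercept exactly
predicted = linear.predict(x_test)
```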

Logistic Regression (Python):

# Import library
from sklearn.linear_model import LogisticRegression

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create logistic regression object
model = LogisticRegression()

# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)

# Equation coefficient and intercept
print('Coefficient: \n', model.coef_)
print('Intercept: \n', model.intercept_)

# Predict output
predicted = model.predict(x_test)

Logistic Regression (R):

x <- cbind(x_train, y_train)

# Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x, family = binomial)
summary(logistic)

# Predict output
predicted <- predict(logistic, x_test)
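As above, the Python snippet assumes variables that don't exist. A runnable sketch on a small linearly separable 1-D dataset (the data is my own illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two well-separated groups of points on a line
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([[1.5], [9.5]])

model = LogisticRegression()
model.fit(X, y)
score = model.score(X, y)        # training accuracy

predicted = model.predict(x_test)  # one label per test point
```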

Decision Tree (Python):

# Import library
from sklearn import tree
# Import other necessary libraries like pandas, numpy...

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create tree object
model = tree.DecisionTreeClassifier(criterion='gini')
# For classification, you can change the criterion to
# 'gini' or 'entropy' (information gain);
# use model = tree.DecisionTreeRegressor() for regression

# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)

# Predict output
predicted = model.predict(x_test)

Decision Tree (R):

library(rpart)
x <- cbind(x_train, y_train)

# Grow tree
fit <- rpart(y_train ~ ., data = x, method = "class")
summary(fit)

# Predict output
predicted <- predict(fit, x_test)
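A runnable version of the Python tree recipe; I use scikit-learn's built-in iris dataset as a stand-in for the reader's own data:

```python
from sklearn import tree
from sklearn.datasets import load_iris

# Iris: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)

model = tree.DecisionTreeClassifier(criterion='gini')
model.fit(X, y)
score = model.score(X, y)    # a fully grown tree fits training data

predicted = model.predict(X[:5])  # first five samples are all setosa
```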

Support Vector Machine (Python):

# Import library
from sklearn import svm

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create SVM classification object
model = svm.SVC()
# There are various options associated with it;
# this is simple for classification.

# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)

# Predict output
predicted = model.predict(x_test)

SVM (R):

library(e1071)
x <- cbind(x_train, y_train)

# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)

# Predict output
predicted <- predict(fit, x_test)
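The same SVM recipe as a runnable sketch, again using the iris dataset as an assumed stand-in (note the class name is SVC, capitalised):

```python
from sklearn import svm
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

model = svm.SVC()   # default RBF kernel; many other options exist
model.fit(X, y)
score = model.score(X, y)       # training accuracy

predicted = model.predict(X[:3])  # first three samples are setosa
```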

Naive Bayes (Python):

# Import library
from sklearn.naive_bayes import GaussianNB

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create Naive Bayes classification object
model = GaussianNB()
# There are other distributions for multinomial classes,
# like Bernoulli Naive Bayes.

# Train the model using the training sets and check score
model.fit(X, y)

# Predict output
predicted = model.predict(x_test)

Naive Bayes (R):

library(e1071)
x <- cbind(x_train, y_train)

# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)

# Predict output
predicted <- predict(fit, x_test)
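A runnable sketch of the Gaussian Naive Bayes recipe, with iris again assumed as the example dataset:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

model = GaussianNB()
model.fit(X, y)
score = model.score(X, y)  # training accuracy; iris suits the
                           # Gaussian per-feature assumption well

predicted = model.predict(X[:3])
```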

k-Nearest Neighbors (Python):

# Import library
from sklearn.neighbors import KNeighborsClassifier

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create KNeighbors classifier object
model = KNeighborsClassifier(n_neighbors=6)  # default is 5

# Train the model using the training sets and check score
model.fit(X, y)

# Predict output
predicted = model.predict(x_test)

kNN (R):

library(class)

# knn() in the class package classifies the test set directly
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)
summary(predicted)
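A runnable sketch of the Python kNN recipe, keeping n_neighbors=6 from the snippet and assuming iris as the dataset:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

model = KNeighborsClassifier(n_neighbors=6)
model.fit(X, y)          # kNN "training" just stores the data
score = model.score(X, y)

predicted = model.predict(X[:3])
```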

K-Means (Python):

# Import library
from sklearn.cluster import KMeans

# Assumed you have X (attributes) for the training dataset
# and x_test (attributes) of the test dataset

# Create KMeans object
k_means = KMeans(n_clusters=3, random_state=0)

# Train the model using the training sets and check score
k_means.fit(X)

# Predict output
predicted = k_means.predict(x_test)
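A runnable K-Means sketch on three well-separated 2-D blobs (the toy coordinates are my own; n_init is set explicitly so the snippet behaves the same across scikit-learn versions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three obvious clusters around (0,0), (5,5) and (10,0)
X = np.array([[0.0, 0.0], [0.1, 0.2],
              [5.0, 5.0], [5.1, 4.9],
              [10.0, 0.0], [9.9, 0.2]])

k_means = KMeans(n_clusters=3, random_state=0, n_init=10)
k_means.fit(X)

# Each pair of nearby points should land in the same cluster
predicted = k_means.predict(X)
```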

Random Forest (Python):

# Import library
from sklearn.ensemble import RandomForestClassifier

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create Random Forest object
model = RandomForestClassifier()

# Train the model using the training sets and check score
model.fit(X, y)

# Predict output
predicted = model.predict(x_test)

Random Forest (R):

library(randomForest)
x <- cbind(x_train, y_train)

# Fitting model
fit <- randomForest(Species ~ ., x, ntree = 500)
summary(fit)

# Predict output
predicted <- predict(fit, x_test)
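A runnable Random Forest sketch, with iris assumed as the dataset and a fixed random_state so results are reproducible:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

model = RandomForestClassifier(random_state=0)
model.fit(X, y)
score = model.score(X, y)  # an unpruned forest fits training data

predicted = model.predict(X[:3])
```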

Dimensionality Reduction with PCA (Python):

# Import library
from sklearn import decomposition

# Assumed you have training and test datasets as train and test

# Create PCA object
pca = decomposition.PCA(n_components=k)
# default value of k = min(n_sample, n_features)

# Reduce the dimension of the training dataset using PCA
train_reduced = pca.fit_transform(train)

# Reduce the dimension of the test dataset
test_reduced = pca.transform(test)

PCA (R):

pca <- princomp(train, cor = TRUE)
train_reduced <- predict(pca, train)
test_reduced <- predict(pca, test)
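A runnable PCA sketch; I project the assumed iris dataset from 4 features down to 2 components:

```python
from sklearn import decomposition
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)  # 150 samples x 4 features

pca = decomposition.PCA(n_components=2)
train_reduced = pca.fit_transform(X)

# Fraction of variance captured by each kept component
explained = pca.explained_variance_ratio_
```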

Gradient Boosting / GBM (Python):

# Import library
from sklearn.ensemble import GradientBoostingClassifier

# Assumed you have X (predictor) and Y (target) for the
# training dataset and x_test (predictor) of the test dataset

# Create Gradient Boosting Classifier object
model = GradientBoostingClassifier(n_estimators=100,
                                   learning_rate=1.0,
                                   max_depth=1,
                                   random_state=0)

# Train the model using the training sets and check score
model.fit(X, y)

# Predict output
predicted = model.predict(x_test)

GBM (R):

library(caret)
x <- cbind(x_train, y_train)

# Fitting model
fitControl <- trainControl(method = "repeatedcv",
                           number = 4, repeats = 4)
fit <- train(y ~ ., data = x, method = "gbm",
             trControl = fitControl, verbose = FALSE)

# Predict output
predicted <- predict(fit, x_test, type = "prob")[,2]
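A runnable GBM sketch using the exact hyperparameters from the snippet above (100 depth-1 stumps, learning rate 1.0), with iris assumed as the dataset:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

model = GradientBoostingClassifier(n_estimators=100,
                                   learning_rate=1.0,
                                   max_depth=1,
                                   random_state=0)
model.fit(X, y)
score = model.score(X, y)  # even depth-1 trees boost to high accuracy

predicted = model.predict(X[:3])
```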

BigDataDigest
