This post is continuously being updated with new content.

In this lesson we work through sentiment analysis with Andrew Trask, a Ph.D. student at Oxford University. Let's first recall what we have learned so far.

  • NEURAL NETWORK
  • FWD/BACK PROPAGATION
  • GRADIENT DESCENT
  • MEAN SQUARED ERROR
  • TRAIN/TEST SPLIT

Keeping those concepts in mind, we will proceed through the following steps.

  1. Curating a dataset
  2. Validate the theory
  3. Transform data into input and output
  4. Iterate Several Times
  5. Understand the Weights Inside

Lesson: Curate a Dataset

First, let's open up the movie review dataset, reviews.txt, and take a look inside.

def pretty_print_review_and_label(i):
    print(labels[i] + "\t:\t" + reviews[i][:80] + "...")

g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()

g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
25000
reviews[0]
'bromwell high is a cartoon comedy . it ran at the same time as some other programs about school life  such as  teachers  . my   years in the teaching profession lead me to believe that bromwell high  s satire is much closer to reality than is  teachers  . the scramble to survive financially  the insightful students who can see right through their pathetic teachers  pomp  the pettiness of the whole situation  all remind me of the schools i knew and their students . when i saw the episode in which a student repeatedly tried to burn down the school  i immediately recalled . . . . . . . . . at . . . . . . . . . . high . a classic line inspector i  m here to sack one of your teachers . student welcome to bromwell high . i expect that many adults of my age think that bromwell high is far fetched . what a pity that it isn  t   '
labels[0]
'POSITIVE'

Lesson: Develop a Predictive Theory

print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
labels.txt 	 : 	 reviews.txt

NEGATIVE	:	this movie is terrible but it has some good effects .  ...
POSITIVE	:	adrian pasdar is excellent is this film . he makes a fascinating woman .  ...
NEGATIVE	:	comment this movie is impossible . is terrible  very improbable  bad interpretat...
POSITIVE	:	excellent episode movie ala pulp fiction .  days   suicides . it doesnt get more...
NEGATIVE	:	if you haven  t seen this  it  s terrible . it is pure trash . i saw this about ...
POSITIVE	:	this schiffer guy is a real genius  the movie is of excellent quality and both e...

With the labels and reviews lined up side by side, let's think about where to start. Negative reviews contain words like 'terrible' quite often, while positive reviews contain words like 'excellent'.

What kind of correlation is there between a review and its label? What if we looked at individual characters? From the letter 'm' alone, could we tell whether a review is positive or negative? From 'm', 't', ... alone we cannot. So let's work at the word level instead. Words like 'this' and 'movie' carry no sentiment, but 'terrible' and 'trash' feel negative, while 'excellent' and 'genius' feel positive. The more often such words appear, the stronger that feeling should be.

So, let's build our input data at the word level.
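
Before formalizing this in Project 1, here is a minimal sanity check of the theory. It assumes only the reviews and labels lists loaded above; the helper count_word_by_label is hypothetical, written just for this check.

# Quick sanity check: how often do a strongly negative and a strongly positive word
# appear in each class? The exact counts are not important; the asymmetry is.
def count_word_by_label(word):
    counts = {"POSITIVE": 0, "NEGATIVE": 0}
    for review, label in zip(reviews, labels):
        counts[label] += review.split().count(word)
    return counts

print("terrible ->", count_word_by_label("terrible"))    # should lean heavily NEGATIVE
print("excellent ->", count_word_by_label("excellent"))  # should lean heavily POSITIVE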

Project 1: Quick Theory Validation

Wouldn't it make sense for the code to simply count words? Let's use Python's Counter class.

from collections import Counter
import numpy as np

Now we create three Counter objects.

They will hold word counts for positive reviews, word counts for negative reviews, and total counts across all reviews.

# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()

Now we loop over every positive review and increment the count of each word, and then do the same for every negative review.

# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects

for index in range(len(reviews)):  
    for word in reviews[index].split():
        if labels[index] == 'POSITIVE':
            positive_counts[word] += 1
            total_counts[word] += 1
        else :
            negative_counts[word] += 1
            total_counts[word] += 1

Now we sort each set of counts in descending order.

# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
[('.', 167538),
 ('the', 163389),
 ('a', 79321),
 ('and', 74385),
 ('of', 69009),
 ('to', 68974),
 ('br', 52637),
...]

As the output above shows, words like 'the' and 'a' appear very frequently whether the review is positive or negative. We are not looking for the most frequent words; we want words that distinguish positive from negative. So we compute, for each word, the ratio of its use in positive reviews to its use in negative reviews.

TODO: Compute the positive : negative ratio of every word and store it in pos_neg_ratios.

# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()

# TODO: Calculate the ratios of positive and negative uses of the most common words
#       Consider words to be "common" if they've been used at least 100 times
for word in total_counts:
    pos_neg_ratios[word] = positive_counts[word] / (float(negative_counts[word])+1)

Let's look at a few examples of the ratios we just computed.

print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218

A few patterns are now visible.

  • Words with a positive feel (for example, “amazing”) have a ratio greater than 1. The more positive the word and the more often it appears, the farther the ratio moves above 1.

  • Words with a negative feel (for example, “terrible”) have a ratio less than 1. Likewise, the more negative the word and the more often it appears, the closer the ratio gets to 0.

  • Neutral words (for example, “the”) have a ratio close to 1.

So we could use these ratios to tell sentiment apart, but they are still awkward to compute with. The very positive word “amazing” has a ratio around 4 while the negative word “terrible” is close to 0, and that causes a few problems.

  • The neutral point is 1. “amazing” sits at 4 and “terrible” at 0.18, so their distances from 1 differ by several times, which makes direct comparison impossible.
  • We therefore need a correction that rebalances these values around the neutral point.
  • And a neutral point of 0 is easier to compute with than a neutral point of 1.

We built the ratios with division, and the logarithm is exactly the right tool for this kind of correction: it turns reciprocal ratios into values of equal magnitude and opposite sign.
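
Here is a minimal numeric check of that claim (the ratio values 4 and 0.25 are just illustrative):

import numpy as np

# A ratio of 4 (strongly positive) and its reciprocal 0.25 (equally strongly negative)
# sit at very different distances from the old neutral point, 1 ...
print(abs(4 - 1), abs(0.25 - 1))    # 3.0 vs. 0.75

# ... but after taking the log they are symmetric around the new neutral point, 0.
print(np.log(4), np.log(0.25))      # ~1.386 vs. ~-1.386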

TODO: Convert every ratio to its logarithm.

# TODO: Convert ratios to logs
print (pos_neg_ratios["iraq"])

for word in pos_neg_ratios:
    if pos_neg_ratios[word] != 0:
        pos_neg_ratios[word] = np.log(pos_neg_ratios[word])

print(pos_neg_ratios["iraq"])
0.9111111111111111
-0.09309042306601198

Now let's look at the newly transformed values.

print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878

Now neutral words have values close to 0, while the positive word “amazing” and the negative word “terrible” both have magnitudes greater than 1, with opposite signs. This makes sense: a positive value indicates a strongly positive feel, and a negative value a strongly negative one.

Now let's sort the values in descending order to see the most positive-feeling words.

# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
[('edie', 4.6913478822291435),
 ('antwone', 4.477336814478207),
 ('din', 4.406719247264253),
 ('gunga', 4.189654742026425),
 ('goldsworthy', 4.174387269895637),
 ('gypo', 4.0943445622221),

 ...]

Now let's look at the words that appeared most disproportionately in negative reviews.

list(reversed(pos_neg_ratios.most_common()))[0:30]
#pos_neg_ratios.most_common()[:-31:-1] would also work.
[('boll', -4.969813299576001),
 ('uwe', -4.624972813284271),
 ('thunderbirds', -4.127134385045092),
 ('beowulf', -4.110873864173311),
 ('dahmer', -3.9889840465642745),
 ('wayans', -3.9318256327243257),
...]

Looking at these results, neutral words sit near 0, while words that appear mostly in positive reviews top out around +4 to +5 and words that appear mostly in negative reviews bottom out around -4 to -5, roughly symmetric around zero. This is exactly why we used the logarithm.

from IPython.display import Image

review = "This was a horrible, terrible movie."

Image(filename='sentiment_network.png')

png

review = "The movie was excellent"

Image(filename='sentiment_network_pos.png')

png

Project 2: Creating the Input/Output Data

TODO: Now create a set named vocab that contains every word from all of the reviews.

# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())

Run the cell below to see the size of vocab.

vocab_size = len(vocab)
print(vocab_size)
74074

Look at the image below. This is the neural network we are about to code: layer_0 is the input layer, layer_1 the hidden layer, and layer_2 the output layer.

from IPython.display import Image
Image(filename='sentiment_network_2.png')

png
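
Before writing any of that, here is a minimal sketch of how data will flow through those three layers, using tiny hypothetical dimensions so the shapes are easy to read (all *_demo names are made up for this sketch; the real network uses a 74,074-word vocabulary and 10 hidden nodes):

import numpy as np

vocab_size_demo, hidden_demo = 6, 3
layer_0_demo = np.zeros((1, vocab_size_demo))                  # input layer: 1 x vocab_size
weights_0_1_demo = np.random.randn(vocab_size_demo, hidden_demo)
weights_1_2_demo = np.random.randn(hidden_demo, 1)

layer_1_demo = layer_0_demo.dot(weights_0_1_demo)              # hidden layer: 1 x hidden_nodes
layer_2_demo = 1 / (1 + np.exp(-layer_1_demo.dot(weights_1_2_demo)))  # output layer: 1 x 1

print(layer_0_demo.shape, layer_1_demo.shape, layer_2_demo.shape)     # (1, 6) (1, 3) (1, 1)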

TODO: Create a numpy array for layer_0 and initialize it to all zeros. layer_0 is a 2-D matrix with 1 row and vocab_size columns.

layer_0 = np.zeros(( 1,vocab_size ))

Running the cell below should print (1, 74074).

layer_0.shape
(1, 74074)
from IPython.display import Image
Image(filename='sentiment_network.png')

png

layer_0 has one entry for every word, so we need to know each word's index. We therefore build a lookup table that stores an index for every word.

# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
    word2index[word] = i

# display the map of words to indices
word2index
{'': 0,
 'kusama': 1,
 'chakra': 2,
 'blur': 3,
 'subjective': 4,
 'luminary': 5,
 'trude': 6,
 'miniskirt': 7,
 'heath': 8,
 ...}

TODO: update_input_layer counts how many times each word appears in the given review and stores those counts in layer_0.

def update_input_layer(review):
    """ Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    global layer_0
    # clear out previous state by resetting the layer to be all 0s
    layer_0 *= 0

    # TODO: count how many times each word is used in the given review and store the results in layer_0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

Run the cell below to update the input layer with the first review.

update_input_layer(reviews[0])
layer_0
array([[18.,  0.,  0., ...,  0.,  0.,  0.]])

TODO: Write the get_target_for_label function. It returns 0 or 1 depending on whether the given label is NEGATIVE or POSITIVE.

def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
    if label == "POSITIVE":
        return 1
    else :
        return 0

Passing a 'POSITIVE' label returns 1.

labels[0]
'POSITIVE'
get_target_for_label(labels[0])
1

Passing a 'NEGATIVE' label returns 0.

labels[1]
'NEGATIVE'
get_target_for_label(labels[1])
0

Project 3: Building a Neural Network

TODO: Now let's build the SentimentNetwork class.

import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimenNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training

        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):

        # TODO: populate review_vocab with all of the words in the given reviews
        #       Remember to split reviews into individual words
        #       using "split(' ')" instead of "split()".
        review_vocab = set()

        for review in reviews:
            for word in review.split(' '):
                review_vocab.add(word)

        # Convert the vocabulary set to a list so we can access words via indices
        self.review_vocab = list(review_vocab)


        # TODO: populate label_vocab with all of the words in the given labels.
        #       There is no need to split the labels because each one is a single word.
        label_vocab = set()

        for label in labels :
            label_vocab.add(label)

        # Convert the label vocabulary set to a list so we can access labels via indices
        self.label_vocab = list(label_vocab)

        # Store the sizes of the review and label vocabularies.
        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        # Create a dictionary of words in the vocabulary mapped to index positions
        # TODO: populate self.word2index with indices for all the words in self.review_vocab
        #       like you saw earlier in the notebook
        self.word2index = {}
        for i, word in enumerate(self.review_vocab) :
            self.word2index[word] = i

        # Create a dictionary of labels mapped to index positions
        # TODO: do the same thing you did for self.word2index and self.review_vocab,
        #       but for self.label2index and self.label_vocab instead
        self.label2index = {}
        for i, label in enumerate(self.label_vocab) :
            self.label2index[label] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Store the number of nodes in input, hidden, and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Store the learning rate
        self.learning_rate = learning_rate

        # Initialize weights

        # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
        #       the input layer and the hidden layer.
        self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))

        # TODO: initialize self.weights_1_2 as a matrix of random values.
        #       These are the weights between the hidden layer and the output layer.
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,(self.hidden_nodes, self.output_nodes))

        # TODO: Create the input layer, a two-dimensional matrix with shape
        #       1 x input_nodes, with all values initialized to zero
        self.layer_0 = np.zeros((1,input_nodes))


    def update_input_layer(self,review):
        # TODO: You can copy most of the code you wrote for update_input_layer
        #       earlier in this notebook.
        #
        #       However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
        #       THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
        #       For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"

        self.layer_0 *= 0
        for word in review.split(" "):
            if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] += 1


    def get_target_for_label(self,label):
        # TODO: Copy the code you wrote for get_target_for_label
        #       earlier in this notebook.

        if label == 'POSITIVE':
            return 1
        else:
            return 0

    def sigmoid(self,x):
        # TODO: Return the result of calculating the sigmoid activation function
        #       shown in the lectures
        return (1/(1 + np.exp(-x)))

    def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
        return (1 - output)*(output)

    def train(self, training_reviews, training_labels):

        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews) == len(training_labels))

        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0

        # Remember when we started for printing time statistics
        start = time.time()

        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):

            # TODO: Get the next review and its correct label
            review = training_reviews[i]
            label = training_labels[i]

            # TODO: Implement the forward pass through the network.
            #       That means use the given review to update the input layer,
            #       then calculate values for the hidden layer,
            #       and finally calculate the output layer.
            #
            #       Do not use an activation function for the hidden layer,
            #       but use the sigmoid activation function for the output layer.

            #Update Input Layer
            self.update_input_layer(review)

            #Layer 1
            layer_1 = np.dot(self.layer_0,self.weights_0_1)

            #Layer 2
            layer_2 = self.sigmoid(np.dot(layer_1,self.weights_1_2))

            # TODO: Implement the back propagation pass here.
            #       That means calculate the error for the forward pass's prediction
            #       and update the weights in the network according to their
            #       contributions toward the error, as calculated via the
            #       gradient descent and back propagation algorithms you
            #       learned in class.

            #Output Error
            # error = y - hat{y}
            # output_delta = error * sigmoid`{y}
            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            #Backpropagate Error
            layer_1_error = np.dot(layer_2_delta,self.weights_1_2.T)
            layer_1_delta = layer_1_error

            #Update Weight
            self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
            self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate


            # TODO: Keep track of correct predictions. To determine if the prediction was
            #       correct, check that the absolute value of the output error
            #       is less than 0.5. If so, add one to the correct_so_far count.
            if layer_2 >= 0.5 and label == "POSITIVE" :
                correct_so_far += 1
            elif layer_2 < 0.5 and label == "NEGATIVE" :
                correct_so_far += 1


            # For debug purposes, print out our prediction accuracy and speed
            # throughout the training process.

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """

        # keep track of how many correct predictions we make
        correct = 0

        # we'll time how many predictions per second we make
        start = time.time()

        # Loop through each of the given reviews and call run to predict
        # its label.
        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1

            # For debug purposes, print out our prediction accuracy and speed
            # throughout the prediction process.

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
        # TODO: Run a forward pass through the network, like you did in the
        #       "train" function. That means use the given review to
        #       update the input layer, then calculate values for the hidden layer,
        #       and finally calculate the output layer.
        #
        #       Note: The review passed into this function for prediction
        #             might come from anywhere, so you should convert it
        #             to lower case prior to using it.
        self.update_input_layer(review.lower())
        layer_1 = np.dot(self.layer_0,self.weights_0_1)
        layer_2 = self.sigmoid(np.dot(layer_1,self.weights_1_2))


        # TODO: The output layer should now contain a prediction.
        #       Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
        #       and `NEGATIVE` otherwise.
        if layer_2[0] >= 0.5: return "POSITIVE"
        else: return "NEGATIVE"

Now we create a SentimentNetwork, holding out the last 1,000 reviews as test data. Here the learning rate is set to 0.1.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)

Let's run a test on the last 1,000 reviews (the test set). Since the network has not been trained yet, weights_0_1 is still all zeros, so layer_2 is always sigmoid(0) = 0.5 and every review is predicted POSITIVE; on a balanced test set that gives 50% accuracy.

mlp.test(reviews[-1000:],labels[-1000:])
Progress:99.9% Speed(reviews/sec):1007. #Correct:500 #Tested:1000 Testing Accuracy:50.0%

Now let's train the network.

mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):328.7 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):332.4 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):336.9 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):334.3 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):335.6 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):337.1 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):335.1 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:83.3% Speed(reviews/sec):332.4 #Correct:10001 #Trained:20001 Training Accuracy:50.0%
Progress:93.7% Speed(reviews/sec):328.6 #Correct:11251 #Trained:22501 Training Accuracy:50.0%
Progress:99.9% Speed(reviews/sec):325.7 #Correct:12000 #Trained:24000 Training Accuracy:50.0%

Training probably did not go well, and the reason is that the learning rate is set too high. Let's lower it to 0.01 and train again.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):300.6 #Correct:1248 #Trained:2501 Training Accuracy:49.9%
Progress:20.8% Speed(reviews/sec):301.2 #Correct:2498 #Trained:5001 Training Accuracy:49.9%
Progress:31.2% Speed(reviews/sec):302.7 #Correct:3748 #Trained:7501 Training Accuracy:49.9%
Progress:41.6% Speed(reviews/sec):305.2 #Correct:4998 #Trained:10001 Training Accuracy:49.9%
Progress:52.0% Speed(reviews/sec):305.3 #Correct:6248 #Trained:12501 Training Accuracy:49.9%
Progress:62.5% Speed(reviews/sec):300.3 #Correct:7492 #Trained:15001 Training Accuracy:49.9%
Progress:72.9% Speed(reviews/sec):298.5 #Correct:8747 #Trained:17501 Training Accuracy:49.9%
Progress:83.3% Speed(reviews/sec):300.4 #Correct:9997 #Trained:20001 Training Accuracy:49.9%
Progress:93.7% Speed(reviews/sec):298.4 #Correct:11247 #Trained:22501 Training Accuracy:49.9%
Progress:99.9% Speed(reviews/sec):296.9 #Correct:11996 #Trained:24000 Training Accuracy:49.9%

It still does not work well. Let's go smaller again, setting the learning rate to 0.001 this time.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):299.1 #Correct:1256 #Trained:2501 Training Accuracy:50.2%
Progress:20.8% Speed(reviews/sec):297.6 #Correct:2627 #Trained:5001 Training Accuracy:52.5%
Progress:31.2% Speed(reviews/sec):292.3 #Correct:4088 #Trained:7501 Training Accuracy:54.4%
Progress:41.6% Speed(reviews/sec):289.2 #Correct:5605 #Trained:10001 Training Accuracy:56.0%
Progress:52.0% Speed(reviews/sec):291.4 #Correct:7203 #Trained:12501 Training Accuracy:57.6%
Progress:62.5% Speed(reviews/sec):294.2 #Correct:8463 #Trained:15001 Training Accuracy:56.4%
Progress:72.9% Speed(reviews/sec):298.6 #Correct:10036 #Trained:17501 Training Accuracy:57.3%
Progress:83.3% Speed(reviews/sec):304.8 #Correct:11701 #Trained:20001 Training Accuracy:58.5%
Progress:93.7% Speed(reviews/sec):307.8 #Correct:13378 #Trained:22501 Training Accuracy:59.4%
Progress:99.9% Speed(reviews/sec):308.5 #Correct:14383 #Trained:24000 Training Accuracy:59.9%

Only at a learning rate of 0.001 does the network finally start making meaningful predictions. It is still far from satisfying, but we can see the potential of the approach. Now let's improve this network further.

Understanding Neural Noise

from IPython.display import Image
Image(filename='sentiment_network.png')

png

def update_input_layer(review):

    global layer_0

    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

update_input_layer(reviews[0])
layer_0
array([[18.,  0.,  0., ...,  0.,  0.,  0.]])
review_counter = Counter()
for word in reviews[0].split(" "):
    review_counter[word] += 1
review_counter.most_common()
[('.', 27),
 ('', 18),
 ('the', 9),
 ('to', 6),
 ('high', 5),
 ('i', 5),
 ('bromwell', 4),
 ...
 ('far', 1),
 ('fetched', 1),
 ('what', 1),
 ('pity', 1),
 ('isn', 1),
 ('t', 1)]

Project 4: Reducing Noise in Our Input Data

A neural network's performance depends heavily on the quality of its input data, and our input contains a lot of noise alongside the meaningful signal. We will identify that noise and remove it.

TODO: Modify the update_input_layer function so that instead of counting how many times each word occurs, it only records whether the word occurs at all: the entry becomes 1 if the word is present and stays 0 if it is not.

    def update_input_layer(self,review):

        self.layer_0 *= 0
        for word in review.split(" "):
            if(word in self.word2index.keys()):
-                self.layer_0[0][self.word2index[word]] += 1
+                self.layer_0[0][self.word2index[word]] = 1

We just delete a single + (turning += 1 into = 1). The result is surprisingly effective.
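
Here is a minimal sketch of why this helps, reusing the global update_input_layer, layer_0 and word2index from the cells above; the binary variant written inline below is hypothetical, only for the comparison.

# Count-based input (the Project 2 version): in reviews[0] the empty-string token
# appears 18 times and '.' appears 27 times, so a handful of noise tokens dominate.
update_input_layer(reviews[0])
print(layer_0.max(), layer_0.sum())

# Hypothetical binary variant of the same update: every word present contributes exactly 1.
layer_0 *= 0
for word in reviews[0].split(" "):
    layer_0[0][word2index[word]] = 1
print(layer_0.max(), layer_0.sum())   # max is now 1.0; the noise tokens no longer dominate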

# TODO: -Copy the SentimentNetwork class from Project 3 lesson
#       -Modify it to reduce noise, like in the video

import time
import sys
import numpy as np

class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimenNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training

        """
        np.random.seed(1)
        self.pre_process_data(reviews, labels)
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):

        review_vocab = set()
        for review in reviews:
            for word in review.split(' '):
                review_vocab.add(word)

        self.review_vocab = list(review_vocab)
        label_vocab = set()
        for label in labels :
            label_vocab.add(label)

        self.label_vocab = list(label_vocab)
        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        self.word2index = {}
        for i, word in enumerate(self.review_vocab) :
            self.word2index[word] = i

        self.label2index = {}
        for i, label in enumerate(self.label_vocab) :
            self.label2index[label] = i

    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Store the number of nodes in input, hidden, and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Store the learning rate
        self.learning_rate = learning_rate

        # Initialize weights
        self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,(self.hidden_nodes, self.output_nodes))
        self.layer_0 = np.zeros((1,input_nodes))


    def update_input_layer(self,review):

        self.layer_0 *= 0
        for word in review.split(" "):
            if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] = 1


    def get_target_for_label(self,label):

        if label == 'POSITIVE':
            return 1
        else:
            return 0

    def sigmoid(self,x):
        return (1/(1 + np.exp(-x)))

    def sigmoid_output_2_derivative(self,output):
        return (1 - output)*(output)

    def train(self, training_reviews, training_labels):

        assert(len(training_reviews) == len(training_labels))
        correct_so_far = 0
        start = time.time()

        for i in range(len(training_reviews)):
            review = training_reviews[i]
            label = training_labels[i]

            #Update Input Layer
            self.update_input_layer(review)

            #Layer 1
            layer_1 = np.dot(self.layer_0,self.weights_0_1)

            #Layer 2
            layer_2 = self.sigmoid(np.dot(layer_1,self.weights_1_2))

            #Output Error
            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            #Backpropagate Error
            layer_1_error = np.dot(layer_2_delta,self.weights_1_2.T)
            layer_1_delta = layer_1_error

            #Update Weight
            self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
            self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate

            if layer_2 >= 0.5 and label == "POSITIVE" :
                correct_so_far += 1
            elif layer_2 < 0.5 and label == "NEGATIVE" :
                correct_so_far += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """

        correct = 0
        start = time.time()

        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
        self.update_input_layer(review)
        layer_1 = np.dot(self.layer_0,self.weights_0_1)
        layer_2 = self.sigmoid(np.dot(layer_1,self.weights_1_2))

        if layer_2[0] >= 0.5: return "POSITIVE"
        else: return "NEGATIVE"

Let's train again, starting with a learning rate of 0.1.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):308.1 #Correct:1803 #Trained:2501 Training Accuracy:72.0%
Progress:20.8% Speed(reviews/sec):331.2 #Correct:3798 #Trained:5001 Training Accuracy:75.9%
Progress:31.2% Speed(reviews/sec):336.2 #Correct:5880 #Trained:7501 Training Accuracy:78.3%
Progress:41.6% Speed(reviews/sec):337.3 #Correct:8014 #Trained:10001 Training Accuracy:80.1%
Progress:52.0% Speed(reviews/sec):337.6 #Correct:10150 #Trained:12501 Training Accuracy:81.1%
Progress:62.5% Speed(reviews/sec):335.6 #Correct:12294 #Trained:15001 Training Accuracy:81.9%
Progress:72.9% Speed(reviews/sec):331.1 #Correct:14418 #Trained:17501 Training Accuracy:82.3%
Progress:83.3% Speed(reviews/sec):331.1 #Correct:16595 #Trained:20001 Training Accuracy:82.9%
Progress:93.7% Speed(reviews/sec):333.3 #Correct:18775 #Trained:22501 Training Accuracy:83.4%
Progress:99.9% Speed(reviews/sec):333.4 #Correct:20095 #Trained:24000 Training Accuracy:83.7%

Remarkable. Even with a learning rate of 0.1, the accuracy improved dramatically, a striking result compared to before, when the network did not learn at all. Let's train once more with a learning rate of 0.001.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):346.7 #Correct:1941 #Trained:2501 Training Accuracy:77.6%
Progress:20.8% Speed(reviews/sec):340.3 #Correct:3988 #Trained:5001 Training Accuracy:79.7%
Progress:31.2% Speed(reviews/sec):339.1 #Correct:6086 #Trained:7501 Training Accuracy:81.1%
Progress:41.6% Speed(reviews/sec):344.2 #Correct:8205 #Trained:10001 Training Accuracy:82.0%
Progress:52.0% Speed(reviews/sec):341.4 #Correct:10338 #Trained:12501 Training Accuracy:82.6%
Progress:62.5% Speed(reviews/sec):337.3 #Correct:12424 #Trained:15001 Training Accuracy:82.8%
Progress:72.9% Speed(reviews/sec):334.6 #Correct:14525 #Trained:17501 Training Accuracy:82.9%
Progress:83.3% Speed(reviews/sec):332.3 #Correct:16698 #Trained:20001 Training Accuracy:83.4%
Progress:93.7% Speed(reviews/sec):328.3 #Correct:18857 #Trained:22501 Training Accuracy:83.8%
Progress:99.9% Speed(reviews/sec):325.6 #Correct:20173 #Trained:24000 Training Accuracy:84.0%

Somewhat surprisingly, shrinking the learning rate by a factor of 100 made very little difference. Now let's evaluate on the test set.

mlp.test(reviews[-1000:],labels[-1000:])
Progress:99.9% Speed(reviews/sec):1603. #Correct:848 #Tested:1000 Testing Accuracy:84.8%

Analyzing Inefficiencies in our Network

png

layer_0 = np.zeros(10)
layer_0
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
layer_0[4] = 1
layer_0[9] = 1
layer_0
array([0., 0., 0., 0., 1., 0., 0., 0., 0., 1.])
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
array([-0.10503756,  0.44222989,  0.24392938, -0.55961832,  0.21389503])
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
    layer_1 += (1 * weights_0_1[index])
layer_1
array([-0.10503756,  0.44222989,  0.24392938, -0.55961832,  0.21389503])

png

layer_1 = np.zeros(5)
for index in indices:
    layer_1 += (weights_0_1[index])
layer_1
array([-0.10503756,  0.44222989,  0.24392938, -0.55961832,  0.21389503])

Project 5: Making our Network More Efficient

In the previous project we improved accuracy by removing noise from the input. In this project we speed up training by cutting out unnecessary computation. As the cells above showed, when layer_0 contains only 0s and 1s, multiplying it by weights_0_1 is equivalent to simply summing the rows of weights_0_1 whose inputs are 1, so we can skip the full matrix multiplication entirely.
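
Here is a minimal sketch of that saving, assuming hypothetical sizes close to our network (74,074 input nodes, 10 hidden nodes) and a review touching 300 distinct words; all *_demo names are made up for this sketch.

import time
import numpy as np

vocab_size_demo, hidden_demo = 74074, 10
weights_demo = np.random.randn(vocab_size_demo, hidden_demo)

# A hypothetical review that touches 300 of the 74,074 vocabulary entries.
indices_demo = np.random.choice(vocab_size_demo, size=300, replace=False)
input_demo = np.zeros((1, vocab_size_demo))
input_demo[0][indices_demo] = 1

# Full dense multiplication over all 74,074 rows...
start = time.time()
dense = input_demo.dot(weights_demo)
dense_time = time.time() - start

# ...versus summing only the 300 rows whose input is 1.
start = time.time()
sparse = np.zeros((1, hidden_demo))
for i in indices_demo:
    sparse += weights_demo[i]
sparse_time = time.time() - start

print(np.allclose(dense, sparse))   # True: both give the same hidden layer
print(dense_time, sparse_time)      # the row-summing version does far less work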

TODO:

# TODO: -Copy the SentimentNetwork class from Project 4 lesson
#       -Modify it according to the above instructions

import time
import sys
import numpy as np

class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimenNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training

        """

        np.random.seed(1)
        self.pre_process_data(reviews, labels)
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):

        review_vocab = set()

        for review in reviews:
            for word in review.split(' '):
                review_vocab.add(word)
        self.review_vocab = list(review_vocab)
        label_vocab = set()

        for label in labels :
            label_vocab.add(label)
        self.label_vocab = list(label_vocab)

        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        self.word2index = {}
        for i, word in enumerate(self.review_vocab) :
            self.word2index[word] = i

        self.label2index = {}
        for i, label in enumerate(self.label_vocab) :
            self.label2index[label] = i


    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        self.learning_rate = learning_rate

        self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,(self.hidden_nodes, self.output_nodes))

# We will no longer keep a separate input layer, so every reference to `self.layer_0` is removed.
# The hidden layer is handled more directly instead: we create `self.layer_1`, a 2-D matrix of shape 1 x hidden_nodes, initialized to all zeros.
        self.layer_1 = np.zeros((1,hidden_nodes))


    def get_target_for_label(self,label):
        if label == 'POSITIVE':
            return 1
        else:
            return 0

    def sigmoid(self,x):
        return (1/(1 + np.exp(-x)))

    def sigmoid_output_2_derivative(self,output):
        return (1 - output)*(output)

    def train(self, training_reviews_raw, training_labels):

# **TODO**:
# >* Rename the parameter `training_reviews` to `training_reviews_raw`.
# >* At the start of the function, convert every review into a list of indices (using `word2index`) and collect them in a local list named `training_reviews`, so each entry holds the indices of the words in that review.
# >* Remove the call to `update_input_layer`.
# >* Use `self.layer_1` instead of a local `layer_1`.
# >* In the forward pass, drop the matrix multiplication that computed `layer_1`: because each input value is either 1 or 0, we can skip the multiplication (num * 1 = num) and simply add the corresponding weight row when the input is 1, and skip it when it is 0.

        training_reviews = list()
        for review_raw in training_reviews_raw :
            review = set()
            for word in review_raw.split(" "):
                if (word in self.word2index.keys()):
                    review.add(self.word2index[word])
            training_reviews.append(list(review))

        assert(len(training_reviews) == len(training_labels))
        correct_so_far = 0
        start = time.time()

        for i in range(len(training_reviews)):
            review = training_reviews[i]
            label = training_labels[i]
            self.layer_1 *= 0

            for j in review:
                self.layer_1 += (self.weights_0_1[j])

            layer_2 = self.sigmoid(np.dot(self.layer_1,self.weights_1_2))

            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            layer_1_error = np.dot(layer_2_delta,self.weights_1_2.T)
            layer_1_delta = layer_1_error

            for j in review:
                self.weights_0_1[j] -= layer_1_delta[0] * self.learning_rate

            self.weights_1_2 -= np.dot(self.layer_1.T, layer_2_delta) * self.learning_rate

            if layer_2 >= 0.5 and label == "POSITIVE" :
                correct_so_far += 1
            elif layer_2 < 0.5 and label == "NEGATIVE" :
                correct_so_far += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
        correct = 0
        start = time.time()

        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """

# >* Remove the call to `update_input_layer`.
# >* Use `self.layer_1` instead of a local `layer_1`.
# >* Pre-process `review` the same way we did in `train`.


        self.layer_1 *= 0
        indices = set()
        for word in review.lower().split(" "):
            if (word in self.word2index.keys()):
                    indices.add(self.word2index[word])

        for i in indices:
                self.layer_1 += self.weights_0_1[i]   

        layer_2 = self.sigmoid(np.dot(self.layer_1,self.weights_1_2))

        if layer_2[0] >= 0.5: return "POSITIVE"
        else: return "NEGATIVE"

Let's run the training.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1648. #Correct:1813 #Trained:2501 Training Accuracy:72.4%
Progress:20.8% Speed(reviews/sec):1612. #Correct:3800 #Trained:5001 Training Accuracy:75.9%
Progress:31.2% Speed(reviews/sec):1567. #Correct:5892 #Trained:7501 Training Accuracy:78.5%
Progress:41.6% Speed(reviews/sec):1561. #Correct:8020 #Trained:10001 Training Accuracy:80.1%
Progress:52.0% Speed(reviews/sec):1542. #Correct:10151 #Trained:12501 Training Accuracy:81.2%
Progress:62.5% Speed(reviews/sec):1546. #Correct:12278 #Trained:15001 Training Accuracy:81.8%
Progress:72.9% Speed(reviews/sec):1548. #Correct:14394 #Trained:17501 Training Accuracy:82.2%
Progress:83.3% Speed(reviews/sec):1533. #Correct:16574 #Trained:20001 Training Accuracy:82.8%
Progress:93.7% Speed(reviews/sec):1524. #Correct:18766 #Trained:22501 Training Accuracy:83.4%
Progress:99.9% Speed(reviews/sec):1534. #Correct:20086 #Trained:24000 Training Accuracy:83.6%

Pay attention to Speed(reviews/sec), which measures training throughput. With just a few code changes, training went from about 300 reviews/sec to about 1,600 reviews/sec, more than a 5x speedup. A remarkable improvement.

mlp.test(reviews[-1000:],labels[-1000:])
Progress:99.9% Speed(reviews/sec):1629. #Correct:848 #Tested:1000 Testing Accuracy:84.8%
Image(filename='sentiment_network_sparse_2.png')

png

# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
[('edie', 4.6913478822291435),
 ('antwone', 4.477336814478207),
 ('din', 4.406719247264253),
 ('gunga', 4.189654742026425),
 ('goldsworthy', 4.174387269895637),
 ('gypo', 4.0943445622221),
 ('yokai', 4.0943445622221),
 ...]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)

p = figure(tools="pan,wheel_zoom,reset,save",
           toolbar_location="above",
           title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()

for word, cnt in total_counts.most_common():
    frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)

p = figure(tools="pan,wheel_zoom,reset,save",
           toolbar_location="above",
           title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)

Project 6: Reducing Noise by Strategically Reducing the Vocabulary

TODO: Now we further improve the SentimentNetwork's performance with a statistical approach. Far too many of the words are neutral, which wastes computation and drags down accuracy. We will cut off the neutral-sentiment words before feeding the input to the network.
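
Before editing the class, here is a rough sketch of how strongly these two thresholds shrink the vocabulary, reusing the global total_counts and the log-transformed pos_neg_ratios from earlier. The helper surviving_vocab is hypothetical and does not use exactly the same rule as the class below; it is only meant to give a feel for the numbers.

# Rough estimate of how many words survive a given min_count / polarity_cutoff pair.
def surviving_vocab(min_count, polarity_cutoff):
    return sum(1 for word, cnt in total_counts.items()
               if cnt > min_count and abs(pos_neg_ratios[word]) >= polarity_cutoff)

print(len(total_counts))            # the full vocabulary: 74,074 words
print(surviving_vocab(20, 0.05))    # mild filtering
print(surviving_vocab(20, 0.8))     # aggressive filtering leaves far fewer words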

We implement this by adding the following steps to the class.

# TODO: -Copy the SentimentNetwork class from Project 5 lesson
#       -Modify it according to the above instructions

import time
import sys
import numpy as np
from collections import Counter

class SentimentNetwork:
    def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimenNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training

        """

        np.random.seed(1)

# >* Add the `min_count` and `polarity_cutoff` parameters and pass them along when calling `pre_process_data`.
        self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):

# >* Add the `min_count` and `polarity_cutoff` parameters here as well.
# >* Calculate the positive-to-negative ratios of the words in the reviews, just as before, except this time it happens inside the class rather than as standalone notebook code.

        positive_counts = Counter()
        negative_counts = Counter()
        total_counts = Counter()

        for i in range(len(reviews)) :
            if labels[i] == 'POSITIVE':
                for word in reviews[i].split(" "):
                    positive_counts[word] += 1
                    total_counts[word] += 1
            else:
                for word in reviews[i].split(" "):
                    negative_counts[word] += 1
                    total_counts[word] += 1

    # >* With a cutoff we can restrict training to words whose positive-to-negative ratio is extreme enough; choose a reasonable cutoff.
    # >* Only add a word to the vocabulary if it occurs at least `min_count` times.
    # >* Only add a word to the vocabulary if the magnitude of its positive-to-negative ratio is at least `polarity_cutoff`.

        pos_neg_ratios = Counter()

        for word, counts in list(total_counts.most_common()):
            if (counts >= 50):
                pos_neg_ratio = positive_counts[word] / float(negative_counts[word]+1)
                pos_neg_ratios[word] = pos_neg_ratio

        for word, ratio in pos_neg_ratios.most_common():
            if ratio > 1:
                pos_neg_ratios[word] = np.log(ratio)
            else:
                pos_neg_ratios[word] = np.log(1 / (ratio + 0.01))
        #
        ## end New for Project 6
        ## ----------------------------------------

        review_vocab = set()
        for review in reviews:
            for word in review.split(' '):
                if(total_counts[word] > min_count):
                    if((pos_neg_ratios[word]>=polarity_cutoff) or (pos_neg_ratios[word]<= -polarity_cutoff)):
                        review_vocab.add(word)
                else:
                    review_vocab.add(word)
        self.review_vocab = list(review_vocab)

        label_vocab = set()
        for label in labels :
            label_vocab.add(label)
        self.label_vocab = list(label_vocab)

        self.review_vocab_size = len(self.review_vocab)
        self.label_vocab_size = len(self.label_vocab)

        self.word2index = {}
        for i, word in enumerate(self.review_vocab) :
            self.word2index[word] = i

        self.label2index = {}
        for i, label in enumerate(self.label_vocab) :
            self.label2index[label] = i


    def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        self.learning_rate = learning_rate

        self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
        self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,(self.hidden_nodes, self.output_nodes))

        self.layer_1 = np.zeros((1,hidden_nodes))


    def get_target_for_label(self,label):
        if label == 'POSITIVE':
            return 1
        else:
            return 0

    def sigmoid(self,x):
        return (1/(1 + np.exp(-x)))

    def sigmoid_output_2_derivative(self,output):
        return (1 - output)*(output)

    def train(self, training_reviews_raw, training_labels):

        training_reviews = list()
        for review_raw in training_reviews_raw :
            review = set()
            for word in review_raw.split(" "):
                if (word in self.word2index.keys()):
                    review.add(self.word2index[word])
            training_reviews.append(list(review))

        assert(len(training_reviews) == len(training_labels))
        correct_so_far = 0
        start = time.time()

        for i in range(len(training_reviews)):
            review = training_reviews[i]
            label = training_labels[i]
            self.layer_1 *= 0

            for j in review:
                self.layer_1 += (self.weights_0_1[j])

            layer_2 = self.sigmoid(np.dot(self.layer_1,self.weights_1_2))

            layer_2_error = layer_2 - self.get_target_for_label(label)
            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)

            layer_1_error = np.dot(layer_2_delta,self.weights_1_2.T)
            layer_1_delta = layer_1_error

            for j in review:
                self.weights_0_1[j] -= layer_1_delta[0] * self.learning_rate

            self.weights_1_2 -= np.dot(self.layer_1.T, layer_2_delta) * self.learning_rate

            if layer_2 >= 0.5 and label == "POSITIVE" :
                correct_so_far += 1
            elif layer_2 < 0.5 and label == "NEGATIVE" :
                correct_so_far += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
                             + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
            if(i % 2500 == 0):
                print("")

    def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
        correct = 0
        start = time.time()

        for i in range(len(testing_reviews)):
            pred = self.run(testing_reviews[i])
            if(pred == testing_labels[i]):
                correct += 1

            elapsed_time = float(time.time() - start)
            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0

            sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
                             + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
                             + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
                             + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")

    def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
        self.layer_1 *= 0
        indices = set()
        for word in review.lower().split(" "):
            if(word in self.word2index.keys()):
                indices.add(self.word2index[word])

        for i in indices:
            self.layer_1 += self.weights_0_1[i]

        layer_2 = self.sigmoid(np.dot(self.layer_1,self.weights_1_2))

        if layer_2[0] >= 0.5: return "POSITIVE"
        else: return "NEGATIVE"

Once the code is complete, let's train the network.

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1710. #Correct:1985 #Trained:2501 Training Accuracy:79.3%
Progress:20.8% Speed(reviews/sec):1716. #Correct:4048 #Trained:5001 Training Accuracy:80.9%
Progress:31.2% Speed(reviews/sec):1694. #Correct:6159 #Trained:7501 Training Accuracy:82.1%
Progress:41.6% Speed(reviews/sec):1685. #Correct:8316 #Trained:10001 Training Accuracy:83.1%
Progress:52.0% Speed(reviews/sec):1705. #Correct:10473 #Trained:12501 Training Accuracy:83.7%
Progress:62.5% Speed(reviews/sec):1709. #Correct:12616 #Trained:15001 Training Accuracy:84.1%
Progress:72.9% Speed(reviews/sec):1699. #Correct:14760 #Trained:17501 Training Accuracy:84.3%
Progress:83.3% Speed(reviews/sec):1696. #Correct:16936 #Trained:20001 Training Accuracy:84.6%
Progress:93.7% Speed(reviews/sec):1696. #Correct:19119 #Trained:22501 Training Accuracy:84.9%
Progress:99.9% Speed(reviews/sec):1695. #Correct:20439 #Trained:24000 Training Accuracy:85.1%

Then run the cell below to test it.

mlp.test(reviews[-1000:],labels[-1000:])
Progress:99.9% Speed(reviews/sec):3053. #Correct:855 #Tested:1000 Testing Accuracy:85.5%
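
To spot-check a single prediction, we can also call run() directly on a held-out review. This is a quick sketch, not part of the original post; it assumes the mlp object trained above:

# Hypothetical spot check: classify the last held-out review and compare to its label.
print(mlp.run(reviews[-1]), "|", labels[-1])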

Now let's retrain with a much larger polarity cutoff.
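
Before retraining, it helps to see what the larger cutoff actually does to the vocabulary. The sketch below is not from the original post; it just builds two networks with the constructor defined above and compares their review_vocab_size attributes, and the exact numbers depend on the data split:

# Sketch: compare vocabulary sizes at the two cutoffs used in this post.
low  = SentimentNetwork(reviews[:-1000], labels[:-1000], min_count=20, polarity_cutoff=0.05, learning_rate=0.01)
high = SentimentNetwork(reviews[:-1000], labels[:-1000], min_count=20, polarity_cutoff=0.8,  learning_rate=0.01)
print("vocab size @ cutoff 0.05:", low.review_vocab_size)
print("vocab size @ cutoff 0.8 :", high.review_vocab_size)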

mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):7051. #Correct:2114 #Trained:2501 Training Accuracy:84.5%
Progress:20.8% Speed(reviews/sec):6467. #Correct:4232 #Trained:5001 Training Accuracy:84.6%
Progress:31.2% Speed(reviews/sec):6674. #Correct:6360 #Trained:7501 Training Accuracy:84.7%
Progress:41.6% Speed(reviews/sec):6537. #Correct:8497 #Trained:10001 Training Accuracy:84.9%
Progress:52.0% Speed(reviews/sec):6504. #Correct:10635 #Trained:12501 Training Accuracy:85.0%
Progress:62.5% Speed(reviews/sec):6403. #Correct:12780 #Trained:15001 Training Accuracy:85.1%
Progress:72.9% Speed(reviews/sec):6449. #Correct:14895 #Trained:17501 Training Accuracy:85.1%
Progress:83.3% Speed(reviews/sec):6289. #Correct:17076 #Trained:20001 Training Accuracy:85.3%
Progress:93.7% Speed(reviews/sec):6176. #Correct:19256 #Trained:22501 Training Accuracy:85.5%
Progress:99.9% Speed(reviews/sec):6147. #Correct:20559 #Trained:24000 Training Accuracy:85.6%

Run the test. The higher cutoff shrank the vocabulary, so training above ran several times faster (compare the reviews/sec figures); now let's see how much accuracy it cost.

mlp.test(reviews[-1000:],labels[-1000:])
Progress:99.9% Speed(reviews/sec):6867. #Correct:838 #Tested:1000 Testing Accuracy:83.8%

Analysis: What’s Going on in the Weights?

mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1397. #Correct:1962 #Trained:2501 Training Accuracy:78.4%
Progress:20.8% Speed(reviews/sec):1401. #Correct:4002 #Trained:5001 Training Accuracy:80.0%
Progress:31.2% Speed(reviews/sec):1433. #Correct:6120 #Trained:7501 Training Accuracy:81.5%
Progress:41.6% Speed(reviews/sec):1442. #Correct:8271 #Trained:10001 Training Accuracy:82.7%
Progress:52.0% Speed(reviews/sec):1426. #Correct:10431 #Trained:12501 Training Accuracy:83.4%
Progress:62.5% Speed(reviews/sec):1429. #Correct:12565 #Trained:15001 Training Accuracy:83.7%
Progress:72.9% Speed(reviews/sec):1433. #Correct:14670 #Trained:17501 Training Accuracy:83.8%
Progress:83.3% Speed(reviews/sec):1418. #Correct:16833 #Trained:20001 Training Accuracy:84.1%
Progress:93.7% Speed(reviews/sec):1421. #Correct:19015 #Trained:22501 Training Accuracy:84.5%
Progress:99.9% Speed(reviews/sec):1425. #Correct:20335 #Trained:24000 Training Accuracy:84.7%
from IPython.display import Image  # skip if already imported earlier in the notebook
Image(filename='sentiment_network_sparse.png')


def get_most_similar_words(focus = "horrible"):
    # For every word in the vocabulary, take the dot product of its row of
    # weights_0_1 with the focus word's row; words the network has learned to
    # treat similarly get the highest scores.
    most_similar = Counter()

    for word in mlp_full.word2index.keys():
        most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])

    return most_similar.most_common()
get_most_similar_words("excellent")
[('excellent', 0.13672950757352476),
 ('perfect', 0.12548286087225943),
 ('amazing', 0.0918276339259997),
 ('today', 0.0902236626944142),
 ('wonderful', 0.08935597696221462),
 ('fun', 0.08750446667420686),
 ...]
get_most_similar_words("terrible")
[('worst', 0.1696610725904985),
 ('awful', 0.12026847019691247),
 ('waste', 0.11945367265311006),
 ('poor', 0.09275888757443551),
 ('terrible', 0.09142538719772796),
 ('dull', 0.08420927167822362),
 ('poorly', 0.08124154451604204),
 ...]
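
The raw dot product above tends to favor words whose weight vectors are simply large. A hedged variant, not in the original notebook, normalizes to cosine similarity instead; it again assumes the trained mlp_full object:

import numpy as np
from collections import Counter

def get_most_similar_words_cosine(focus="horrible"):
    # Hypothetical helper: rank words by the cosine similarity of their
    # weights_0_1 rows to the focus word's row, so vector length no longer dominates.
    focus_vec = mlp_full.weights_0_1[mlp_full.word2index[focus]]
    focus_norm = np.linalg.norm(focus_vec)
    most_similar = Counter()
    for word, idx in mlp_full.word2index.items():
        vec = mlp_full.weights_0_1[idx]
        norm = np.linalg.norm(vec)
        if norm > 0 and focus_norm > 0:
            most_similar[word] = np.dot(vec, focus_vec) / (norm * focus_norm)
    return most_similar.most_common()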
import matplotlib.colors as colors

# Take the 500 most positive and 500 most negative words (by pos/neg ratio)
# that made it into the vocabulary.
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)

for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)
pos = 0
neg = 0

# Collect each word's weight vector and color it green if its pos/neg ratio is
# positive (more common in positive reviews), black otherwise.
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
    if word in pos_neg_ratios.keys():
        vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
        if(pos_neg_ratios[word] > 0):
            pos+=1
            colors_list.append("#00ff00")
        else:
            neg+=1
            colors_list.append("#000000")
from sklearn.manifold import TSNE
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.io import output_notebook
output_notebook()  # render plots inline; skip if Bokeh was already set up earlier

# Project the selected weight vectors down to two dimensions with t-SNE.
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
           toolbar_location="above",
           title="vector T-SNE for most polarized words")

source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
                                    x2=words_top_ted_tsne[:,1],
                                    names=words_to_visualize,
                                    color=colors_list))

p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")

word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
                  text_font_size="8pt", text_color="#555555",
                  source=source, text_align='center')
p.add_layout(word_labels)

show(p)

# green indicates positive words, black indicates negative words
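
If the plot doesn't render (for example, outside a Jupyter notebook), Bokeh can write it to a standalone HTML file instead. A minimal sketch, with a hypothetical filename:

from bokeh.plotting import output_file, save

output_file("sentiment_tsne.html")  # hypothetical output path
save(p)  # writes the t-SNE scatter plot to disk instead of displaying it inline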
