TensorFlow Basics – Part 1

Basic Syntax, Graphs, Variables & Placeholders

Basic Syntax
In [1]: import tensorflow as tf
In [2]: print(tf.__version__)
1.3.0

Tensors
Tensors are n-dimensional arrays.
In [3]: hello = tf.constant('Hello ')
In [4]: world = tf.constant('World')
In [5]: type(hello)
Out[5]: tensorflow.python.framework.ops.Tensor

TensorFlow Session
In [7]: with tf.Session() as sess:
            result = sess.run(hello + world)
In [8]: result
Out[8]: b'Hello World'

Computations
In [9]: a …
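The excerpt shows TensorFlow 1.x's deferred execution model: `tf.constant` builds graph nodes, and nothing is computed until `sess.run` evaluates them. A minimal pure-Python analogy of that model (the `Constant`, `Add`, and `Session` classes here are hypothetical illustrations, not TensorFlow's API):

```python
class Constant:
    """A graph node holding a fixed value; nothing runs until a session evaluates it."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Building `hello + world` creates a new graph node, it does not compute anything
        return Add(self, other)

class Add:
    """A graph node representing a deferred addition of two other nodes."""
    def __init__(self, left, right):
        self.left, self.right = left, right

class Session:
    """Walks the graph and actually computes a node's value, like tf.Session().run()."""
    def run(self, node):
        if isinstance(node, Constant):
            return node.value
        return self.run(node.left) + self.run(node.right)

hello = Constant(b"Hello ")
world = Constant(b"World")
sess = Session()
result = sess.run(hello + world)
print(result)  # b'Hello World'
```

This mirrors the notebook: the `+` only builds the graph, and `run` is what produces `b'Hello World'`.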

Introduction to Neural Networks

Manual creation of Neural Networks

In this notebook we will manually build a neural network that mimics the TensorFlow API.

1. Super() and Object-Oriented Programming
In [4]: class SimpleClass():
            def __init__(self):
                print('hello')
            def yell(self):
                print('YELLING')
In [5]: x = SimpleClass()
hello
yell, defined inside the class, is an example of a method!
In [7]: x.yell() …
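Since the section heading mentions `super()`, here is a short sketch of how it fits with the `SimpleClass` from the excerpt; the `ExtendedClass` subclass is my own illustrative addition, not from the notebook:

```python
class SimpleClass:
    def __init__(self):
        print("hello")

    def yell(self):
        print("YELLING")

class ExtendedClass(SimpleClass):
    def __init__(self):
        # super() runs the parent's __init__ first, so its setup still happens
        super().__init__()
        print("extended")

x = ExtendedClass()  # prints "hello" then "extended"
x.yell()             # the method is inherited from SimpleClass
```

This pattern of subclassing and calling `super().__init__()` is exactly how layer classes build on a base class when mimicking an API like TensorFlow's.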

Regular Expressions

Symbols

Identifiers
\d – any digit
\D – anything but a digit
\s – a whitespace character
\S – anything but a whitespace character
\w – any word character
\W – anything but a word character
. – any character, except for a newline
\b – a word boundary (the edge of a word)
\. – a literal period

Modifiers …
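The identifiers above can be tried out directly with Python's `re` module; the sample text and patterns here are my own illustrations:

```python
import re

text = "Call me at 555-1234. Email: ana@example.com."

# \d matches digits, and + repeats the match one or more times
digits = re.findall(r"\d+", text)
print(digits)  # ['555', '1234']

# \w matches word characters (letters, digits, underscore)
words = re.findall(r"\w+", text)

# \b is a word boundary and \. is an escaped, literal period
ends_in_period = re.findall(r"\b\w+\.", text)
```

Note the `r"..."` raw-string prefix: without it, Python would interpret backslashes like `\b` itself before the regex engine ever sees them.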

Natural Language Processing with NLTK – Part 2

Text Classification
In [26]: import nltk
         import random
         from nltk.corpus import movie_reviews
In [27]: documents = [(list(movie_reviews.words(fileid)), category)
                      for category in movie_reviews.categories()
                      for fileid in movie_reviews.fileids(category)]
Shuffle the documents, since the corpus is ordered by category.
In [28]: random.shuffle(documents)
Lowercase all words and convert the list to a frequency distribution.
In [29]: all_words = []
In [30]: for w …
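Because the `movie_reviews` corpus requires an nltk download, here is a self-contained sketch of the same steps on toy data, with `collections.Counter` standing in for `nltk.FreqDist` (the toy corpus and names are my own, not from the notebook):

```python
import random
from collections import Counter

# Toy stand-in for movie_reviews: category -> list of word lists ("files")
corpus = {
    "pos": [["Great", "film"], ["Loved", "it"]],
    "neg": [["Terrible", "film"], ["Boring"]],
}

# Same (word list, category) pair structure as the excerpt's comprehension
documents = [(words, category)
             for category in corpus
             for words in corpus[category]]

# The pairs come out ordered by category, so shuffle before training
random.shuffle(documents)

# Lowercase every word and count frequencies (Counter ~ nltk.FreqDist)
all_words = Counter(w.lower() for words, _ in documents for w in words)
print(all_words.most_common(1))  # [('film', 2)]
```

The frequency distribution is what the classifier's feature set is later built from: the most common words become the features checked in each document.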

Natural Language Processing with NLTK – Part 1

Preparation
In [1]: import nltk
Download all the packages:
In [2]: #nltk.download()

Tokenisation
Two types:
Word tokenisers – separate by words
Sentence tokenisers – separate by sentences

Terminology
Corpora – a body of text (e.g. medical journals, presidential speeches)
Lexicon – words and their meanings (e.g. an investor-speak dictionary vs a regular English-speak dictionary, i.e. …
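In nltk the two tokeniser types are `nltk.word_tokenize` and `nltk.sent_tokenize`, which need the downloaded `punkt` data. As a dependency-free illustration of the same distinction, here is a crude regex stand-in (the patterns are simplifications, not nltk's actual algorithm):

```python
import re

text = "NLTK is fun. It tokenises text easily."

# Word tokeniser stand-in: words plus sentence-ending punctuation as tokens
words = re.findall(r"\w+|[.!?]", text)
print(words[:4])  # ['NLTK', 'is', 'fun', '.']

# Sentence tokeniser stand-in: split after ., ! or ? followed by whitespace
sentences = re.split(r"(?<=[.!?])\s+", text)
print(sentences)  # ['NLTK is fun.', 'It tokenises text easily.']
```

Real tokenisers handle cases these regexes get wrong (abbreviations like "e.g.", contractions like "don't"), which is why nltk's trained `punkt` models exist.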

Basic web scraping with BeautifulSoup4

Introduction

BeautifulSoup is a Python library for pulling data out of HTML and XML files. It provides idiomatic ways of navigating, searching, and modifying the parse tree. It is useful for scraping websites and extracting information. For example, you can use BeautifulSoup to extract reviews from Amazon, to gauge …
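In BeautifulSoup the searching step is typically `soup.find_all('p', class_='review')`. To keep this sketch dependency-free, here is the same idea using the standard library's `html.parser` instead of bs4 (the sample HTML and the `ReviewExtractor` class are my own illustrations):

```python
from html.parser import HTMLParser

HTML = ("<html><body>"
        "<p class='review'>Great product!</p>"
        "<p class='review'>Too slow.</p>"
        "</body></html>")

class ReviewExtractor(HTMLParser):
    """Stdlib stand-in for BeautifulSoup's find_all('p', class_='review')."""
    def __init__(self):
        super().__init__()
        self.in_review = False
        self.reviews = []

    def handle_starttag(self, tag, attrs):
        # Only flag <p> tags whose class attribute is "review"
        if tag == "p" and ("class", "review") in attrs:
            self.in_review = True

    def handle_data(self, data):
        # Collect the text content of a flagged tag, then reset the flag
        if self.in_review:
            self.reviews.append(data)
            self.in_review = False

parser = ReviewExtractor()
parser.feed(HTML)
print(parser.reviews)  # ['Great product!', 'Too slow.']
```

BeautifulSoup does this in two lines, and also tolerates the malformed HTML that real pages are full of, which is the main reason to prefer it over hand-rolled parsing.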