
Thursday, December 8, 2016

Back Propagation

A useful link for understanding Back Propagation step by step

Link


Tuesday, November 29, 2016

Common Probability Distributions

An interesting introduction to the most commonly used Probability Distributions.

LINK

Monday, November 7, 2016

Conda Environments

Anaconda Environments

Anaconda is an open data science platform powered by Python. The user can create separate environments. The following table is a short summary of the information you can find here:
Operation                      | Windows                            | Linux, OS X
Create a new conda environment | conda create -n envName python=2.7 | Same as Windows
List the conda environments    | conda env list                     | Same as Windows
Change environments            | activate envName                   | source activate envName
Deactivate the environment     | deactivate                         | source deactivate

Thursday, September 29, 2016

Some interesting videos on Deep Learning [Work in Progress]

This post collects some interesting videos on Deep Learning. The list will be continuously updated [Last Update: 20 Aug. 2017].

Deep Learning

Saturday, July 23, 2016

MapReduce in Apache Spark

Based on the course CS120x: Distributed Machine Learning with Apache Spark.

Basically, we can summarize the map/reduce paradigm as follows:
  • Map: transforms a series of elements by applying a function individually to each element in the series. It then returns the series of transformed elements.
  • Filter: applies a function individually to each element in a series, but the function evaluates to True or False, and only the elements that evaluate to True are retained.
  • Reduce: operates on pairs of elements in a series. It applies a function that takes in two values and returns a single value. Using this function, reduce is able to iteratively “reduce” a series to a single value.
We define a list of 10 elements and transform it into a Resilient Distributed Dataset (RDD):
numberRDD = range(0,10)
numberRDD = sc.parallelize(numberRDD, 4)
numberRDD.collect()
> Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 
Map the numberRDD using a lambda function that multiplies each element by 5:
numberRDD.map(lambda x:x*5).collect()
> Out[2]: [0, 5, 10, 15, 20, 25, 30, 35, 40, 45] 
Filter the numberRDD in order to obtain only the numbers that are multiples of 2:
numberRDD.filter(lambda x:x%2==0).collect()
> Out[3]: [0, 2, 4, 6, 8]
Reduce the numberRDD by summing pairs of numbers (note that reduce returns a single value, not a list):
numberRDD.reduce(lambda x1,x2:x1+x2)
> Out[4]: 45
Putting it all together, we multiply each element by 5, keep the even results, and sum them:
numberRDD.map(lambda x:x*5).filter(lambda x:x%2==0).reduce(lambda x1,x2:x1+x2)
> Out[5]: 100
This post has been written using Markdown and Dillinger. Here is an interesting Markdown Cheatsheet.

Sunday, July 17, 2016

Convolutional Layer of CNN in one Picture

A complete course at Stanford is devoted to Convolutional Neural Networks.
The Course Notes (by Andrej Karpathy) are well written and worth a look.

Those course notes inspired me to create a picture summarising some of the concepts.



An interesting summary (adapted from here) is the following:

Input Layer

  • Size: $W_1 \times H_1 \times D_1$
  • Hyperparameters:
    • Number of filters $K$
    • Dimension of the filter $F \times F \times D_1$
    • Stride: $S$
    • Amount of Zero Padding: $P$
Output Layer
  • Size: $W_2 \times H_2 \times D_2$
  • $W_2 = \frac{W_1 - F + 2P}{S} + 1$
  • $H_2 = \frac{H_1 - F + 2P}{S} + 1$
  • $D_2 = K$
With parameter sharing, each filter introduces $F \times F \times D_1$ weights, for a total of $(F \times F \times D_1) \times K$ weights and $K$ biases.

In the output volume, the $d$-th depth slice (of size $W_2 \times H_2$) is the result of performing a valid convolution of the $d$-th filter over the input volume with a stride of $S$, and then offsetting by the $d$-th bias.
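
As a quick check of these formulas, here is a small MATLAB sketch; the layer sizes are example values of my own choosing (loosely based on the first convolutional layer of AlexNet), not taken from the notes:

 W1 = 227; H1 = 227; D1 = 3;    % input volume (example values)
 K = 96; F = 11; S = 4; P = 0;  % filters, filter size, stride, zero padding

 W2 = (W1 - F + 2*P)/S + 1;     % output width   -> 55
 H2 = (H1 - F + 2*P)/S + 1;     % output height  -> 55
 D2 = K;                        % output depth   -> 96
 nWeights = (F*F*D1) * K;       % shared weights -> 34848
 nBiases = K;                   % biases         -> 96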

Another interesting post on Convolutional Neural Networks is here

Saturday, July 16, 2016

TensorFlow on Databricks

TensorFlow is an open source software library for Machine Learning and AI tasks.
In recent months it has become a widely used tool in the AI community (and beyond).

Databricks is an interesting cluster manager based on Apache Spark. It offers a free Community Edition (pricing).

Since some ML tasks can be very computationally intensive (e.g. training deep networks), it could be a good idea to have a cluster on Databricks and use it.

You can run this Notebook on your Databricks cluster (or import it).
Even though the Notebook says that "It is not required for the Databricks Community Edition", I found that it is necessary for the Community Edition as well.

Friday, October 19, 2012

Variable Function Name in MATLAB

Suppose we want to call four different ODE solvers. We can define a cell array containing the four function handles as below:

functionName = {@ode45 @ode23 @ode113 @ode15s};

and we can call the desired function in a loop:

 xspan = [0 10]; % integration interval (example values)
 ic = 1;         % initial condition (example value)
 for iFunction = 1 : numel(functionName)
   f = functionName{iFunction};
   tol = 1e-4;
   options = odeset('RelTol', tol, 'AbsTol', tol);
   [t, y] = f(@function_rhs, xspan, ic, options); % solve the ODE

   figure
   plot(t, y)
   title(func2str(f))
 end

Here function_rhs is the right-hand side of the ODE we want to solve.
In this way we'll obtain four figures, each with the ODE solver's name in the title.
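
For completeness, here is a minimal sketch of such a right-hand-side function; the ODE itself, dy/dt = -2*y, is an example of my own, not from the original post:

 function dydt = function_rhs(t, y)
   % Example right-hand side (illustrative assumption):
   % the scalar linear ODE dy/dt = -2*y.
   dydt = -2 * y;
 end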

Tuesday, May 3, 2011

2. Discrete Time Fourier Transform



Recall that if $x(n)$ is absolutely summable, that is, $\sum_{n=-\infty}^{+\infty} |x(n)| < \infty$, then its Discrete-Time Fourier Transform is given by:
\[
X(e^{j \omega})=\sum_{n=- \infty}^{+ \infty} x(n) \cdot e^{-j \cdot \omega \cdot n} \tag{3.1}
\]
This function transforms a discrete signal $x(n)$ into a complex-valued continuous function $X(e^{j \omega})$.

$\textbf{Matrix Implementation}$
If $x(n)$ is a finite duration sequence, we can use MATLAB to compute the Discrete Time Fourier Transform (DTFT) $X(e^{j\omega})$ numerically at any frequency $\omega$.

Let us assume that the sequence $x(n)$ has $N$ samples at $n_1 \leq n \leq n_N$. We can define:
\[\omega_k =\frac{\pi}{M}k, \qquad k=0,1,\ldots,M\]
which are $(M+1)$ frequencies in $[0, \pi]$. Then (3.1) can be written as:

\[
X(e^{j\omega_k})=\sum_{r=1}^{N} x(n_r) \cdot e^{-j \cdot \omega_k \cdot n_r}  = \sum_{r=1}^{N} x(n_r) \cdot e^{-j \cdot \frac{\pi}{M} k \cdot n_r}, \qquad k=0,...,M \tag{3.2}
\]

Let's define the vector $\mathbf{X} = \begin{bmatrix} X(e^{j\omega_0}) \\ \vdots \\ X(e^{j\omega_M}) \end{bmatrix}$, where the $k$-th element $X(e^{j\omega_k})$ has the following expression:

\[X(e^{j\omega_k})= [ x(n_1) \cdot e^{-j \cdot \frac{\pi}{M}\cdot k \cdot n_1} + \ldots + x(n_N) \cdot  e^{-j \cdot \frac{\pi}{M}\cdot k \cdot n_N} ] \]

If we define the matrix $\mathbf{W}=\begin{bmatrix} W_0\\ \vdots \\W_M \end{bmatrix}$, where the $k$-th row $W_k$ has the following expression:

\[ W_k = \begin{bmatrix} e^{-j \cdot \frac{\pi}{M}\cdot k \cdot n_1}, \ldots, e^{-j \cdot \frac{\pi}{M}\cdot k \cdot n_N} \end{bmatrix} \]

and $\mathbf{x} = \begin{bmatrix} x(n_1) \\ \vdots  \\ x(n_N) \end{bmatrix}$, we can write (3.2) as follows:

\[\mathbf{X} = \mathbf{W} \cdot \mathbf{x}\]

Let's consider the row vectors ${\bf k}$ and ${\bf n}$ that contain, respectively, the frequency indices and the time indices.
\[ \begin{matrix}
\mathbf{k}= \begin{bmatrix} 0, 1, \ldots, M\end{bmatrix}\\
\mathbf{n}= \begin{bmatrix} n_1, n_2, \ldots, n_N\end{bmatrix}
\end{matrix} \]
the product $\mathbf{k^T \cdot n}$ is equal to:
\[ \begin{bmatrix} 0 \\1 \\ \vdots \\M\end{bmatrix} \cdot \begin{bmatrix} n_1 & n_2 & \ldots & n_N\end{bmatrix} = \begin{bmatrix} 0 & 0 & \ldots & 0 \\ n_1 & n_2 & \ldots & n_N \\ \vdots & \vdots & \ddots & \vdots \\ M \cdot n_1 & M \cdot n_2 & \ldots & M \cdot n_N \end{bmatrix} \]
so we can rewrite $\mathbf{W}$ as:

\[ \mathbf{W} = \begin{bmatrix} W_0 \\ W_1 \\ \vdots \\ W_M \end{bmatrix} = \begin{bmatrix} e^{-j \cdot \frac{\pi}{M}\cdot 0 \cdot n_1} & \ldots & e^{-j \cdot \frac{\pi}{M}\cdot 0 \cdot n_N} \\ \vdots & \ddots & \vdots \\ e^{-j \cdot \frac{\pi}{M}\cdot M \cdot n_1} & \ldots & e^{-j \cdot \frac{\pi}{M}\cdot M \cdot n_N} \end{bmatrix} = \exp\left[-j \cdot \frac{\pi}{M} \cdot \mathbf{k^T \cdot n}\right] \]

Finally, the complete matrix form of the DTFT is:
\[\mathbf{X} = \exp\left[-j \cdot \frac{\pi}{M} \cdot \mathbf{k^T \cdot n}\right] \cdot \mathbf{x} \tag{3.3} \]
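
A minimal MATLAB sketch of (3.3) follows; the test signal $x(n) = 0.9^n$ for $0 \leq n \leq 10$ and the choice $M = 500$ are example values of my own, not from the original derivation:

 n = 0:10;                         % time indices n_1 ... n_N (example values)
 x = (0.9).^n;                     % example signal x(n) = 0.9^n
 M = 500;                          % number of frequency steps in [0, pi]
 k = 0:M;                          % frequency indices
 W = exp(-1j * (pi/M) * (k' * n)); % (M+1)-by-N matrix exp[-j*(pi/M)*k^T*n]
 X = W * x.';                      % DTFT samples X(e^{j*omega_k}), as in (3.3)
 omega = (pi/M) * k;               % the frequencies omega_k in [0, pi]
 plot(omega, abs(X)), xlabel('\omega'), ylabel('|X(e^{j\omega})|')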