Lohrasp is an ancient Persian name meaning "the owner of a speedy horse."

Here I will share some of my experiences with you, and I hope you find them useful.


My friends and I started easy-tensorflow.com to provide easy-to-learn steps for deep learning in TensorFlow. Our first attempts were with Jekyll and Pelican, but I noticed that building a website with those tools is time consuming. So I decided to go with a CMS I already knew: Joomla.

However, the problem was that we needed the content to come from our GitHub repository. It was a headache and wasted a lot of precious time that I could have spent on my research or on improving my machine learning skills.

I finally managed it by going through the following steps:

1- Run the following command from the command prompt (if you are using Anaconda, the normal CMD may not work; use the Anaconda Prompt instead).

For example, to convert main.ipynb to main.html:

jupyter nbconvert --to html main.ipynb

or, preferably, without the embedded formatting, which means you can skip step 2:

jupyter nbconvert --to html --template basic main.ipynb
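If you have many notebooks, the command above can be scripted. This is a minimal sketch, assuming `jupyter` is on your PATH; it builds and runs the same `--template basic` command for every notebook in the current folder:

```python
# Batch version of the nbconvert command above: convert every notebook
# in the current folder to bare HTML (no embedded CSS).
import pathlib
import subprocess

def nbconvert_cmd(notebook):
    """Build the `jupyter nbconvert --to html --template basic` command for one notebook."""
    return ["jupyter", "nbconvert", "--to", "html", "--template", "basic", str(notebook)]

for nb in pathlib.Path(".").glob("*.ipynb"):
    subprocess.run(nbconvert_cmd(nb), check=True)
```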

2- Open the HTML file in a text editor (I use Notepad++ because of its many tools) and delete the embedded CSS: everything from the opening line

<style type="text/css">

down to its matching </style> line. In my file it ran from line 7 to line 11743. Yes, it's a lot!

If there is any MathJax configuration, you can delete that too.
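Deleting thousands of lines by hand gets old fast, so step 2 can also be automated. A minimal sketch (the sample string below is a made-up example; in practice you would read in your converted HTML file and write the cleaned result back out):

```python
# Automate step 2: strip the embedded <style> block (and any MathJax
# <script> blocks) from the HTML produced by nbconvert.
import re

def strip_css(html):
    """Remove <style>...</style> and MathJax <script>...</script> sections."""
    html = re.sub(r"<style[^>]*>.*?</style>", "", html, flags=re.DOTALL)
    html = re.sub(r"<script[^>]*mathjax[^>]*>.*?</script>", "", html,
                  flags=re.DOTALL | re.IGNORECASE)
    return html

# e.g. cleaned = strip_css(open("main.html", encoding="utf-8").read())
sample = '<style type="text/css">body { margin: 0 }</style><h1>Notebook</h1>'
print(strip_css(sample))
```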


3- Install a custom CSS module through the Joomla administrator site. You can find it here.

4- Download the jupyter.css file.

5- Go to the module:

  • Activate it.
  • Copy the contents of jupyter.css into the Custom CSS main tab.
  • Assign a module position. I created a new position in the template for it called none_empty.
  • Assign it to all the menus where you want the custom CSS applied to the Jupyter Notebooks.

6- Go to the article, switch to the HTML view using Toggle Editor, and paste in the content of the converted HTML from steps 1 and 2.

7- Add the following line to the top of your article:

{loadposition x}


And you are set.


This post is complementary to the work of Gang Chen: A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation.


Forward pass

In supervised learning, for a single input X we have the following two-layer neural net, where * marks the correct output.


\[H^t = \frac{1}{1+\exp\left(-\left( W^1 X + W^h H^{t-1} + b\right)\right)}\]
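The hidden-state update is a logistic sigmoid applied to an affine combination of the current input and the previous hidden state. A minimal numpy sketch, with illustrative layer sizes and randomly drawn weights (these shapes are assumptions, not the article's exact network):

```python
# Forward pass for one step of the recurrent hidden state:
# H = sigmoid(W1 @ X + Wh @ H_prev + b)
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3                       # illustrative sizes
W1 = rng.normal(size=(n_hidden, n_in))      # input-to-hidden weights
Wh = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden weights
b = np.zeros(n_hidden)                      # hidden bias

X = rng.normal(size=n_in)       # a single input
H_prev = np.zeros(n_hidden)     # previous hidden state

H = sigmoid(W1 @ X + Wh @ H_prev + b)  # each entry squashed into (0, 1)
```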




The softmax function \(p^* = \frac{e^{f^*}}{ \sum_j e^{f_j} }\) is part of the cross entropy, which is used as the loss function. For the i-th training example the loss is:

\[L_i = -\log\left(\frac{e^{f_{y_i}}}{ \sum_j e^{f_j} }\right)\]
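As a concrete check, the softmax and the per-example loss \(L_i\) can be computed directly. The class scores and the correct-class index below are made-up example values:

```python
# Softmax probabilities and the cross-entropy loss L_i for one example.
import numpy as np

def softmax(f):
    e = np.exp(f - np.max(f))   # shift scores for numerical stability
    return e / e.sum()

f = np.array([2.0, 1.0, 0.1])   # class scores (illustrative)
y_i = 0                         # index of the correct class
p = softmax(f)
L_i = -np.log(p[y_i])           # cross-entropy loss for this example
```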

The loss for a training batch is:

\[L = \underbrace{\frac{1}{N} \sum_i L_i}_\text{data loss} + \underbrace{\frac{1}{2} \lambda \sum_k\sum_l W_{k,l}^2}_\text{regularization loss}\]
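The batch loss is just the mean of the per-example losses plus the L2 penalty on the weights. A small numpy sketch, with illustrative scores, labels, and a toy weight matrix:

```python
# Mean cross-entropy over a batch plus (lambda/2) * sum of squared weights.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def batch_loss(scores, labels, W, lam):
    p = softmax(scores)
    data_loss = -np.log(p[np.arange(len(labels)), labels]).mean()
    reg_loss = 0.5 * lam * np.sum(W ** 2)
    return data_loss + reg_loss

scores = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])    # N = 2 examples, 3 classes
labels = np.array([0, 1])               # correct class per example
W = np.array([[0.1, -0.2], [0.3, 0.4]]) # toy weight matrix
L = batch_loss(scores, labels, W, lam=0.01)
```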


For the correct (starred) class the loss is:

\[Loss = -\log{(p^*)} = -\log\left(\frac{e^{\hat y^*}}{ e^{\hat y^1}+e^{\hat y^*}+e^{\hat y^3} }\right)\]


Backpropagation

For the last weights and biases (writing W^2 for the output-layer weights) the chain rule gives:

\[\frac{\partial{Loss}}{ \partial{W^2_{i,j}}} =\frac{\partial{Loss}}{ \partial{\hat y^i}}\times \frac{\partial{\hat y^i}}{ \partial{W^2_{i,j}}}\]

and for the first weights and biases we have:

\[\frac{\partial{Loss}}{ \partial{W^1_{2,1}}} =\frac{\partial{Loss}}{ \partial{H^1}}\times \frac{\partial{H^1}}{ \partial{W^1_{2,1}}}\]
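A good way to convince yourself the chain rule is applied correctly is a numerical gradient check: the analytic derivative should match a finite-difference estimate. A toy sketch with a single sigmoid unit and a squared loss (illustrative stand-ins, not the article's exact network):

```python
# Gradient check: analytic dLoss/dw via the chain rule vs. a
# central finite-difference estimate, for one sigmoid unit.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, x, target = 0.5, 2.0, 1.0    # toy weight, input, and target

def loss(w):
    h = sigmoid(w * x)
    return 0.5 * (h - target) ** 2

# Chain rule: dLoss/dw = dLoss/dh * dh/dw, with dh/dw = h*(1-h)*x
h = sigmoid(w * x)
analytic = (h - target) * h * (1 - h) * x

eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
```

If the two values agree to several decimal places, the hand-derived gradient is almost certainly right.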


[1] http://cs231n.github.io/

[2] http://karpathy.github.io/2015/05/21/rnn-effectiveness/