
RESEARCH BLOG


Meditating with Microprocessors Series: Part-1: AI based hardware (Microprocessor) tuning
The catch is that the deeper they go into sleep states, the longer they take to come back to full awareness and execute instructions

Mohit Kumar
Sep 5, 2020 · 5 min read


RNN Series:LSTM internals:Part-3: The Backward Propagation
Introduction In this multi-part series, we look inside the LSTM forward pass. If you haven’t already read it, I suggest running through the...

Mohit Kumar
Aug 28, 2019 · 4 min read


RNN Series: LSTM internals: Part-2: The Forward pass
Introduction In this multi-part series, we look inside the LSTM forward pass. If you haven’t already read it, I suggest running through the...

Mohit Kumar
Aug 26, 2019 · 3 min read


RNN Series: LSTM internals: Part-1: The Big Picture
Introduction LSTMs today are cool. Everything, well almost everything, in the modern Deep Learning landscape is made of LSTMs. I know...

Mohit Kumar
Aug 26, 2019 · 5 min read


Softmax and Cross-entropy
Introduction This is a loss function, applied after yhat (the predicted value) is computed, that measures the difference between the labels and the predicted...

Mohit Kumar
Aug 24, 2019 · 3 min read


Softmax and its Gradient
Softmax: why it is chosen, its gradient, its relationship with cross-entropy loss, and the combined gradient. In this article, I “dumb” it down

Mohit Kumar
Aug 24, 2019 · 4 min read


Introduction: Why another technical blog?
Artificial Intelligence, especially deep learning, employs extremely complex models. Complexity stems from two major sources. Firstly,...

Mohit Kumar
Jul 31, 2019 · 5 min read