Deep learning

My 4 steps to learn deep learning for genomics

Step 1, get a high-level understanding: watch StatQuest by Josh Starmer and the 3Blue1Brown deep learning playlist. Step 2, code it out! If you are into Python, watch “The spelled-out intro to neural networks and backpropagation: building micrograd”. I still code in R most of the time, so I walk through the R code in the Deep Learning with R book.
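For step 2, here is a minimal sketch (my own toy example, not taken from the video or the book) of what “coding it out” can look like in base R: a single neuron fit by hand-written gradient descent, the same idea micrograd spells out.

```r
# Toy example: recover y = 3*x + 2 with one "neuron" y_hat = w*x + b,
# trained by gradient descent on mean squared error.
set.seed(1)
x <- runif(100)            # inputs
y <- 3 * x + 2             # targets (true w = 3, b = 2)

w <- 0; b <- 0; lr <- 0.1
for (step in 1:2000) {
  y_hat  <- w * x + b                   # forward pass
  grad_w <- mean(2 * (y_hat - y) * x)   # d(MSE)/dw
  grad_b <- mean(2 * (y_hat - y))       # d(MSE)/db
  w <- w - lr * grad_w                  # gradient descent update
  b <- b - lr * grad_b
}
c(w = w, b = b)   # should end up close to 3 and 2
```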

How to code a variational autoencoder (VAE) in R using the MNIST dataset

Imagine you have a bunch of pictures of cats, and you want to find a way to generate new cat pictures that look similar to the ones you have. A variational autoencoder (VAE) is like a magical tool for creating these new cat pictures. Here’s how it works. Encoder: the VAE first takes your cat pictures and passes them through an encoder. This encoder is like a detective that tries to capture the important features of the cats, such as their fur color, size, and shape.
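As a rough illustration, here is a minimal sketch of the encoder half in R with keras, assuming flattened 28x28 MNIST images and a 2-dimensional latent space; the full post also builds the sampling step and the decoder.

```r
library(keras)

# Encoder sketch: compress a flattened 28x28 MNIST image (784 values)
# into the mean and log-variance of a 2-dimensional latent space.
latent_dim <- 2

encoder_input <- layer_input(shape = 784)
h <- encoder_input %>%
  layer_dense(units = 256, activation = "relu")

z_mean    <- h %>% layer_dense(units = latent_dim)  # where the image sits in latent space
z_log_var <- h %>% layer_dense(units = latent_dim)  # how uncertain the encoder is

encoder <- keras_model(encoder_input, list(z_mean, z_log_var))
summary(encoder)
# The full VAE adds a sampling layer that draws z ~ N(z_mean, exp(z_log_var))
# and a decoder that reconstructs the image from z.
```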

Predict TCR cancer specificity using 1d convolutional and LSTM neural networks

The T-cell receptor (TCR) is a special molecule found on the surface of a type of immune cell called a T-cell. Think of T-cells like soldiers in your body’s defense system that help identify and attack foreign invaders like viruses and bacteria. The TCR is like a sensor or antenna that allows T-cells to recognize specific targets, kind of like how a key fits into a lock. When the TCR encounters a target it recognizes, it sends signals to the T-cell telling it to attack and destroy the invader.
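A rough sketch of the architecture named in the title, in R with keras; the sequence length, amino-acid alphabet size, and layer sizes below are my illustrative assumptions, not the post’s exact settings.

```r
library(keras)

# Sketch: classify TCR CDR3 sequences, one-hot encoded as 30 positions x
# 20 amino acids (assumed dimensions), as cancer vs. healthy.
model <- keras_model_sequential() %>%
  layer_conv_1d(filters = 32, kernel_size = 3, activation = "relu",
                input_shape = c(30, 20)) %>%   # scan for short sequence motifs
  layer_max_pooling_1d(pool_size = 2) %>%
  layer_lstm(units = 32) %>%                   # summarize motifs along the sequence
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss      = "binary_crossentropy",
  metrics   = "accuracy"
)
# model %>% fit(x_train, y_train, epochs = 10, batch_size = 32, validation_split = 0.2)
```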

Generative AI: Text generation using Long short-term memory (LSTM) model

In the world of deep learning, generating sequence data is a fundamental task. Typically, this involves training a network, often an RNN (Recurrent Neural Network) or a convnet (Convolutional Neural Network), to predict the next token or a sequence of tokens in a given sequence, using the preceding tokens as input. For example, when provided with the input “the cat is on the ma,” the network’s objective is to predict the next character, such as ‘t’.
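A minimal sketch of such a character-level model in R with keras, assuming 60-character input windows one-hot encoded over a vocabulary of 58 characters (both numbers are illustrative):

```r
library(keras)

# Sketch: given the previous 60 characters (one-hot encoded), predict the
# probability of every possible next character.
maxlen  <- 60
n_chars <- 58

model <- keras_model_sequential() %>%
  layer_lstm(units = 128, input_shape = c(maxlen, n_chars)) %>%
  layer_dense(units = n_chars, activation = "softmax")  # one probability per character

model %>% compile(
  optimizer = "rmsprop",
  loss      = "categorical_crossentropy"
)
# Training data: x has shape (samples, maxlen, n_chars);
# y is the one-hot encoding of the character that follows each window.
```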

How to use 1d convolutional neural network (conv1d) to predict DNA sequence binding to protein

In the mysterious world of DNA, where the secrets of life are encoded, scientists are harnessing the power of cutting-edge technology to decipher the language of genes. One of the remarkable tools they’re using is the 1D Convolutional Neural Network, or 1D CNN, which might sound like jargon from a sci-fi movie, but it’s actually a game-changer in DNA sequence analysis. Imagine DNA as a long, intricate string of letters, like a never-ending alphabet book.
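Here is a minimal sketch of that idea in R with keras, assuming 100-bp sequences one-hot encoded over A/C/G/T and a binary bound/unbound label (these choices are illustrative, not the post’s exact setup):

```r
library(keras)

# Sketch: predict whether a 100-bp DNA sequence (one-hot encoded over
# A/C/G/T, so shape 100 x 4) is bound by a protein.
model <- keras_model_sequential() %>%
  layer_conv_1d(filters = 64, kernel_size = 12, activation = "relu",
                input_shape = c(100, 4)) %>%  # filters act like motif scanners
  layer_global_max_pooling_1d() %>%           # keep the strongest motif match per filter
  layer_dense(units = 16, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(optimizer = "adam", loss = "binary_crossentropy",
                  metrics = "accuracy")
```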

Long Short-term memory (LSTM) Recurrent Neural Network (RNN) to classify movie reviews

A major characteristic of all neural networks I have used so far, such as densely connected networks and convnets (CNN) (see my previous post), is that they have no memory. Each input shown to them is processed independently, with no state kept in between inputs. In other words, they do not take into account the context of a word (the words around it). Imagine you’re reading a book, and you want to understand the story by keeping track of what’s happening in the plot.
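A minimal sketch of an LSTM classifier for the IMDB reviews in R with keras; the vocabulary size, sequence length, and layer sizes are illustrative choices.

```r
library(keras)

# Sketch: an LSTM reads each review word by word while carrying a hidden
# state, so every word is interpreted in the context of what came before.
max_features <- 10000   # vocabulary size
maxlen       <- 500     # pad/truncate reviews to 500 words

imdb    <- dataset_imdb(num_words = max_features)
x_train <- pad_sequences(imdb$train$x, maxlen = maxlen)
y_train <- imdb$train$y

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = max_features, output_dim = 32) %>%
  layer_lstm(units = 32) %>%                       # the "memory" part
  layer_dense(units = 1, activation = "sigmoid")   # positive vs. negative

model %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy",
                  metrics = "accuracy")
# model %>% fit(x_train, y_train, epochs = 5, batch_size = 128, validation_split = 0.2)
```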

Understand word embedding and use deep learning to classify movie reviews

Picture this: a computer that can actually grasp the emotions hidden in movie reviews – sensing whether they’re shouting with joy or grumbling in disappointment. This mind-bending capability comes from two incredible technologies: deep learning and word embedding. But don’t worry if these sound like jargon; I am here to unravel the mystery. Think of deep learning as a supercharged brain for computers. Just like we learn from experience, computers learn from data.
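To make the word-embedding idea concrete, here is a minimal sketch in R with keras: each word index becomes a learned 8-dimensional vector that feeds a simple review classifier (all dimensions are illustrative assumptions).

```r
library(keras)

# Sketch: an embedding layer turns each word index into a dense 8-dimensional
# vector that the network learns during training; a small classifier then
# predicts positive vs. negative sentiment from those vectors.
vocab_size <- 10000
maxlen     <- 200

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = vocab_size, output_dim = 8,
                  input_length = maxlen) %>%   # one (maxlen x 8) matrix per review
  layer_flatten() %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy",
                  metrics = "accuracy")
summary(model)
```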

How to classify MNIST images with convolutional neural network

An artificial intelligence system called a convolutional neural network (CNN) has gained a lot of popularity recently. CNNs are especially well suited for jobs like image recognition, where we want to teach a computer to recognize things in a picture. They operate by dissecting an image into increasingly minute components, or “features.” The network then examines each feature and searches for patterns shared by various objects. For instance, a CNN might come to understand that some pixel patterns are frequently linked to faces, while others are linked to vehicles or trees.
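A minimal sketch of such a CNN for the 28x28 grayscale MNIST digits in R with keras; the exact filter counts and layer layout are illustrative, not necessarily what the post uses.

```r
library(keras)

# Sketch: the conv/pool layers learn local pixel patterns ("features");
# the final dense layer combines them into a 10-way digit classification.
mnist   <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, c(60000, 28, 28, 1)) / 255
y_train <- to_categorical(mnist$train$y, 10)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(optimizer = "rmsprop", loss = "categorical_crossentropy",
                  metrics = "accuracy")
# model %>% fit(x_train, y_train, epochs = 5, batch_size = 128, validation_split = 0.2)
```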

Deep learning to predict cancer from healthy controls using TCRseq data

The T-cell receptor (TCR) is a special molecule found on the surface of a type of immune cell called a T-cell. Think of T-cells like soldiers in your body’s defense system that help identify and attack foreign invaders like viruses and bacteria. The TCR is like a sensor or antenna that allows T-cells to recognize specific targets, kind of like how a key fits into a lock.

Basic tensor/array manipulations in R

In my last post, I showed you how to build a neural network in Keras with less than 20 lines of code. One of the key roadblocks for beginners is transforming the input into the right shape of tensor (the deep learning terminology) or array (the R built-in type). In this post, I am going to show you some basic manipulations of arrays.
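A small taste of those manipulations, assuming the keras package is installed (array_reshape() comes with the keras/reticulate packages):

```r
library(keras)   # for array_reshape(), which fills row-wise like NumPy

# Build a 3D array, inspect its shape, and slice out one "sample".
x <- array(1:24, dim = c(2, 3, 4))   # 2 x 3 x 4 array
dim(x)                               # 2 3 4
x[1, , ]                             # first sample: a 3 x 4 matrix

# Reshape a batch of 2 fake 3x4 "images" into 2 vectors of length 12.
x_flat <- array_reshape(x, dim = c(2, 12))
dim(x_flat)                          # 2 12

# Base R arrays are column-major; array_reshape() fills row-major, matching
# how Python/Keras lay out data, which is why I use it instead of dim(x) <- c(2, 12).
```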