Declare neural networks clearly


Disclaimer: This post was translated to English using a machine translation model. Please let me know if you find any mistakes.

When you create a neural network in PyTorch as a list of layers

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layers = nn.ModuleList([
            nn.Linear(1, 10),
            nn.ReLU(),
            nn.Linear(10, 1)
        ])

then iterating over it in the forward method is not very readable: nothing in the loop tells you which layer does what.

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layers = nn.ModuleList([
            nn.Linear(1, 10),
            nn.ReLU(),
            nn.Linear(10, 1)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
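A quick way to check that the list-based network behaves as expected is to run a forward pass on a small batch. This is a minimal sketch assuming the Network class above; the batch size of 4 is arbitrary:

```python
import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layers = nn.ModuleList([
            nn.Linear(1, 10),
            nn.ReLU(),
            nn.Linear(10, 1)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Forward pass on a batch of 4 one-dimensional samples
net = Network()
out = net(torch.randn(4, 1))
print(out.shape)  # torch.Size([4, 1])
```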

However, when you create the network as a dictionary of layers with nn.ModuleDict

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        # nn.ModuleDict (not nn.ModuleList) is the container for named layers
        self.layers = nn.ModuleDict({
            'linear': nn.Linear(1, 10),
            'activation': nn.ReLU(),
            'output': nn.Linear(10, 1)
        })

then the forward method is much clearer, because each layer is called by name:

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layers = nn.ModuleDict({
            'linear': nn.Linear(1, 10),
            'activation': nn.ReLU(),
            'output': nn.Linear(10, 1)
        })

    def forward(self, x):
        x = self.layers['linear'](x)
        x = self.layers['activation'](x)
        x = self.layers['output'](x)
        return x
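One practical reason to use nn.ModuleDict rather than a plain Python dict is that nn.ModuleDict registers each layer's parameters with the parent module, so they appear in parameters() and are seen by the optimizer. A minimal sketch of the difference (the class names are illustrative):

```python
import torch.nn as nn

class RegisteredNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleDict registers each layer's parameters with the module
        self.layers = nn.ModuleDict({
            'linear': nn.Linear(1, 10),
            'output': nn.Linear(10, 1)
        })

class PlainDictNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A plain Python dict does NOT register the parameters
        self.layers = {
            'linear': nn.Linear(1, 10),
            'output': nn.Linear(10, 1)
        }

print(len(list(RegisteredNet().parameters())))  # 4 (weight + bias per Linear)
print(len(list(PlainDictNet().parameters())))   # 0
```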
