The best real-time CPU tracking tool
Monitor the temperature of your CPU directly from the Ubuntu top bar with CPU Monitor. This application is fully integrated with the latest Ubuntu releases. Get real-time updates and optimize your tasks. Download it now and take control of your CPU health!
CPU Monitor is an intuitive tool designed for developers and professionals who need to keep an eye on their CPU health in real time. It integrates perfectly with the Ubuntu top bar, providing essential information at your fingertips.
Clone it with HTTPS
git clone https://github.com/maximofn/cpu_monitor.git
or with SSH
git clone git@github.com:maximofn/cpu_monitor.git
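After cloning, move into the project directory (git names it after the repository by default):

cd cpu_monitor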
Make sure you don't have any venv or conda environment activated
if [ -n "$VIRTUAL_ENV" ]; thendeactivatefiif command -v conda &>/dev/null; thenconda deactivatefi
Now install the dependencies
sudo apt install lm-sensors
Then run the sensor detection and answer yes to all questions
sudo sensors-detect
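Once detection has finished, you can check that readings are available with the sensors command that lm-sensors provides:

# Show the temperatures and fan speeds detected by lm-sensors
sensors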
Now install psensor
sudo apt install psensor
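If you want to verify that psensor picks up the detected sensors, you can also launch it from a terminal:

# Launch the psensor GUI in the background
psensor &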
Run this script to add CPU Monitor to your startup applications
./add_to_startup.sh
Then when you restart your computer, the CPU Monitor will start automatically.
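If you want to confirm that the startup entry was created, a typical place to look is the user autostart directory (this assumes add_to_startup.sh installs a standard XDG autostart entry, which is not documented here):

# Assumption: the script drops a .desktop file into the XDG autostart directory
ls ~/.config/autostart/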
If you like it, consider giving the repository a star ⭐, but if you really like it, consider buying me a coffee ☕.