Understanding Retrieval Augmented Generation (RAG): Supercharging LLM Capabilities with Embeddings and Semantic Search
Learn how to leverage Vector DBs and RAG for supercharging your LLM knowledge.
Rafael has a track record of 15 years spanning software engineering, AI and solution architecture. He currently works as an ML & Software Engineer at Hugging Face. Before that, he started weet.ai to help customers unlock business value from Data and AI by leveraging his knowledge in MLOps, Deep Learning, Computer Vision, Large Language Models and Generative AI.