- Sergei Savvov, "Merge Large Language Models" (Jan 22): Combine Mistral, WizardMath, and CodeLlama in one model!
- Sergei Savvov in Towards Data Science, "All You Need to Know to Develop Using Large Language Models" (Nov 15, 2023): Explaining in simple terms the core technologies required to start developing LLM-based applications.
- Sergei Savvov in Better Programming, "Hack Your Next Interview with Generative AI" (Oct 3, 2023): Exploring the future of AI: crafting a conceptual interview assistant using Whisper and ChatGPT.
- Sergei Savvov in Better Programming, "Detecting LLM-Generated Texts" (Sep 13, 2023): Is it possible to differentiate between what is written by a large language model and a human?
- Sergei Savvov in Better Programming, "Fixing Hallucinations in LLMs" (Aug 28, 2023): Why LLMs hallucinate, approaches for mitigation, challenges with evaluation datasets, and more.
- Sergei Savvov in Better Programming, "You don’t need hosted LLMs, do you?" (Aug 19, 2023): A comparison of self-hosted LLMs and OpenAI: cost, text generation quality, development speed, and privacy.
- Sergei Savvov in Better Programming, "7 Frameworks for Serving LLMs" (Aug 11, 2023): Finally, a comprehensive guide to LLM inference and serving, with detailed comparisons.
- Sergei Savvov in Better Programming, "LLaMA 2: The Dawn of a New Era" (Jul 24, 2023): Key differences from LLaMA 1, safety and violations, Ghost Attention, and model performance.
- Sergei Savvov in Better Programming, "Create a Clone of Yourself With a Fine-tuned LLM" (Jul 27, 2023): Unleash your digital twin.
- Sergei Savvov in Better Programming, "7 Ways to Speed Up Inference of Your Hosted LLMs" (Jun 26, 2023): TL;DR: techniques to speed up LLM inference, increasing token generation speed and reducing memory consumption.