Published in Towards Data Science: "All you need to know to Develop using Large Language Models." Explaining in simple terms the core technologies required to start developing LLM-based applications. Nov 15, 2023.
Published in Better Programming: "Hack Your Next Interview with Generative AI." Exploring the Future of AI: Crafting a Conceptual Interview Assistant Using Whisper and ChatGPT. Oct 3, 2023.
Published in Better Programming: "Detecting LLM-Generated Texts." Is it possible to differentiate between what is written by a large language model and a human? Sep 13, 2023.
Published in Better Programming: "Fixing Hallucinations in LLMs." Why LLMs hallucinate, approaches for mitigation, challenges with evaluation datasets, and more. Aug 28, 2023.
Published in Better Programming: "You don't need hosted LLMs, do you?" A comparison of self-hosted LLMs and OpenAI: cost, text generation quality, development speed, and privacy. Aug 19, 2023.
Published in Better Programming: "7 Frameworks for Serving LLMs." Finally, a comprehensive guide to LLM inference and serving, with a detailed comparison. Aug 11, 2023.
Published in Better Programming: "LLaMA 2: The Dawn of a New Era." Key differences from LLaMA 1, safety violations, Ghost Attention, and model performance. Jul 24, 2023.
Published in Better Programming: "Create a Clone of Yourself With a Fine-tuned LLM." Unleash your digital twin. Jul 27, 2023.
Published in Better Programming: "7 Ways to Speed Up Inference of Your Hosted LLMs." TL;DR: techniques to speed up LLM inference, increasing token generation speed and reducing memory consumption. Jun 26, 2023.