Pinned
An Overview of My Medium Stories
Your Roadmap to Understanding the Elusive World of — Ethical, Safe and Secure Generative AI
Nov 27

Pinned · Published in Towards AI
Responsible AI, Ethical AI, and Constitutional AI: A Bird’s-Eye View of The 3 Birds of a Feather!
Gain a Clear Understanding of the Unique Features of the Trinity of AI Safety and Security Frameworks
Dec 11

Pinned · Published in Towards AI
Prompt Injection Attacks on Large Language Models
The most comprehensive guide to all Tactics, Techniques, and Procedures (TTPs) hackers use to hijack Generative AI models with malicious…
Dec 5

Pinned · Published in Level Up Coding
The Ultimate Guide to PII Leakage in LLMs
Attention Mechanisms, Memorization, Model Extraction, Prompt Engineering, and Other Personally Identifiable Information (PII) Leakage…
Dec 10

Pinned · Published in Level Up Coding
Unmasking the Surprising Diversity of AI Hallucinations
There’s more to it than you ever imagined — Prepare to be surprised by the variety in AI Hallucinations
Dec 10

Published in Towards AI
The LLM Black Book: Everything You Need to Know About Data Leakage
Your Comprehensive Guide to Understanding and Preventing AI-Driven Privacy Breaches
Dec 9

Published in Level Up Coding
OWASP Sees These 10 Risks in Your Large Language Model
The ultimate guide to mastering LLM security and overcoming its challenges
Dec 9

Published in Towards AI
LLM Red Teaming for Dummies: A Beginner’s Guide to GenAI Security
Learn the basics of LLM red teaming and how you can use it to secure your Generative AI systems, even with no prior experience.
Dec 7

Published in AI Advances
The ‘SolarWinds’ of the LLM World
5 GenAI Supply Chain Vulnerabilities That Will Send Shivers Down Your Spine
Dec 2

Published in Level Up Coding
Agentic GenAI: The Hidden Threat That Could Destroy Your Business!
Find out how overlooking risks in Agentic GenAI systems can lead to catastrophic failures and why urgent measures must be implemented…
Nov 28