$200 Fullstack Live Event Free With a Membership!
Secure your AI projects and don’t let hallucinations and jailbreaks derail them. Join this live event and learn from expert Debjyoti Paul how to harness the power of Large Language Models responsibly.
Large Language Models (LLMs) such as GPT and their derivatives have become fundamental tools in all industries. However, they often produce “hallucinations” (plausible but incorrect results) and can be “cracked” (manipulated to bypass security filters). This poses significant risks for applications that require reliability and ethical compliance. In this Fullstack Live event, you will learn strategies to overcome these challenges and get the chance to combine your new theoretical knowledge directly with practical exercises.
Understand hallucinations: How and why do they occur? How do they impact critical areas such as healthcare, finance and the legal environment?
Apply mitigation techniques: Use best practices such as prompt engineering, retrieval augmented generation (RAG) and fine-tuning with high-quality data sets to minimize false or unverifiable answers.
Combat jail-breaking: Apply mechanisms to harden LLMs against adversarial prompts using techniques such as adversarial training, reinforcement learning through human feedback (RLHF) and dynamic rule enforcement.
Developers and Engineers:
Executives and Decision-Makers:
Scientists and Students:
Industry Experts in Sensitive Fields:
Debjyoti Paul is a Machine Learning expert with Amazon. He has worked with the financial decision science division of HSBC and in Machine Learning for Microsoft R&D Bing Ads. He has more than 6 years of experience in Machine Learning and over 3 years of industry experience. Debjyoti is currently working on Knowledge Graph and Visual Question Answering.