LinkedIn Learning – Mitigating Prompt Injection and Prompt Hacking
English | Tutorial | Size: 23.87 MB
As large language model products like ChatGPT, Bard, Claude, and others have entered the culture, hackers are busy trying to manipulate the underlying models, such as GPT and PaLM 2, to change how they respond. In this course, Ray Villalobos discusses the mechanisms behind prompt hacking and some of the techniques used to mitigate it. In a world where companies are rushing to build their own implementations of these popular models, it's important to understand the concepts behind prompt hacking and the defenses used to address the potential consequences of its use.
• Understand what prompt hacking is and how it can be used to manipulate LLMs.
• Describe some of the techniques hackers use to inject or override prompts.
• Introduce mitigation and other strategies to reduce the impact of prompt hacking.