Preamble (company)
Company type | Privately held company |
---|---|
Industry | Artificial intelligence |
Founded | 2021 |
Founders |
|
Headquarters | Pittsburgh, Pennsylvania, U.S. |
Website | preamble.com |
Part of a series on |
Artificial intelligence |
---|
Preamble is a U.S.-based AI safety startup founded in 2021. It provides tools and services to help companies securely deploy and manage large language models (LLMs). Preamble is known for its contributions to identifying and mitigating prompt injection attacks in LLMs.
History
[edit]Preamble is particularly notable for its early discovery of vulnerabilities in widely used AI models, such as GPT-3, with a primary discovery of the prompt injection attacks.[1][2][3] These findings were first reported privately to OpenAI in 2022 and have since been the subject of numerous studies in the field.
Preamble has entered a partnership with Nvidia to improve AI safety and risk mitigation for enterprises.[4] They are a part of the Air Force security program as a notable Pittsburgh AI hub.[5] Since 2024, Preamble has partnered with IBM to combine their guardrails with IBM Watsonx.[6]
Research
[edit]Preamble's research revolves around AI security, AI ethics, privacy and policy regulations. In May 2022, Preamble's researchers discovered vulnerabilities in GPT-3 which allowed malicious actors to manipulate the model's outputs through prompt injections.[7][3] The resulting paper investigated the vulnerability of large pre-trained language models, such as GPT-3 and BERT, to adversarial attacks. These attacks are designed to manipulate the models' outputs by introducing subtle perturbations in the input text, leading to incorrect or harmful outputs, such as generating hate speech or leaking sensitive information.[8]
Preamble was granted a patent by the United States Patent and Trademark Office to mitigate prompt injection in AI models.[9]
References
[edit]- ^ Kosinski, Matthew; Forrest, Amber (March 21, 2024). "What is a prompt injection attack?". IBM.com.
- ^ Rossi, Sippo; Michel, Alisia Marianne; Mukkamala, Raghava Rao; Thatcher, Jason Bennett (January 31, 2024). "An Early Categorization of Prompt Injection Attacks on Large Language Models". arXiv:2402.00898 [cs.CR].
- ^ a b Rao, Abhinav Sukumar; Naik, Atharva Roshan; Vashistha, Sachin; Aditya, Somak; Choudhury, Monojit (2024). "Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks". In Calzolari, Nicoletta; Kan, Min-Yen; Hoste, Veronique; Lenci, Alessandro; Sakti, Sakriani; Xue, Nianwen (eds.). Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (PDF). Torino, Italia: ELRA and ICCL. pp. 16802–16830.
- ^ Doughty, Nate (August 8, 2023). "Nvidia selects AI safety startup Preamble for its business development program". Pittsburgh Business Times. Retrieved August 15, 2024.
- ^ Dabkowski, Jake (May 17, 2024). "Pittsburgh-area companies aim to make AI for businesses more secure". Pittsburgh Business Times. Retrieved August 15, 2024.
- ^ "Watsonx technology partners". IBM.com. 2024.
- ^ Rossi, Sippo; Michel, Alisia Marianne; Mukkamala, Raghava Rao; Thatcher, Jason Bennett (January 31, 2024). "An Early Categorization of Prompt Injection Attacks on Large Language Models". arXiv:2402.00898 [cs.CR].
- ^ Branch, Hezekiah J.; Cefalu, Jonathan; McHugh, Jeremy; Heichman, Ron; Hujer, Leyla; del Castillo Iglesias, Daniel. "Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples". arXiv:2209.02128.
- ^ Dabkowski, Jake (October 20, 2024). "Preamble secures AI prompt injection patent". Pittsburgh Business Times.