Exploring RAFT: The Next Step in AI Model Training
Retrieval-Augmented Fine-Tuning (RAFT) is a recent training strategy from UC Berkeley, designed to boost the performance of Large Language Models (LLMs) on domain-specific question-answering tasks.
Background: Traditionally, adapting LLMs to specialised domains involved two main strategies:
1. Retrieval Augmented Generation (RAG) with in-context learning.
2. Supervised Fine-Tuning (SFT).
Each strategy has limitations: RAG does not exploit the learning opportunity offered by a fixed domain and early access to the test documents, whereas SFT makes no use of documents at test time. RAFT bridges these gaps, combining robust external knowledge integration with strong domain-specific reasoning.
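To make the contrast concrete, here is a minimal sketch in Python of the two baseline setups. The function names, prompt layout, and data format are illustrative assumptions, not anything prescribed by the paper.

```python
from typing import List


def build_rag_prompt(question: str, retrieved_docs: List[str]) -> str:
    """Strategy 1: RAG with in-context learning.
    The model sees retrieved documents at test time, but is never
    trained to reason over this particular domain."""
    context = "\n\n".join(retrieved_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


def build_sft_example(question: str, answer: str) -> dict:
    """Strategy 2: Supervised fine-tuning.
    The model internalises domain knowledge during training, but
    receives no documents at test time."""
    return {"prompt": f"Question: {question}\nAnswer:", "completion": f" {answer}"}
```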
Implications for Businesses: RAFT enhances business AI by improving explainability — a crucial factor for trust and decision-making. The model’s chain-of-thought response style not only increases accuracy in domain-specific tasks but also offers transparency about the ‘why’ and ‘how’ of its decisions. This clarity is vital for maintaining trust and accountability, especially when AI decisions significantly impact lives.
Companies like Klarna and Octopus Energy, which have integrated AI into their customer services, underline the importance of clear, explainable AI interactions. As AI’s role in our daily interactions grows, so does the necessity for transparency in how decisions are made.
How RAFT Works:
Using the open-book exam analogy, RAFT’s methodology can be explained as follows:
1. Closed-Book Exam: Supervised Fine-Tuning is like studying for a closed-book exam; the LLM answers purely from knowledge absorbed during pre-training and fine-tuning, without consulting any documents.
2. Open-Book Exam: Traditional RAG systems resemble taking an exam with the ability to access external information, heavily relying on the performance of the information retriever.
3. Domain-Specific Open-Book Exam: RAFT trains models to discern relevant from irrelevant information in retrieved documents, akin to preparing for a domain-specific open-book exam where the LLM knows the domain (such as enterprise documents, the latest news, or organisational code repositories) beforehand; a sketch of how such training examples might be assembled follows this list.
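A rough Python sketch of how a single RAFT-style training example might be put together: each question is paired with its relevant ("golden") documents, a handful of distractor documents, and a chain-of-thought target answer, and in a fraction of examples the golden document is deliberately left out so the model also learns to cope with unhelpful context. The function name, prompt layout, and the `p_golden` default are illustrative choices, not the authors' released code.

```python
import random
from typing import List


def build_raft_example(question: str,
                       golden_docs: List[str],
                       distractor_docs: List[str],
                       cot_answer: str,
                       p_golden: float = 0.8,
                       num_distractors: int = 4) -> dict:
    """Assemble one RAFT-style training example (illustrative sketch).

    With probability p_golden the context contains the golden document(s)
    mixed with distractors; otherwise it contains distractors only. The
    target is a chain-of-thought answer that reasons over and cites the
    relevant document."""
    if random.random() < p_golden:
        context_docs = golden_docs + random.sample(distractor_docs, num_distractors)
    else:
        context_docs = random.sample(distractor_docs, num_distractors)
    random.shuffle(context_docs)  # golden document should not sit in a fixed position

    context = "\n\n".join(f"[Document {i + 1}]\n{doc}"
                          for i, doc in enumerate(context_docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return {"prompt": prompt, "completion": cot_answer}
```

Mixing distractors into the context is what teaches the model to tell relevant passages from irrelevant ones, rather than trusting whatever the retriever returns.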
This approach holds out the promise of the blend of reliability, reasoning ability, and transparency necessary for today's fast-paced, AI-driven world.
You can read the paper here: https://arxiv.org/pdf/2403.10131.pdf