good paper
Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications
A survey of open-domain information extraction.
TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise
Each sample is a five-element training object: {Question} {Answer} {Fundamentals} {Chain of Thought} {Common Mistakes}. These samples are used to train a small model, TeacherLM, which re-annotates other training data; the enriched data is then used to train or fine-tune larger LLMs.
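The five-element sample format could be serialized as below; this is a minimal sketch, assuming the fields are joined with simple bracketed section tags (the tag syntax and the `format_sample` helper are illustrative, not from the paper).

```python
# Hypothetical serializer for a TeacherLM-style five-element sample.
# Field names come from the paper; the "[Field] value" layout is an assumption.

FIELDS = ["Question", "Answer", "Fundamentals", "Chain of Thought", "Common Mistakes"]

def format_sample(sample: dict) -> str:
    """Serialize one five-element sample into a single training string."""
    missing = [f for f in FIELDS if f not in sample]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return "\n".join(f"[{field}] {sample[field]}" for field in FIELDS)

sample = {
    "Question": "What is 2 + 2?",
    "Answer": "4",
    "Fundamentals": "Addition of natural numbers.",
    "Chain of Thought": "2 + 2 combines two pairs of units, giving 4.",
    "Common Mistakes": "Confusing addition with multiplication.",
}
print(format_sample(sample))
```

A data-augmentation loop would apply `format_sample` to each annotated example before feeding the result to the larger model's fine-tuning corpus.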
Improving Prompt Tuning with Learned Prompting Layers
Selective prefix-tuning: instead of prepending a learned prefix at every layer, the method learns at which layers to insert prefixes (learned prompting layers).
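The idea can be sketched as follows; this is a toy illustration, not the paper's method: the layer selection here is a fixed boolean mask, whereas the paper learns where to insert prompting layers, and the transformer layers themselves are omitted.

```python
# Toy sketch of selective prefix insertion: only masked-in layers
# get a learned prefix prepended to their input sequence.
import numpy as np

def apply_prefixes(hidden, prefixes, layer_mask):
    """Return the sequence length seen at each layer.

    hidden:     (seq_len, dim) input sequence
    prefixes:   one (prefix_len, dim) array per layer
    layer_mask: one bool per layer; True = insert this layer's prefix
    """
    lengths = []
    x = hidden
    for prefix, use in zip(prefixes, layer_mask):
        if use:
            x = np.concatenate([prefix, x], axis=0)  # selective insertion
        # ... a real layer's attention/MLP would run on x here ...
        lengths.append(x.shape[0])
    return lengths

rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 8))
prefixes = [rng.normal(size=(2, 8)) for _ in range(3)]
print(apply_prefixes(hidden, prefixes, [True, False, True]))  # → [6, 6, 8]
```

With the mask `[True, False, True]`, the middle layer runs on the unextended sequence, which is exactly the saving over vanilla prefix-tuning.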
Woodpecker: Hallucination Correction for Multimodal Large Language Models
Woodpecker's architecture comprises five main steps: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction.
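The five steps chain into a pipeline, which could be sketched with stubs like these; every function body below is a placeholder standing in for a model call (detector, VQA model, or LLM), and none of it is the paper's actual implementation.

```python
# Stub pipeline mirroring Woodpecker's five-step order; all logic is placeholder.

def extract_key_concepts(answer):            # step 1: key concept extraction
    words = [w.strip(".,") for w in answer.split()]
    return [w for w in words if w[:1].isupper() and len(w) > 1]

def formulate_questions(concepts):           # step 2: question formulation
    return [f"Is there a {c} in the image?" for c in concepts]

def validate_visual_knowledge(questions):    # step 3: visual knowledge validation
    # a real system would query a detector / VQA model here
    return {q: "yes" for q in questions}

def generate_visual_claims(evidence):        # step 4: visual claim generation
    return [f"{q} -> {a}" for q, a in evidence.items()]

def correct_hallucinations(answer, claims):  # step 5: hallucination correction
    # placeholder: a real system would rewrite `answer` against `claims`
    return answer

answer = "A Dog sits next to a Cat."
concepts = extract_key_concepts(answer)
evidence = validate_visual_knowledge(formulate_questions(concepts))
claims = generate_visual_claims(evidence)
print(correct_hallucinations(answer, claims))
```

The design point is that each stage produces an explicit intermediate artifact (concepts, questions, evidence, claims), so the final correction step is grounded in checkable visual facts rather than in the raw MLLM output.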
Fine-tuning LLMs combined with knowledge graphs; code link