Reading List: 1
Inference/Decoding/Reasoning
- ☐
Large concept model
Prediction at the sentence level; appears to generate one sentence per step, e.g. with a diffusion architecture
- ☐
ModernBERT
Surveys [2/2]
- ☑
From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
Organizes the material into token-level vs. meta-generation (sequence-level) methods
Not very familiar with the reward-function material, which ties into reinforcement learning; also have not looked into noisy-channel reranking in neural machine translation.
- ☑
A thorough examination of decoding methods in the era of LLMs.
Tests a range of decoding algorithms on Llama-2(-chat); useful as an index into analyses of decoding algorithms
EMNLP 2024; paper: [2402.06925] A Thorough Examination of Decoding Methods in the Era of LLMs; code: DavidFanzz/llm_decoding
Chufan Shi, Haoran Yang, Deng Cai, Zhisong Zhang, Yifan Wang, Yujiu Yang, Wai Lam
Specific decoding algorithms [5/7]
- ☑
The curious case of neural text degeneration.
Proposes top-p (nucleus) sampling; a minimal sketch follows this entry
ICLR 2020; Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi.
Cited nearly 3,000 times; the ICLR 2024 paper "Closing the Curious Case of Neural Text Degeneration" is its distant echo.
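A minimal sketch of nucleus (top-p) sampling as proposed here, with `probs` standing in for a model's next-token distribution (names are illustrative, not from the paper's code):

```python
# Nucleus (top-p) sampling: sample from the smallest set of highest-
# probability tokens whose cumulative mass reaches p, renormalized.
import numpy as np

def top_p_sample(probs, p=0.9, rng=np.random.default_rng()):
    order = np.argsort(probs)[::-1]         # tokens by descending probability
    csum = np.cumsum(probs[order])
    cutoff = np.searchsorted(csum, p) + 1   # smallest prefix with mass >= p
    nucleus = order[:cutoff]
    kept = probs[nucleus] / probs[nucleus].sum()  # renormalize within nucleus
    return rng.choice(nucleus, p=kept)

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_p_sample(probs, p=0.8))           # only tokens 0-2 can be drawn
```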
- ☑
Locally typical sampling.
Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell.
Argues that each step should sample from the local typical set: the tokens whose information content (surprisal) is closest to the entropy of the current conditional distribution. A theory- and assumption-heavy paper, mostly about understanding the typical-set concept; see also "Musings on typicality" – Sander Dieleman. A sketch follows this entry.
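A rough sketch of the rule under my reading of the paper (the mass threshold `tau` and tie-breaking details are simplified):

```python
# Locally typical sampling: rank tokens by how close their surprisal is
# to the entropy of the conditional distribution, keep the smallest such
# set with cumulative mass >= tau, renormalize, and sample.
import numpy as np

def typical_sample(probs, tau=0.95, rng=np.random.default_rng()):
    surprisal = -np.log(probs)
    entropy = float(np.sum(probs * surprisal))       # H of the distribution
    order = np.argsort(np.abs(surprisal - entropy))  # closest to H first
    csum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(csum, tau) + 1]
    kept = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=kept)

print(typical_sample(np.array([0.4, 0.3, 0.2, 0.05, 0.05])))
```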
- ☑
Truncation sampling as language model desmoothing.
Proposes eta sampling; a sketch follows this entry
EMNLP 2022 Findings; John Hewitt, Christopher Manning, and Percy Liang.
Truncation Sampling as Language Model Desmoothing - ACL Anthology
Introduces several assumptions: a high-entropy distribution is more likely to be close to the true distribution and therefore less smoothed, so entropy is used to control the truncation threshold.
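A sketch of the resulting rule, assuming the eta = min(epsilon, sqrt(epsilon) * exp(-H)) form from the paper, with epsilon as the free hyperparameter:

```python
# Eta sampling: truncate tokens whose probability falls below a cutoff
# eta that shrinks as the entropy H grows, so high-entropy (less
# smoothed) distributions are truncated less.
import numpy as np

def eta_sample(probs, epsilon=2e-3, rng=np.random.default_rng()):
    entropy = -float(np.sum(probs * np.log(probs)))
    eta = min(epsilon, np.sqrt(epsilon) * np.exp(-entropy))
    keep = np.where(probs >= eta)[0]
    kept = probs[keep] / probs[keep].sum()   # renormalize surviving tokens
    return rng.choice(keep, p=kept)

print(eta_sample(np.array([0.4, 0.3, 0.2, 0.05, 0.03, 0.02])))
```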
- ☑
Closing the curious case of neural text degeneration.
Analyzes why truncation sampling (top-p as the representative case) works, and proposes an improvement targeting the softmax bottleneck
Analyzes and resolves the resulting inconsistency with ancestral sampling
ICLR 2024: Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, and Ashish Sabharwal. Leans toward theoretical analysis.
openreview: Closing the Curious Case of Neural Text Degeneration | OpenReview
proving that truncation methods that discard tokens below some probability threshold (the most common type of truncation) can guarantee that all sampled tokens have nonzero true probability. If tokens below some probability threshold are discarded, don't all remaining tokens trivially have nonzero probability? What is there to prove? (The claim concerns nonzero probability under the true distribution, not under the model, so it is not trivial.)
However, beyond the intuition that language models tend to assign too much probability to tokens that should have 0 or near-0 probability (akin to smoothing (Hewitt et al., 2022)), prior work has been limited in establishing why truncation sampling is so essential in autoregressive generation.
Explanation of the softmax bottleneck (Yang et al., 2018): the low-rank softmax matrix used at the output layer of language models causes probability errors in the model's output distribution.
Filters tokens using linear dependencies among token embeddings. (Problem analysis, theoretical explanation, and a practical algorithm; worth a closer look.)
- ☐
Guiding llms the right way: Fast, non-invasive constrained generation
Solves the tokenizer-misalignment problem in formally constrained sampling
Luca Beurer-Kellner, Marc Fischer, and Martin T. Vechev.
- ☐ Mauve: Measuring the gap between neural text and human text using divergence frontiers.
A new metric for open-ended text generation
How does one design such a metric? What is the thought process?
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui.
- ☑
On the efficacy of sampling adapters.
Summarizes sampling methods through the unifying lens of sampling adapters
Clara Meister, Tiago Pimentel, Luca Malagutti, Ethan Wilcox, and Ryan Cotterell.
Frames truncation sampling strategies as reprioritizing precision over recall (i.e., removing some valid text from the distribution to avoid sampling unlikely text).
Speculative sampling and related work (a small model introduced as an indirection layer for sampling)
- ☐ Accelerating Large Language Model Decoding with Speculative Sampling
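A minimal sketch of the accept/reject rule at the heart of speculative sampling, with toy arrays standing in for the target and draft models (one draft token per step, whereas the paper drafts several):

```python
# Speculative sampling: accept the draft token x with probability
# min(1, p(x)/q(x)); on rejection, resample from the normalized residual
# max(0, p - q). The output provably follows the target distribution p.
import numpy as np

rng = np.random.default_rng(0)

def spec_step(p_target, q_draft):
    x = rng.choice(len(q_draft), p=q_draft)        # draft proposes a token
    if rng.random() < min(1.0, p_target[x] / q_draft[x]):
        return x                                   # accepted: cheap token
    residual = np.maximum(p_target - q_draft, 0.0)
    return rng.choice(len(p_target), p=residual / residual.sum())

p = np.array([0.6, 0.3, 0.1])   # target model's next-token distribution
q = np.array([0.3, 0.5, 0.2])   # cheap draft model's distribution
draws = [spec_step(p, q) for _ in range(10_000)]
print(np.bincount(draws) / 10_000)   # close to p: the scheme is lossless
```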
- ☐
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. Contrastive decoding: Open-ended text generation as optimization.
Contrastive decoding: defines a scoring function for beam search that returns the difference between the likelihood under the (expert) model pθ and that under a smaller (amateur) language model p′. A sketch of the score follows.
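A sketch of the score itself on toy distributions; the paper plugs this into beam search with an adaptive plausibility constraint, approximated here by an alpha cutoff:

```python
# Contrastive decoding score: log p_expert - log p_amateur, restricted to
# tokens the expert itself deems plausible (>= alpha * its max probability).
import numpy as np

def cd_scores(p_expert, p_amateur, alpha=0.1):
    plausible = p_expert >= alpha * p_expert.max()
    return np.where(plausible,
                    np.log(p_expert) - np.log(p_amateur),
                    -np.inf)                 # implausible tokens masked out

p_exp = np.array([0.50, 0.30, 0.15, 0.05])
p_ama = np.array([0.40, 0.40, 0.10, 0.10])
print(cd_scores(p_exp, p_ama).round(2))      # token 2 wins: expert-specific
```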
- ☐ Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. Spectr: Fast speculative decoding via optimal transport, 2024b.
- ☐ Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment
https://openreview.net/forum?id=mtSSFiqW6y
- ☐
Amortizing intractable inference in large language models. Uses an extra classifier or a training procedure to steer the output sequence (Bengio's group); this belongs to §3.4 Controlled Generation.
Somewhat math-dense
- ☐
Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, and Furu Wei. Inference with reference: Lossless acceleration of large language models. ArXiv preprint, abs/2304.04487, 2023b. URL https://arxiv.org/abs/2304.04487.
A follow-up line of work to speculative sampling
- ☐
Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. Break the sequential dependency of llm inference using lookahead decoding, 2024.
A follow-up line of work to speculative sampling
- ☐ Stephen Zhao, Rob Brekelmans, Alireza Makhzani, and Roger Grosse. Probabilistic inference in language models via twisted sequential monte carlo, 2024a. As above (belongs to §3.4 Controlled Generation). I never fully worked out the formulas in this part of the meta-generation survey, so read this with that question in mind.
- ☐
Hunter Lightman et al. (with John Schulman, Ilya Sutskever, and Karl Cobbe). Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024.
q* ∝ p_θ(y|x) · I[y ∈ Y*_x] (Eq. 17): adds a 0/1 indicator distribution I on top of the LLM probability model p_θ, "hard"-filtering a subset of the outputs, unlike the soft reweighting of the previous two papers
- ☐
Haikang Deng and Colin Raffel. Reward-augmented decoding: Efficient controlled text generation with a unidirectional reward model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language
Reward-function-guided decoding (Eq. 19 of the meta-generation survey still to be digested)
- ☐ Yann Dubois et al. (with Percy Liang and Tatsunori B. Hashimoto). AlpacaFarm: A simulation framework for methods that learn from human feedback, 2023. A simulation framework for reinforcement learning from human feedback.
- ☐ Luca Beurer-Kellner, Marc Fischer, and Martin T. Vechev. Prompting is programming: A query language for large language models. Proceedings of the ACM on Programming Languages, 7:1946 – 1969, 2022.
- ☐ https://arxiv.org/pdf/2407.09468
- ☐ Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. An empirical analysis of compute-optimal inference for problem-solving with language models, 2024. URL https://arxiv.org/abs/2408. Includes the reward balance tree.
- ☐ Amanda Bertsch, Alex Xie, Graham Neubig, and Matthew Gormley. It's MBR all the way down: Modern generation techniques through the lens of minimum Bayes risk. In Yanai Elazar, Allyson Ettinger, Nora. I had not heard of this concept before.
- ☐ Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie. Self-evaluation guided beam search for reasoning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=Bw82hwg5Q3. Beam search at sentence-level granularity.
- ☐ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=5Xc1ecxO1h.
- ☐ Yizhou Chi, Kevin Yang, and Dan Klein. Thoughtsculpt: Reasoning with intermediate revision and search. 2024. URL https://api.semanticscholar.org/CorpusID:269010005.
- ☐ Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. Adapting language models to compress contexts, 2023.
- ☐ Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. In The Twelfth International Conference on Learning Representations, 2024c. URL https://openreview.net/forum?id=xbjSwwrQOe. Uses an independent prompt to evaluate the model's outputs.
- ☐
Shibo Hao, Yi Gu, Haotian Luo, Tianyang Liu, Xiyan Shao, Xinyuan Wang, Shuhua Xie, Haodi Ma, Adithya Samavedhi, Qiyue Gao, et al. Llm reasoners: New evaluation, library, and analysis of step-by-step reasoning with large language models. arXiv preprint arXiv:2404.05221, 2024.
Uses an independent prompt to evaluate the model's outputs
- ☐ Alex Havrilla, Sharath Chandra Raparthy, Christoforus Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, and Roberta Raileanu. Glore: When, where, and how to improve llm reasoning via global and local refinements. ArXiv preprint, abs/2402.10963, 2024. URL https://arxiv.org/abs/2402.10963. Uncovers many problems with refiners.
- ☐ Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=KuPixIqPiq.
- ☐ Liang Huang and David Chiang. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 144–151. Finds that intrinsic refinement, without external verification, is not necessarily effective.
- ☐
Gladys Tyen, Hassan Mansoor, Victor Cărbune, Peter Chen, and Tony Mak. Llms cannot find reasoning errors, but can correct them given the error location, 2024. Similar to the above.
In these cases, we can view the information source as being too noisy for the refiner to reliably act upon. Finally, we consider intrinsic algorithms that aim to train a refiner.
- ☐ George Tucker, Doina Precup, Feryal Behbahani, and Aleksandra Faust. Training language models to self-correct via reinforcement learning, 2024. URL https://arxiv.org/abs/2409.12917.
- ☐
"Better & Faster Large Language Models via Multi-token Prediction
Multi-token prediction (MTP): refinement through training
- ☐
Ximing Lu, Faeze Brahman, Peter West, Jaehun Jung, Khyathi Chandu, Abhilasha Ravichander, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Lin, Skyler Hallinan, Lianhui Qin, Xiang Ren, Sean Welleck, and Yejin Choi. Inference-time policy adapters (IPA): Tailoring extreme-scale LMs without fine-tuning.
Uses reinforcement learning to train a small model that adapts each token's logits so that decoding maximizes some reward
- ☐ Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A. Smith. Tuning language models by proxy, 2024a. Trains a small model with supervision and applies its per-token logit shift at decoding time; a sketch of the logit arithmetic follows this entry.
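A sketch of the proxy-tuning logit arithmetic, with arrays standing in for real model forward passes (names are illustrative):

```python
# Proxy tuning: shift the large base model's logits by the tuned-minus-
# untuned difference of a small model, then softmax the result.
import numpy as np

def proxy_tuned_dist(base_logits, small_tuned_logits, small_base_logits):
    logits = base_logits + (small_tuned_logits - small_base_logits)
    z = np.exp(logits - logits.max())        # numerically stable softmax
    return z / z.sum()

base        = np.array([2.0, 1.0, 0.5])     # large, untuned model
small_tuned = np.array([1.5, 2.5, 0.2])     # small model after tuning
small_base  = np.array([1.4, 1.0, 0.6])     # same small model before tuning
print(proxy_tuned_dist(base, small_tuned, small_base))
```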
- ☐ Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision, 2024a.
- ☐ Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, and Kurt Keutzer. Squeezellm: Dense-and-sparse quantization, 2024a.
- ☐ Taehyeon Kim, Joonkee Kim, Gihun Lee, and Se-Young Yun. Instructive decoding: Instruction-tuned large language models are self-refiner from noisy instructions. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=LebzzClHYw. Uses alternative (noisy) prompts to steer the logits.
- ☐ Zhiruo Wang, Zhoujun Cheng, Hao Zhu, Daniel Fried, and Graham Neubig. What are tools anyway? a survey from the language model perspective, 2024b. A survey of tool use.
- ☐
Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods
Attention optimization: grouped-query attention
- ☐ Haikang Deng and Colin Raffel. Reward-augmented decoding: Efficient controlled text generation with a unidirectional reward model. Latent attention.
- ☐ Jordan Juravsky, Bradley Brown, Ryan Ehrlich, Daniel Y. Fu, Christopher Ré, and Azalia Mirhoseini. Hydragen: High-throughput llm inference with shared prefixes, 2024. Acceleration via shared prefixes.
- ☐ Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. Efficient streaming language models with attention sinks, 2024b. KV-cache compression.
- ☐ Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, and Noah D. Goodman. Stream of search (sos): Learning to search in language, 2024. The model learns to search directly.
- ☐ Lucas Lehnert, Sainbayar Sukhbaatar, Paul Mcvay, Michael Rabbat, and Yuandong Tian. Beyond a*: Better planning with transformers via search dynamics bootstrapping, 2024.
- ☐
Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.03314.
Test-time scaling
- ☐ Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, Ion Stoica, Matei Zaharia, and James Zou. Are more llm calls all you need? towards scaling laws of compound inference systems, 2024a. Similar to the above.
- ☐ E. Kelly Buchanan, Mayee Chen, Neel Guha, Christopher Ré, and Azalia Mirhoseini. Archon: An architecture search framework for inference-time techniques, 2024. URL https://arxiv.org/abs/2409.15254. What does architecture search mean here? NAS? Worth looking into.
Choi group [0/19]
- ☐
Data Mixture Inference Attack
Data Mixture Inference Attack: BPE Tokenizers Reveal Training Data Compositions
NeurIPS 2024 Jonathan Hayase, Alisa Liu, Yejin Choi, Sewoong Oh, Noah A. Smith
Background: the training-data mixtures of large models are opaque, e.g., the proportions of different domains and languages. This paper uses the BPE tokenizer to reveal the composition of the training data (mainly code vs. natural language, and different languages). Core idea: a BPE tokenizer's ordered list of merge rules reflects token-frequency statistics of its training data. Given the merge list plus representative data for each category, the authors solve a linear program to invert the category proportions of the training set. Experiments: first validate inversion accuracy on tokenizers of models with fully public training data, then estimate the training-data mixtures of GPT, Llama, and Claude models. A toy sketch of the inversion step follows this entry.
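A toy stand-in for the inversion step, assuming made-up per-category merge frequencies; the paper solves a linear program, and nonnegative least squares is used here only as a simple proxy:

```python
# Recover mixture weights alpha from F @ alpha ~= f_obs, where column j
# of F holds merge-rule frequencies measured on category j's reference
# data and f_obs holds the frequencies implied by the target tokenizer.
import numpy as np
from scipy.optimize import nnls

F = np.array([[0.8, 0.1, 0.0],   # a code-heavy merge rule
              [0.1, 0.7, 0.2],   # an English-heavy merge rule
              [0.1, 0.2, 0.8]])  # a multilingual merge rule
alpha_true = np.array([0.5, 0.3, 0.2])
f_obs = F @ alpha_true           # what the target tokenizer reveals

alpha, _ = nnls(F, f_obs)        # min ||F a - f||  subject to  a >= 0
print(alpha / alpha.sum())       # recovers [0.5, 0.3, 0.2]
```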
- ☐
In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search
Uses prompting to construct a long-tail dataset of correct but low-probability if-then inferential knowledge
First adopts Godbole and Jia (2022)'s definition of long-tail sentences: those assigned low overall likelihood by a language model. Hallucination has been found to correlate positively with the long tail. But how do you find sentences in a model's long tail, especially for today's foundation models whose training data covers nearly the entire web? Searching every domain is impossible, so the authors restrict the scope to long-tail sentences within NLI tasks.
Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren. EMNLP 2024
Logic-Induced-Knowledge-Search, a logic-guided prompting method, is used to construct the Logic-Induced-Long-Tail dataset, showing that LLMs are weak in this regime.
- ☐
Symbolic Working Memory Enhances Language Models for Complex Rule Application
LLMs perform poorly on multi-step deductive reasoning tasks that involve a sequence of rule applications
EMNLP 2024: Siyuan Wang, Zhongyu Wei, Yejin Choi, Xiang Ren
The core finding is that LLMs are weak at rule grounding: in multi-step deductive tasks, the model must, at every step, ground the applicable rule and the facts supporting the current inference among many input rules, given facts, and inferred facts. Building on this diagnosis, the paper adds an external working-memory module that stores rules written in both natural and formal language; the model queries this rule store to perform symbolic rule grounding and LLM-based rule implementation.
- ☐
Information-Theoretic Distillation for Reference-less Summarization
"Mathematizes" saliency, faithfulness, and brevity in summarization via mutual information,
yielding a loss (evaluation) function; a Pythia-2.8B model then self-trains without human or GPT reference summaries and is distilled into a 569M model whose summarization rivals ChatGPT.
COLM 2024 Jaehun Jung, Ximing Lu, Liwei Jiang, Faeze Brahman, Peter West, Pang Wei Koh, Yejin Choi
- ☐
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens
Trains an n-gram model on GPT-scale data (5T tokens), with n allowed to grow without bound
How can an n-gram model allow unbounded n? The paper builds a fast n-gram engine on suffix arrays; its next-token prediction accuracy reaches 47%, and the relationship between suffix length and n-gram agreement also surfaces issues with LLM pretraining and positional encoding. A toy suffix-array sketch follows this entry.
COLM 2024 Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi
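A toy sketch of the suffix-array idea, in-memory and word-level, whereas the real engine serves trillions of tokens from disk (requires Python 3.10+ for key-based bisect):

```python
# Infinity-gram flavor: back off to the LONGEST suffix of the context
# that occurs in the corpus, found by binary search on a suffix array,
# and return the counts of the tokens that follow it.
from bisect import bisect_left, bisect_right
from collections import Counter

corpus = "the cat sat on the mat and the cat sat on the hat".split()
sa = sorted(range(len(corpus)), key=lambda i: corpus[i:])  # suffix array

def find_range(pattern):
    key = lambda i: corpus[i:i + len(pattern)]
    return (bisect_left(sa, pattern, key=key),
            bisect_right(sa, pattern, key=key))

def infinigram_next(context):
    for start in range(len(context)):          # longest suffix first
        suffix = context[start:]
        lo, hi = find_range(suffix)
        nxt = [corpus[sa[k] + len(suffix)]
               for k in range(lo, hi) if sa[k] + len(suffix) < len(corpus)]
        if nxt:
            return suffix, Counter(nxt)
    return [], Counter(corpus)                 # fall back to unigram counts

print(infinigram_next("and the cat".split()))  # (['and', 'the', 'cat'], Counter({'sat': 1}))
```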
- ☐
Tuning Language Models by Proxy
Fine-tune a small model, then transfer the shift between its pre- and post-tuning prediction distributions into the large model's decoding: indirect, black-box tuning
A way of steering a large model with small models; compare with speculative sampling. A variant of contrastive decoding, but more controllable (the control lives in the small model).
COLM 2024 Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith
- ☐
Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs
Large models still trail humans in mastering the underlying inferential rules
To demonstrate this deficiency, the study first builds LOIRE, a rule-generation tool combining logic, GPT-4, and humans to produce inferential rules of varying complexity, yielding the ULogic dataset.
ACL 2024: Siyuan Wang, Zhongyu Wei, Yejin Choi, Xiang Ren
- ☐
The Generative AI Paradox: “What It Can Create, It May Not Understand”
Through experiments in both language and vision modalities, argues that current generative models' expert-level generation ability need not depend on understanding the content,
whereas humans must understand before producing expert-level content. A "big idea" paper: how do you define a model's "understanding"? In large models the correlation between understanding and generation ability is weak, and models are more fragile under adversarial examples. The intro tests understanding as performance on multiple-choice variants of a generation task, plus comprehension probes on the model's own generations. The core contribution is the analysis.
ICLR 2024 *Peter West, *Ximing Lu, *Nouha Dziri, *Faeze Brahman, *Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi
- ☐
The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
The LIMA paper suggested that the SFT stage may not matter much.
This paper further validates LIMA's hypothesis by examining token-distribution shifts before vs. after alignment, and uses in-context learning to turn a base model directly into an aligned one; ICL prompts built on the unaligned base model can even outperform SFT and SFT+RLHF models.
Comparing aligned and unaligned models, the authors find the predicted token distributions are very similar at most positions, further supporting the superficial alignment hypothesis from LIMA: SFT mainly adjusts output style rather than eliciting the model's content knowledge.
ICLR 2024: Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi
- ☐
Phenomenal Yet Puzzling
Systematically evaluates language models' inductive reasoning via iterative hypothesis refinement
The method has three steps: propose, select, and refine hypotheses. Given examples, the LLM first summarizes several inductive hypotheses (as code); an interpreter then checks which rules match the examples; the hypotheses matching the most cases are fed back to the LLM for refinement.
Filtering the LLM's "formalized" hypotheses through the interpreter yields many high-quality candidate inductive hypotheses (e.g., for causal induction, language instructions, symbolic concepts), but the models are poor at applying their own hypotheses: asking a model to apply its own rules back to the earlier examples gives high error rates.
In effect, a cognitively inspired prompting scheme combined with external formal verification; the core value is the subsequent analysis, which finds that models can generate rules they cannot apply (similar to the generation-vs-understanding paradox).
Inspecting performance across the three steps shows LMs are good at proposing hypotheses.
Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement
ICLR 2024, Oral: Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren
- ☐
PlaSma: Procedural Knowledge Models for Language-based Planning and Re-Planning
Uses small models for procedural planning (decomposing a high-level goal into multiple steps)
Symbolic procedural-knowledge distillation strengthens the small model's commonsense, and an inference-time algorithm makes the output reasoning more structured and precise
Also proposes a new task, Replanning, which requires iteratively revising a plan to satisfy external constraints
Finally applies the method in an embodied environment, VirtualHome.
ICLR 2024: Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Jena D. Hwang, Xiang Lorraine Li, Hirona J. Arai, Soumya Sanyal, Keisuke Sakaguchi, Xiang Ren, Yejin Choi
- ☐
SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks
An agent with an encoder-decoder model as the fast-thinking module and an LLM as the slow-thinking module for planning and subtask decomposition
The fast module, SWIFT, is a relatively small 770M T5 encoder-decoder; its input includes previous actions, observations, visited locations, as well as the current environment state, and it outputs the next action. SAGE uses a GPT-4-class LLM with two-stage prompting: planning (goal to subgoals) and grounding (subgoals to a sequence of actions).
NeurIPS 2023, spotlight: Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren
- ☐
Localized Symbolic Knowledge Distillation for Visual Commonsense Models
Distills a model that can answer commonsense-reasoning questions about a specified local region of an image
NeurIPS 2023: Jae Sung Park, Jack Hessel, Khyathi Chandu, Paul Pu Liang, Ximing Lu, Qiuyuan Huang, Peter West, Jianfeng Gao, Ali Farhadi, Yejin Choi
- ☐
Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements
Trains a model that detects whether a piece of text accords with common sense
A T5 model fine-tuned on a collected dataset of commonsense statements labeled correct/incorrect, using classification plus contrastive learning
EMNLP 2023: Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi
- ☐
NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Distills a knowledge graph, NovATOMIC, from an LLM, then fine-tunes a NovaCOMET model on that dataset
The model supports arbitrarily structured relational data in both its inputs and outputs
EMNLP 2023 Findings: Peter West, Ronan Le Bras, Taylor Sorensen, Bill Yuchen Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi
- ☐
Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step
Distills small models on the intermediate reasoning produced by large models, improving the small models' reasoning ability
ACL 2023: Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang and Yejin Choi
- ☐
I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation
Self-imitation learning plus NeuroLogic decoding lets a small model learn to produce high-quality commonsense
ACL 2023: Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi
- ☐
Commonsense Knowledge Transfer for Pre-trained Language Models
Argues that commonsense knowledge is more implicit than linguistic or factual knowledge, so it targets commonsense specifically for transfer learning
Uses two tasks, commonsense mask infilling and commonsense relation prediction; standard pre-LLM techniques.
ACL Findings 2023: Wangchunshu Zhou, Ronan Le Bras and Yejin Choi
- ☐
Faith and Fate: Limits of Transformers on Compositionality
Defines the compositional complexity of three problem classes and finds that LLMs reduce multi-step reasoning to linearized subgraph matching rather than systematic reasoning steps
The analysis style resembles the generation-vs-understanding paradox paper
Tasks: multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem
Faith and Fate: Transformers as fuzzy pattern matchers – Answer.AI
- ☐
Symbolic Knowledge Distillation
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
Extracts a commonsense-only knowledge graph from GPT-3 while also distilling a small model; closer to direct structured generation, producing a graph from GPT
Lateral reading [1/12]
- ☑
WildVision
Why do arena-style leaderboards beat traditional benchmarks?
WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences
NeurIPS 2024: Yujie Lu, Dongfu Jiang, Wenhu Chen, William Yang Wang, Yejin Choi, Bill Yuchen Lin, Datasets and Benchmarks Track
- ☐
Perceptions to Beliefs
Builds a dataset of theory-of-mind questions to test whether large models can understand other agents' mental states
Perceptions to Beliefs: Exploring Precursory Inferences for Theory of Mind in Large Language Models
Chani Jung, Dongkwan Kim, Jiho Jin, Jiseon Kim, Yeon Seonwoo, Yejin Choi, Alice Oh, Hyunwoo Kim. EMNLP 2024
- ☐
How to Train Your Fact Verifier
Avoids continually updating model parameters to keep a large model's knowledge fresh,
thereby improving fact-verification ability; looks like prompt design, and opens up a new approach
How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
EMNLP 2024 Jaeyoung Lee, Ximing Lu, Jack Hessel, Faeze Brahman, Youngjae Yu, Yonatan Bisk, Yejin Choi, Saadia Gabriel
- ☐
Structured Chemistry Reasoning with Large Language Models
Proposes a prompting strategy for chemistry that improves LLM reasoning in that domain
The analysis also highlights LLMs' distinctive difficulty with precise, well-grounded scientific reasoning; worth digging into what exactly that difficulty is
ICML 2024: Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, Lianhui Qin
- ☐
MacGyver: Are Large Language Models Creative Problem Solvers?
Builds a dataset of 1,600 real-world problems to test whether models can come up with unconventional uses of objects
NAACL 2024:Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman
- ☐
UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations
Given a context description and an uncommon outcome, models and humans must produce a plausible explanation of what led to the outcome
This is termed uncommonsense abductive reasoning. The study builds an English dataset, UNcommonsense, tests both people and language models, and finds that human-written explanations enhanced (polished) by models are best; it also trains open-source LMs with different imitation-learning algorithms, improving results under human evaluation.
NAACL 2024: Wenting Zhao, Justin T Chiu, Jena D. Hwang, Faeze Brahman, Jack Hessel, Sanjiban Choudhury, Yejin Choi, Xiang Lorraine Li, Alane Suhr
- ☐
NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge
"Iron is harder than wood" is comparative knowledge. How can such knowledge be distilled from large models, and how do you build a convincing test set?
The paper acquires comparative knowledge from large models via neuro-symbolic distillation and constructs a test dataset.
NAACL 2024 Findings: Phillip Howard, Junlin Wang, Vasudev Lal, Gadi Singer, Yejin Choi, Swabha Swayamdipta
- ☐
Agent Lumos: Unified and Modular Training for Open-Source Language Agents
A framework for training open-source language agents
Key traits: unification, modularity, planning, and grounding. Also builds datasets for agent training.
ACL 2024: Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, Bill Yuchen Lin
- ☐
Can LLMs Keep a Secret?
Proposes a benchmark testing dialogue models' ability to protect private information
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
ICLR 2024, Spotlight: Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi
- ☐
Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms
On reasoning in visual contexts where commonsense norms are defeated by the context
Builds NormLens, a multimodal benchmark of visually grounded commonsense norms, then distills social commonsense from large models into small task-specific models.
EMNLP 2023: Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi, Youngjae Yu
- ☐
"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Probes the AMR parsing ability of GPT-4-class models: some common substructures parse fine, but full parses remain difficult
EMNLP 2023 Findings: Allyson Ettinger, Jena D. Hwang, Valentina Pyatkin, Chandra Bhagavatula, Yejin Choi
- ☐
🍏 The Curious Case of Commonsense Intelligence
A philosophical, humanities-oriented essay
Yejin Choi
Reinforcement learning
- ☐
Unpacking DPO and PPO
RL methods such as DPO and PPO are referred to as preference-based learning
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi. NeurIPS 2024
Preference-based learning is applied across many domains, so data, algorithms, and evaluation differ widely, and the contribution of each component is unclear. The paper decouples and evaluates every part of the pipeline, concluding that data matters most, followed by the learning algorithm, the reward model, and finally the prompts for policy training. The authors also find that PPO generally beats DPO.
- ☐
Don't throw away your value model!
Reuses the value network from PPO training by plugging it into the language model's (policy network's) decoding stage to guide decoding
Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
COLM 2024 Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz
- ☐
Tailoring Self-Rationalizers with Multi-Reward Distillation
Trains a small model with multi-reward distillation to generate intermediate rationales, which are themselves evaluated for, e.g., consistency and plausibility
How do you quantify multiple metrics and train on them? Feels a bit like skipping SFT and going straight to reinforcement learning?
ICLR 2024: Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren
- ☐
Crystal: Introspective Reasoners Reinforced with Self-Feedback
Realizes the notion of knowledge introspection within reinforcement learning
What is knowledge introspection? How is it modeled with reinforcement learning?
EMNLP 2023: Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz
- ☐
Generating Sequences by Learning to [Self-]Correct
Uses a corrector model to guide the base model toward better-formed outputs
The corrector is trained in an online training process, using scalar values or natural language as feedback to optimize generation.
ICLR 2023: Sean Welleck*, Ximing Lu*, Peter West+, Faeze Brahman+, Tianxiao Shen, Daniel Khashabi, Yejin Choi
- ☐
Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
Open-sources the RL4LMs (Reinforcement Learning for Language Models) library
Also creates the GRUE (General Reinforced-language Understanding Evaluation) benchmark with human-preference labels, and proposes NLPO (Natural Language Policy Optimization), which outperforms PPO
ICLR 2023 *Project: Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, Yejin Choi
- ☐
🍺 Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning
Trains a policy adapter network with reinforcement learning and uses it to guide the base model at decoding time; the tailored model beats larger models on some tasks
EMNLP 2023: Ximing Lu, Faeze Brahman, Peter West, Jaehun Jung, Khyathi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi
Abstract-only [3/3]
- ☑
WildChat: 1M ChatGPT Interaction Logs in the Wild
Collected 1M ChatGPT conversations by standing up their own ChatGPT web front end
ICLR 2024, Spotlight: Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng
- ☑ Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild
- ☑
Modularized Encoder-Decoder Models for Flexible Sequence-to-Sequence Compression
ACL Findings 2023: Wangchunshu Zhou, Ronan Le Bras and Yejin Choi
Cognition
- ☐ From Strategic Narratives to Code-Like Cognitive Models: An LLM-Based Approach in A Sorting Task
Evaluation
- ☑
Elo Uncovered
Uses two ranking properties, reliability and transitivity, to probe problems with Elo for LLM evaluation
Elo scores depend strongly on the order of evaluation and on hyperparameter choices (such as K), and often violate transitivity; obtaining reasonably stable Elo estimates requires empirically exploring match orderings and the choice of K. A minimal sketch of this sensitivity follows below.
[2311.17295] Elo Uncovered: Robustness and Best Practices in Language Model Evaluation. Submitted to an EMNLP workshop in 2023 (non-archival), later accepted at NeurIPS 2024.
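A minimal sketch of the sensitivity the paper studies: the same set of outcomes, replayed in different orders and with different K, yields different final Elo ratings (toy data, not the paper's protocol):

```python
# Online Elo: r_a += K * (score_a - expected_a), with
# expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400)).
import random

def elo(games, k=32, base=1000.0):
    r = {}
    for a, b, sa in games:                    # sa = 1 if a beat b, else 0
        ra, rb = r.setdefault(a, base), r.setdefault(b, base)
        ea = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        r[a] = ra + k * (sa - ea)
        r[b] = rb + k * ((1 - sa) - (1 - ea))
    return r

games = [("A", "B", 1)] * 3 + [("B", "C", 1)] * 3 + [("C", "A", 1)] * 2
for k in (16, 64):
    for seed in (0, 1):
        g = games[:]
        random.Random(seed).shuffle(g)        # same outcomes, new order
        print(k, seed, {m: round(v) for m, v in elo(g, k=k).items()})
```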
- ☑
Efficient computation of rankings from pairwise comparisons
An improvement on Zermelo's iterative algorithm
First shows how the original Zermelo iteration is derived, then proves the new iteration also converges to the optimum. Introduces additional algorithms for comparison, and generalizes to broader settings, e.g., a Bayesian treatment and handling ties. Finally validates correctness and efficiency on synthetic and real examples. A sketch of both iterations follows below.
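A sketch of both Bradley-Terry iterations as I understand them from the paper (win matrix w[i][j] = number of times i beat j; ties and the Bayesian variant are omitted):

```python
# Zermelo's classic fixed-point iteration vs. Newman's faster one:
#   Zermelo: pi_i <- wins_i / sum_j n_ij / (pi_i + pi_j)
#   Newman:  pi_i <- sum_j w_ij pi_j/(pi_i+pi_j) / sum_j w_ji/(pi_i+pi_j)
import numpy as np

def zermelo(w, iters=200):
    pi, games = np.ones(len(w)), w + w.T     # n_ij: games between i and j
    for _ in range(iters):
        denom = (games / (pi[:, None] + pi[None, :])).sum(axis=1)
        pi = w.sum(axis=1) / denom           # row sums of w = total wins
        pi /= np.exp(np.log(pi).mean())      # fix the arbitrary scale
    return pi

def newman(w, iters=200):
    pi = np.ones(len(w))
    for _ in range(iters):
        inv = 1.0 / (pi[:, None] + pi[None, :])
        pi = (w * pi[None, :] * inv).sum(axis=1) / (w.T * inv).sum(axis=1)
        pi /= np.exp(np.log(pi).mean())
    return pi

w = np.array([[0, 7, 4], [3, 0, 5], [2, 1, 0]], float)
print(zermelo(w).round(3), newman(w).round(3))  # both converge to the same pi
```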