Large model fine-tuning

28 articles in total
WeClone: Training Digital Doppelgangers with WeChat Chat Logs and Voice Messages

Comprehensive Introduction: WeClone is an open-source project that combines WeChat chat logs and voice messages with large language models and speech-synthesis technology to let users create personalized digital doppelgangers. The project analyzes the user's chatting habits to train the model, and needs only a small number of voice samples to generate realistic-sounding speech...
4 months ago
MM-EUREKA: A Multimodal Reinforcement Learning Tool for Exploring Visual Reasoning

Comprehensive Introduction: MM-EUREKA is an open-source project developed by the Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, and other institutions. Using rule-based reinforcement learning techniques, it extends textual reasoning capabilities to multimodal scenarios, helping models process both image and text information. The core of this tool...
5 months ago
A Chinese Distillation Dataset Based on the Full-Scale DeepSeek-R1, Supporting Chinese R1 Distillation SFT

Comprehensive Introduction: The Chinese DeepSeek-R1 distillation dataset is an open-source Chinese dataset containing 110K samples, designed to support machine learning and natural language processing research. Released by Cong Liu's NLP team, it contains not only mathematical data but also a large amount of general-purpose...
6 months ago
NVIDIA Garak: An Open-Source Tool for Detecting LLM Vulnerabilities and Securing Generative AI

Comprehensive Introduction: NVIDIA Garak is an open-source tool that specializes in detecting vulnerabilities in large language models (LLMs). Through static, dynamic, and adaptive probing, it checks a model for weaknesses such as hallucinations, data leakage, prompt injection, misinformation generation, and harmful content generation...
9 months ago