
Generation with LLMs (notes on the Hugging Face LLM tutorial)

https://huggingface.co/docs/transformers/main/en/llm_tutorial

The stopping condition is determined by the model: it should learn when to output an end-of-sequence (EOS) token. If this is not the case, generation stops when some predefined maximum length is reached, as sketched after the first snippet below.

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, orange, purple, pink,'
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model_inputs = tokenizer(["A list of colors: red, blue", "Portugal is"], return_tensors="pt", padding=True
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A list of colors: red, blue, green, yellow, orange, purple, pink,',
'Portugal is a country in southwestern Europe, on the Iber']
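
The stopping behavior mentioned above can be made explicit through generate kwargs; max_new_tokens and eos_token_id are standard options, and the values below are illustrative only:

# A minimal sketch of controlling the stopping condition, reusing `model`,
# `tokenizer`, and `model_inputs` from the snippet above
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=100,                   # hard cap if no EOS token is produced
    eos_token_id=tokenizer.eos_token_id,  # stop as soon as the EOS token appears
)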

There are many generation strategies, and sometimes the default values may not be appropriate for your use case. The sections below walk through the most common pitfalls and how to avoid them.
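
For instance, the decoding strategy can be switched through a GenerationConfig; a minimal sketch with illustrative values, reusing the model and inputs from above:

from transformers import GenerationConfig

# Illustrative configs only -- different strategies suit different tasks
greedy_config = GenerationConfig(do_sample=False)                    # greedy decoding (the default)
sampling_config = GenerationConfig(do_sample=True, temperature=0.7)  # multinomial sampling
beam_config = GenerationConfig(num_beams=4)                          # beam search

generated_ids = model.generate(**model_inputs, generation_config=sampling_config)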

Generated output is too short/long

If not specified in the GenerationConfig file, generate returns up to 20 tokens by default. It is highly recommended to manually set max_new_tokens in the generate call to control the maximum number of new tokens it can return. Keep in mind that LLMs (more precisely, decoder-only models) also return the input prompt as part of the output.

model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda")# By default, the output will contain up to 20 tokens
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5'

# Setting `max_new_tokens` allows you to control the maximum length
generated_ids = model.generate(**model_inputs, max_new_tokens=50)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
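
Since decoder-only models return the prompt as part of the output, you can slice it off by input length; this is the same pattern the chat example further below uses:

# Strip the input prompt from the output by slicing off the prompt tokens
input_length = model_inputs.input_ids.shape[1]
new_tokens_only = generated_ids[:, input_length:]
tokenizer.batch_decode(new_tokens_only, skip_special_tokens=True)[0]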

Incorrect generation mode

By default, unless specified in the GenerationConfig file, generate selects the most likely token at each iteration (greedy decoding). For creative tasks this is often undesirable; sampling can be enabled with do_sample=True, as the example below shows.

# Set seed for reproducibility -- you don't need this unless you want full reproducibility
from transformers import set_seed
set_seed(42)

model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")

# LLM + greedy decoding = repetitive, boring output
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'

# With sampling, the output becomes more creative!
generated_ids = model.generate(**model_inputs, do_sample=True)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.  Specifically, I am an indoor-only cat.  I'
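
Sampling can be tuned further; temperature, top_k, and top_p are standard generate kwargs, and the values here are illustrative rather than recommendations:

generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    temperature=0.7,  # < 1.0 sharpens the token distribution, > 1.0 flattens it
    top_p=0.9,        # nucleus sampling: keep the smallest token set with cumulative prob >= 0.9
)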

Wrong padding side

LLMs are decoder-only architectures, meaning they continue to iterate on the input prompt. If the inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue generating from pad tokens, the inputs must be left-padded. Also make sure to pass the attention mask to generate (spelled out explicitly after the snippets below)!

# To demonstrate the failure, reload the tokenizer with its default right-padding
# (the tokenizer above was created with padding_side="left"): the 1st sequence,
# which is shorter, gets padding on the right side, and generation fails to
# capture the logic.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model_inputs = tokenizer(
    ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 33333333333'

# With left-padding, it works as expected!
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model_inputs = tokenizer(["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
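
Note that **model_inputs above already passes the attention mask along with the input IDs; spelled out explicitly, the call is equivalent to:

# Equivalent explicit call -- the attention mask tells generate which
# positions are padding and should be ignored
generated_ids = model.generate(
    input_ids=model_inputs["input_ids"],
    attention_mask=model_inputs["attention_mask"],
)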

Wrong prompt

Some models and tasks expect a certain input prompt format to work properly. When this format is not applied, performance degrades silently: the model still runs, but not as well as if you had followed the expected prompt format.

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha", device_map="auto", load_in_4bit=True
)
set_seed(0)
prompt = """How many helicopters can a human eat in one sitting? Reply as a thug."""
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
input_length = model_inputs.input_ids.shape[1]
generated_ids = model.generate(**model_inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0])
"I'm not a thug, but i can tell you that a human cannot eat"
# Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write
# a better prompt and use the right template for this model (through `tokenizer.apply_chat_template`)

set_seed(0)
messages = [{"role": "system","content": "You are a friendly chatbot who always responds in the style of a thug",},{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
input_length = model_inputs.shape[1]
generated_ids = model.generate(model_inputs, do_sample=True, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens=True)[0])
'None, you thug. How bout you try to focus on more useful questions?'
# As we can see, it followed a proper thug style 😎
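
If you want to inspect what the chat template actually produces, apply_chat_template can also return the rendered string instead of token IDs (tokenize=False is a standard option):

# Render the template to a string to inspect the prompt format Zephyr expects
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt_text)  # shows the role markers / special tokens inserted by the template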
