
Kolors txt2img error 'NoneType' object has no attribute 'enable_model_cpu_offload', please take a look #102

Open
logossssss opened this issue Oct 14, 2024 · 10 comments

Comments

@logossssss

Prompt executed in 4.15 seconds
got prompt
Process using 1 roles,mode is txt2img....
total_vram is 16375.5,aggressive_offload is True,offload is True
start kolor processing...
loader story_maker processing...
!!! Exception during processing !!! 'NoneType' object has no attribute 'enable_model_cpu_offload'
Traceback (most recent call last):
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1393, in story_model_loader
pipe.enable_model_cpu_offload()
AttributeError: 'NoneType' object has no attribute 'enable_model_cpu_offload'

@logossssss changed the title to "Kolors txt2img error 'NoneType' object has no attribute 'enable_model_cpu_offload', please take a look" on Oct 14, 2024
@smthemex
Owner

Are you on a Mac? In principle this should run without problems.

@logossssss
Author

The txt2img branch of the kolor_loader function in model_loader_utils.py doesn't return pipe, so the caller receives None and the call fails. Adding the return fixed it:

def kolor_loader(repo_id,model_type,set_attention_processor,id_length,kolor_face,clip_vision_path,clip_load,CLIPVisionModelWithProjection,CLIPImageProcessor,
                 photomaker_dir,face_ckpt,AutoencoderKL,EulerDiscreteScheduler,UNet2DConditionModel):
    from .kolors.pipelines.pipeline_stable_diffusion_xl_chatglm_256 import \
        StableDiffusionXLPipeline as StableDiffusionXLPipelineKolors
    from .kolors.models.modeling_chatglm import ChatGLMModel
    from .kolors.models.tokenization_chatglm import ChatGLMTokenizer
    from .kolors.models.unet_2d_condition import UNet2DConditionModel as UNet2DConditionModelkolor
    logging.info("loader story_maker processing...")
    text_encoder = ChatGLMModel.from_pretrained(
        f'{repo_id}/text_encoder', torch_dtype=torch.float16).half()
    vae = AutoencoderKL.from_pretrained(f"{repo_id}/vae", revision=None).half()
    tokenizer = ChatGLMTokenizer.from_pretrained(f'{repo_id}/text_encoder')
    scheduler = EulerDiscreteScheduler.from_pretrained(f"{repo_id}/scheduler")
    if model_type == "txt2img":
        unet = UNet2DConditionModel.from_pretrained(f"{repo_id}/unet", revision=None,
                                                    use_safetensors=True).half()
        pipe = StableDiffusionXLPipelineKolors(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            force_zeros_for_empty_prompt=False, )
        set_attention_processor(pipe.unet, id_length, is_ipadapter=False)
        return pipe  # return the pipeline so the caller can call enable_model_cpu_offload() on it

@smthemex
Owner

The return at the bottom was mis-indented. Thanks for pointing it out; it has been fixed.
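
For readers hitting the same error before updating: a minimal sketch of the failure mode described above, using illustrative names rather than the actual repository code. When the return sits at the wrong indentation level, the txt2img branch falls off the end of the function, Python implicitly returns None, and the caller's pipe.enable_model_cpu_offload() then raises the AttributeError shown in the traceback.

def kolor_loader_sketch(model_type, build_txt2img_pipe, build_img2img_pipe):
    # Hypothetical simplification, not the repository code.
    if model_type == "txt2img":
        pipe = build_txt2img_pipe()
        # BUG: no return here, so this branch implicitly returns None
    elif model_type == "img2img":
        pipe = build_img2img_pipe()
        return pipe  # only this branch returns the pipeline

def kolor_loader_sketch_fixed(model_type, build_txt2img_pipe, build_img2img_pipe):
    if model_type == "txt2img":
        pipe = build_txt2img_pipe()
        return pipe  # fix: return inside the txt2img branch as well
    elif model_type == "img2img":
        pipe = build_img2img_pipe()
        return pipe

pipe = kolor_loader_sketch("txt2img", object, object)        # returns None
# pipe.enable_model_cpu_offload()  -> AttributeError: 'NoneType' object has no attribute ...
pipe = kolor_loader_sketch_fixed("txt2img", object, object)  # returns a pipeline object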

@czm0304

czm0304 commented Nov 17, 2024

Hello, I also ran into the Kolors txt2img error 'NoneType' object has no attribute 'enable_model_cpu_offload'. I applied your fix, but now a new error appears:

ComfyUI Error Report

Error Details

  • Node Type: Storydiffusion_Sampler
  • Exception Type: NameError
  • Exception Message: name 'consistory' is not defined

Stack Trace

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1990, in story_sampler
    for value in gen:
                 ^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 648, in process_generation
    elif consistory:
         ^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.7
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8589410304
    • VRAM Free: 7474249728
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-11-17 09:17:58,998 - root - INFO - Total VRAM 8192 MB, total RAM 32538 MB
2024-11-17 09:17:58,998 - root - INFO - pytorch version: 2.5.1+cu124
2024-11-17 09:17:58,998 - root - INFO - Set vram state to: NORMAL_VRAM
2024-11-17 09:17:58,998 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
2024-11-17 09:17:59,580 - root - INFO - Using pytorch cross attention
2024-11-17 09:18:00,356 - root - INFO - [Prompt Server] web root: H:\ComfyUI_windows_portable\ComfyUI\web
2024-11-17 09:18:00,950 - root - INFO - 
Import times for custom nodes:
2024-11-17 09:18:00,950 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-11-17 09:18:00,950 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
2024-11-17 09:18:00,950 - root - INFO -    0.4 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion
2024-11-17 09:18:00,950 - root - INFO - 
2024-11-17 09:18:00,953 - root - INFO - Starting server

2024-11-17 09:18:00,953 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-17 09:18:18,937 - root - INFO - got prompt
2024-11-17 09:18:18,948 - root - INFO - Process using 1 roles,mode is txt2img....
2024-11-17 09:18:18,948 - root - INFO - total_vram is 8191.5,aggressive_offload is True,offload is True
2024-11-17 09:18:18,948 - root - INFO - start kolor processing...
2024-11-17 09:18:18,960 - root - INFO - loader story_maker processing...
2024-11-17 09:18:22,090 - root - ERROR - !!! Exception during processing !!! name 'consistory' is not defined
2024-11-17 09:18:22,092 - root - ERROR - Traceback (most recent call last):
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1990, in story_sampler
    for value in gen:
                 ^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 648, in process_generation
    elif consistory:
         ^^^^^^^^^^
NameError: name 'consistory' is not defined

2024-11-17 09:18:22,094 - root - INFO - Prompt executed in 3.15 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":8,"last_link_id":9,"nodes":[{"id":8,"type":"SaveImage","pos":{"0":2163,"1":2},"size":{"0":315,"1":270},"flags":{},"order":2,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9,"label":"images"}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":7,"type":"Storydiffusion_Sampler","pos":{"0":1665,"1":22},"size":{"0":338.47149658203125,"1":837.4097900390625},"flags":{},"order":1,"mode":0,"inputs":[{"name":"model","type":"STORY_DICT","link":8,"label":"model"},{"name":"control_image","type":"IMAGE","link":null,"shape":7,"label":"control_image"}],"outputs":[{"name":"image","type":"IMAGE","links":[9],"slot_index":0,"label":"image"},{"name":"prompt_array","type":"STRING","links":null,"label":"prompt_array"}],"properties":{"Node name for S&R":"Storydiffusion_Sampler"},"widgets_values":["[Taylor] wake up in the bed ;","bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn,amputation, disconnected limbs","Comic_book",1857722987,"randomize",20,7,1,20,3.5,0.5,5,false,0.8,"0., 0.25, 0.4, 0.75;0.6, 0.25, 1., 0.75"]},{"id":6,"type":"Storydiffusion_Model_Loader","pos":{"0":1067,"1":109},"size":{"0":435.52093505859375,"1":665.1209106445312},"flags":{},"order":0,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":null,"shape":7,"label":"image"},{"name":"condition_image","type":"IMAGE","link":null,"shape":7,"label":"condition_image"},{"name":"model","type":"MODEL","link":null,"shape":7,"label":"model"},{"name":"clip","type":"CLIP","link":null,"shape":7,"label":"clip"},{"name":"vae","type":"VAE","link":null,"shape":7,"label":"vae"}],"outputs":[{"name":"model","type":"STORY_DICT","links":[8],"slot_index":0,"label":"model"}],"properties":{"Node name for S&R":"Storydiffusion_Model_Loader"},"widgets_values":["[Taylor] a woman img, wearing a white T-shirt, blue loose hair.","H:/ComfyUI_windows_portable/ComfyUI/models/Kwai-Kolors/Kolors","none","none","none","none",0.8,"none","clip-vit-large-patch14.safetensors","best quality","euler","normal",0.5,0.5,768,768,"v1",""]}],"links":[[8,6,0,7,0,"STORY_DICT"],[9,7,0,8,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.2100000000000002,"offset":[-894.0509170005034,-57.38883346353526]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)
Could you take a look at what's going on here?

@czm0304

czm0304 commented Nov 17, 2024

ComfyUI Error Report

Error Details

  • Node Type: Storydiffusion_Sampler
  • Exception Type: NameError
  • Exception Message: name 'consistory' is not defined

Stack Trace

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1990, in story_sampler
    for value in gen:
                 ^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 648, in process_generation
    elif consistory:
         ^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.7
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8589410304
    • VRAM Free: 7474249728
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-11-17 09:40:41,124 - root - INFO - Total VRAM 8192 MB, total RAM 32538 MB
2024-11-17 09:40:41,124 - root - INFO - pytorch version: 2.5.1+cu124
2024-11-17 09:40:41,124 - root - INFO - Set vram state to: NORMAL_VRAM
2024-11-17 09:40:41,124 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
2024-11-17 09:40:42,440 - root - INFO - Using pytorch cross attention
2024-11-17 09:40:43,190 - root - INFO - [Prompt Server] web root: H:\ComfyUI_windows_portable\ComfyUI\web
2024-11-17 09:40:43,674 - root - INFO - 
Import times for custom nodes:
2024-11-17 09:40:43,674 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-11-17 09:40:43,674 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
2024-11-17 09:40:43,674 - root - INFO -    0.3 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion
2024-11-17 09:40:43,674 - root - INFO - 
2024-11-17 09:40:43,674 - root - INFO - Starting server

2024-11-17 09:40:43,674 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-17 09:40:51,112 - root - INFO - got prompt
2024-11-17 09:40:51,121 - root - INFO - Process using 1 roles,mode is txt2img....
2024-11-17 09:40:51,121 - root - INFO - total_vram is 8191.5,aggressive_offload is True,offload is True
2024-11-17 09:40:51,121 - root - INFO - start kolor processing...
2024-11-17 09:40:51,126 - root - INFO - loader story_maker processing...
2024-11-17 09:40:54,105 - root - ERROR - !!! Exception during processing !!! name 'consistory' is not defined
2024-11-17 09:40:54,107 - root - ERROR - Traceback (most recent call last):
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1990, in story_sampler
    for value in gen:
                 ^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 648, in process_generation
    elif consistory:
         ^^^^^^^^^^
NameError: name 'consistory' is not defined

2024-11-17 09:40:54,108 - root - INFO - Prompt executed in 2.99 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":8,"last_link_id":9,"nodes":[{"id":8,"type":"SaveImage","pos":{"0":2163,"1":2},"size":{"0":315,"1":270},"flags":{},"order":2,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9,"label":"images"}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":7,"type":"Storydiffusion_Sampler","pos":{"0":1665,"1":22},"size":{"0":338.47149658203125,"1":837.4097900390625},"flags":{},"order":1,"mode":0,"inputs":[{"name":"model","type":"STORY_DICT","link":8,"label":"model"},{"name":"control_image","type":"IMAGE","link":null,"shape":7,"label":"control_image"}],"outputs":[{"name":"image","type":"IMAGE","links":[9],"slot_index":0,"label":"image"},{"name":"prompt_array","type":"STRING","links":null,"label":"prompt_array"}],"properties":{"Node name for S&R":"Storydiffusion_Sampler"},"widgets_values":["[Taylor] wake up in the bed ;","bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn,amputation, disconnected limbs","Comic_book",1430297936,"randomize",20,7,1,20,3.5,0.5,5,false,0.8,"0., 0.25, 0.4, 0.75;0.6, 0.25, 1., 0.75"]},{"id":6,"type":"Storydiffusion_Model_Loader","pos":{"0":1067,"1":109},"size":{"0":435.52093505859375,"1":665.1209106445312},"flags":{},"order":0,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":null,"shape":7,"label":"image"},{"name":"condition_image","type":"IMAGE","link":null,"shape":7,"label":"condition_image"},{"name":"model","type":"MODEL","link":null,"shape":7,"label":"model"},{"name":"clip","type":"CLIP","link":null,"shape":7,"label":"clip"},{"name":"vae","type":"VAE","link":null,"shape":7,"label":"vae"}],"outputs":[{"name":"model","type":"STORY_DICT","links":[8],"slot_index":0,"label":"model"}],"properties":{"Node name for S&R":"Storydiffusion_Model_Loader"},"widgets_values":["[Taylor] a woman img, wearing a white T-shirt, blue loose hair.","H:/ComfyUI_windows_portable/ComfyUI/models/Kwai-Kolors/Kolors","none","none","none","none",0.8,"none","clip-vit-large-patch14.safetensors","best quality","euler","normal",0.5,0.5,768,768,"v1",""]}],"links":[[8,6,0,7,0,"STORY_DICT"],[9,7,0,8,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[-657.0066577285959,3.8116868685335703]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@smthemex
Owner

smthemex commented Nov 17, 2024

There was indeed a problem; I forgot to remove some leftover code. Thanks for testing, it's fixed now.
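
For anyone who hits the NameError before pulling the update: the elif consistory: branch in process_generation tests a flag that is never assigned on the Kolors txt2img path, which is what a leftover, half-removed feature branch looks like. Updating the node is the real fix; below is a minimal sketch of the failure pattern and a stopgap, with illustrative names rather than the actual repository code.

def process_generation_sketch(mode):
    # Stopgap: make sure the leftover flag exists before it is tested,
    # or delete the dead branch entirely. Without this line, reaching
    # the elif raises NameError: name 'consistory' is not defined.
    consistory = False
    if mode == "story_maker":
        return "story_maker path"
    elif consistory:
        return "consistory path"  # dead code on the Kolors txt2img path
    return "kolors txt2img path"

print(process_generation_sketch("txt2img"))  # -> kolors txt2img path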

@czm0304

czm0304 commented Nov 17, 2024

ComfyUI Error Report

Error Details

  • Node Type: Storydiffusion_Sampler
  • Exception Type: TypeError
  • Exception Message: ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'

Stack Trace

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1977, in story_sampler
    for value in gen:
                 ^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 653, in process_generation
    id_images = pipe(
                ^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\kolors\pipelines\pipeline_stable_diffusion_xl_chatglm_256.py", line 719, in __call__
    ) = self.encode_prompt(
        ^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\kolors\pipelines\pipeline_stable_diffusion_xl_chatglm_256.py", line 326, in encode_prompt
    text_inputs = tokenizer(
                  ^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3021, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3109, in _call_one
    return self.batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3311, in batch_encode_plus
    return self._batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils.py", line 892, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils.py", line 970, in _batch_prepare_for_model
    batch_outputs = self.pad(
                    ^^^^^^^^^

  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3527, in pad
    outputs = self._pad(
              ^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.7
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8589410304
    • VRAM Free: 7474249728
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-11-17 10:20:52,416 - root - INFO - Total VRAM 8192 MB, total RAM 32538 MB
2024-11-17 10:20:52,416 - root - INFO - pytorch version: 2.5.1+cu124
2024-11-17 10:20:52,416 - root - INFO - Set vram state to: NORMAL_VRAM
2024-11-17 10:20:52,416 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
2024-11-17 10:20:53,104 - root - INFO - Using pytorch cross attention
2024-11-17 10:20:55,088 - root - INFO - [Prompt Server] web root: H:\ComfyUI_windows_portable\ComfyUI\web
2024-11-17 10:20:55,714 - root - INFO - 
Import times for custom nodes:
2024-11-17 10:20:55,714 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-11-17 10:20:55,714 - root - INFO -    0.0 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
2024-11-17 10:20:55,714 - root - INFO -    0.4 seconds: H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion
2024-11-17 10:20:55,714 - root - INFO - 
2024-11-17 10:20:55,714 - root - INFO - Starting server

2024-11-17 10:20:55,714 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-17 10:21:10,533 - root - INFO - got prompt
2024-11-17 10:21:10,544 - root - INFO - Process using 1 roles,mode is txt2img....
2024-11-17 10:21:10,544 - root - INFO - total_vram is 8191.5,aggressive_offload is True,offload is True
2024-11-17 10:21:10,544 - root - INFO - start kolor processing...
2024-11-17 10:21:10,548 - root - INFO - loader story_maker processing...
2024-11-17 10:21:13,437 - root - ERROR - !!! Exception during processing !!! ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'
2024-11-17 10:21:13,488 - root - ERROR - Traceback (most recent call last):
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1977, in story_sampler
    for value in gen:
                 ^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 653, in process_generation
    id_images = pipe(
                ^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\kolors\pipelines\pipeline_stable_diffusion_xl_chatglm_256.py", line 719, in __call__
    ) = self.encode_prompt(
        ^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\kolors\pipelines\pipeline_stable_diffusion_xl_chatglm_256.py", line 326, in encode_prompt
    text_inputs = tokenizer(
                  ^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3021, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3109, in _call_one
    return self.batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3311, in batch_encode_plus
    return self._batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils.py", line 892, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils.py", line 970, in _batch_prepare_for_model
    batch_outputs = self.pad(
                    ^^^^^^^^^
  File "H:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3527, in pad
    outputs = self._pad(
              ^^^^^^^^^^
TypeError: ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'

2024-11-17 10:21:13,489 - root - INFO - Prompt executed in 2.95 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":8,"last_link_id":9,"nodes":[{"id":8,"type":"SaveImage","pos":{"0":2163,"1":2},"size":{"0":315,"1":270},"flags":{},"order":2,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9,"label":"images"}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":7,"type":"Storydiffusion_Sampler","pos":{"0":1665,"1":22},"size":{"0":338.47149658203125,"1":837.4097900390625},"flags":{},"order":1,"mode":0,"inputs":[{"name":"model","type":"STORY_DICT","link":8,"label":"model"},{"name":"control_image","type":"IMAGE","link":null,"shape":7,"label":"control_image"}],"outputs":[{"name":"image","type":"IMAGE","links":[9],"slot_index":0,"label":"image"},{"name":"prompt_array","type":"STRING","links":null,"label":"prompt_array"}],"properties":{"Node name for S&R":"Storydiffusion_Sampler"},"widgets_values":["[Taylor] wake up in the bed ;","bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn,amputation, disconnected limbs","Comic_book",2053903431,"randomize",20,7,1,20,3.5,0.5,5,false,0.8,"0., 0.25, 0.4, 0.75;0.6, 0.25, 1., 0.75"]},{"id":6,"type":"Storydiffusion_Model_Loader","pos":{"0":1067,"1":109},"size":{"0":435.52093505859375,"1":665.1209106445312},"flags":{},"order":0,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":null,"shape":7,"label":"image"},{"name":"condition_image","type":"IMAGE","link":null,"shape":7,"label":"condition_image"},{"name":"model","type":"MODEL","link":null,"shape":7,"label":"model"},{"name":"clip","type":"CLIP","link":null,"shape":7,"label":"clip"},{"name":"vae","type":"VAE","link":null,"shape":7,"label":"vae"}],"outputs":[{"name":"model","type":"STORY_DICT","links":[8],"slot_index":0,"label":"model"}],"properties":{"Node name for S&R":"Storydiffusion_Model_Loader"},"widgets_values":["[Taylor] a woman img, wearing a white T-shirt, blue loose hair.","H:/ComfyUI_windows_portable/ComfyUI/models/Kwai-Kolors/Kolors","none","none","none","none",0.8,"none","clip-vit-large-patch14.safetensors","best quality","euler","normal",0.5,0.5,768,768,"v1",""]}],"links":[[8,6,0,7,0,"STORY_DICT"],[9,7,0,8,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[-850.152875678189,65.50411694968818]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)
A new problem has come up. Could this be caused by the version of the transformers library?

@smthemex
Owner

Probably your transformers version is too new.
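
To expand on that with a hedged suggestion (not verified against the maintainer's eventual fix): as the traceback shows, newer transformers releases forward a padding_side keyword from pad() down to the tokenizer's _pad() method, while the bundled ChatGLMTokenizer overrides _pad() with the older signature, hence the TypeError. Either pin transformers to whatever version the node's requirements specify (a release from before padding_side started being forwarded), or shim the tokenizer at load time so the extra keyword is ignored. A minimal sketch of the shim, assuming the tokenizer's own padding behaviour should stay unchanged:

import inspect

def make_pad_compatible(tokenizer):
    """Wrap tokenizer._pad so an unexpected 'padding_side' kwarg is dropped."""
    original_pad = tokenizer._pad
    if "padding_side" in inspect.signature(original_pad).parameters:
        return tokenizer  # already accepts the new kwarg, nothing to do

    def _pad_compat(*args, **kwargs):
        kwargs.pop("padding_side", None)  # old signature does not know this kwarg
        return original_pad(*args, **kwargs)

    tokenizer._pad = _pad_compat
    return tokenizer

This would be applied once to the ChatGLMTokenizer instance right after ChatGLMTokenizer.from_pretrained(...) in kolor_loader; downgrading transformers avoids the need for any patch at all.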

@czm0304

czm0304 commented Nov 18, 2024

Could I add you on WeChat? I really want to use the Kolors model, but I've been trying to get it installed for a long time without success, and I'd like your help figuring out where the problem is. I also follow you on Bilibili.

@smthemex
Owner

Send your WeChat ID to my email [email protected]. I was just about to rework the Kolors code anyway, but let's get your problem sorted out first.
