| | + deepspeed |
| | [rank2]:[W528 19:25:53.348334799 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank4]:[W528 19:25:53.517291901 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank1]:[W528 19:25:53.777202068 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank7]:[W528 19:25:53.842867359 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank5]:[W528 19:25:53.843780295 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank3]:[W528 19:25:53.920713143 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank6]:[W528 19:25:53.920785371 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
| | [rank0]:[W528 19:25:53.950178784 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
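These barrier warnings are advisory: each rank fell back to "GPU N" because the process had not yet been bound to a device when `dist.barrier()` ran. Below is a minimal sketch of the fix the warning itself suggests, assuming a launcher (deepspeed/torchrun) that exports `LOCAL_RANK` and a recent PyTorch that accepts `device_id` in `init_process_group`; it is not the training script's actual code.

```python
# Sketch: bind each rank to its GPU at init time so NCCL no longer
# has to guess the rank -> GPU mapping.
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])  # exported by deepspeed/torchrun
torch.cuda.set_device(local_rank)
dist.init_process_group(
    backend="nccl",
    device_id=torch.device(f"cuda:{local_rank}"),  # silences the warning above
)
dist.barrier()  # now uses the declared device instead of guessing
```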
| | loading configuration file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k/config.json |
| | Model config Qwen2Config { |
| | "_attn_implementation_autoset": true, |
| | "_name_or_path": "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k", |
| | "architectures": [ |
| | "Qwen2ForCausalLM" |
| | ], |
| | "attention_dropout": 0.0, |
| | "bos_token_id": 128245, |
| | "eos_token_id": 151643, |
| | "hidden_act": "silu", |
| | "hidden_size": 1024, |
| | "initializer_range": 0.02, |
| | "intermediate_size": 2816, |
| | "max_position_embeddings": 32768, |
| | "max_window_layers": 21, |
| | "model_type": "qwen2", |
| | "num_attention_heads": 16, |
| | "num_hidden_layers": 24, |
| | "num_key_value_heads": 16, |
| | "pad_token_id": 151643, |
| | "rms_norm_eps": 1e-06, |
| | "rope_scaling": null, |
| | "rope_theta": 1000000.0, |
| | "sliding_window": 32768, |
| | "tie_word_embeddings": true, |
| | "torch_dtype": "bfloat16", |
| | "transformers_version": "4.49.0", |
| | "use_cache": true, |
| | "use_sliding_window": false, |
| | "vocab_size": 151646 |
| | } |
| |
|
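For reference, the printed config can be reproduced programmatically; a small sketch using the transformers API and the checkpoint path from the log:

```python
# Sketch: load and inspect the same config shown above.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k"
)
print(cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers)  # qwen2 1024 24
print(cfg.torch_dtype)  # torch.bfloat16, which drives the dtype messages below
```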
| | loading weights file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k/pytorch_model.bin |
| | Will use torch_dtype=torch.bfloat16 as defined in model's config object |
| | Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16. |
| | Detected DeepSpeed ZeRO-3: activating zero.init() for this model |
| | Generate config GenerationConfig { |
| | "bos_token_id": 128245, |
| | "eos_token_id": 151643, |
| | "pad_token_id": 151643 |
| | } |
| | |
| | Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered. |
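The three messages above (dtype selection, ZeRO-3 detection, sliding-window warning) all trace back to how `from_pretrained` is called. A hedged sketch of the usual pattern follows; the DeepSpeed config values are assumptions, not this run's actual settings.

```python
# Sketch: zero.init() is triggered automatically when a live HfDeepSpeedConfig
# with ZeRO stage 3 exists before from_pretrained() runs.
from transformers import AutoModelForCausalLM
from transformers.integrations import HfDeepSpeedConfig

ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}  # assumed values
dschf = HfDeepSpeedConfig(ds_config)  # keep a reference alive while loading

model = AutoModelForCausalLM.from_pretrained(
    "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k",
    torch_dtype="auto",          # honors torch_dtype=bfloat16 from the config
    attn_implementation="sdpa",  # one way to avoid the eager sliding-window warning
)
```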
| | All model checkpoint weights were used when initializing Qwen2ForCausalLM. |
| |
|
| | All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k. |
| | If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training. |
| | Generation config file not found, using a generation config created from the model config. |
| | loading file vocab.json |
| | loading file merges.txt |
| | loading file tokenizer.json |
| | loading file added_tokens.json |
| | loading file special_tokens_map.json |
| | loading file tokenizer_config.json |
| | loading file chat_template.jinja |
| | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. |
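The tokenizer side of the log is a standard AutoTokenizer load, and the special-tokens warning means the vocabulary now contains tokens whose embedding rows may never have been trained. A sketch, reusing the hypothetical `model` object from the ZeRO-3 snippet above:

```python
# Sketch: load the tokenizer files listed above and make sure the embedding
# matrix covers any added special tokens (the point of the warning).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k"
)
if len(tokenizer) > model.config.vocab_size:
    model.resize_token_embeddings(len(tokenizer))  # new rows start untrained
```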
| | Using /home/hansirui_1st/.cache/torch_extensions/py311_cu124 as PyTorch extensions root... |
| | Detected CUDA files, patching ldflags |
| | Emitting ninja build file /home/hansirui_1st/.cache/torch_extensions/py311_cu124/fused_adam/build.ninja... |
| | /aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. |
| | If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST']. |
| | warnings.warn( |
| | Building extension module fused_adam... |
| | Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) |
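The TORCH_CUDA_ARCH_LIST warning is harmless, but it makes the fused_adam JIT build slower, since the extension is compiled for every visible architecture. The fix the warning suggests, with an assumed architecture value:

```python
# Sketch: pin the target CUDA architecture before DeepSpeed JIT-builds fused_adam.
import os

os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"  # assumption: A100-class GPUs; match your hardware
```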
| | Loading extension module fused_adam... |
| | wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information. |
| | `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`. |
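This warning is transformers reconciling two settings: gradient checkpointing recomputes activations in the backward pass, which a KV cache would defeat, so `use_cache` is forced off during training. The equivalent explicit code, as a sketch:

```python
# Sketch: make the setting explicit instead of relying on the automatic override.
model.config.use_cache = False         # the KV cache is inference-only
model.gradient_checkpointing_enable()  # trade recompute for activation memory
```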
| | wandb: Currently logged in as: xtom to https://api.wandb.ai. Use `wandb login --relogin` to force relogin. |
| | wandb: Tracking run with wandb version 0.19.8 |
| | wandb: Run data is saved locally in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100/wandb/run-20250528_192604-xum7t672 |
| | wandb: Run `wandb offline` to turn off syncing. |
| | wandb: Syncing run qwen-0.5b-s3-Q1-2k-Q2-100 |
| | wandb: ⭐️ View project at https://wandb.ai/xtom/Inverse_Alignment |
| | wandb: 🚀 View run at https://wandb.ai/xtom/Inverse_Alignment/runs/xum7t672 |
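The run metadata above corresponds to a standard `wandb.init` call; a sketch with the names taken directly from the log:

```python
# Sketch: the tracking setup matching the run shown above.
import wandb

run = wandb.init(
    project="Inverse_Alignment",
    name="qwen-0.5b-s3-Q1-2k-Q2-100",
    dir="/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100",
)
```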
| |
Training 1/1 epoch:   0%|          | 0/4 [00:00<?, ?it/s]
Training 1/1 epoch (loss 2.2632):   0%|          | 0/4 [00:06<?, ?it/s]
Training 1/1 epoch (loss 2.2632):  25%|██▌       | 1/4 [00:06<00:18,  6.20s/it]
Training 1/1 epoch (loss 2.0873):  25%|██▌       | 1/4 [00:09<00:18,  6.20s/it]
Training 1/1 epoch (loss 2.0873):  50%|█████     | 2/4 [00:09<00:08,  4.43s/it]
Training 1/1 epoch (loss 2.3087):  50%|█████     | 2/4 [00:09<00:08,  4.43s/it]
Training 1/1 epoch (loss 2.3087):  75%|███████▌  | 3/4 [00:09<00:02,  2.58s/it]
Training 1/1 epoch (loss 2.1282):  75%|███████▌  | 3/4 [00:10<00:02,  2.58s/it]
Training 1/1 epoch (loss 2.1282): 100%|██████████| 4/4 [00:10<00:00,  1.72s/it]
Training 1/1 epoch (loss 2.1282): 100%|██████████| 4/4 [00:10<00:00,  2.55s/it] |
| | tokenizer config file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100/tokenizer_config.json |
| | Special tokens file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100/special_tokens_map.json |
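The two save messages come from the tokenizer's `save_pretrained`; a sketch with the output directory from the log:

```python
# Sketch: this one call writes tokenizer_config.json and special_tokens_map.json
# (plus the vocab/merges/tokenizer files) into the output directory.
tokenizer.save_pretrained(
    "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100"
)
```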
| | wandb: |
| | wandb: |
| | wandb: Run history: |
| | wandb: train/epoch ▁▃▆█ |
| | wandb: train/loss ▇▁█▂ |
| | wandb: train/lr ▁▁▁▁ |
| | wandb: train/step ▁▃▆█ |
| | wandb: |
| | wandb: Run summary: |
| | wandb: train/epoch 1 |
| | wandb: train/loss 2.12817 |
| | wandb: train/lr 1e-05 |
| | wandb: train/step 4 |
| | wandb: |
| | wandb: 🚀 View run qwen-0.5b-s3-Q1-2k-Q2-100 at: https://wandb.ai/xtom/Inverse_Alignment/runs/xum7t672 |
| | wandb: ⭐️ View project at: https://wandb.ai/xtom/Inverse_Alignment |
| | wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s) |
| | wandb: Find logs at: /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-100/wandb/run-20250528_192604-xum7t672/logs |
| |
|