Runtime error

Exit code: 1. Reason:

tokenizer.json: 100%|██████████| 3.93M/3.93M [00:00<00:00, 87.3MB/s]
merges.txt: 100%|██████████| 494k/494k [00:00<00:00, 91.8MB/s]
normalizer.json: 100%|██████████| 52.7k/52.7k [00:00<00:00, 155MB/s]
added_tokens.json: 100%|██████████| 34.6k/34.6k [00:00<00:00, 95.2MB/s]
special_tokens_map.json: 100%|██████████| 2.19k/2.19k [00:00<00:00, 8.47MB/s]

`torch_dtype` is deprecated! Use `dtype` instead!
✓ Pipeline successfully initialized.
Using `chunk_length_s` is very experimental with seq2seq models. The results will not necessarily be entirely accurate and will have caveats. More information: https://github.com/huggingface/transformers/pull/20104. Ignore this warning with pipeline(..., ignore_warning=True).
To use Whisper for long-form transcription, use rather the model's `generate` method directly as the model relies on its own chunking mechanism (cf. Whisper original paper, section 3.8. Long-form Transcription).

Traceback (most recent call last):
  File "/app/app.py", line 74, in <module>
    demo = gr.Interface(
        fn=transcribe,
        ...<4 lines>...
        allow_flagging="never"
    )
  File "/usr/local/lib/python3.13/site-packages/gradio/interface.py", line 171, in __init__
    super().__init__(
        analytics_enabled=analytics_enabled,
        ...<4 lines>...
        **kwargs,
    )
  File "/usr/local/lib/python3.13/site-packages/gradio/blocks.py", line 1112, in __init__
    super().__init__(render=False, **kwargs)
TypeError: BlockContext.__init__() got an unexpected keyword argument 'allow_flagging'
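The crash itself is the final `TypeError`: the app passes `allow_flagging="never"` to `gr.Interface`, but that argument was renamed to `flagging_mode` (deprecated during Gradio 4.x and removed in Gradio 5), so newer Gradio forwards it as an unknown kwarg and fails. The model downloads and the transformers warnings above it are unrelated to the failure. A minimal sketch of a fix, assuming the rename is the only relevant change (the `flagging_kwarg` helper and the `transcribe`/`inputs`/`outputs` usage are illustrative, not from the original app):

```python
# Version-aware shim for the allow_flagging -> flagging_mode rename.
# Assumption: Gradio >= 5 accepts only `flagging_mode`; older releases
# accept `allow_flagging`. If you control the environment, simply writing
# flagging_mode="never" directly is enough.
from importlib import metadata


def flagging_kwarg(disabled: bool = True) -> dict:
    """Return the flagging keyword argument for the installed Gradio."""
    try:
        major = int(metadata.version("gradio").split(".")[0])
    except metadata.PackageNotFoundError:
        major = 5  # Gradio not installed here: assume a current release
    value = "never" if disabled else "manual"
    key = "flagging_mode" if major >= 5 else "allow_flagging"
    return {key: value}


# Hypothetical usage in /app/app.py (names assumed):
# demo = gr.Interface(fn=transcribe, inputs="audio", outputs="text",
#                     **flagging_kwarg())
```

On a Space pinned to Gradio 5+, replacing `allow_flagging="never"` with `flagging_mode="never"` at line 74 of /app/app.py should be the entire fix; the shim is only useful if the same code must run against both major versions.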
