runtime error

Exit code: 1. Reason: 120.05it/s, Materializing param=model.layers.31.self_attn.v_proj.weight]
Loading weights: 100%|██████████| 355/355 [00:02<00:00, 120.05it/s, Materializing param=model.norm.weight]
Loading weights: 100%|██████████| 355/355 [00:02<00:00, 120.05it/s, Materializing param=model.norm.weight]
Loading weights: 100%|██████████| 355/355 [00:02<00:00, 123.87it/s, Materializing param=model.norm.weight]
The tied weights mapping and config for this model specifies to tie model.embed_tokens.weight to lm_head.weight, but both are present in the checkpoints, so we will NOT tie them. You should update the config with `tie_word_embeddings=False` to silence this warning
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
generation_config.json:   0%|          | 0.00/221 [00:00<?, ?B/s]
generation_config.json: 100%|██████████| 221/221 [00:00<00:00, 872kB/s]
Traceback (most recent call last):
  File "/app/app.py", line 218, in <module>
    demo.launch()
    ~~~~~~~~~~~^^
  File "/usr/local/lib/python3.13/site-packages/gradio/blocks.py", line 2715, in launch
    ) = http_server.start_server(
        ~~~~~~~~~~~~~~~~~~~~~~~~^
        app=self.app,
        ^^^^^^^^^^^^^
    ...<4 lines>...
        ssl_keyfile_password=ssl_keyfile_password,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/gradio/http_server.py", line 182, in start_server
    raise OSError(
        f"Cannot find empty port in range: {min(server_ports)}-{max(server_ports)}. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`."
    )
OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`.

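The fatal error is the OSError at the end of the traceback: Gradio could not bind port 7860, and the message itself names the two workarounds, setting the GRADIO_SERVER_PORT environment variable or passing `server_port` to `launch()`. Below is a minimal sketch of both, assuming the Space's app.py builds a `gr.Blocks` app named `demo` (only `demo.launch()` is visible in the traceback; the placeholder UI and port-selection logic are assumptions, not the Space's actual code):

```python
"""Minimal sketch of the two workarounds named in the OSError."""
import os

import gradio as gr

with gr.Blocks() as demo:
    # Placeholder UI; the real translator interface from app.py would go here.
    gr.Markdown("Placeholder UI")

if __name__ == "__main__":
    # Option 1: honor the GRADIO_SERVER_PORT environment variable if set,
    # otherwise fall back to Gradio's default port 7860.
    port = int(os.environ.get("GRADIO_SERVER_PORT", "7860"))

    # Option 2: pass the chosen port explicitly via the `server_port` parameter.
    demo.launch(server_port=port)
```

Note that this only changes which port is requested; if the error is caused by `launch()` being called more than once in the same container, the second call will still fail on whatever port the first call already holds.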