midah/model_tree_csv · Datasets at Hugging Face

Dataset Viewer
The dataset viewer is not available for this split.
Cannot extract the features (columns) for the split 'validation' of the config 'default' of the dataset.
Error code:   FeaturesError
Exception:    ArrowInvalid
Message:      Schema at index 1 was different: 
model_id: string
card: string
metadata: string
depth: int64
children: double
children_count: int64
adapters: double
adapters_count: int64
quantized: double
quantized_count: int64
merges: double
merges_count: int64
spaces: string
spaces_count: int64
vs
model_id: string
card: string
metadata: string
depth: int64
children: string
children_count: int64
adapters: double
adapters_count: int64
quantized: double
quantized_count: int64
merges: double
merges_count: int64
spaces: string
spaces_count: int64
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 231, in compute_first_rows_from_streaming_response
                  iterable_dataset = iterable_dataset._resolve_features()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 3335, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2096, in _head
                  return next(iter(self.iter(batch_size=n)))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2296, in iter
                  for key, example in iterator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1856, in __iter__
                  for key, pa_table in self._iter_arrow():
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1878, in _iter_arrow
                  yield from self.ex_iterable._iter_arrow()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 504, in _iter_arrow
                  yield new_key, pa.Table.from_batches(chunks_buffer)
                File "pyarrow/table.pxi", line 4116, in pyarrow.lib.Table.from_batches
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              model_id: string
              card: string
              metadata: string
              depth: int64
              children: double
              children_count: int64
              adapters: double
              adapters_count: int64
              quantized: double
              quantized_count: int64
              merges: double
              merges_count: int64
              spaces: string
              spaces_count: int64
              vs
              model_id: string
              card: string
              metadata: string
              depth: int64
              children: string
              children_count: int64
              adapters: double
              adapters_count: int64
              quantized: double
              quantized_count: int64
              merges: double
              merges_count: int64
              spaces: string
              spaces_count: int64
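
The two schemas differ in a single column: `children` is inferred as `double` in one CSV file and as `string` in another. This is the usual symptom of a column that is completely empty in one shard: the pandas-based CSV reader used by `datasets` falls back to float64 (Arrow `double`) for an all-empty column, while a shard that actually contains values is read as `string`. A minimal sketch of that behaviour, using two hypothetical in-memory shards rather than the repository's real files:

```python
import io

import pandas as pd
import pyarrow as pa

# Hypothetical shards (not the repo's actual files): in shard_a the
# 'children' column is entirely empty, in shard_b it holds text.
shard_a = io.StringIO("model_id,children\norg/model-a,\norg/model-b,\n")
shard_b = io.StringIO("model_id,children\norg/model-a,org/model-a-ft\norg/model-b,org/model-b-ft\n")

# pandas infers float64 (Arrow 'double') for the all-empty column and
# object (Arrow 'string') for the populated one.
table_a = pa.Table.from_pandas(pd.read_csv(shard_a))
table_b = pa.Table.from_pandas(pd.read_csv(shard_b))
print(table_a.schema)  # children: double
print(table_b.schema)  # children: string

# Combining record batches with mismatched schemas raises the same
# "ArrowInvalid: Schema at index 1 was different" error reported above.
pa.Table.from_batches(table_a.to_batches() + table_b.to_batches())
```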

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
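
One consumer-side workaround (a sketch, not verified against this repository) is to pin the column types when loading, so that `datasets` reads every file with the same schema instead of inferring one per file. The column names below come from the error message; treating the sparse columns (`children`, `adapters`, `quantized`, `merges`) as strings is an assumption:

```python
from datasets import Features, Value, load_dataset

features = Features({
    "model_id": Value("string"),
    "card": Value("string"),
    "metadata": Value("string"),
    "depth": Value("int64"),
    "children": Value("string"),
    "children_count": Value("int64"),
    "adapters": Value("string"),
    "adapters_count": Value("int64"),
    "quantized": Value("string"),
    "quantized_count": Value("int64"),
    "merges": Value("string"),
    "merges_count": Value("int64"),
    "spaces": Value("string"),
    "spaces_count": Value("int64"),
})

# Explicit features are forwarded to the CSV builder, so every file is read
# with the declared schema rather than a per-file inferred one.
ds = load_dataset("midah/model_tree_csv", split="validation", features=features)
```

The durable fix for the viewer itself lies on the repository side: regenerate the CSV files (or convert them to Parquet) so that every file of the split shares one consistent schema.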
