Dataset Viewer
The dataset viewer is not available for this split.
The features (columns) of the split 'train' of the config 'default' of the dataset could not be extracted.
Error code:   FeaturesError
Exception:    ArrowInvalid
Message:      Schema at index 1 was different: 
model_name: string
benchmark_scenarios: list<item: struct<scenario_name: string, metadata: struct<timestamp: string, commit_id: string, hardware_info: struct<gpu_name: string, gpu_memory_total_mb: int64, cpu_count: int64, memory_total_mb: int64, python_version: string, torch_version: string, cuda_version: string>, config: struct<name: string, model_id: string, variant: string, warmup_iterations: int64, measurement_iterations: int64, num_tokens_to_generate: int64, device: string, torch_dtype: string, compile_mode: string, compile_options: struct<>, use_cache: bool, batch_size: int64, sequence_length: null, attn_implementation: string, sdpa_backend: string, custom_params: struct<>>>, measurements: struct<latency_seconds: struct<name: string, measurements: list<item: double>, mean: double, median: double, std: double, min: double, max: double, p25: double, p75: double, p90: double, p95: double, p99: double, unit: string>, time_to_first_token_seconds: struct<name: string, measurements: list<item: double>, mean: double, median: double, std: double, min: double, max: double, p25: double, p75: double, p90: double, p95: double, p99: double, unit: string>, tokens_per_second: struct<name: string, measurements: list<item: double>, mean: double, median: double, std: double, min: double, max: double, p25: double, p75: double, p90: double, p95: double, p99: double, unit: string>, time_per_output_token_seconds: struct<name: string, measurements: list<item: double>, mean: double, median: double, std: double, min: double, max: double, p25: double, p75: double, p90: double, p95: double, p99: double, unit: string>>, gpu_metrics: struct<gpu_utilization_mean: double, gpu_utilization_max: int64, gpu_utilization_min: int64, gpu_memory_used_mean: double, gpu_memory_used_max: int64, gpu_memory_used_min: int64, sample_count: int64, gpu_monitoring_status: string>>>
vs
run_metadata: struct<timestamp: string, benchmark_run_uuid: string, total_benchmarks: int64, successful_benchmarks: int64, failed_benchmarks: int64>
benchmark_results: struct<llama: string>
output_directory: string
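The two schemas above suggest the split mixes heterogeneous JSON files: per-model result files whose top-level keys are `model_name` and `benchmark_scenarios`, and a run-summary file whose top-level keys are `run_metadata`, `benchmark_results`, and `output_directory`. The viewer streams every file of a split into a single Arrow table, which requires one shared schema, so the second file aborts the merge. A minimal stdlib sketch of that diagnosis (the file contents are hypothetical stand-ins, not the actual dataset files):

```python
import json

# Hypothetical stand-ins for the two kinds of files in the split: a
# per-model benchmark file and the run-summary file written next to it.
per_model_file = json.dumps({
    "model_name": "llama",
    "benchmark_scenarios": [],
})
summary_file = json.dumps({
    "run_metadata": {"timestamp": "2025-01-01T00:00:00Z"},
    "benchmark_results": {"llama": "..."},
    "output_directory": "runs/latest",
})

def top_level_schema(raw: str) -> frozenset:
    """The set of top-level keys, i.e. the coarsest notion of a schema."""
    return frozenset(json.loads(raw))

schemas = [top_level_schema(per_model_file), top_level_schema(summary_file)]

# All files of a split are concatenated into one Arrow table, which needs
# a single shared schema; the first divergence aborts the whole split.
for index, schema in enumerate(schemas[1:], start=1):
    if schema != schemas[0]:
        print(f"Schema at index {index} was different: "
              f"{sorted(schema)} vs {sorted(schemas[0])}")
```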
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 228, in compute_first_rows_from_streaming_response
                  iterable_dataset = iterable_dataset._resolve_features()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 3422, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2187, in _head
                  return next(iter(self.iter(batch_size=n)))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2391, in iter
                  for key, example in iterator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1882, in __iter__
                  for key, pa_table in self._iter_arrow():
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1904, in _iter_arrow
                  yield from self.ex_iterable._iter_arrow()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 527, in _iter_arrow
                  yield new_key, pa.Table.from_batches(chunks_buffer)
                File "pyarrow/table.pxi", line 4116, in pyarrow.lib.Table.from_batches
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
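One common remedy is manual viewer configuration in the dataset card's YAML front matter: point the default config only at the homogeneous per-model files, so the run-summary JSON is never merged into the 'train' split. A sketch, where the glob pattern is a hypothetical placeholder for the repository's actual file layout:

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    # Hypothetical pattern: match only the per-model benchmark files,
    # excluding the run-summary JSON that has a different schema.
    path: "benchmarks/*.json"
```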
