---
viewer: false
tags:
  - uv-script
  - dataset-statistics
  - data-quality
  - text-analysis
license: apache-2.0
---

# Dataset Statistics

Calculate essential text statistics for HuggingFace datasets using streaming mode. No ML models, pure Python, works on datasets of any size.

## Scripts

### `basic-stats.py` - Essential Text Statistics

Calculate fundamental text statistics using pure Python (no ML dependencies). Uses streaming mode by default, so it works on datasets of any size without downloading the full dataset.

**Statistics calculated:**
- Character, word, line, sentence counts (per sample and total)
- Streaming mean and standard deviation using Welford's algorithm
- Character type distributions (alphanumeric, digits, punctuation, whitespace, special characters)
- Length statistics (min, max)
- Derived metrics (words per line, chars per word, words per sentence)

**Features:**
- ✅ Pure Python (no ML models required)
- ✅ Streaming mode (constant memory usage)
- ✅ Progress tracking with tqdm
- ✅ Optional per-sample CSV output
- ✅ Works on datasets of any size
- ✅ Fast: ~10k-50k samples/sec on CPU

## Installation

No installation needed! Just use `uv run`:

```bash
# Run directly with uv
uv run https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py --help
```

## Usage Examples

### Quick Test (10k samples)

```bash
uv run basic-stats.py HuggingFaceFW/fineweb-edu --max-samples 10000
```

### Full Dataset Statistics

```bash
uv run basic-stats.py allenai/c4 --split train
```

### Different Text Column

```bash
uv run basic-stats.py username/dataset --text-column content
```

### Save Per-Sample Statistics

```bash
uv run basic-stats.py username/dataset --per-sample --output-file my-stats.csv
```

### Using HF Jobs (for large datasets)

```bash
hf jobs uv run \
    -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py \
    username/very-large-dataset --max-samples 100000
```

## Example Output

```json
{
  "dataset": "HuggingFaceFW/fineweb-edu",
  "split": "train",
  "text_column": "text",
  "total_samples": 10000,
  "statistics": {
    "character_count": {
      "count": 10000,
      "mean": 3542.18,
      "std": 2134.52,
      "min": 120.0,
      "max": 45231.0
    },
    "word_count": {
      "count": 10000,
      "mean": 642.34,
      "std": 387.21,
      "min": 18.0,
      "max": 8234.0
    },
    "line_count": {
      "count": 10000,
      "mean": 28.5,
      "std": 16.3,
      "min": 2.0,
      "max": 234.0
    },
    "sentence_count": {
      "count": 10000,
      "mean": 24.7,
      "std": 14.2,
      "min": 1.0,
      "max": 187.0
    },
    "mean_word_length": {
      "count": 10000,
      "mean": 5.52,
      "std": 0.87,
      "min": 2.1,
      "max": 12.4
    }
  },
  "character_type_distribution": {
    "alphanumeric": 0.8234,
    "alphabetic": 0.7891,
    "digit": 0.0343,
    "uppercase": 0.0456,
    "lowercase": 0.9544,
    "whitespace": 0.1523,
    "punctuation": 0.0187,
    "special": 0.0056
  },
  "derived_metrics": {
    "avg_words_per_line": 22.54,
    "avg_chars_per_word": 5.52,
    "avg_words_per_sentence": 26.01
  }
}
```

## Performance

- **Speed**: ~10,000-50,000 samples/sec on CPU (depending on text length)
- **Memory**: Constant O(1) memory usage (streaming statistics)
- **Dependencies**: Python standard library plus `datasets` and `tqdm` (no ML frameworks)
- **GPU**: Not needed

## Use Cases

### Understanding Dataset Characteristics

Get a quick overview of your dataset's basic properties:
```bash
uv run basic-stats.py username/my-dataset --max-samples 10000
```

### Comparing Datasets

Generate statistics for multiple datasets to compare their characteristics:
```bash
for dataset in "allenai/c4" "HuggingFaceFW/fineweb" "cerebras/SlimPajama-627B"; do
    uv run basic-stats.py $dataset --max-samples 50000
done
```

### Quality Checking

Check if your dataset has reasonable statistics before training:
- Are word counts within expected range?
- Is the character distribution reasonable?
- Are there too many special characters (potential quality issues)?

### Setting Filter Thresholds

Use the statistics to inform filtering decisions:
- If the mean word count is 500, you might filter out samples with fewer than 50 or more than 10,000 words
- A very low punctuation ratio can indicate low-quality text
- Character type distributions can reveal encoding issues
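
As a sketch, thresholds like these can be applied with a simple predicate (hypothetical function name and bounds; adapt them to the statistics you observe):

```python
def keep(sample: dict, min_words: int = 50, max_words: int = 10_000) -> bool:
    """Keep samples whose word count falls inside [min_words, max_words]."""
    n_words = len(sample["text"].split())
    return min_words <= n_words <= max_words
```

With the `datasets` library, such a predicate can be passed to `Dataset.filter(keep)` to drop outliers before training.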

## Command-Line Options

```
usage: basic-stats.py [-h] [--split SPLIT] [--text-column TEXT_COLUMN]
                      [--max-samples MAX_SAMPLES] [--per-sample]
                      [--output-file OUTPUT_FILE] [--streaming]
                      dataset

positional arguments:
  dataset               Dataset name (e.g., 'HuggingFaceFW/fineweb-edu') or local path

optional arguments:
  -h, --help            show this help message and exit
  --split SPLIT         Dataset split to process (default: train)
  --text-column TEXT_COLUMN
                        Name of the text column (default: text)
  --max-samples MAX_SAMPLES
                        Maximum number of samples to process (for testing)
  --per-sample          Save per-sample statistics to CSV file
  --output-file OUTPUT_FILE
                        Output file for per-sample stats (default: dataset-stats.csv)
  --streaming           Use streaming mode (default: True)
```

## Technical Details

### Welford's Algorithm

The script uses Welford's algorithm for calculating streaming mean and variance. This provides:
- Numerical stability (no catastrophic cancellation)
- Constant memory usage (O(1))
- Single-pass computation
- Accurate results even for very large datasets
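
A minimal sketch of the update step (not the script's exact implementation):

```python
class RunningStats:
    """Streaming mean/variance via Welford's algorithm: one pass, O(1) memory."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # second factor uses the updated mean

    @property
    def std(self) -> float:
        # Population standard deviation; divide by (n - 1) for the sample estimate.
        return (self.m2 / self.n) ** 0.5 if self.n else 0.0
```

Because each `update` touches only three scalars, memory stays constant no matter how many samples stream through.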

### Character Type Classification

Character types are classified as:
- **Alphanumeric**: Letters + digits
- **Alphabetic**: Letters only
- **Digit**: Numbers (0-9)
- **Uppercase/Lowercase**: Case ratios (relative to total letters)
- **Whitespace**: Spaces, tabs, newlines
- **Punctuation**: Standard ASCII punctuation
- **Special**: Everything else (emojis, symbols, etc.)
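
A classification along these lines can be done with Python's built-in `str` predicates (a sketch, not necessarily the script's exact logic):

```python
import string

def char_type_counts(text: str) -> dict:
    """Tally characters into the categories listed above."""
    counts = dict.fromkeys(
        ["alphanumeric", "alphabetic", "digit", "uppercase",
         "lowercase", "whitespace", "punctuation", "special"], 0)
    for ch in text:
        if ch.isalnum():
            counts["alphanumeric"] += 1
        if ch.isalpha():
            counts["alphabetic"] += 1
            if ch.isupper():
                counts["uppercase"] += 1
            elif ch.islower():
                counts["lowercase"] += 1
        elif ch.isdigit():
            counts["digit"] += 1
        elif ch.isspace():
            counts["whitespace"] += 1
        elif ch in string.punctuation:
            counts["punctuation"] += 1
        else:
            counts["special"] += 1
    return counts
```

Dividing each count by `len(text)` (or by the letter count for the case ratios) yields the distribution shown in the example output.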

### Sentence Counting

Simple heuristic-based sentence boundary detection using `.!?` as terminators. This is fast but less accurate than NLP-based sentence tokenization; for coarse statistical analysis, it is good enough.
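
One possible form of this heuristic (a sketch; the script's exact rules may differ) counts each run of terminators as a single boundary, so `"Wait... what?!"` yields two sentences rather than five:

```python
import re

def count_sentences(text: str) -> int:
    """Count runs of '.', '!', '?' as sentence boundaries (fast heuristic)."""
    return len(re.findall(r"[.!?]+", text))
```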

## Related Scripts

Check out other scripts in the `uv-scripts` organization:
- **dataset-creation**: Create datasets from PDFs and other formats
- **vllm**: GPU-accelerated classification and inference
- **ocr**: Document OCR using vision-language models

## Contributing

Have ideas for additional statistics or improvements? Feel free to:
1. Fork this repository
2. Add your script or improvements
3. Submit a pull request

Or open an issue on the [uv-scripts organization](https://huggingface.co/uv-scripts).

## License

Apache 2.0

## Why UV Scripts?

UV scripts are self-contained Python scripts that:
- Run with a single `uv run` command (no setup required)
- Include all dependencies in PEP 723 inline metadata
- Work seamlessly on both local machines and HF Jobs
- Serve as educational examples of best practices
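
A PEP 723 metadata header looks like the following (a minimal illustration; the actual dependency list in `basic-stats.py` may differ):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "tqdm",
# ]
# ///
```

When you invoke the script with `uv run`, uv reads this block and provisions the listed dependencies automatically, with no virtualenv or `pip install` step.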

Learn more about UV: https://docs.astral.sh/uv/