---
viewer: false
tags:
- uv-script
- dataset-statistics
- data-quality
- text-analysis
license: apache-2.0
---
# Dataset Statistics

Calculate essential text statistics for Hugging Face datasets using streaming mode. No ML models, pure Python, and it works on datasets of any size.
## Scripts

### basic-stats.py - Essential Text Statistics

Calculate fundamental text statistics using pure Python (no ML dependencies). Uses streaming mode by default, so it works on datasets of any size without downloading the full dataset.
Statistics calculated:
- Character, word, line, sentence counts (per sample and total)
- Streaming mean and standard deviation using Welford's algorithm
- Character type distributions (alphanumeric, digits, punctuation, whitespace, special characters)
- Length statistics (min, max)
- Derived metrics (words per line, chars per word, words per sentence)
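The per-sample counts above can be sketched in pure Python. The function below is illustrative (its name and field layout are assumptions, not the script's own), using a crude `.!?` terminator count for sentences:

```python
def sample_stats(text: str) -> dict:
    """Per-sample counts: a minimal, illustrative sketch."""
    words = text.split()
    return {
        "character_count": len(text),
        "word_count": len(words),
        "line_count": text.count("\n") + 1 if text else 0,
        # Crude heuristic: count terminator characters as sentence ends.
        "sentence_count": sum(text.count(t) for t in ".!?"),
        "mean_word_length": (sum(len(w) for w in words) / len(words)) if words else 0.0,
    }

sample_stats("Two lines here.\nSecond line!")
```

The totals and the streaming mean/std are then accumulated over these per-sample values.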
Features:
- ✅ Pure Python (no ML models required)
- ✅ Streaming mode (constant memory usage)
- ✅ Progress tracking with tqdm
- ✅ Optional per-sample CSV output
- ✅ Works on datasets of any size
- ✅ Fast: ~10k-50k samples/sec on CPU
## Installation

No installation needed! Just use `uv run`:

```bash
# Run directly with uv
uv run https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py --help
```
## Usage Examples

### Quick Test (10k samples)

```bash
uv run basic-stats.py HuggingFaceFW/fineweb-edu --max-samples 10000
```

### Full Dataset Statistics

```bash
uv run basic-stats.py allenai/c4 --split train
```

### Different Text Column

```bash
uv run basic-stats.py username/dataset --text-column content
```

### Save Per-Sample Statistics

```bash
uv run basic-stats.py username/dataset --per-sample --output-file my-stats.csv
```
### Using HF Jobs (for large datasets)

```bash
hf jobs uv run \
  -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
  https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py \
  username/very-large-dataset --max-samples 100000
```
## Example Output

```json
{
  "dataset": "HuggingFaceFW/fineweb-edu",
  "split": "train",
  "text_column": "text",
  "total_samples": 10000,
  "statistics": {
    "character_count": {
      "count": 10000,
      "mean": 3542.18,
      "std": 2134.52,
      "min": 120.0,
      "max": 45231.0
    },
    "word_count": {
      "count": 10000,
      "mean": 642.34,
      "std": 387.21,
      "min": 18.0,
      "max": 8234.0
    },
    "line_count": {
      "count": 10000,
      "mean": 28.5,
      "std": 16.3,
      "min": 2.0,
      "max": 234.0
    },
    "sentence_count": {
      "count": 10000,
      "mean": 24.7,
      "std": 14.2,
      "min": 1.0,
      "max": 187.0
    },
    "mean_word_length": {
      "count": 10000,
      "mean": 5.52,
      "std": 0.87,
      "min": 2.1,
      "max": 12.4
    }
  },
  "character_type_distribution": {
    "alphanumeric": 0.8234,
    "alphabetic": 0.7891,
    "digit": 0.0343,
    "uppercase": 0.0456,
    "lowercase": 0.9544,
    "whitespace": 0.1523,
    "punctuation": 0.0187,
    "special": 0.0056
  },
  "derived_metrics": {
    "avg_words_per_line": 22.54,
    "avg_chars_per_word": 5.52,
    "avg_words_per_sentence": 26.01
  }
}
```
## Performance
- Speed: ~10,000-50,000 samples/sec on CPU (depending on text length)
- Memory: Constant O(1) memory usage (streaming statistics)
- Dependencies: Pure Python + datasets library
- GPU: Not needed
## Use Cases

### Understanding Dataset Characteristics

Get a quick overview of your dataset's basic properties:

```bash
uv run basic-stats.py username/my-dataset --max-samples 10000
```
### Comparing Datasets

Generate statistics for multiple datasets to compare their characteristics:

```bash
for dataset in "allenai/c4" "HuggingFaceFW/fineweb" "cerebras/SlimPajama-627B"; do
  uv run basic-stats.py "$dataset" --max-samples 50000
done
```
### Quality Checking

Check whether your dataset has reasonable statistics before training:
- Are word counts within expected range?
- Is the character distribution reasonable?
- Are there too many special characters (potential quality issues)?
### Setting Filter Thresholds

Use the statistics to inform filtering decisions:
- If the mean word count is 500, you might filter out samples with fewer than 50 or more than 10,000 words
- A very low punctuation ratio may indicate low-quality text
- Character type distributions can reveal encoding issues
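As a sketch, a filter built from such thresholds might look like the following. The function name and the cutoffs (50 and 10,000 words, echoing the example above) are illustrative, not recommendations:

```python
def keep_sample(text: str, min_words: int = 50, max_words: int = 10_000) -> bool:
    """Keep a sample only if its word count falls inside the chosen band."""
    n_words = len(text.split())
    return min_words <= n_words <= max_words

keep_sample("too short")    # False: well under min_words
keep_sample("word " * 500)  # True: 500 words is inside the band
```

A filter like this can be applied lazily while streaming, so it inherits the same constant-memory behavior as the statistics pass.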
## Command-Line Options

```text
usage: basic-stats.py [-h] [--split SPLIT] [--text-column TEXT_COLUMN]
                      [--max-samples MAX_SAMPLES] [--per-sample]
                      [--output-file OUTPUT_FILE] [--streaming]
                      dataset

positional arguments:
  dataset               Dataset name (e.g., 'HuggingFaceFW/fineweb-edu') or local path

optional arguments:
  -h, --help            show this help message and exit
  --split SPLIT         Dataset split to process (default: train)
  --text-column TEXT_COLUMN
                        Name of the text column (default: text)
  --max-samples MAX_SAMPLES
                        Maximum number of samples to process (for testing)
  --per-sample          Save per-sample statistics to CSV file
  --output-file OUTPUT_FILE
                        Output file for per-sample stats (default: dataset-stats.csv)
  --streaming           Use streaming mode (default: True)
```
## Technical Details

### Welford's Algorithm

The script uses Welford's algorithm to compute streaming mean and variance. This provides:
- Numerical stability (no catastrophic cancellation)
- Constant memory usage (O(1))
- Single-pass computation
- Accurate results even for very large datasets
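The update step can be sketched as follows; the class and attribute names are illustrative, not the script's own:

```python
import math

class WelfordAccumulator:
    """Single-pass mean and variance with O(1) memory (Welford's algorithm)."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        # Note: uses the *updated* mean, which is what avoids
        # the catastrophic cancellation of the naive sum-of-squares formula.
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        # Population std; divide by (count - 1) for the sample estimate.
        return math.sqrt(self.m2 / self.count) if self.count else 0.0

acc = WelfordAccumulator()
for length in [120, 3500, 4200, 45231]:
    acc.update(length)
print(acc.count, acc.mean, acc.std)
```

One such accumulator per metric (characters, words, lines, sentences) is enough to produce all the mean/std figures in the example output.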
### Character Type Classification
Character types are classified as:
- Alphanumeric: Letters + digits
- Alphabetic: Letters only
- Digit: Numbers (0-9)
- Uppercase/Lowercase: Case ratios (relative to total letters)
- Whitespace: Spaces, tabs, newlines
- Punctuation: Standard ASCII punctuation
- Special: Everything else (emojis, symbols, etc.)
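The buckets above can be approximated with Python's `str` predicates and `string.punctuation`. This is an illustrative sketch of the classification, not necessarily the script's exact logic:

```python
import string

def char_type_distribution(text: str) -> dict:
    """Ratios per character type; case ratios are relative to letters only."""
    n = len(text)
    counts = {"alphanumeric": 0, "alphabetic": 0, "digit": 0,
              "uppercase": 0, "lowercase": 0, "whitespace": 0,
              "punctuation": 0, "special": 0}
    letters = 0
    for ch in text:
        if ch.isalnum():
            counts["alphanumeric"] += 1
        if ch.isalpha():
            counts["alphabetic"] += 1
            letters += 1
            if ch.isupper():
                counts["uppercase"] += 1
            elif ch.islower():
                counts["lowercase"] += 1
        elif ch.isdigit():
            counts["digit"] += 1
        elif ch.isspace():
            counts["whitespace"] += 1
        elif ch in string.punctuation:
            counts["punctuation"] += 1
        else:
            counts["special"] += 1  # emojis, symbols, etc.
    ratios = {k: (v / n if n else 0.0) for k, v in counts.items()}
    if letters:
        # Uppercase/lowercase are ratios of the letters, not of all characters.
        ratios["uppercase"] = counts["uppercase"] / letters
        ratios["lowercase"] = counts["lowercase"] / letters
    return ratios
```

Note that `str.isalpha()` accepts any Unicode letter, so accented characters land in `alphabetic` rather than `special`.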
### Sentence Counting

Simple heuristic-based sentence boundary detection using `.`, `!`, and `?` as terminators. This is fast but less accurate than NLP-based sentence tokenization; it is good enough for statistical analysis.
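An illustrative sketch of such a heuristic (not necessarily the script's exact implementation), counting each run of terminator characters as one boundary so that `"?!"` or `"..."` is not over-counted:

```python
import re

def count_sentences(text: str) -> int:
    """Count sentence boundaries as runs of ., !, or ?."""
    if not text.strip():
        return 0
    boundaries = re.findall(r"[.!?]+", text)
    # Text with no terminator still counts as one sentence.
    return max(len(boundaries), 1)

count_sentences("Hello world. How are you? Fine!")  # 3
```

This will mis-count abbreviations like "e.g." as sentence ends, which is the usual trade-off of terminator-based heuristics.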
## Related Scripts
Check out other scripts in the uv-scripts organization:
- dataset-creation: Create datasets from PDFs and other formats
- vllm: GPU-accelerated classification and inference
- ocr: Document OCR using vision-language models
## Contributing
Have ideas for additional statistics or improvements? Feel free to:
- Fork this repository
- Add your script or improvements
- Submit a pull request
Or open an issue on the uv-scripts organization.
## License
Apache 2.0
## Why UV Scripts?

UV scripts are self-contained Python scripts that:
- Run with a single `uv run` command (no setup required)
- Include all dependencies in PEP 723 inline metadata
- Work seamlessly on both local machines and HF Jobs
- Serve as educational examples of best practices
Learn more about UV: https://docs.astral.sh/uv/