---
language:
- th
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
arxiv: 2601.13044
dataset_info:
  features:
  - name: audio_id
    dtype: string
  - name: sentence
    dtype: string
  splits:
  - name: test
    num_bytes: 162246
    num_examples: 1000
  download_size: 78444
  dataset_size: 162246
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Gigaspeech2 Typhoon

[Project page](https://opentyphoon.ai/model/typhoon-asr-realtime) | [Paper](https://huggingface.co/papers/2601.13044) | [GitHub](https://github.com/scb-10x/typhoon-asr)

**Gigaspeech2 Typhoon** is a metadata-only reference dataset for Thai speech recognition benchmarking, designed as an **Accuracy Track** for evaluating ASR models. It contains 1,000 test samples, each pairing an audio ID with a human transcription, derived from the [Gigaspeech2](https://arxiv.org/abs/2406.11546) corpus. Each `audio_id` links directly to the original Gigaspeech2 dataset, allowing users to download the corresponding audio.

## Dataset Overview

- **Language:** Thai
- **Domain:** Audio from Gigaspeech2
- **Samples:** 1,000 test samples
- **Format:** Metadata-only (`audio_id` + `sentence`)
- **Task:** ASR accuracy evaluation (Accuracy Track)
- **License:** CC-BY 4.0 (inherited from [Gigaspeech2](https://arxiv.org/abs/2406.11546))
- **Original Source:** [speechcolab/gigaspeech2](https://huggingface.co/datasets/speechcolab/gigaspeech2)
- **Audio Access:** Download from the original Gigaspeech2 using `audio_id`

## Dataset Structure

The dataset contains a single `test` split intended for evaluation only.

### Data Fields

- **audio_id** (`string`): Unique identifier linking directly to [Gigaspeech2](https://arxiv.org/abs/2406.11546) audio samples (e.g., `63-63248-21`)
- **sentence** (`string`): Human-transcribed Thai text

> **Note:** This is a **metadata-only** dataset. Audio files are not included.
> To obtain the audio, download it from [speechcolab/gigaspeech2](https://huggingface.co/datasets/speechcolab/gigaspeech2) and match on `audio_id`.

### Data Statistics

- **Total samples:** 1,000
- **Average transcription length:** 48.16 characters
- **Min/Max transcription length:** 5 / 123 characters
- **Average word count:** 2.00 words
- **Min/Max word count:** 1 / 11 words

### Example

```python
from datasets import load_dataset

dataset = load_dataset("typhoon-ai/gigaspeech2_typhoon")

# Access the test split
test_data = dataset["test"]

# View a sample
print(test_data[0])
# {
#   'audio_id': '63-63248-21',
#   'sentence': 'มันมาทางซ้ายแล้วก็พุ่งไปทางฝั่งขวามือข้างหน้ารถเขาอะฮะ'
# }

# To get the actual audio, download the original Gigaspeech2:
# original_gs2 = load_dataset("speechcolab/gigaspeech2", streaming=True)
# Then match samples using audio_id
```

## Attribution

This dataset is a **curated subset** derived from the original [speechcolab/gigaspeech2](https://huggingface.co/datasets/speechcolab/gigaspeech2) dataset. All audio samples in this dataset:

- Originate from the Gigaspeech2 Thai corpus
- Follow the Gigaspeech2 audio ID convention (e.g., `63-63248-21`)
- Maintain the same Creative Commons licensing as the source

### Original Dataset Citation

Please also cite the original Gigaspeech2 paper:

```bibtex
@article{gigaspeech2_2024,
  title={GigaSpeech 2: An Evolving, Large-Scale and Multi-domain ASR Corpus for Low-Resource Languages with Automated Crawling, Transcription and Refinement},
  author={Guan-Ting Lin and others},
  journal={arXiv preprint arXiv:2406.11546},
  year={2024}
}
```

**Verification:** All audio IDs in this dataset conform to the Gigaspeech2 naming convention and are traceable to the original corpus.
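Because this set serves as an Accuracy Track benchmark, system outputs are typically scored against the reference `sentence` values with character error rate (CER), a common choice for Thai, which has no word delimiters. A minimal, dependency-free sketch (the function names here are illustrative assumptions, not part of any official evaluation script):

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]


def corpus_cer(references, hypotheses) -> float:
    """Total character edits divided by total reference length, over the corpus."""
    total_edits = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
    total_chars = sum(len(r) for r in references)
    return total_edits / total_chars


# Scoring a perfect hypothesis against the sample transcription yields 0.0
ref = "มันมาทางซ้ายแล้วก็พุ่งไปทางฝั่งขวามือข้างหน้ารถเขาอะฮะ"
print(corpus_cer([ref], [ref]))  # 0.0
```

In practice a dedicated library such as `jiwer` can replace this sketch; the corpus-level ratio above (summing edits before dividing) is the conventional way to aggregate, rather than averaging per-utterance CERs.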
## Limitations

- **Test-only dataset:** Contains only 1,000 samples intended for evaluation, not training
- **Metadata-only:** Audio files are not included; users must download them from the original [Gigaspeech2](https://huggingface.co/datasets/speechcolab/gigaspeech2)
- **Clean audio focus:** Represents clean, academic-style speech from Gigaspeech2
- **Size:** Relatively small; suitable for benchmarking but not comprehensive coverage
- **Domain specificity:** Academic-style speech may differ from conversational or spontaneous speech

## Citation

If you use this dataset in your research or application, please cite our technical report:

[![arXiv](https://img.shields.io/badge/arXiv-2601.13044-b31b1b.svg)](https://arxiv.org/abs/2601.13044)

```bibtex
@misc{warit2026typhoonasr,
  title={Typhoon ASR Real-time: FastConformer-Transducer for Thai Automatic Speech Recognition},
  author={Warit Sirichotedumrong and Adisai Na-Thalang and Potsawee Manakul and Pittawat Taveekitworachai and Sittipong Sripaisarnmongkol and Kunat Pipatanakul},
  year={2026},
  eprint={2601.13044},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.13044},
}
```

## Contact

For questions or feedback about this dataset, please visit [opentyphoon.ai](http://opentyphoon.ai) or open an issue on the dataset repository.