---
license: cc-by-4.0
language:
- hu
---

# YouTube Dataset (Semantic Chunks)

Hungarian YouTube comments dataset preprocessed with semantic chunking.

## Stats

| Statistic | Value |
|---|---|
| **Rows** | 629,767 |
| **Tokens** | 21,339,568 |
| **Tokenizer** | `magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified` |

## Columns

- `text` - Chunked text content
- `token_count` - Token count per chunk
- `source_id` - Original source row index
- `chunk_id` - Unique chunk identifier
- `video_id` - Source YouTube video ID
- `channel_title` - YouTube channel title
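
## Usage

A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub with a single `train` split; the repository ID below is a placeholder, and treating `token_count` as the count without special tokens is an assumption.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder: replace with the actual Hub repository ID of this dataset.
DATASET_ID = "your-namespace/youtube-semantic-chunks"

# Load the chunked dataset (a single "train" split is assumed).
ds = load_dataset(DATASET_ID, split="train")

# Inspect the schema described in the Columns section.
print(ds.column_names)
# ['text', 'token_count', 'source_id', 'chunk_id', 'video_id', 'channel_title']

# Optionally re-check a chunk's token count with the tokenizer listed above.
tokenizer = AutoTokenizer.from_pretrained(
    "magyar-nlp-szine-java/exotic_modernbert_128k_tokenizer_modified"
)

example = ds[0]
recount = len(tokenizer(example["text"], add_special_tokens=False)["input_ids"])
print(example["token_count"], recount)
```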