Tasks: Question Answering (extractive-qa) · Modalities: Text · Formats: parquet · Size: 10K - 100K

Update README.md #2
by santiweide - opened
README.md CHANGED

@@ -46,13 +46,14 @@ The code to create the dataset is available on [this Colab notebook](https://col
 
 ## Dataset Structure
 
-The dataset contains a train and a validation set, with
+The dataset contains a train and a validation set, with 15343 and 3011 examples, respectively. Access them with
 
 ```py
-
-
-
-
+import pandas as pd
+
+splits = {'train': 'train.parquet', 'validation': 'validation.parquet'}
+df_train = pd.read_parquet("hf://datasets/coastalcph/tydi_xor_rc/" + splits["train"])
+df_val = pd.read_parquet("hf://datasets/coastalcph/tydi_xor_rc/" + splits["validation"])
 ```
 
 ### Data Instances