The full dataset viewer is not available; only a preview of the rows is shown below.
Dataset generation failed because of a cast error.
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns ({'check_flagged_words_criteria', 'check_stop_word_ratio_criteria', 'check_char_repetition_criteria'})

This happened while the json dataset builder was generating data using

hf://datasets/CarperAI/pile-v2-small-filtered/data/CodePileReddit2019/data.json (at revision e2f37e95cc5eb38359b6aefc2cbf98a50fd1b7e4)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              text: string
              meta: string
              id: int64
              check_char_repetition_criteria: double
              check_flagged_words_criteria: double
              check_stop_word_ratio_criteria: double
              to
              {'id': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'meta': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
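The fix the error message suggests can be applied locally before re-uploading: either drop the three extra filter columns (or move those files into their own configuration) and cast `id` to a string so every file matches the `{'id', 'text', 'meta'}` schema. A minimal sketch of the column-normalization route, using hypothetical inline records rather than the real `data.json` files:

```python
import json

# Hypothetical records illustrating the mismatch described above: the second
# record carries the three extra filter columns and an integer id.
records = [
    {"id": "b24cf5394d60f5", "text": "...", "meta": "{'source': 'AI4Code'}"},
    {"id": 97090, "text": "...", "meta": "{'source': 'AI4Code'}",
     "check_char_repetition_criteria": 0.01,
     "check_flagged_words_criteria": 0.0,
     "check_stop_word_ratio_criteria": 0.2},
]

EXPECTED = ("id", "text", "meta")

def normalize(record):
    """Keep only the expected columns and cast id to string."""
    fixed = {key: record[key] for key in EXPECTED}
    fixed["id"] = str(fixed["id"])
    return fixed

cleaned = [normalize(r) for r in records]
# Every record now has exactly the expected columns, and all ids are strings.
assert all(tuple(r) == EXPECTED for r in cleaned)
assert all(isinstance(r["id"], str) for r in cleaned)

# In practice the cleaned records would be re-serialized back to JSON.
payload = json.dumps(cleaned)
```

The alternative route, keeping the extra columns, is to declare separate configurations in the dataset card's YAML, as described in the manual-configuration docs linked above.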


Columns: id (string), text (string), meta (string)

id: 97090
text: """ ## Binary Classification using Graduate Admission Dataset This notebook compares performance of various Machine Learning classifiers on the "Graduate Admission" data. I'm still just a naive student implementing Machine Learning techniques. You're most welcome to suggest me edits on this kernel, I am happy to learn...
meta: {'source': 'AI4Code', 'id': 'b24cf5394d60f5'}

id: 116041
text: """ #### this notebook is part of the documentation on my HPA approach -> main notebook: https://www.kaggle.com/philipjamessullivan/0-hpa-approach-summary ## 7: network training -> https://www.kaggle.com/philipjamessullivan/7-train-effnetb0-version-a-part-1 -> https://www.kaggle.com/philipjamessulliva...
meta: {'source': 'AI4Code', 'id': 'd5668563d2cdd3'}

id: 37319
text: """ * V14: train with tri grams and generate new vocab, num feat = 15000 """ # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import num...
meta: {'source': 'AI4Code', 'id': '44b1ea84dff48e'}

id: 48500
text: """ # NGBoost やってみたメモ * modelのチューニングはきちんとやっていません。なのでどちらが性能がいいかはわかりませんが、えいやっと使った感触ではこれくらいのデータなら遜色なかったです。 * 分布が算出できるのは使いどころがあるかもですね。 """ !pip install ngboost # basic libraries import pandas as pd import numpy as np import numpy.random as rd import gc import multiprocessing as mp import os import sys import pickle from ...
meta: {'source': 'AI4Code', 'id': '5947bd9ad5be6f'}

id: 106369
text: """ ## Attention I'm not good at English. If you find a mistake, let me know, please. ## 0. Abstract Interestingly, it's a very interesting phenomenon that global transformation by a the Moon's tide stress seems to be a trigger of occurrence for a disastrous earthquake (M>=5.5). It is found out that some statistica...
meta: {'source': 'AI4Code', 'id': 'c367d886e07c8d'}

id: 124491
text: """ <div align='center'><font size="5" color='#353B47'>A Notebook dedicated to Stacking/Ensemble methods</font></div> <div align='center'><font size="4" color="#353B47">Unity is strength</font></div> <br> <hr> """ """ In this notebook, i'm going to cover various Prediction Averaging/Blending Techniques: 1. Simple Aver...
meta: {'source': 'AI4Code', 'id': 'e4f4a0ec2c64df'}

id: 21198
text: from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split from sklearn import linear_model from sklearn.metrics import classification_report from sklearn.metrics import ...
meta: {'source': 'AI4Code', 'id': '26e50916853f51'}

id: 11111
text: """ Used methods: * Convolutional Neural Network * Data Augmentation """ """ **Scientists want an automatic system to recognize whale species when monitoring their activities with a surveillance system. Thus, in this competition, numerous pictures of whales’ tales are given to identify whales species. In the train se...
meta: {'source': 'AI4Code', 'id': '14691788621618'}

id: 85257
text: """ To enable autocomplete in kaggel just run this in consol %config Completer.use_jedi = False """ # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages...
meta: {'source': 'AI4Code', 'id': '9c762b92119e2d'}

id: 60268
text: """ ## HOG, or Histogram of Oriented Gradients, is a feature descriptor that is often used to extract features from image data. It is widely used in computer vision tasks for object detection** """ import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotl...
meta: {'source': 'AI4Code', 'id': '6f155818a7cfec'}

id: 87716
text: """ # <Center>Premier League Player Analysis<Center> """ """ # Importing the Libraries """ import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import plotly.figure_factory as ff import plotly.graph_objects as go import numpy as np import plotly.express as px import os for dirname, _, filenames in ...
meta: {'source': 'AI4Code', 'id': 'a0da8db1b2bc00'}

id: 119119
text: # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g...
meta: {'source': 'AI4Code', 'id': 'db24f9fd5ba6bb'}
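Note that in the preview rows the meta column is a string holding a Python-literal dict, not a nested object. It can be parsed back into a dict with the standard library's `ast.literal_eval` (a small sketch using one of the preview rows above):

```python
import ast

# meta value copied from the first preview row; literal_eval safely parses
# Python-literal strings without executing arbitrary code.
meta_str = "{'source': 'AI4Code', 'id': 'b24cf5394d60f5'}"
meta = ast.literal_eval(meta_str)

assert meta["source"] == "AI4Code"
assert meta["id"] == "b24cf5394d60f5"
```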

Dataset Description

A small filtered subset of the pile-v2 dataset: each subset contains 1,000 random samples drawn from the corresponding original subset, for roughly 255 MB of text (code and English) in total.

Languages

The dataset contains technical text about programming languages as well as natural language, organized into the following subsets:

  • Bible
  • TED2020
  • PileOfLaw
  • StackExchange
  • GithubIssues
  • Opensubtitles
  • USPTO
  • S2ORC
  • DevDocs
  • CodePileReddit2022
  • USENET
  • GNOME
  • ASFPublicMail
  • PileV2Reddit2020
  • CodePilePosts
  • Discourse
  • Tanzil
  • arXiv
  • UbuntuIRC
  • PubMed
  • CodePileReddit2020
  • CodePileReddit2021
  • GlobalVoices
  • FreeLaw_Options
  • PileV2Posts
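Each subset listed above lives in its own folder under the repository's data/ directory, so the data_dir argument used in the loading examples can be derived from the subset name. A trivial sketch (folder names are assumed to match the subset names; casing varies on this page — e.g. the arXiv folder appears as data/arxiv — so verify the exact folder name before loading):

```python
def data_dir(subset: str) -> str:
    """Build the data_dir argument for load_dataset from a subset name.

    Assumes the repository folder is named exactly like the subset.
    """
    return f"data/{subset}"

# Example: the folder seen in the cast-error message above.
assert data_dir("CodePileReddit2019") == "data/CodePileReddit2019"
```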

Dataset Structure

from datasets import load_dataset
dataset = load_dataset("CarperAI/pile-v2-small-filtered")

How to use it

You can either load the whole dataset as above, or load a specific subset, such as arXiv, by specifying its data directory:

dataset = load_dataset("CarperAI/pile-v2-small-filtered", data_dir="data/arxiv")
Downloads last month: 157

Spaces using CarperAI/pile-v2-small-filtered: 1