Hello!
I wanted to thank you for making your paper and the associated code open source; it's fascinating stuff. However, while trying to experiment with it myself, I'm running into an issue.
After downloading "SWDE", I attempt to run `preprocess_swde.py` for feature extraction. When I do, I get the following output:
```
$ python -m preprocess_swde
0it [00:00, ?it/s]
Total errors: 0
```
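For what it's worth, an empty run like this is exactly what you get when `pathlib`'s glob is pointed at a directory that doesn't exist, which makes me suspect the script simply found no input files. A minimal check (the `*.htm` pattern is my guess at what the script looks for):

```python
from pathlib import Path

# The default input path hardcoded in preprocess_swde.py; it doesn't
# exist on my machine, so any glob against it comes back empty.
swde_path = Path("/home/azureuser/dev/data/swde_html")
pages = sorted(swde_path.glob("**/*.htm"))  # "*.htm" is my guess at the pattern
print(len(pages))  # 0 on my machine, which would explain the instant "0it" run
```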
The script returns immediately, which seems odd (I would expect feature extraction to take somewhat longer). When I then run the `train_mlm` notebook, I get the following error:
```
ValueError                                Traceback (most recent call last)
Cell In[5], line 3
      1 dataset_path = "../data/swde_preprocessed"
      2 print(f"Loading datasets from {dataset_path}...")
----> 3 train_ds = dataset.SWDEDataset(dataset_path)
      4 test_ds = dataset.SWDEDataset(dataset_path, split="test")
      6 # tokenizer.pad_token = tokenizer.eos_token # why do we need this?

File c:\DOM-LM\src\dataset.py:9, in SWDEDataset.__init__(self, dataset_path, domain, split)
      7 def __init__(self, dataset_path, domain="university", split="train"):
      8     self.path = Path(dataset_path) / domain
----> 9     self.files = self._get_split(sorted(self.path.glob("**/*.pkl")), split)
     10     self._idx2file = []
     11     for file_id, file in enumerate(self.files):

File c:\DOM-LM\src\dataset.py:20, in SWDEDataset._get_split(self, files, split, seed)
     19 def _get_split(self, files, split, seed=42):
---> 20     train, test = train_test_split(files, test_size=0.2, random_state=seed)
     21     if split == "train":
     22         return train

File c:\venv\lib\site-packages\sklearn\model_selection\_split.py:2562, in train_test_split(test_size, train_size, random_state, shuffle, stratify, *arrays)
   2559 arrays = indexable(*arrays)
   2561 n_samples = _num_samples(arrays[0])
...
   2239     "aforementioned parameters.".format(n_samples, test_size, train_size)
   2240 )
   2242 return n_train, n_test

ValueError: With n_samples=0, test_size=0.2 and train_size=None, the resulting train set will be empty. Adjust any of the aforementioned parameters.
```
Looking at this Stack Overflow issue, it seems that others who hit this error resolved it by fixing their data paths.
Looking at the `preprocess_swde.py` script, there are two variables hardcoded as:

```python
SWDE_PATH = Path("/home/azureuser/dev/data/swde_html")
PROC_PATH = Path("/home/azureuser/dev/data/swde_preprocessed")
```
Meanwhile, my `data/` folder contains only a `tags.txt` file.
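If fixing the paths is all that's needed, my plan was to repoint those two variables at a local `data/` directory, something like the sketch below. The layout here is my guess; I don't know where the SWDE archive is actually meant to be unpacked:

```python
from pathlib import Path

# Repointing the hardcoded paths at a local data/ folder instead of the
# original /home/azureuser/... locations. The layout is my guess.
DATA_DIR = Path("data")
SWDE_PATH = DATA_DIR / "swde_html"          # raw SWDE pages would go here
PROC_PATH = DATA_DIR / "swde_preprocessed"  # preprocessed .pkl files would land here
PROC_PATH.mkdir(parents=True, exist_ok=True)  # ensure the output dir exists
```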
So my question is: am I simply missing these two directories? Are their contents supposed to be generated during the `preprocess_swde` step, but simply aren't? I'd appreciate any help you could provide with using your tool, and thank you again for the fascinating work!