
Conversation

@saandeepa93

Changes for the issue: made a simple change to the augmentation module under dataloader/augmentation.py, from

warp = np.concatenate(np.random.permutation(splits)).ravel()

to

np.random.shuffle(splits)
warp = np.concatenate(splits).ravel()
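For context, a minimal sketch of why the change matters: `splits` is a list of segments with unequal lengths, and `np.random.permutation` coerces its argument to an ndarray, which raises a ValueError for such ragged input on recent NumPy versions (older versions only emitted a deprecation warning). `np.random.shuffle` instead permutes the Python list in place, so no array coercion happens. The segment values below are made up for illustration.

```python
import numpy as np

# Three segments of unequal length, like np.split would produce.
segments = [np.arange(0, 3), np.arange(3, 5), np.arange(5, 10)]

# np.random.permutation converts its argument to an ndarray first,
# which fails for a ragged list on recent NumPy versions:
try:
    np.random.permutation(segments)
except ValueError as e:
    print("permutation failed:", e)

# np.random.shuffle permutes the list in place instead:
np.random.shuffle(segments)
warp = np.concatenate(segments)
print(sorted(warp.tolist()) == list(range(10)))  # the warp covers every timestep once
```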

@klay7w

klay7w commented Feb 22, 2025

It perfectly solved my problem. Thank you^^

@evileleven

evileleven commented Aug 18, 2025

Another question: why does the 'permutation' function take only the first channel (ret[i] = pat[0, warp]) instead of all channels (ret[i] = pat[:, warp])?

import numpy as np
import torch

def permutation(x, max_segments=5, seg_mode="random"):
    # x: (batch_size, channels, timesteps)
    orig_steps = np.arange(x.shape[2])
    # number of segments per sample; randint's upper bound is exclusive
    num_segs = np.random.randint(1, max_segments, size=(x.shape[0]))
    ret = np.zeros_like(x)
    for i, pat in enumerate(x):
        if num_segs[i] > 1:
            if seg_mode == "random":
                # pick sorted interior split points, then cut orig_steps there
                split_points = np.random.choice(x.shape[2] - 2, num_segs[i] - 1, replace=False)
                split_points.sort()
                splits = np.split(orig_steps, split_points)
            else:
                # equal-length segments
                splits = np.array_split(orig_steps, num_segs[i])
            np.random.shuffle(splits)
            warp = np.concatenate(splits)
            # ret[i] = pat[0, warp]  # old: warps only channel 0
            ret[i] = pat[:, warp]    # warp all channels with the same segment order
        else:
            ret[i] = pat
    return torch.from_numpy(ret)
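To illustrate the question with toy data (the array values and names below are made up for this sketch): since ret[i] has shape (channels, timesteps), assigning the 1-D result of pat[0, warp] broadcasts channel 0's warped values into every channel, silently discarding the other channels. pat[:, warp] applies the same warp to each channel.

```python
import numpy as np

pat = np.arange(12, dtype=float).reshape(3, 4)   # (channels=3, timesteps=4)
warp = np.array([2, 0, 3, 1])                    # some segment permutation

first_only = np.zeros_like(pat)
first_only[:] = pat[0, warp]     # (4,) broadcasts: channel 0's warp fills every row

all_channels = pat[:, warp]      # each channel warped with the same index order

print((first_only[1] == first_only[0]).all())   # True: channels 1 and 2 were lost
print((all_channels[1] == pat[1, warp]).all())  # True: channel 1 preserved and warped
```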
