
Commit ed2124a

authored
use padding instead of trimming (suggested by @shylockasr)
1 parent aa6f482 commit ed2124a

File tree

1 file changed (+3 −3 lines)
  • egs/speech_llm/ASR_LLM/zipformer_llm_zh


egs/speech_llm/ASR_LLM/zipformer_llm_zh/model.py

Lines changed: 3 additions & 3 deletions
@@ -30,9 +30,9 @@ def __init__(self, encoder_dim, llm_dim, downsample_rate=5):
     def forward(self, x):
 
         batch_size, seq_len, feat_dim = x.size()
-        num_frames_to_discard = seq_len % self.downsample_rate
-        if num_frames_to_discard > 0:
-            x = x[:, :-num_frames_to_discard, :]
+        num_padding_frames = (self.downsample_rate - seq_len % self.downsample_rate) % self.downsample_rate
+        if num_padding_frames > 0:
+            x = torch.nn.functional.pad(x, (0, 0, 0, num_padding_frames))
         seq_len = x.size(1)
 
         x = x.contiguous()
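The change above swaps frame trimming for zero-padding: instead of discarding trailing frames when the sequence length is not a multiple of the downsample rate (which loses audio content), the time axis is padded up to the next multiple. A minimal standalone sketch of the padded downsampling is shown below; `pad_and_downsample` is a hypothetical helper, not the repository's API, but the padding arithmetic mirrors the diff (note that `F.pad`'s argument tuple pads the last dimension first, so `(0, 0, 0, num_padding_frames)` leaves the feature axis untouched and pads only the end of the time axis).

```python
import torch


def pad_and_downsample(x: torch.Tensor, downsample_rate: int = 5) -> torch.Tensor:
    """Pad the time axis to a multiple of downsample_rate, then stack
    consecutive frames along the feature axis (hypothetical sketch of
    the padded downsampling pattern used in the diff)."""
    batch_size, seq_len, feat_dim = x.size()
    # Number of frames needed to reach the next multiple of downsample_rate;
    # the outer % makes this 0 when seq_len is already a multiple.
    num_padding_frames = (downsample_rate - seq_len % downsample_rate) % downsample_rate
    if num_padding_frames > 0:
        # Pad order is (feat_left, feat_right, time_left, time_right):
        # zero-pad only the end of the time axis.
        x = torch.nn.functional.pad(x, (0, 0, 0, num_padding_frames))
    seq_len = x.size(1)
    # Merge every downsample_rate consecutive frames into one wider frame.
    return x.contiguous().view(
        batch_size, seq_len // downsample_rate, feat_dim * downsample_rate
    )
```

With the old trimming scheme a 13-frame input would have lost its last 3 frames; with padding it is extended to 15 frames and all content survives the reshape.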
