Conversational Speech generator tool

Tool to generate multiple-end audio tracks to simulate conversational speech with two or more participants.

The input to the tool is a directory containing a number of audio tracks and a text file indicating how to time the sequence of speech turns (see the Example section).

Since the timing of the speaking turns is specified by the user, the generated tracks may not be suitable for testing scenarios in which there is unpredictable network delay (e.g., end-to-end RTC assessment).

Instead, the generated tracks can be used when the delay is constant (including the case in which there is no delay). For instance, echo cancellation in the APM module can be evaluated using two-end audio tracks as input and reverse input.

By indicating negative and positive time offsets, one can reproduce cross-talk (aka double-talk) and silence in the conversation.
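
As the worked example below illustrates, each turn is scheduled relative to the end of the previous turn: a positive offset leaves a gap of silence before the next turn, while a negative offset makes it start before the previous turn has ended. A minimal Python sketch of this rule (illustrative only, not the tool's implementation):

```python
def turn_start_ms(previous_turn_end_ms: int, offset_ms: int) -> int:
    # offset_ms > 0 leaves a silent gap; offset_ms < 0 produces cross-talk.
    return previous_turn_end_ms + offset_ms

# With 1000 ms long tracks: an offset of 100 leaves a 100 ms pause,
# while an offset of -200 overlaps the last 200 ms of the previous turn.
```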

Example

For each end, there is a set of audio tracks, e.g., a1, a2, a3 and a4 (speaker A) and b1, b2 (speaker B). The text file with the timing information may look like this:

A a1 0
B b1 0
A a2 100
B b2 -200
A a3 0
A a4 0

The first column indicates the speaker name, the second the audio track file name, and the third the offset (in milliseconds) used to concatenate the chunks; each offset is applied relative to the end of the previous turn. An optional fourth column contains a positive or negative integer gain in dB that is applied to the track. The gain can be specified for some turns and omitted for others; if it is left out, no gain is applied.
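
For illustration, the following Python sketch parses one line of such a timing file, including the optional gain column (the type and field names are hypothetical; the actual tool is implemented in C++):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    speaker: str                    # e.g., "A" or "B"
    audiotrack: str                 # audio track file name
    offset_ms: int                  # offset relative to the end of the previous turn
    gain_db: Optional[int] = None   # optional gain; None means no gain is applied

def parse_timing_line(line: str) -> Turn:
    fields = line.split()
    if len(fields) not in (3, 4):
        raise ValueError(f"Expected 3 or 4 fields, got: {line!r}")
    gain_db = int(fields[3]) if len(fields) == 4 else None
    return Turn(fields[0], fields[1], int(fields[2]), gain_db)

print(parse_timing_line("B b2 -200"))
# Turn(speaker='B', audiotrack='b2', offset_ms=-200, gain_db=None)
```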

Assume that all the audio tracks in the example above are 1000 ms long. The tool will then generate two tracks (A and B) that look like this:

Track A

  a1 (1000 ms)
  silence (1100 ms)
  a2 (1000 ms)
  silence (800 ms)
  a3 (1000 ms)
  a4 (1000 ms)

Track B

  silence (1000 ms)
  b1 (1000 ms)
  silence (900 ms)
  b2 (1000 ms)
  silence (2000 ms)

The two tracks can also be visualized as follows (one character represents 100 ms, "." is silence and "*" is speech).

t: 0         1         2         3         4         5         6 (s)
A: **********...........**********........********************
B: ..........**********.........**********....................
                                ^ 200 ms cross-talk
        100 ms silence ^
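
To make the arithmetic explicit, the schedule above can be reproduced with a short Python sketch that applies the same rule (start of a turn = end of the previous turn + offset) and fills each speaker's track with silence where that speaker is not talking (illustrative only, not the tool's code; it assumes the 1000 ms track length of the example):

```python
# Timing of the example: (speaker, audio track, offset in ms).
turns = [("A", "a1", 0), ("B", "b1", 0), ("A", "a2", 100),
         ("B", "b2", -200), ("A", "a3", 0), ("A", "a4", 0)]
duration_ms = 1000  # every track in the example is 1000 ms long

# Compute absolute start/end times by accumulating offsets.
schedule = []
prev_end = 0
for speaker, name, offset in turns:
    start = prev_end + offset
    schedule.append((speaker, name, start, start + duration_ms))
    prev_end = start + duration_ms

# Render each output track as a sequence of speech and silence segments.
total_ms = max(end for _, _, _, end in schedule)
for track in ("A", "B"):
    print(f"Track {track}")
    cursor = 0
    for speaker, name, start, end in schedule:
        if speaker != track:
            continue
        if start > cursor:
            print(f"  silence ({start - cursor} ms)")
        print(f"  {name} ({end - start} ms)")
        cursor = end
    if cursor < total_ms:
        print(f"  silence ({total_ms - cursor} ms)")
```

Running the sketch prints exactly the Track A and Track B listings shown above.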