webrtc/call/audio_state.h
henrika 90bace0958 Add SetAudioPlayout and SetAudioRecording methods to the PeerConnection API
(this CL is based on the work by Taylor and Steve in https://webrtc-review.googlesource.com/c/src/+/10201)

The SetAudioPlayout method lets applications disable audio playout while
still processing incoming audio data and generating statistics on the
received audio.

This may be useful if the application wants to set up media flows as
soon as possible, but isn't ready to play audio yet. Currently, native
applications don't have any API point to control this, unless they
completely implement their own AudioDeviceModule.

SetAudioRecording works similarly, but for recorded audio. One
difference is that calling SetAudioRecording(false) does not keep any
audio processing alive.

TBR=solenberg

Bug: webrtc:7313
Change-Id: I0aa075f6bfef9818f1080f85a8ff7842fb0750aa
Reviewed-on: https://webrtc-review.googlesource.com/16180
Reviewed-by: Henrik Andreassson <henrika@webrtc.org>
Reviewed-by: Karl Wiberg <kwiberg@webrtc.org>
Commit-Queue: Henrik Andreassson <henrika@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#20499}
2017-10-31 12:35:42 +00:00


/*
* Copyright (c) 2015 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef CALL_AUDIO_STATE_H_
#define CALL_AUDIO_STATE_H_

#include "api/audio/audio_mixer.h"
#include "rtc_base/refcount.h"
#include "rtc_base/scoped_ref_ptr.h"

namespace webrtc {

class AudioProcessing;
class VoiceEngine;

// WORK IN PROGRESS
// This class is under development and is not yet intended for use outside
// of WebRtc/Libjingle. Please use the VoiceEngine API instead.
// See: https://bugs.chromium.org/p/webrtc/issues/detail?id=4690
// AudioState holds the state which must be shared between multiple instances of
// webrtc::Call for audio processing purposes.
class AudioState : public rtc::RefCountInterface {
 public:
  struct Config {
    // VoiceEngine used for audio streams and audio/video synchronization.
    // AudioState will tickle the VoE refcount to keep it alive for as long as
    // the AudioState itself.
    VoiceEngine* voice_engine = nullptr;

    // The audio mixer connected to active receive streams. One per
    // AudioState.
    rtc::scoped_refptr<AudioMixer> audio_mixer;

    // The audio processing module.
    rtc::scoped_refptr<webrtc::AudioProcessing> audio_processing;
  };

  virtual AudioProcessing* audio_processing() = 0;

  // Enable/disable playout of the audio channels. Enabled by default.
  // Disabling playout stops the underlying audio device but starts a task
  // that polls for audio data every 10 ms, so that audio processing still
  // happens and the audio stats are still updated.
  virtual void SetPlayout(bool enabled) = 0;

  // Enable/disable recording of the audio channels. Enabled by default.
  // Disabling recording stops the underlying audio device; no audio
  // packets are encoded or transmitted.
  virtual void SetRecording(bool enabled) = 0;

  // TODO(solenberg): Replace scoped_refptr with shared_ptr once we can use it.
  static rtc::scoped_refptr<AudioState> Create(
      const AudioState::Config& config);

  virtual ~AudioState() {}
};

} // namespace webrtc
#endif // CALL_AUDIO_STATE_H_