This implements the essentials of RTCRemoteInboundRtpStreamStats,
including the following members:
- ssrc
- transportId
- codecId
- packetsLost
- jitter
- localId
- roundTripTime
https://w3c.github.io/webrtc-stats/#remoteinboundrtpstats-dict*
The following members are not implemented because they require more
work:
- From RTCReceivedRtpStreamStats: packetsReceived, packetsDiscarded,
packetsRepaired, burstPacketsLost, burstPacketsDiscarded,
burstLossCount, burstDiscardCount, burstLossRate, burstDiscardRate,
gapLossRate and gapDiscardRate.
- From RTCRemoteInboundRtpStreamStats: fractionLost.
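For illustration, a rough sketch of how a caller could read the newly
implemented members from a completed stats report. The GetStatsOfType<>()
helper and the snake_case member names are assumed to mirror the spec names
above; the header paths and names are approximations, not something this CL
adds:

  #include "api/stats/rtc_stats_report.h"
  #include "api/stats/rtcstats_objects.h"
  #include "rtc_base/logging.h"

  // Sketch only: logs the remote-inbound-rtp members listed above for every
  // remote-inbound-rtp stats object in the report.
  void LogRemoteInboundStats(
      const rtc::scoped_refptr<const webrtc::RTCStatsReport>& report) {
    for (const webrtc::RTCRemoteInboundRtpStreamStats* stats :
         report->GetStatsOfType<webrtc::RTCRemoteInboundRtpStreamStats>()) {
      RTC_LOG(LS_INFO) << "ssrc=" << *stats->ssrc
                       << " packetsLost=" << *stats->packets_lost
                       << " jitter=" << *stats->jitter
                       << " roundTripTime=" << *stats->round_trip_time;
    }
  }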
Bug: webrtc:10455, webrtc:10456
Change-Id: If2ab0da7105d8c93bba58e14aa93bd22ffe57f1d
Reviewed-on: https://webrtc-review.googlesource.com/c/src/+/138067
Commit-Queue: Henrik Boström <hbos@webrtc.org>
Reviewed-by: Harald Alvestrand <hta@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#28073}
This implements RTCAudioSourceStats and RTCVideoSourceStats, both
inheriting from abstract dictionary RTCMediaSourceStats:
https://w3c.github.io/webrtc-stats/#dom-rtcmediasourcestats
All members are implemented except for the total "frames" counter:
- trackIdentifier
- kind
- width
- height
- framesPerSecond
This is meant to make googFrameWidthInput, googFrameHeightInput and
googFrameRateInput obsolete.
Because this is implemented using the same code path as the goog stats,
there are some minor bugs that should be fixed in the future, but not in
this CL:
1. We create media-source objects on a per-track attachment basis.
If the same track is attached multiple times, this results in
multiple media-source objects, but the spec says it should be on a
per-source basis.
2. framesPerSecond is only calculated after connecting (when we have a
sender with an SSRC), but if collected on a per-source basis, the source
should be able to tell us the FPS whether or not we are sending it.
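For illustration, a minimal sketch of reading the new source stats from a
report (it assumes snake_case member names mirroring the spec names above;
header paths and names are approximations, not part of this CL):

  #include "api/stats/rtc_stats_report.h"
  #include "api/stats/rtcstats_objects.h"
  #include "rtc_base/logging.h"

  // Sketch only: logs the new media-source members for each video source.
  void LogVideoSourceStats(
      const rtc::scoped_refptr<const webrtc::RTCStatsReport>& report) {
    for (const webrtc::RTCVideoSourceStats* source :
         report->GetStatsOfType<webrtc::RTCVideoSourceStats>()) {
      RTC_LOG(LS_INFO) << "trackIdentifier=" << *source->track_identifier
                       << " kind=" << *source->kind
                       << " " << *source->width << "x" << *source->height
                       << " @ " << *source->frames_per_second << " fps";
    }
  }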
Bug: webrtc:10453
Change-Id: I23705a79f15075dca2536275934af1904a7f0d39
Reviewed-on: https://webrtc-review.googlesource.com/c/src/+/137804
Commit-Queue: Henrik Boström <hbos@webrtc.org>
Reviewed-by: Harald Alvestrand <hta@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#28028}
This is a reland of 05d43c6f7f
The original CL got reverted because Chrome did not support IsQuitting(),
which triggered a NOTREACHED() inside a DCHECK. With
https://chromium-review.googlesource.com/c/chromium/src/+/1491620
it is safe to reland this CL.
The only change between this and the original patch set is that it is now
rebased on top of https://webrtc-review.googlesource.com/c/src/+/124701, i.e.
rtc::PostMessageWithFunctor() has been replaced by rtc::Thread::PostTask().
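For reference, the replacement API is used roughly like this (a sketch only;
the exact PostTask() signature has varied across WebRTC revisions, and the
helper function below is hypothetical):

  #include "rtc_base/location.h"
  #include "rtc_base/thread.h"

  // Sketch only: post a closure to another thread with
  // rtc::Thread::PostTask() instead of rtc::PostMessageWithFunctor().
  void PostDoneNotification(rtc::Thread* signaling_thread) {
    signaling_thread->PostTask(RTC_FROM_HERE, [] {
      // Runs asynchronously on the signaling thread.
    });
  }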
Original change's description:
> Fix getStats() freeze bug affecting Chromium but not WebRTC standalone.
>
> PeerConnection::Close() is, per-spec, a blocking operation.
> Unfortunately, PeerConnection is implemented to own resources used by
> the network thread, and Close() - on the signaling thread - destroys
> these resources. As such, tasks that run in parallel, like getStats(),
> get into race conditions with Close() unless synchronized. The mechanism
> in place is RTCStatsCollector::WaitForPendingRequest(), which waits until
> the network thread is done with the in-parallel stats request.
>
> Prior to this CL, this was implemented by performing
> rtc::Thread::ProcessMessages() in a loop until the network thread had
> posted a task on the signaling thread to say that it was done, which
> would then get processed by ProcessMessages(). In WebRTC this works, and
> the test is RTCStatsIntegrationTest.GetsStatsWhileClosingPeerConnection.
>
> But because Chromium's thread wrapper does not support
> ProcessMessages(), calling getStats() followed by close() in Chrome
> resulted in waiting forever (https://crbug.com/850907).
>
> In this CL, the process messages loop is removed. Instead, the shared
> resources are guarded by an rtc::Event. WaitForPendingRequest() still
> blocks the signaling thread, but only while shared resources are in use
> by the network thread. After this CL, calling WaitForPendingRequest() no
> longer has any unexpected side-effects since it no longer processes
> other messages that might have been posted on the thread.
>
> The resource ownership and threading model of WebRTC deserves to be
> revisited, but this fixes a common Chromium crash without redesigning
> PeerConnection, in a way that does not cause more blocking than what
> the other PeerConnection methods are already doing.
>
> Note: An alternative to using rtc::Event is to use resource locks and
> to not perform the stats collection on the network thread if the
> request was cancelled before the start of processing, but this has very
> little benefit in terms of performance: once the network thread starts
> collecting the stats, it would use the lock until collection is
> completed, blocking the signaling thread trying to acquire that lock
> anyway. This defeats the purpose and is a riskier change, since
> cancelling partial collection in this inherently racy edge case could
> produce observable differences in the returned stats, which may cause
> more regressions.
>
> Bug: chromium:850907
> Change-Id: Idceeee0bddc0c9d5518b58a2b263abb2bbf47cff
> Reviewed-on: https://webrtc-review.googlesource.com/c/121567
> Commit-Queue: Henrik Boström <hbos@webrtc.org>
> Reviewed-by: Steve Anton <steveanton@webrtc.org>
> Cr-Commit-Position: refs/heads/master@{#26707}
TBR=steveanton@webrtc.org
Bug: chromium:850907
Change-Id: I5be7f69f0de65ff1120e4926fbf904def97ea9c0
Reviewed-on: https://webrtc-review.googlesource.com/c/124781
Reviewed-by: Henrik Boström <hbos@webrtc.org>
Reviewed-by: Steve Anton <steveanton@webrtc.org>
Commit-Queue: Henrik Boström <hbos@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#26896}
This reverts commit 05d43c6f7f.
Reason for revert: It breaks some Chromium trybots:
https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_chromium_asan_rel_ng/206387
https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_chromium_tsan_rel_ng/207737
https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win10_chromium_x64_rel_ng/202283
Original change's description:
> Fix getStats() freeze bug affecting Chromium but not WebRTC standalone.
>
> PeerConnection::Close() is, per-spec, a blocking operation.
> Unfortunately, PeerConnection is implemented to own resources used by
> the network thread, and Close() - on the signaling thread - destroys
> these resources. As such, tasks that run in parallel, like getStats(),
> get into race conditions with Close() unless synchronized. The mechanism
> in place is RTCStatsCollector::WaitForPendingRequest(), which waits until
> the network thread is done with the in-parallel stats request.
>
> Prior to this CL, this was implemented by performing
> rtc::Thread::ProcessMessages() in a loop until the network thread had
> posted a task on the signaling thread to say that it was done, which
> would then get processed by ProcessMessages(). In WebRTC this works, and
> the test is RTCStatsIntegrationTest.GetsStatsWhileClosingPeerConnection.
>
> But because Chromium's thread wrapper does not support
> ProcessMessages(), calling getStats() followed by close() in Chrome
> resulted in waiting forever (https://crbug.com/850907).
>
> In this CL, the process messages loop is removed. Instead, the shared
> resources are guarded by an rtc::Event. WaitForPendingRequest() still
> blocks the signaling thread, but only while shared resources are in use
> by the network thread. After this CL, calling WaitForPendingRequest() no
> longer has any unexpected side-effects since it no longer processes
> other messages that might have been posted on the thread.
>
> The resource ownership and threading model of WebRTC deserves to be
> revisited, but this fixes a common Chromium crash without redesigning
> PeerConnection, in a way that does not cause more blocking than what
> the other PeerConnection methods are already doing.
>
> Note: An alternative to using rtc::Event is to use resource locks and
> to not perform the stats collection on the network thread if the
> request was cancelled before the start of processing, but this has very
> little benefit in terms of performance: once the network thread starts
> collecting the stats, it would use the lock until collection is
> completed, blocking the signaling thread trying to acquire that lock
> anyway. This defeats the purpose and is a riskier change, since
> cancelling partial collection in this inherently racy edge case could
> produce observable differences in the returned stats, which may cause
> more regressions.
>
> Bug: chromium:850907
> Change-Id: Idceeee0bddc0c9d5518b58a2b263abb2bbf47cff
> Reviewed-on: https://webrtc-review.googlesource.com/c/121567
> Commit-Queue: Henrik Boström <hbos@webrtc.org>
> Reviewed-by: Steve Anton <steveanton@webrtc.org>
> Cr-Commit-Position: refs/heads/master@{#26707}
TBR=steveanton@webrtc.org,hbos@webrtc.org
Change-Id: Icd82cdd5bd086a90999f7fd5f8616e1f2d2153bf
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Bug: chromium:850907
Reviewed-on: https://webrtc-review.googlesource.com/c/123225
Reviewed-by: Mirko Bonadei <mbonadei@webrtc.org>
Commit-Queue: Mirko Bonadei <mbonadei@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#26721}
PeerConnection::Close() is, per-spec, a blocking operation.
Unfortunately, PeerConnection is implemented to own resources used by
the network thread, and Close() - on the signaling thread - destroys
these resources. As such, tasks that run in parallel, like getStats(), get
into race conditions with Close() unless synchronized. The mechanism in
place is RTCStatsCollector::WaitForPendingRequest(), which waits until the
network thread is done with the in-parallel stats request.
Prior to this CL, this was implemented by performing
rtc::Thread::ProcessMessages() in a loop until the network thread had
posted a task on the signaling thread to say that it was done, which
would then get processed by ProcessMessages(). In WebRTC this works, and
the test is RTCStatsIntegrationTest.GetsStatsWhileClosingPeerConnection.
But because Chromium's thread wrapper does not support
ProcessMessages(), calling getStats() followed by close() in Chrome
resulted in waiting forever (https://crbug.com/850907).
In this CL, the process messages loop is removed. Instead, the shared
resources are guarded by an rtc::Event. WaitForPendingRequest() still
blocks the signaling thread, but only while shared resources are in use
by the network thread. After this CL, calling WaitForPendingRequest() no
longer has any unexpected side-effects since it no longer processes
other messages that might have been posted on the thread.
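In outline, the guarding pattern looks roughly like this (a sketch of the
idea only, not the actual RTCStatsCollector code; class and method names are
hypothetical, and rtc::Event's exact signatures may differ between revisions):

  #include "rtc_base/event.h"

  // Sketch only: the signaling thread blocks on an rtc::Event, but only
  // while a stats request is outstanding on the network thread.
  class PendingRequestGuard {
   public:
    // Signaling thread: a stats request is about to be dispatched to the
    // network thread.
    void OnRequestStarted() { request_done_.Reset(); }
    // Network thread: the request has finished using the shared resources.
    void OnRequestFinished() { request_done_.Set(); }
    // Signaling thread (e.g. from Close()): returns immediately if nothing
    // is pending, otherwise waits for the network thread to finish.
    void WaitForPendingRequest() { request_done_.Wait(rtc::Event::kForever); }

   private:
    // Manual reset and initially signaled: nothing is pending at construction.
    rtc::Event request_done_{/*manual_reset=*/true,
                             /*initially_signaled=*/true};
  };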
The resource ownership and threading model of WebRTC deserves to be
revisited, but this fixes a common Chromium crash without redesigning
PeerConnection, in a way that does not cause more blocking than what
the other PeerConnection methods are already doing.
Note: An alternative to using rtc::Event is to use resource locks and
to not perform the stats collection on the network thread if the
request was cancelled before the start of processing, but this has very
little benefit in terms of performance: once the network thread starts
collecting the stats, it would use the lock until collection is
completed, blocking the signaling thread trying to acquire that lock
anyway. This defeats the purpose and is a riskier change, since
cancelling partial collection in this inherently racy edge case could
produce observable differences in the returned stats, which may cause
more regressions.
Bug: chromium:850907
Change-Id: Idceeee0bddc0c9d5518b58a2b263abb2bbf47cff
Reviewed-on: https://webrtc-review.googlesource.com/c/121567
Commit-Queue: Henrik Boström <hbos@webrtc.org>
Reviewed-by: Steve Anton <steveanton@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#26707}
The type rtc::scoped_refptr<T> is now part of api/. Please include it from
api/scoped_refptr.h.
More info: https://groups.google.com/forum/#!topic/discuss-webrtc/Mme2MSz4z4o
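Illustrative snippet (only the header location changes; the behavior of
rtc::scoped_refptr<T> is unchanged):

  #include "api/scoped_refptr.h"  // New canonical location.

  // Usage stays the same, e.g.:
  // rtc::scoped_refptr<webrtc::RTCStatsReport> report = ...;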
Bug: webrtc:9887, webrtc:8205
No-Try: True
Change-Id: Ic6c7c81e226e59f12f7933e472f573ae097b55bf
Reviewed-on: https://webrtc-review.googlesource.com/c/119041
Commit-Queue: Mirko Bonadei <mbonadei@webrtc.org>
Reviewed-by: Karl Wiberg <kwiberg@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#26414}